Re: submodules

2017-04-11 Thread Reid Barton
Hi Simon,

This happens because the locations of the submodules are specified
using relative paths from the main GHC repository, but Tritlo has only
made a fork of the main GHC repo, not all the submodules. I would do
this:

* Clone from the main GHC repo (including submodules) however you
usually do it (e.g., git clone --recursive
https://git.haskell.org/ghc.git)
* Add Tritlo's ghc repo as a remote: git remote add tritlo
g...@github.com:Tritlo/ghc.git
* Fetch from the new remote: git fetch tritlo
* Check out the branch you want: git checkout tritlo/

Here "tritlo" is just a name for the remote within your local ghc
checkout, so it can be anything you choose.

Regards,
Reid Barton

On Tue, Apr 11, 2017 at 11:53 AM, Simon Peyton Jones via ghc-devs
 wrote:
> Devs
>
> I want to build a GHC from someone else repo; this one actually
> g...@github.com:Tritlo/ghc.git.
>
> But when I clone it, and then do git submodule init; git submodule update, I
> get lots of
>
> git submodule update
>
> Cloning into '.arc-linters/arcanist-external-json-linter'...
>
> ERROR: Repository not found.
>
> fatal: Could not read from remote repository.
>
>
>
> Please make sure you have the correct access rights
>
> and the repository exists.
>
> Clone of 'g...@github.com:Tritlo/arcanist-external-json-linter.git' into
> submodule path '.arc-linters/arcanist-external-json-linter' failed
>
> simonpj@cam-05-unx:~/code/ghc-holes$
>
> What is the kosher way to do this?
>
> Thanks
>
> Simon
>
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Where do I start if I would like help improve GHC compilation times?

2017-04-09 Thread Reid Barton
Building modules from GHC itself is a little tricky and DynFlags is
extra tricky since it is involved in import cycles. Here is what I do:

* Copy DynFlags.hs somewhere outside the tree (for your present
purposes, it is no longer part of the compiler, but just some module
to be provided as input).
* Get rid of all the {-# SOURCE #-} pragmas on imports to turn them
into ordinary, non-boot file imports.
* Build with ".../ghc/inplace/bin/ghc-stage2 DynFlags -package ghc
-I.../ghc/compiler/stage2" plus whatever other options you want (e.g.,
probably "-fforce-recomp -O +RTS -s" at a minimum). By using "-package
ghc" you compile DynFlags against the version of ghc that you have
just built.
* This will result in some type errors, because DynFlags imports some
functions that expect arguments of type DynFlags. (This relates to the
import cycles that we broke earlier.) Since you are building against
the version of those functions from the ghc package, they expect the
type ghc:DynFlags.DynFlags, but they are now receiving a value of type
DynFlags from the main package. This is no big deal, just insert an
unsafeCoerce wherever necessary (mostly in front of occurrences of
"dflags") to get the compiler to stop complaining.

This is not 100% faithful to the way DynFlags would actually be
compiled during a GHC build, but the advantage of this method is that
you don't have to worry about GHC doing any recompilation checking
between the copy of DynFlags that you are testing on and the
compiler's own modules.
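A toy illustration of that unsafeCoerce step (the type and function names below are invented; in the real build the coercion bridges ghc:DynFlags.DynFlags and the locally compiled DynFlags):

```haskell
import Unsafe.Coerce (unsafeCoerce)

-- Two structurally identical stand-ins for the "same" type provided
-- by two different packages.
newtype PkgDynFlags   = PkgDynFlags   Int
newtype LocalDynFlags = LocalDynFlags Int

-- Stands in for a ghc-package function expecting its own DynFlags.
useFlags :: PkgDynFlags -> Int
useFlags (PkgDynFlags n) = n

main :: IO ()
main = print (useFlags (unsafeCoerce (LocalDynFlags 42)))
```

This is safe only because both types have the same runtime representation, which is exactly the situation with the two copies of DynFlags.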

Regards,
Reid Barton


On Sun, Apr 9, 2017 at 5:37 AM, Alfredo Di Napoli
 wrote:
> Hey Ben,
>
> as promised I’m back to you with something more articulated and hopefully
> meaningful. I do hear you perfectly — probably trying to dive head-first
> into this without at least a rough understanding of the performance hotspots
> or the GHC overall architecture is going to do me more harm than good (I get
> the overall picture and I’m aware of the different stages of the GHC
> compilation pipeline, but it’s far from saying I’m proficient with the
> architecture as a whole). I have also read, a couple of years ago, the GHC
> chapter in the “Architecture of Open Source Applications” book, but I don’t
> know how much that is still relevant. If it is, I guess I should refresh my
> memory.
>
> I’m currently trying to move on 2 fronts — please advise if I’m a fool
> flogging a dead horse or if I have any hope of getting anything done ;)
>
> 1. I’m indeed trying to treat the compiler as a black box (as you advised)
> trying to build a sufficiently large program where GHC is not “as fast as I
> would like” (I know that’s a very lame definition of “slow”, hehe). In
> particular, I have built the stage2 compiler with the “prof” flavour as you
> suggested, and I have chosen 2 examples as a reference “benchmark” for
> performance; DynFlags.hs (which seems to have been mentioned multiple times
> as a GHC perf killer) and the highlighting-kate package as posted here:
> https://ghc.haskell.org/trac/ghc/ticket/9221 . The idea would be to compile
> those with -v +RTS -p -hc -RTS enabled, look at the output from the .prof
> file AND the `-v` flag, find any hotspot, try to change something,
> recompile, observe diff, rinse and repeat. Do you think I have any hope of
> making progress this way? In particular, I think compiling DynFlags.hs is a
> bit of a dead-end; I whipped up this buggy script which escalated into a
> Behemoth which is compiling pretty much half of the compiler once again :D
>
> ```
> #!/usr/bin/env bash
>
> ../ghc/inplace/bin/ghc-stage2 --make -j8 -v +RTS -A256M -qb0 -p -h \
> -RTS -DSTAGE=2 -I../ghc/includes -I../ghc/compiler -I../ghc/compiler/stage2
> \
> -I../ghc/compiler/stage2/build \
> -i../ghc/compiler/utils:../ghc/compiler/types:../ghc/compiler/typecheck:../ghc/compiler/basicTypes
> \
> -i../ghc/compiler/main:../ghc/compiler/profiling:../ghc/compiler/coreSyn:../ghc/compiler/iface:../ghc/compiler/prelude
> \
> -i../ghc/compiler/stage2/build:../ghc/compiler/simplStg:../ghc/compiler/cmm:../ghc/compiler/parser:../ghc/compiler/hsSyn
> \
> -i../ghc/compiler/ghci:../ghc/compiler/deSugar:../ghc/compiler/simplCore:../ghc/compile/specialise
> \
> -fforce-recomp -c $@
> ```
>
> I’m running it with `./dynflags.sh ../ghc/compiler/main/DynFlags.hs` but
> it’s taking a lot to compile (20+ mins on my 2014 mac Pro) because it’s
> pulling in half of the compiler anyway :D I tried to reuse the .hi files
> from my stage2 compilation but I failed (GHC was complaining about interface
> file mismatch). Short story short, I don’t think it will be a very agile way
> to proceed. Am I right? Do you have any recommendation in such sense? Do I
> have any hope to compile DynFlags.hs in a way which would make this perf
> investigation

Re: PSA: perf.haskell.org/ghc temporarily out of order

2017-03-19 Thread Reid Barton
On Sun, Mar 19, 2017 at 2:16 PM, Joachim Breitner
 wrote:
> Hi,
>
> Am Sonntag, den 19.03.2017, 13:23 -0400 schrieb Reid Barton:
>> On Sat, Mar 18, 2017 at 1:05 PM, Joachim Breitner
>> >  wrote:
>> > Hi,
>> >
>> > correct. It seems that 'make boot' tries to compile all of nofib, even
>> > those that are not to be run. So this ought to be revised.
>>
>> This appears to not actually be the case though, from local testing.
>> "make boot" never enters spectral/secretary, and it succeeds even
>> though it uses the inplace compiler for dependency generation.
>>
>> Moreover even if `make -C nofib boot` was failing, there should be
>> `.broken` log files uploaded to
>> https://github.com/nomeata/ghc-speed-logs/, right? But even those are
>> missing. So something seems to be more seriously broken.
>
> indeed, last upload 5 days ago. Let me have a look…
>
>
> It is busy building d357f526582e3c4cd4fbda5d73695fc81121b69a which
> seems to hang in the test suite. Killed it, hopefully that fixes it.

Thanks, it has started building again, and has almost gotten to the
commit which will fix nofib.

> Even if it does, it will take a while to catch up.

Yep, that's why I was eager to get it working again. At least the
commits where nofib was broken build a bit faster :)

Regards,
Reid Barton

> Greetings,
> Joachim
>
> --
> Joachim “nomeata” Breitner
>   m...@joachim-breitner.de • https://www.joachim-breitner.de/
>   XMPP: nome...@joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F
>   Debian Developer: nome...@debian.org
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>


Re: PSA: perf.haskell.org/ghc temporarily out of order

2017-03-19 Thread Reid Barton
On Sat, Mar 18, 2017 at 1:05 PM, Joachim Breitner
 wrote:
> Hi,
>
> correct. It seems that 'make boot' tries to compile all of nofib, even
> those that are not to be run. So this ought to be revised.

This appears to not actually be the case though, from local testing.
"make boot" never enters spectral/secretary, and it succeeds even
though it uses the inplace compiler for dependency generation.

Moreover even if `make -C nofib boot` was failing, there should be
`.broken` log files uploaded to
https://github.com/nomeata/ghc-speed-logs/, right? But even those are
missing. So something seems to be more seriously broken.

Regards,
Reid Barton

> Greetings,
> Joachim
>
> Am Samstag, den 18.03.2017, 01:56 -0400 schrieb Reid Barton:
>> Don't know whether it is the same issue, but perf.haskell.org seems
>> to
>> still have not built anything for the past 3 days, according to
>> https://github.com/nomeata/ghc-speed-logs/commits/master.
>>
>> Regards,
>> Reid Barton
>>
>> On Wed, Mar 15, 2017 at 5:49 PM, Ben Gamari 
>> wrote:
>> > Joachim Breitner  writes:
>> >
>> > > Hi,
>> > >
>> > > a recent change to nofib
>> > > (https://phabricator.haskell.org/rNOFIB313812d319e009d698bc1a4d2e
>> > > 8ac26d4dfe3c0a)
>> > > broke the perf.haskell.org builder, so we won’t be getting perf
>> > > warnings until that is fixed.
>> > >
>> >
>> > I've pushed michalt's fix. Thanks for the quick turnaround,
>> > michalt!
>> >
>> > Cheers,
>> >
>> > - Ben
>> >
>> >
>> > ___
>> > ghc-devs mailing list
>> > ghc-devs@haskell.org
>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>> >
>>
>>
> --
> Joachim “nomeata” Breitner
>   m...@joachim-breitner.de • https://www.joachim-breitner.de/
>   XMPP: nome...@joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F
>   Debian Developer: nome...@debian.org
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>


Re: PSA: perf.haskell.org/ghc temporarily out of order

2017-03-17 Thread Reid Barton
Don't know whether it is the same issue, but perf.haskell.org seems to
still have not built anything for the past 3 days, according to
https://github.com/nomeata/ghc-speed-logs/commits/master.

Regards,
Reid Barton

On Wed, Mar 15, 2017 at 5:49 PM, Ben Gamari  wrote:
> Joachim Breitner  writes:
>
>> Hi,
>>
>> a recent change to nofib
>> (https://phabricator.haskell.org/rNOFIB313812d319e009d698bc1a4d2e8ac26d4dfe3c0a)
>> broke the perf.haskell.org builder, so we won’t be getting perf
>> warnings until that is fixed.
>>
> I've pushed michalt's fix. Thanks for the quick turnaround, michalt!
>
> Cheers,
>
> - Ben
>
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>


Re: [commit: ghc] master: Deserialize IfaceId more lazily (6446254)

2017-03-08 Thread Reid Barton
Done in commit fdb594ed3286088c1a46c95f29e277fcc60c0a01.

Regards,
Reid

On Wed, Mar 8, 2017 at 7:18 AM, Simon Peyton Jones
 wrote:
> Reid
>
> I beg you to add a comment to these carefully-placed uses of laziness!
> The informative commit message does not appear in the code :-).
>
> Simon
>
> |  -Original Message-
> |  From: ghc-commits [mailto:ghc-commits-boun...@haskell.org] On Behalf
> |  Of g...@git.haskell.org
> |  Sent: 03 March 2017 21:36
> |  To: ghc-comm...@haskell.org
> |  Subject: [commit: ghc] master: Deserialize IfaceId more lazily
> |  (6446254)
> |
> |  Repository : ssh://g...@git.haskell.org/ghc
> |
> |  On branch  : master
> |  Link   :
> |  https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fghc.ha
> |  skell.org%2Ftrac%2Fghc%2Fchangeset%2F644625449a9b6fbeb9a81f1a7d0e7d184
> |  24fb707%2Fghc&data=02%7C01%7Csimonpj%40microsoft.com%7C9b1a8ffea4684b8
> |  f5e7608d4627d690f%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C6362417
> |  38152433434&sdata=H81TTDPPgdp%2BYQzqRFUtiyyfm%2Fn6YRQT%2BoOpJuehsOU%3D
> |  &reserved=0
> |
> |  >-------
> |
> |  commit 644625449a9b6fbeb9a81f1a7d0e7d18424fb707
> |  Author: Reid Barton 
> |  Date:   Fri Mar 3 15:49:38 2017 -0500
> |
> |  Deserialize IfaceId more lazily
> |
> |  This change sped up the total validate --build-only time by 0.8%
> |  on my test system; hopefully a representative result.
> |
> |  I didn't bother making the other constructors lazy because for
> |  IfaceData and IfaceClass we need to pull on some of the fields
> |  in loadDecl, and all the others seem much more rare than IfaceId.
> |
> |  Test Plan: validate, perf
> |
> |  Reviewers: austin, bgamari
> |
> |  Reviewed By: bgamari
> |
> |  Subscribers: thomie
> |
> |  Differential Revision: https://phabricator.haskell.org/D3269
> |
> |
> |  >---
> |
> |  644625449a9b6fbeb9a81f1a7d0e7d18424fb707
> |   compiler/iface/IfaceSyn.hs | 8 ++--
> |   1 file changed, 2 insertions(+), 6 deletions(-)
> |
> |  diff --git a/compiler/iface/IfaceSyn.hs b/compiler/iface/IfaceSyn.hs
> |  index d73a738..1c30476 100644
> |  --- a/compiler/iface/IfaceSyn.hs
> |  +++ b/compiler/iface/IfaceSyn.hs
> |  @@ -1565,9 +1565,7 @@ instance Binary IfaceDecl where
> |   put_ bh (IfaceId name ty details idinfo) = do
> |   putByte bh 0
> |   putIfaceTopBndr bh name
> |  -put_ bh ty
> |  -put_ bh details
> |  -put_ bh idinfo
> |  +lazyPut bh (ty, details, idinfo)
> |
> |   put_ bh (IfaceData a1 a2 a3 a4 a5 a6 a7 a8 a9) = do
> |   putByte bh 2
> |  @@ -1657,9 +1655,7 @@ instance Binary IfaceDecl where
> |   h <- getByte bh
> |   case h of
> |   0 -> do name<- get bh
> |  -ty  <- get bh
> |  -details <- get bh
> |  -idinfo  <- get bh
> |  +~(ty, details, idinfo) <- lazyGet bh
> |   return (IfaceId name ty details idinfo)
> |   1 -> error "Binary.get(TyClDecl): ForeignType"
> |   2 -> do a1  <- getIfaceTopBndr bh
> |
> |  ___
> |  ghc-commits mailing list
> |  ghc-comm...@haskell.org
> |  https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.h
> |  askell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc-
> |  commits&data=02%7C01%7Csimonpj%40microsoft.com%7C9b1a8ffea4684b8f5e760
> |  8d4627d690f%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C6362417381524
> |  33434&sdata=L1dvXY%2BW%2Brv4gMqeWm8BGfIPifKK0DBndoJVF%2FCfu0c%3D&reser
> |  ved=0


ghc speed log build script

2017-02-10 Thread Reid Barton
Hi Joachim,

Is the script used to do the builds for perf.haskell.org/ghc/ available
somewhere? I couldn't find it poking around your GitHub repositories.

I assume from the line "Try to match validate settings" that the build
script creates a custom build.mk or similar. I'd like to know exactly what
it does so that I can experiment with custom build settings without
worrying about them being overridden by your build script.

Regards,
Reid Barton


More history in perf.haskell.org graphs

2017-02-01 Thread Reid Barton
Hi Joachim,

The graphs like https://perf.haskell.org/ghc/#graph/buildtime/make are
very helpful, but I haven't found a way to adjust the range of commits
shown from the default of the last 50 commits. For example, suppose I
wanted to see the history of build time over the past 12 months and
then narrow in on commits that caused regressions. Is there a way to
do that using the https://perf.haskell.org/ghc/ website?

Failing that, is the data that goes into these graphs available for
download in some form?

Regards,
Reid Barton


Re: Lazy ST vs concurrency

2017-01-30 Thread Reid Barton
I wrote a lazy ST microbenchmark (http://lpaste.net/351799) that uses
nothing but lazy ST monad operations in the inner loop. With various
caveats, it took around 3 times as long to run under +RTS -N2 after
applying https://phabricator.haskell.org/D3038. The biggest caveat is
that the cost of the `threadPaused` in `noDuplicate#` seems to be
potentially proportional to the thread's stack depth, and I'm not sure
how representative my microbenchmark is in that regard.

I'm actually surprised the `noDuplicate#` version isn't an order of
magnitude or so slower than that. Still, a 3x factor is a large price
to pay. I don't yet understand what's going on here clearly enough to
be sure that the `noDuplicate#` is necessary, or that we can't
implement `noDuplicate#` more cheaply in the common case of no
contention. My feeling is that if it turns out that we can implement
the correct behavior cheaply, then it will be better to have left it
broken for a little while than to first have a correct but slow
implementation and then later replaced it with a correct and fast
implementation. The latter is disruptive to two groups of people,
those who are affected by the bug and also those who cannot afford to
have their lazy ST code run 3 times slower; of which the former group
is affected already, and we can advertise the existence of the bug
until we have a workable solution. So I'm reluctant to go down this
`noDuplicate#` path until we have exhausted our other options.

In an ideal world with no users, it would be better to start with
correct but slow, of course.
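For readers unfamiliar with why lazy ST is delicate: evaluation drives execution, so demanding part of the result runs just enough of the state thread. A minimal sketch of that interleaving (standard library only, not the microbenchmark from the paste):

```haskell
import Control.Monad.ST.Lazy (runST)
import Data.STRef.Lazy (newSTRef, readSTRef, writeSTRef)

-- An infinite state thread; only as much of it runs as the consumer
-- of the result list demands.
nats :: [Integer]
nats = runST $ do
  r <- newSTRef 0
  let go = do
        n <- readSTRef r
        writeSTRef r (n + 1)
        rest <- go
        return (n : rest)
  go

main :: IO ()
main = print (take 5 nats)  -- forces only five increments
```

It is this demand-driven execution that two threads can race on, which is what `noDuplicate#` is meant to prevent.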

Regards,
Reid Barton

On Mon, Jan 30, 2017 at 11:18 AM, David Feuer  wrote:
> I forgot to CC ghc-devs the first time, so here's another copy.
>
> I was working on #11760 this weekend, which has to do with concurrency
> breaking lazy ST. I came up with what I thought was a pretty decent solution (
> https://phabricator.haskell.org/D3038 ). Simon Peyton Jones, however, is quite
> unhappy about the idea of sticking this weird unsafePerformIO-like code
> (noDup, which I originally implemented as (unsafePerformIO . evaluate), but
> which he finds ugly regardless of the details) into fmap and (>>=).  He's also
> concerned that the noDuplicate# applications will kill performance in the
> multi-threaded case, and suggests he would rather leave lazy ST broken, or
> even remove it altogether, than use a fix that will make it slow sometimes,
> particularly since there haven't been a lot of reports of problems in the
> wild.
>
> My view is that leaving it broken, even if it only causes trouble
> occasionally, is simply not an option. If users can't rely on it to always
> give correct answers, then it's effectively useless. And for the sake of
> backwards compatibility, I think it's a lot better to keep it around, even if
> it runs slowly multithreaded, than to remove it altogether.
>
> Note to Simon PJ: Yes, it's ugly to stick that noDup in there. But lazy ST has
> always been a bit of deep magic. You can't *really* carry a moment of time
> around in your pocket and make its history happen only if necessary. We can
> make it work in GHC because its execution model is entirely based around graph
> reduction, so evaluation is capable of driving execution. Whereas lazy IO is
> extremely tricky because it causes effects observable in the real world, lazy
> ST is only *moderately* tricky, causing effects that we have to make sure
> don't lead to weird interactions between threads. I don't think it's terribly
> surprising that it needs to do a few more weird things to work properly.
>
> David
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Arc doesn't work

2017-01-20 Thread Reid Barton
On Fri, Jan 20, 2017 at 12:50 PM, Simon Peyton Jones
 wrote:
> Yes that worked!  THanks
> https://phabricator.haskell.org/D2995
>
> Will you make that change?

I have done so, in commit 5ff812c14594f507c48121f16be4752eee6e3c88.

Regards,
Reid Barton

> S
>
> |  -Original Message-
> |  From: Reid Barton [mailto:rwbar...@gmail.com]
> |  Sent: 20 January 2017 17:23
> |  To: Simon Peyton Jones 
> |  Cc: ghc-devs@haskell.org
> |  Subject: Re: Arc doesn't work
> |
> |  From the python 3 reference:
> |
> |  New in version 3.3: The 'rb' prefix of raw bytes literals has been added as
> |  a synonym of 'br'.
> |
> |  Simon, can you try replacing that occurrence of rb by br and see whether
> |  that fixes it? Just the one on the line it complained about.
> |
> |  Regards,
> |  Reid Barton
> |
> |  On Fri, Jan 20, 2017 at 10:50 AM, Simon Peyton Jones via ghc-devs  wrote:
> |  > I can’t use arc.  At the end of ‘arc diff’ it says
> |  >
> |  > Exception
> |  >
> |  > Some linters failed:
> |  >
> |  > - CommandException: Command failed with error #1!
> |  >
> |  >   COMMAND
> |  >
> |  >   python3 .arc-linters/check-cpp.py 'compiler/basicTypes/Id.hs'
> |  >
> |  >
> |  >
> |  >   STDOUT
> |  >
> |  >   (empty)
> |  >
> |  >
> |  >
> |  >   STDERR
> |  >
> |  > File ".arc-linters/check-cpp.py", line 28
> |  >
> |  >   r = re.compile(rb'ASSERT\s+\(')
> |  >
> |  >^
> |  >
> |  >   SyntaxError: invalid syntax
> |  >
> |  >
> |  >
> |  > (Run with `--trace` for a full exception trace.)
> |  >
> |  >
> |  >
> |  > simonpj@cam-05-unx:~/code/HEAD-3$ python3 --version
> |  >
> |  > python3 --version
> |  >
> |  > Python 3.2.3
> |  >
> |  > Alas.
> |  >
> |  > Simon
> |  >
> |  >
> |  > ___
> |  > ghc-devs mailing list
> |  > ghc-devs@haskell.org
> |  > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.h
> |  > askell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc-
> |  
> devs&data=02%7C01%7Csimonpj%40microsoft.com%7Cff94b190c13e4417c34808d44158fa
> |  
> c2%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636205297797264731&sdata=xYh
> |  IGsBacpdYRuWbYB%2BYTc8Uh%2B0KfufpQbXM7gXfI4Q%3D&reserved=0
> |  >


Re: Arc doesn't work

2017-01-20 Thread Reid Barton
From the python 3 reference:

New in version 3.3: The 'rb' prefix of raw bytes literals has been
added as a synonym of 'br'.

Simon, can you try replacing that occurrence of rb by br and see
whether that fixes it? Just the one on the line it complained about.

Regards,
Reid Barton

On Fri, Jan 20, 2017 at 10:50 AM, Simon Peyton Jones via ghc-devs
 wrote:
> I can’t use arc.  At the end of ‘arc diff’ it says
>
> Exception
>
> Some linters failed:
>
> - CommandException: Command failed with error #1!
>
>   COMMAND
>
>   python3 .arc-linters/check-cpp.py 'compiler/basicTypes/Id.hs'
>
>
>
>   STDOUT
>
>   (empty)
>
>
>
>   STDERR
>
> File ".arc-linters/check-cpp.py", line 28
>
>   r = re.compile(rb'ASSERT\s+\(')
>
>^
>
>   SyntaxError: invalid syntax
>
>
>
> (Run with `--trace` for a full exception trace.)
>
>
>
> simonpj@cam-05-unx:~/code/HEAD-3$ python3 --version
>
> python3 --version
>
> Python 3.2.3
>
> Alas.
>
> Simon
>
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>


Re: Large tuple strategy

2017-01-05 Thread Reid Barton
OK, I filed https://ghc.haskell.org/trac/ghc/ticket/13072 for this.

Regards,
Reid Barton

On Thu, Jan 5, 2017 at 10:28 AM, Simon Peyton Jones
 wrote:
> |  It occurred to me that rather than moving just these instances to a new
> |  module, we could move the large tuples themselves to a new module
> |  Data.LargeTuple and put the instances there.
>
> Yes, that's what I intended to suggest.  Good plan.
>
> Simon
>
> |  -Original Message-
> |  From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of Reid
> |  Barton
> |  Sent: 05 January 2017 15:27
> |  To: ghc-devs@haskell.org
> |  Subject: Large tuple strategy
> |
> |  Hi all,
> |
> |  https://phabricator.haskell.org/D2899 proposes adding Generic instances for
> |  large tuples (up to size 62). Currently GHC only provides Generic instances
> |  for tuples of size up to 7. There's been some concern about the effect that
> |  all these instances will have on compilation time for anyone who uses
> |  Generics, even if they don't actually use the new instances.
> |
> |  There was a suggestion to move these new instances to a separate module, 
> |  but
> |  as these instances would then be orphans, I believe GHC would have to read
> |  the interface file for that module anyways once Generic comes into scope,
> |  which would defeat the purpose of the split.
> |
> |  It occurred to me that rather than moving just these instances to a new
> |  module, we could move the large tuples themselves to a new module
> |  Data.LargeTuple and put the instances there. The Prelude would reexport the
> |  large tuples, so there would be no user-visible change.
> |  According to my experiments, GHC should never have to read the
> |  Data.LargeTuple interface file unless a program actually mentions a large
> |  tuple type, which is presumably rare. We could then also extend the 
> existing
> |  instances for Eq, Show, etc., which are currently only provided through 15-
> |  tuples.
> |
> |  A nontrivial aspect of this change is that tuples are wired-in types, and
> |  they currently all live in the ghc-prim package. I'm actually not sure why
> |  they need to be wired-in rather than ordinary types with a funny-looking
> |  name. In any case I need to look into this further, but the difficulties
> |  here don't seem to be insurmountable.
> |
> |  Does this seem like a reasonable plan? Anything important I have missed?
> |
> |  Regards,
> |  Reid Barton
> |  ___
> |  ghc-devs mailing list
> |  ghc-devs@haskell.org
> |  
> https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.haskell
> |  .org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc-
> |  
> devs&data=02%7C01%7Csimonpj%40microsoft.com%7Cd3262b1204df407f65ce08d4357f4b
> |  
> d8%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636192268229980651&sdata=NV%
> |  2BaCgD7Xo5EIwuL5cZaTrCBHihNxAiZvT0VCNQl6Z8%3D&reserved=0


Large tuple strategy

2017-01-05 Thread Reid Barton
Hi all,

https://phabricator.haskell.org/D2899 proposes adding Generic
instances for large tuples (up to size 62). Currently GHC only
provides Generic instances for tuples of size up to 7. There's been
some concern about the effect that all these instances will have on
compilation time for anyone who uses Generics, even if they don't
actually use the new instances.

There was a suggestion to move these new instances to a separate
module, but as these instances would then be orphans, I believe GHC
would have to read the interface file for that module anyways once
Generic comes into scope, which would defeat the purpose of the split.

It occurred to me that rather than moving just these instances to a
new module, we could move the large tuples themselves to a new module
Data.LargeTuple and put the instances there. The Prelude would
reexport the large tuples, so there would be no user-visible change.
According to my experiments, GHC should never have to read the
Data.LargeTuple interface file unless a program actually mentions a
large tuple type, which is presumably rare. We could then also extend
the existing instances for Eq, Show, etc., which are currently only
provided through 15-tuples.

A nontrivial aspect of this change is that tuples are wired-in types,
and they currently all live in the ghc-prim package. I'm actually not
sure why they need to be wired-in rather than ordinary types with a
funny-looking name. In any case I need to look into this further, but
the difficulties here don't seem to be insurmountable.

Does this seem like a reasonable plan? Anything important I have missed?

Regards,
Reid Barton


Re: Compiling on OpenBSD-current

2016-12-01 Thread Reid Barton
https://phabricator.haskell.org/D2673 is responsible. It adds
CONF_LD_LINKER_OPTS_STAGE0 to $1_$2_$3_ALL_LD_OPTS, which is
documented as "Options for passing to plain ld", which is okay. But
just below that the same variable $1_$2_$3_ALL_LD_OPTS is added (with
-optl prefixes attached) to $1_$2_$3_GHC_LD_OPTS ("Options for passing
to GHC when we use it for linking"), which is wrong because GHC uses
gcc to do the link, not ld.

Regards,
Reid Barton


On Thu, Dec 1, 2016 at 6:58 AM, Karel Gardas  wrote:
>
> I've been hit by this during 8.0.2 rc1 binary preparation so if nobody else
> nor you find a time to fix that sooner I'll hopefully find some time during
> this weekend to have a look into it. I'm pretty sure this is fairly recent
> breakage on OpenBSD...
>
> Cheers,
> Karel
>
> On 12/ 1/16 12:21 PM, Adam Steen wrote:
>>
>> Hi
>>
>> When Compiling on OpenBSD-Current I get the follow error, what do i need
>> to do to fix this?
>>
>> Cheers
>> Adam
>>
>> ===--- building phase 0
>> gmake --no-print-directory -f ghc.mk phase=0
>> phase_0_builds
>> gmake[1]: Nothing to be done for 'phase_0_builds'.
>> ===--- building phase 1
>> gmake --no-print-directory -f ghc.mk phase=1
>> phase_1_builds
>>
>> "/usr/local/bin/ghc" -o utils/hsc2hs/dist/build/tmp/hsc2hs -hisuf hi
>> -osuf  o -hcsuf hc -static  -O0 -H64m -Wall   -package-db
>> libraries/bootstrapping.conf  -hide-all-packages -i -iutils/hsc2hs/.
>> -iutils/hsc2hs/dist/build -Iutils/hsc2hs/dist/build
>> -iutils/hsc2hs/dist/build/hsc2hs/autogen
>> -Iutils/hsc2hs/dist/build/hsc2hs/autogen -optP-include
>> -optPutils/hsc2hs/dist/build/hsc2hs/autogen/cabal_macros.h -package-id
>> base-4.9.0.0 -package-id containers-0.5.7.1 -package-id
>> directory-1.2.6.2 -package-id filepath-1.4.1.0 -package-id
>> process-1.4.2.0 -XHaskell2010  -no-user-package-db -rtsopts   -odir
>> utils/hsc2hs/dist/build -hidir utils/hsc2hs/dist/build -stubdir
>> utils/hsc2hs/dist/build-optl-z -optlwxneeded -static  -O0 -H64m
>> -Wall   -package-db libraries/bootstrapping.conf  -hide-all-packages -i
>> -iutils/hsc2hs/. -iutils/hsc2hs/dist/build -Iutils/hsc2hs/dist/build
>> -iutils/hsc2hs/dist/build/hsc2hs/autogen
>> -Iutils/hsc2hs/dist/build/hsc2hs/autogen -optP-include
>> -optPutils/hsc2hs/dist/build/hsc2hs/autogen/cabal_macros.h -package-id
>> base-4.9.0.0 -package-id containers-0.5.7.1 -package-id
>> directory-1.2.6.2 -package-id filepath-1.4.1.0 -package-id
>> process-1.4.2.0 -XHaskell2010  -no-user-package-db -rtsopts
>> utils/hsc2hs/dist/build/Main.o utils/hsc2hs/dist/build/C.o
>> utils/hsc2hs/dist/build/Common.o utils/hsc2hs/dist/build/CrossCodegen.o
>> utils/hsc2hs/dist/build/DirectCodegen.o utils/hsc2hs/dist/build/Flags.o
>> utils/hsc2hs/dist/build/HSCParser.o
>> utils/hsc2hs/dist/build/UtilsCodegen.o
>> utils/hsc2hs/dist/build/Paths_hsc2hs.o
>>
>> : error:
>>  Warning: Couldn't figure out linker information!
>>   Make sure you're using GNU ld, GNU gold or the built in OS
>> X linker, etc.
>> cc: wxneeded: No such file or directory
>> `cc' failed in phase `Linker'. (Exit code: 1)
>> compiler/ghc.mk:580:
>> compiler/stage1/build/.depend-v.haskell: No such file or directory
>> gmake[1]: *** [utils/hsc2hs/ghc.mk:15:
>> utils/hsc2hs/dist/build/tmp/hsc2hs] Error 1
>> gmake: *** [Makefile:125: all] Error 2
>>
>>
>>


Re: 177 unexpected test failures on a new system -- is this yet another linker issue?

2016-11-10 Thread Reid Barton
On Thu, Nov 10, 2016 at 11:12 PM, Ömer Sinan Ağacan
 wrote:
> I'm trying to validate on a new system (not sure if related, but it has gcc
> 6.2.1 and ld 2.27.0), and I'm having 177 unexpected failures, most (maybe
> even
> all) of them are similar to this one:
>
> => T5976(ext-interp) 1 of 1 [0, 0, 0]
> cd "./th/T5976.run" &&  "/home/omer/haskell/ghc/inplace/test
> spaces/ghc-stage2" -c T5976.hs -dcore-dno-debug-output -XTemplateHaskell
> -package template-haskell -fexternal-interpreter -v0
> Actual stderr output differs from expected:
> --- ./th/T5976.run/T5976.stderr.normalised  2016-11-10
> 23:01:39.351997560 -0500
> +++ ./th/T5976.run/T5976.comp.stderr.normalised 2016-11-10
> 23:01:39.351997560 -0500
> @@ -1,7 +1,4 @@
> -
> -T5976.hs:1:1:
> -Exception when trying to run compile-time code:
> -  bar
> -CallStack (from HasCallStack):
> -  error, called at T5976.hs:: in :Main
> -Code: error ((++) "foo " error "bar")
> +ghc-iserv.bin: internal loadArchive: GNU-variant filename offset not
> found while reading filename f

Did this line get truncated? It might help to have the rest of it.

Regards,
Reid Barton


Re: Allow top-level shadowing for imported names?

2016-10-04 Thread Reid Barton
On Tue, Oct 4, 2016 at 7:12 AM, Yuras Shumovich  wrote:
> On Tue, 2016-10-04 at 04:48 -0400, Edward Kmett wrote:
>
>> It makes additions of names to libraries far less brittle. You can
>> add a
>> new export with a mere minor version bump, and many of the situations
>> where
>> that causes breakage can be fixed by this simple rule change.
>
> It would be true only if we also allow imports to shadow each other.
> Otherwise there will still be a big chance of name clashes.

Could you give a concrete example of what you are worried about? It's
already legal to have a clash between imported names as long as you
don't refer to the colliding name. For example if one of my imports A
exports a name `foo` which I don't use, and then another import B
starts to export the same name `foo`, there won't be any error as long
as I continue to not use `foo`.

Regards,
Reid Barton


Re: Do we have free bits in the info pointer itself?

2016-09-07 Thread Reid Barton
Good point. Also

* the high bits are free now, but they may not be free forever (though
that's not a great reason to avoid using them in the meantime)

* masking off the high bits when entering is rather expensive, at
least in code size, compared to a simple "jmp *%rax" or especially
"jmp *symb", which appears at the end of almost every Haskell function

* modifying the low bits of a pointer will leave it pointing at the
same cache line, so the hardware prefetcher may decide to prefetch the
contents (hopefully at an appropriate time) while modifying the high
bits will make it not look like a pointer at all. But it's hard to
know how much we currently gain from hardware prefetching, if
anything.

Certainly these downsides are not necessarily deal-breakers, as
demonstrated by NaN-boxing as used in major JavaScript engines.

What did you intend to use the high bits for?

Regards,
Reid Barton

On Wed, Sep 7, 2016 at 11:16 PM, Edward Kmett  wrote:
> Mostly just that GHC still works on 32 bit platforms.
>
> -Edward
>
> On Wed, Sep 7, 2016 at 5:32 PM, Ryan Newton  wrote:
>>
>> Our heap object header is one word -- an info table pointer.
>>
>> Well, a 64 bit info table pointer leaves *at least* 16 high bits inside
>> the object header for other purposes, right?
>>
>> Is there any problem with using these other than having to mask the info
>> table pointer each time it is dereferenced?
>>
>> Thanks,
>>   -Ryan
>>
>>


Re: Linux (ELF) Support for "ghc -static -shared"

2016-06-08 Thread Reid Barton
On Sat, Jun 4, 2016 at 1:48 AM, Travis Whitaker 
wrote:

> Suppose I have some module Foo with foreign exports. On some platforms I
> can do something like:
>
> ghc -static -shared Foo.o ...
>
> The resulting shared library would have the base libraries and the RTS
> statically linked in. From what I understand this is possible on BSDs
> because generating PIC is the default there (for making PIEs I'd imagine),
> and possible on Windows because the dynamic loading process involves some
> technique that doesn't require PIC. On Linux (at least x86_64) this doesn't
> work by default since libHSbase, libHSrts et al. are not built with -fPIC
> unless one specifically asks for it when building GHC. As far as I know
> this is the only way to get -static -shared to work on this platform.
>

I believe that's all correct. Incidentally there was just a related post on
reddit yesterday:
https://www.reddit.com/r/haskell/comments/4my2cn/a_story_of_how_i_built_static_haskell_libraries/


> While the use cases for such stand-alone shared libraries might be small
> niches, I was curious whether or not there was any discussion about
> potential strategies for making it easier to build them for Linux. At the
> very least, perhaps a single switch for the configure script or build.mk
> to make it easier to build GHC+libs with -fPIC on Linux.
>

That's certainly a good idea. Mind filing a ticket?

> Another step up might be providing *_PIC.a objects for the base libraries,
> so that the non-PIC objects are still available for the majority of cases
> in which PIC is not required.
>

I think we don't do this mainly because it would inflate the size of the
binary distribution significantly for something that is, as you say, rather
a niche use case.

Regards,
Reid Barton


Re: Why upper bound version numbers?

2016-06-08 Thread Reid Barton
On Tue, Jun 7, 2016 at 9:31 AM, Ben Lippmeier  wrote:

>
> > On 7 Jun 2016, at 7:02 am, Dominick Samperi  wrote:
> >
> > Why would a package developer want to upper bound the version number
> > for packages like base? For example, the clash package requires
> >
> > base >= 4.2 && base <= 4.3
>
> I put an upper bound on all my libraries as a proxy for the GHC version.
> Each time a new GHC version is released sometimes my libraries work with it
> and sometimes not. I remember a “burning bridges” event in recent history,
> when the definition of the Monad class changed and broke a lot of things.
>
>  Suppose you maintain a library that is used by a lot of first year uni
> students (like gloss). Suppose the next GHC version comes around and your
> library hasn’t been updated yet because you’re waiting on some dependencies
> to get fixed before you can release your own. Do you want your students to
> get a “cannot install on this version” error, or some confusing build error
> which they don’t understand?
>

This is a popular but ultimately silly argument. First, cabal dependency
solver error messages are terrible; there's no way a new user would figure
out from a bunch of solver output about things like "base-4.7.0.2" and
"Dependency tree exhaustively searched" that the solution is to build with
an older version of GHC. A configuration error and a build error will both
send the same message: "something is broken". Second, this argument ignores
the much more likely case that the package would have just worked with the
new GHC, but the upper bound results in an unnecessary (and again,
terrible) error message and a bad user experience. The best case is that
the user somehow learns about --allow-newer=base, but cabal's error message
doesn't even suggest trying this and it's still an unnecessary hoop to jump
through.

Experienced users are also only harmed by these upper bounds, since it's
generally obvious when a program fails to build due to a change in base and
the normal reaction to a version error with base is just to retry with
--allow-newer=base anyways.

Of course the best thing is to stick to the part of the language that is
unlikely to be broken by future versions of base; sadly this seems to be
impossible in the current climate...

Regards,
Reid Barton


Re: Issue using StrictData

2016-03-19 Thread Reid Barton
Hi,

`extensions: StrictData` turns on the StrictData extension for all modules
in the program. So every field of every data type defined in every module
is made strict. Is that really what you wanted? For a large, complicated
program like Agda, It seems about as likely to work as just passing the
program into an ML compiler unmodified. Your errors are a typical example:
note they are runtime errors from a generated happy parser, which probably
does something like initializing a record with (error "Internal Happy
error") and then trying to update it with record update syntax.

I'm guessing you meant `other-extensions: StrictData`.

Regards,
Reid Barton

On Sat, Mar 19, 2016 at 10:16 PM, Andrés Sicard-Ramírez 
wrote:

> Hi,
>
> I know this isn't a convenient issue report because the "test case"
> isn't easily reproducible. Since I don't understand the issue, I don't
> know how to create a smaller test case, sorry.
>
> My OS is Ubuntu 12-04 (64 bits) and I'm using the following programs:
>
> Agda master branch on commit
>
>
> https://github.com/agda/agda/commit/181a954a40b137c8deb1df801a8ee55fdbc19116
>
> GHC ghc-8.0 branch on commit
>
>
> https://git.haskell.org/ghc.git/commit/a96933017470d03a1c9414c9c90dfd5c0f0903ed
>
>   $ cabal --version
>   cabal-install version 1.23.0.0
>   compiled using version 1.23.1.0 of the Cabal library
>
> (compiled with GHC 7.10.3)
>
>   $ alex --version
>   Alex version 3.1.7, (c) 2003 Chris Dornan and Simon Marlow
>
>   $ happy --version
>   Happy Version 1.19.5 Copyright (c) 1993-1996 Andy Gill, Simon Marlow
> (c) 1997-2005 Simon Marlow
>
> After adding `extensions: StrictData` to Agda.cabal, I'm getting the
> following errors:
>
>   $ cabal install
>   ...
>   Installing library in
>
> /home/asr/.cabal/lib/x86_64-linux-ghc-8.0.0.20160316/Agda-2.5.0-8eDdWzYvIFd4CKBTUUP6kU
>   Installing executable(s) in /home/asr/.cabal/bin
>   Generating Agda library interface files...
>   agda: Internal Happy error
>
>   CallStack (from HasCallStack):
> error, called at templates/GenericTemplate.hs:288:17 in
> Agda-2.5.0-8eDdWzYvIFd4CKBTUUP6kU:Agda.Syntax.Parser.Parser
>   WARNING: Failed to typecheck Agda.Primitive!
>   agda: Internal Happy error
>
>   CallStack (from HasCallStack):
> error, called at templates/GenericTemplate.hs:288:17 in
> Agda-2.5.0-8eDdWzYvIFd4CKBTUUP6kU:Agda.Syntax.Parser.Parser
>   WARNING: Failed to typecheck Agda.Builtin.Bool!
>   agda: Internal Happy error
>
>   CallStack (from HasCallStack):
> error, called at templates/GenericTemplate.hs:288:17 in
> Agda-2.5.0-8eDdWzYvIFd4CKBTUUP6kU:Agda.Syntax.Parser.Parser
>   WARNING: Failed to typecheck Agda.Builtin.Char!
>   agda: Internal Happy error
>
>   CallStack (from HasCallStack):
> error, called at templates/GenericTemplate.hs:288:17 in
> Agda-2.5.0-8eDdWzYvIFd4CKBTUUP6kU:Agda.Syntax.Parser.Parser
>   WARNING: Failed to typecheck Agda.Builtin.Coinduction!
>   agda: Internal Happy error
>
>   CallStack (from HasCallStack):
> error, called at templates/GenericTemplate.hs:288:17 in
> Agda-2.5.0-8eDdWzYvIFd4CKBTUUP6kU:Agda.Syntax.Parser.Parser
>   WARNING: Failed to typecheck Agda.Builtin.Equality!
>   agda: Internal Happy error
>
>   CallStack (from HasCallStack):
> error, called at templates/GenericTemplate.hs:288:17 in
> Agda-2.5.0-8eDdWzYvIFd4CKBTUUP6kU:Agda.Syntax.Parser.Parser
>   WARNING: Failed to typecheck Agda.Builtin.Float!
>   agda: Internal Happy error
>
>   CallStack (from HasCallStack):
> error, called at templates/GenericTemplate.hs:288:17 in
> Agda-2.5.0-8eDdWzYvIFd4CKBTUUP6kU:Agda.Syntax.Parser.Parser
>   WARNING: Failed to typecheck Agda.Builtin.FromNat!
>   agda: Internal Happy error
>
>   CallStack (from HasCallStack):
> error, called at templates/GenericTemplate.hs:288:17 in
> Agda-2.5.0-8eDdWzYvIFd4CKBTUUP6kU:Agda.Syntax.Parser.Parser
>   WARNING: Failed to typecheck Agda.Builtin.FromNeg!
>   agda: Internal Happy error
>
>   CallStack (from HasCallStack):
> error, called at templates/GenericTemplate.hs:288:17 in
> Agda-2.5.0-8eDdWzYvIFd4CKBTUUP6kU:Agda.Syntax.Parser.Parser
>   WARNING: Failed to typecheck Agda.Builtin.FromString!
>   agda: Internal Happy error
>
>   CallStack (from HasCallStack):
> error, called at templates/GenericTemplate.hs:288:17 in
> Agda-2.5.0-8eDdWzYvIFd4CKBTUUP6kU:Agda.Syntax.Parser.Parser
>   WARNING: Failed to typecheck Agda.Builtin.IO!
>   agda: Internal Happy error
>
>   CallStack (from HasCallStack):
> error, called at templates/GenericTemplate.hs:288:17 in
> Agda-2.5.0-8eDdWzYvIFd4CKBTUUP6kU:Agda.Syntax.Parser.Parser
>   WARNING: Failed

Re: Impredicative types in 8.0, again

2016-03-04 Thread Reid Barton
This looks very similar to https://ghc.haskell.org/trac/ghc/ticket/11319,
but might be worth including as a separate example there. Note that it does
compile if you swap the order of the case alternatives.

Regards,
Reid Barton


On Fri, Mar 4, 2016 at 8:43 AM, Kosyrev Serge <_deepf...@feelingofgreen.ru>
wrote:

> Good day!
>
> I realise that ImpredicativeTypes is a problematic extension, but I have
> found something that looks like an outright bug -- no polymorphism
> involved:
>
> ,
> | {-# LANGUAGE ImpredicativeTypes #-}
> |
> | module Foo where
> |
> | foo :: IO (Maybe Int)
> | foo = do
> |   pure $ case undefined :: Maybe String of
> | Nothing
> |   -> Nothing
> | Just _
> |   -> (undefined :: Maybe Int)
> `
>
> produces the following errors:
>
> ,
> | foo.hs:7:3: error:
> | • Couldn't match type ‘forall a. Maybe a’ with ‘Maybe Int’
> |   Expected type: IO (Maybe Int)
> | Actual type: IO (forall a. Maybe a)
> | • In a stmt of a 'do' block:
> | pure
> | $ case undefined :: Maybe String of {
> | Nothing -> Nothing
> | Just _ -> (undefined :: Maybe Int) }
> |   In the expression:
> | do { pure
> |  $ case undefined :: Maybe String of {
> |  Nothing -> Nothing
> |  Just _ -> (undefined :: Maybe Int) } }
> |   In an equation for ‘foo’:
> |   foo
> | = do { pure
> |$ case undefined :: Maybe String of {
> |Nothing -> Nothing
> |Just _ -> (undefined :: Maybe Int) } }
> |
> | foo.hs:11:19: error:
> | • Couldn't match type ‘a’ with ‘Int’
> |   ‘a’ is a rigid type variable bound by
> | a type expected by the context:
> |   forall a. Maybe a
> | at foo.hs:11:19
> |   Expected type: forall a. Maybe a
> | Actual type: Maybe Int
> | • In the expression: (undefined :: Maybe Int)
> |   In a case alternative: Just _ -> (undefined :: Maybe Int)
> |   In the second argument of ‘($)’, namely
> | ‘case undefined :: Maybe String of {
> |Nothing -> Nothing
> |Just _ -> (undefined :: Maybe Int) }’
> `
>
> --
> с уважениeм / respectfully,
> Косырев Сергей


Re: Missing definitions of associated types

2016-02-18 Thread Reid Barton

On Thu, Feb 18, 2016 at 12:00 PM, David Feuer  wrote:

> It seems to be that a missing associated type definition should be an
> error, by default, rather than a warning. The current behavior under those
> circumstances strikes me as very strange, particularly for data families
> and particularly in the presence of overlapping.
>

> This compiles with just a warning because Assoc Char *falls through* to
> the general case. WAT? This breaks all my intuition about what associated
> types are supposed to be about.
>
>
Well, I see your point; but you also can't give a definition for Assoc Char
in the Foo Char instance, because open data family instances are not
allowed to overlap. So if failing to give a definition for an associated
data family is an error, then it's impossible to use overlapping instances
with classes that have associated data families. Is that your intention?

I don't have a strong opinion here. I'm mildly inclined to say that people
using overlapping instances have already signed themselves up for weird
things happening, and we may as well let them do whatever other weird
things they want.

Regards,
Reid Barton


Re: GHC build time graphs

2016-02-12 Thread Reid Barton
On Tue, Feb 9, 2016 at 11:20 AM, Karel Gardas 
wrote:

> On 01/28/16 11:34 PM, Ben Gamari wrote:
>
>> Joachim Breitner  writes:
>>
>> Hi Oleg,
>>>
>>> Am Freitag, den 29.01.2016, 00:22 +0200 schrieb Oleg Grenrus:
>>>
>>>> Is the same compiler used to build HEAD and 7.10.1?
>>>>
>>>
>>> Good call. In fact, no: 7.10.1 is built with 7.6.3, while HEAD is built
>>> with 7.10.3.
>>>
>>> Anthony’s link, i.e.
>>>
>>> https://perf.haskell.org/ghc/#compare/ca00def1d7093d6b5b2a937ddfc8a01c152038eb/a496f82d5684f3025a60877600e82f0b29736e85
>>> has links to the build logs of either build; there I could find that
>>> information.
>>>
>>> That might be (part) of the problem. But if it is, it is even worse, as
>>> it would mean not only building the compiler got slower, but the
>>> compiler itself...
>>>
>> I can verify that the build itself is indeed slower. Validating the
>> current state of ghc-7.10 takes 19 minutes, whereas ghc-8.0 takes 25.5
>> minutes. This isn't entirely unexpected but the change is quite a bit
>> larger than I had thought. It would be nice to know which commits are
>> responsible.
>>
>
> btw, just recent experience on ARM64 (X-gene board):
>
> bootstrapping 7.10.1 with 7.6.x took: ~120 minutes
> bootstrapping 8.0.1 RC2 with 7.10.1 took: ~446 minutes
>
> both run as: ./configure; time make -j8
>

It would be interesting to have the time for bootstrapping 7.10.1 with
7.10.1 too, for comparison.

Regards,
Reid Barton


Re: Expected a type

2016-02-08 Thread Reid Barton
On Mon, Feb 8, 2016 at 2:36 PM, Wojtek Narczyński 
wrote:

> Dear Devs,
>
> I've tried to ask this in the ($) thread, but it was totally offtopic
> there and I was ignored just as I deserved :-)
>
> Consider the following example.
>
> wojtek@Desktop2016:~/src/he$ cat kinds.hs
> {-# LANGUAGE DataKinds #-}
> {-# LANGUAGE KindSignatures #-}
>
> data K = A | B
>
> f :: (A :: K) -> (B :: K)
> f _ = undefined
>
> wojtek@Desktop2016:~/src/he$ /opt/ghc/head/bin/ghc kinds.hs
> [1 of 1] Compiling Main ( kinds.hs, kinds.o )
>
> kinds.hs:6:6: error:
> • Expected a type, but ‘'A’ has kind ‘K’
> • In the type signature:
> f :: (A :: K) -> (B :: K)
>
> kinds.hs:6:18: error:
> • Expected a type, but ‘'B’ has kind ‘K’
> • In the type signature:
> f :: (A :: K) -> (B :: K)
>
> As Roman kindly (!) explained to me some time ago, GHC really means
> "Expected a type of kind '*' (or '#')..."
>
> Now that GHC is apparently undergoing a major overhaul of its internals,
> would it be possible to allow types of various kinds in functions? Would it
> make sense? May I file a ticket?


Normally the reason to define a function is so that you can apply it to
something. But there are no values of the promoted type A to apply f to,
aside from perhaps undefined. What would be the purpose of allowing this?

Regards,
Reid Barton


Re: Unexpected lack of change in ghcspeed results

2016-01-28 Thread Reid Barton
On Wed, Jan 27, 2016 at 5:35 PM, Reid Barton  wrote:

> Oh! I guess the name ghcspeed was too memorable... and browser bar
> autocompletion did the rest. Sorry for the noise!
>

I noticed that https://perf.haskell.org/ghc/ still says "GHC Speed" in the
page title and the page header, so now I don't feel quite so silly for
making this mistake. If http://ghcspeed-nomeata.rhcloud.com/ is not
supported any more, how about adding a notice to that page pointing people
to https://perf.haskell.org/ghc/?

Regards,
Reid Barton



> On Wed, Jan 27, 2016 at 5:23 PM, Joachim Breitner <
> m...@joachim-breitner.de> wrote:
>
>> Dear Reid,
>>
>> Am Mittwoch, den 27.01.2016, 14:50 -0500 schrieb Reid Barton:
>> > I was interested to see what effect the recent commit "Restore
>> > original alignment for info tables" (0dc7b36c) would have on
>> > performance. However, when I look at http://ghcspeed-nomeata.rhcloud.
>> > com/changes/?rev=0dc7b36c3c261b3eccf8460581fcd3d71f6e6ff6, I don't
>> > see the expected binary size increase (about 1%) that I got in local
>> > testing. Instead, the size increase appears to be attached to commit
>> > 0d92d9cb6d65fd00f9910c3f6f85bc6c68f5543b.
>> >
>> > I notice that these two commits, along with three others, were
>> > committed at exactly the same time (Wed Jan 27 11:32:15 2016 +0100),
>> > presumably in a rebase. Could this be confusing ghcspeed?
>> >
>>
>> heh, I’m surprised: Both that the ghcspeed server still runs, and that
>> people are still using it :-)
>>
>> Indeed, you observe correctly, ghcspeed does not handle git rebases
>> well. That was one of the reasons why I reimplemented the server from
>> scratch. It now runs under perf.haskell.org, and there the expected
>> changes are attributed to the right commit:
>>
>> https://perf.haskell.org/ghc/#revision/0dc7b36c3c261b3eccf8460581fcd3d71f6e6ff6
>>
>> Is ghcspeed still linked somewhere, or was it an old bookmark from you
>> that led you there?
>>
>> Greetings,
>> Joachim
>>
>> --
>> Joachim “nomeata” Breitner
>>   m...@joachim-breitner.de • http://www.joachim-breitner.de/
>>   Jabber: nome...@joachim-breitner.de  • GPG-Key: 0xF0FBF51F
>>   Debian Developer: nome...@debian.org
>>
>>


Re: Unexpected lack of change in ghcspeed results

2016-01-27 Thread Reid Barton
Oh! I guess the name ghcspeed was too memorable... and browser bar
autocompletion did the rest. Sorry for the noise!

Regards,
Reid Barton

On Wed, Jan 27, 2016 at 5:23 PM, Joachim Breitner 
wrote:

> Dear Reid,
>
> Am Mittwoch, den 27.01.2016, 14:50 -0500 schrieb Reid Barton:
> > I was interested to see what effect the recent commit "Restore
> > original alignment for info tables" (0dc7b36c) would have on
> > performance. However, when I look at http://ghcspeed-nomeata.rhcloud.
> > com/changes/?rev=0dc7b36c3c261b3eccf8460581fcd3d71f6e6ff6, I don't
> > see the expected binary size increase (about 1%) that I got in local
> > testing. Instead, the size increase appears to be attached to commit
> > 0d92d9cb6d65fd00f9910c3f6f85bc6c68f5543b.
> >
> > I notice that these two commits, along with three others, were
> > committed at exactly the same time (Wed Jan 27 11:32:15 2016 +0100),
> > presumably in a rebase. Could this be confusing ghcspeed?
> >
>
> heh, I’m surprised: Both that the ghcspeed server still runs, and that
> people are still using it :-)
>
> Indeed, you observe correctly, ghcspeed does not handle git rebases
> well. That was one of the reasons why I reimplemented the server from
> scratch. It now runs under perf.haskell.org, and there the expected
> changes are attributed to the right commit:
>
> https://perf.haskell.org/ghc/#revision/0dc7b36c3c261b3eccf8460581fcd3d71f6e6ff6
>
> Is ghcspeed still linked somewhere, or was it an old bookmark from you
> that led you there?
>
> Greetings,
> Joachim
>
> --
> Joachim “nomeata” Breitner
>   m...@joachim-breitner.de • http://www.joachim-breitner.de/
>   Jabber: nome...@joachim-breitner.de  • GPG-Key: 0xF0FBF51F
>   Debian Developer: nome...@debian.org
>
>


Unexpected lack of change in ghcspeed results

2016-01-27 Thread Reid Barton
Hi Joachim,

I was interested to see what effect the recent commit "Restore original
alignment for info tables" (0dc7b36c) would have on performance. However,
when I look at
http://ghcspeed-nomeata.rhcloud.com/changes/?rev=0dc7b36c3c261b3eccf8460581fcd3d71f6e6ff6,
I don't see the expected binary size increase (about 1%) that I got in
local testing. Instead, the size increase appears to be attached to commit
0d92d9cb6d65fd00f9910c3f6f85bc6c68f5543b.

I notice that these two commits, along with three others, were committed at
exactly the same time (Wed Jan 27 11:32:15 2016 +0100), presumably in a
rebase. Could this be confusing ghcspeed?

Regards,
Reid Barton


Re: Confused about specified type variables using -XTypeApplications

2016-01-07 Thread Reid Barton
On Thu, Jan 7, 2016 at 1:39 PM, Andres Loeh  wrote:

> I find it particularly confusing that GHCi prints the types of c and d
> in exactly the same way, yet treats explicit type application on them
> differently.
>

Try with :set -fprint-explicit-foralls. Maybe it should be the default when
TypeApplications is enabled?

Regards,
Reid Barton


Re: Kinds of type synonym arguments

2015-12-21 Thread Reid Barton
On Mon, Dec 21, 2015 at 5:13 AM, Simon Peyton Jones 
wrote:

> newtype T = MkT Int#
>
>
>
> Provided T :: # (i.e. unlifted), I don’t think this would be too hard.
> That is, you can give a new name (via newtype) to an unlifted type like
> Int#, Float#, Double# etc.
>
>
>
> Worth a wiki page and a ticket.
>

There is already a ticket at least,
https://ghc.haskell.org/trac/ghc/ticket/1311.

Regards,
Reid Barton


Re: question about coercions between primitive types in STG level

2015-12-07 Thread Reid Barton
Note that int2Float# converts an Int# to the Float# with the same numeric
value (e.g. 72 -> 72.0), not the one with the same bit representation
(which doesn't really make sense anyways since Int# and Float# may be
different sizes). So I think it's not what you want.

At least on x86_64, it's rather expensive to move a bit representation
between a general-purpose register and a floating-point (xmm) register. As
far as I know, the only way is to go through memory. This may have design
implications for your work. For example, if you have an unboxed sum of two
Double#s, it would certainly be better to store the data part in a
floating-point register than a general-purpose register. If you have a sum
that contains both integral and floating-point variants, it may be better
depending on the situation to store its data in integer registers,
floating-point registers, or a combination (using extra space). I doubt you
want to give the programmer that much control though... One option would
be, at least for a first version, treat Int# and Double# and Float# as
three incompatible kinds of memory/registers that cannot alias each other.

As for your assembly code, can you provide the Cmm code that compiles to
it? But in any case "movq 16(%xmm1),%rax" is certainly wrong, it should be
offsetting 16 bytes from a register like Sp or R1.

Regards,
Reid Barton

On Mon, Dec 7, 2015 at 11:21 AM, Ömer Sinan Ağacan 
wrote:

> Thanks Simon, primops worked fine, but now I'm getting assembler
> errors (even
> though -dcore-lint, -dstg-lint and -dcmm-lint are all passing).
>
> The error is caused by this STG expression:
>
> case (#,#) [ds_gX8 ds_gX9] of _ {
>   (#,#) tag_gWR ubx_gWS ->
>   case tag_gWR of tag_gWR {
> __DEFAULT -> GHC.Err.undefined;
> 1# ->
> let {
>   sat_sWD :: [GHC.Types.Char] =
>   \u srt:SRT:[roK :-> GHC.Show.$fShowInt] []
>   let { sat_sWC :: GHC.Types.Int = NO_CCS
> GHC.Types.I#! [ubx_gWS];
>   } in  GHC.Show.show GHC.Show.$fShowInt sat_sWC;
> } in
> let {
>   sat_sWB :: [GHC.Types.Char] =
>   \u srt:SRT:[0k :-> GHC.CString.unpackCString#] []
>   GHC.CString.unpackCString# "Left "#;
> } in  GHC.Base.++ sat_sWB sat_sWD;
> 2# ->
> let {
>   co_gWT :: GHC.Prim.Float# =
>   sat-only \s [] int2Float# [ubx_gWS]; } in
> let {
>   sat_sWH :: [GHC.Types.Char] =
>   \u srt:SRT:[rd2 :-> GHC.Float.$fShowFloat] []
>   let { sat_sWG :: GHC.Types.Float = NO_CCS
> GHC.Types.F#! [co_gWT];
>   } in  GHC.Show.show GHC.Float.$fShowFloat
> sat_sWG; } in
> let {
>   sat_sWF :: [GHC.Types.Char] =
>   \u srt:SRT:[0k :-> GHC.CString.unpackCString#] []
>   GHC.CString.unpackCString# "Right "#;
> } in  GHC.Base.++ sat_sWF sat_sWH;
>   };
> };
>
> In the first case(when the tag is 1#) I'm not doing any coercions, second
> argument of the tuple is directly used. In the second case(when the tag is
> 2#),
> I'm generating this let-binding:
>
> let {
>   co_gWT :: GHC.Prim.Float# =
>   sat-only \s [] int2Float# [ubx_gWS]; }
>
> And then in the RHS of case alternative I'm using co_gWT instead of
> ubx_gWS,
> but for some reason GHC is generating invalid assembly for this expression:
>
> /tmp/ghc2889_0/ghc_2.s: Assembler messages:
>
> /tmp/ghc2889_0/ghc_2.s:125:0: error:
>  Error: `16(%xmm1)' is not a valid base/index expression
> `gcc' failed in phase `Assembler'. (Exit code: 1)
>
> The assembly seems to be:
>
>  Asm code 
> .section .text
> .align 8
> .quad 4294967296
> .quad 18
> co_gWT_info:
> _cY7:
> _cY9:
> movq 16(%xmm1),%rax
> cvtsi2ssq %rax,%xmm0
> movss %xmm0,%xmm1
> jmp *(%rbp)
> .size co_gWT_info, .-co_gWT_info
>
> Do you have any ideas why this may be happening?
>
> 2015-12-07 7:23 GMT-05:00 Simon Peyton Jones :
> > If memory serves, there are primops for converting between unboxed
> values of different widths.
> >
> > Certainly converting between a float and a non-float will require an
> instruction on some architectures, since they use different register sets.
> >
> > Re (2) I have no idea.  You'll need to get more information... ppr

Re: How do I use CallStack?

2015-12-06 Thread Reid Barton
On Sun, Dec 6, 2015 at 11:56 PM, Richard Eisenberg 
wrote:

> That looks like exactly what I want. Thanks.
>
> There remain two mysteries:
> - I thought that CallStacks were a new feature that would come with GHC
> 8.0. Yet it seems the datatype is present in base-4.8.x. Even though the
> docs even say (wrongly, evidently) that it's in base since 4.9.
>

Somehow some CallStack-related things snuck in between 7.10.1 and 7.10.2.
Compare https://hackage.haskell.org/package/base-4.8.0.0/docs/GHC-Stack.html
and https://hackage.haskell.org/package/base-4.8.1.0/docs/GHC-Stack.html.

Regards,
Reid Barton


Re: stg_upd_frame_info still broken

2015-10-27 Thread Reid Barton
I got lucky and found an error in the first place I looked. I have no way
to test it, but I expect that https://phabricator.haskell.org/D1382 will
fix the build on Windows, or at least make it closer to correct :)

Regards,
Reid

On Tue, Oct 27, 2015 at 11:25 AM, Reid Barton  wrote:

> Unfortunately the DYNAMIC_GHC_PROGRAMS=NO build on Linux did produce a
> working ghci, so I guess that leaves reviewing the likely culprit patch(es)
> very carefully...
>
> Regards,
> Reid Barton
>
>
> On Tue, Oct 27, 2015 at 10:57 AM, Simon Peyton Jones <
> simo...@microsoft.com> wrote:
>
>>  I'm pretty sure this error is being produced by ghc's own runtime
>> linker, which has a built-in symbol table (essentially just a C array of
>> structs of { "foo", &foo }). This array is built from a bunch of macros
>> such as SymI_HasProto(stg_upd_frame_info), which used to be present in
>> rts/Linker.c but were moved to rts/RtsSymbols.c in commit abc214b77d. I
>> guess that commit or a related one was not correct. Windows is the only
>> (major?) platform on which the ghc executable is built statically by
>> default, and therefore uses ghc's own runtime linker.
>>
>>
>>
>> Ah. That sounds very plausible Thanks
>>
>>
>>
>> S
>>
>>
>>
>> *From:* Reid Barton [mailto:rwbar...@gmail.com]
>> *Sent:* 27 October 2015 14:57
>> *To:* Ben Gamari
>> *Cc:* Simon Peyton Jones; ghc-devs@haskell.org
>> *Subject:* Re: stg_upd_frame_info still broken
>>
>>
>>
>> On Tue, Oct 27, 2015 at 10:46 AM, Ben Gamari  wrote:
>>
>> Simon Peyton Jones  writes:
>> > I cloned an entirely new GHC repository.
>> > Then 'sh validate'.
>> > Same result as before: any attempt to run GHCi fails with an unresolved
>> symbol.
>> >
>> > bash$ c:/code/HEAD-1/inplace/bin/ghc-stage2 --interactive
>> >
>> > GHCi, version 7.11.20151026: http://www.haskell.org/ghc/
>> :? for help
>> >
>> > ghc-stage2.exe: unable to load package `ghc-prim-0.4.0.0'
>> >
>> > ghc-stage2.exe:
>> C:\code\HEAD-1\libraries\ghc-prim\dist-install\build\HSghc-prim-0.4.0.0.o:
>> unknown symbol `_stg_upd_frame_info'
>> >
>> > How could I actually find what the problem is? Trying random things
>> > and hoping the problem goes away clearly is not working.
>> >
>> I would first try to find the object file which is supposed to provide
>> this symbol and figure out whether the problem is one of the RTL
>> (which is what I would put my money on) or some part of the build
>> toolchain.
>>
>>
>>
>>  I'm pretty sure this error is being produced by ghc's own runtime
>> linker, which has a built-in symbol table (essentially just a C array of
>> structs of { "foo", &foo }). This array is built from a bunch of macros
>> such as SymI_HasProto(stg_upd_frame_info), which used to be present in
>> rts/Linker.c but were moved to rts/RtsSymbols.c in commit abc214b77d. I
>> guess that commit or a related one was not correct. Windows is the only
>> (major?) platform on which the ghc executable is built statically by
>> default, and therefore uses ghc's own runtime linker.
>>
>> I'll try building a Linux ghc with DYNAMIC_GHC_PROGRAMS=NO and if it exhibits the
>> same problem I should be able to provide a quick fix.
>>
>> Regards,
>>
>> Reid Barton
>>
>
>


Re: stg_upd_frame_info still broken

2015-10-27 Thread Reid Barton
Unfortunately the DYNAMIC_GHC_PROGRAMS=NO build on Linux did produce a
working ghci, so I guess that leaves reviewing the likely culprit patch(es)
very carefully...

Regards,
Reid Barton


On Tue, Oct 27, 2015 at 10:57 AM, Simon Peyton Jones 
wrote:

>  I'm pretty sure this error is being produced by ghc's own runtime linker,
> which has a built-in symbol table (essentially just a C array of structs of
> { "foo", &foo }). This array is built from a bunch of macros such as
> SymI_HasProto(stg_upd_frame_info), which used to be present in rts/Linker.c
> but were moved to rts/RtsSymbols.c in commit abc214b77d. I guess that
> commit or a related one was not correct. Windows is the only (major?)
> platform on which the ghc executable is built statically by default, and
> therefore uses ghc's own runtime linker.
>
>
>
> Ah. That sounds very plausible Thanks
>
>
>
> S
>
>
>
> *From:* Reid Barton [mailto:rwbar...@gmail.com]
> *Sent:* 27 October 2015 14:57
> *To:* Ben Gamari
> *Cc:* Simon Peyton Jones; ghc-devs@haskell.org
> *Subject:* Re: stg_upd_frame_info still broken
>
>
>
> On Tue, Oct 27, 2015 at 10:46 AM, Ben Gamari  wrote:
>
> Simon Peyton Jones  writes:
> > I cloned an entirely new GHC repository.
> > Then 'sh validate'.
> > Same result as before: any attempt to run GHCi fails with an unresolved
> symbol.
> >
> > bash$ c:/code/HEAD-1/inplace/bin/ghc-stage2 --interactive
> >
> > GHCi, version 7.11.20151026: http://www.haskell.org/ghc/
> :? for help
> >
> > ghc-stage2.exe: unable to load package `ghc-prim-0.4.0.0'
> >
> > ghc-stage2.exe:
> C:\code\HEAD-1\libraries\ghc-prim\dist-install\build\HSghc-prim-0.4.0.0.o:
> unknown symbol `_stg_upd_frame_info'
> >
> > How could I actually find what the problem is? Trying random things
> > and hoping the problem goes away clearly is not working.
> >
> I would first try to find the object file which is supposed to provide
> this symbol and figure out whether the problem is one of the RTL
> (which is what I would put my money on) or some part of the build
> toolchain.
>
>
>
>  I'm pretty sure this error is being produced by ghc's own runtime linker,
> which has a built-in symbol table (essentially just a C array of structs of
> { "foo", &foo }). This array is built from a bunch of macros such as
> SymI_HasProto(stg_upd_frame_info), which used to be present in rts/Linker.c
> but were moved to rts/RtsSymbols.c in commit abc214b77d. I guess that
> commit or a related one was not correct. Windows is the only (major?)
> platform on which the ghc executable is built statically by default, and
> therefore uses ghc's own runtime linker.
>
> I'll try building a Linux ghc with DYNAMIC_GHC_PROGRAMS=NO and if it exhibits the
> same problem I should be able to provide a quick fix.
>
> Regards,
>
> Reid Barton
>


Re: stg_upd_frame_info still broken

2015-10-27 Thread Reid Barton
On Tue, Oct 27, 2015 at 10:46 AM, Ben Gamari  wrote:

> Simon Peyton Jones  writes:
> > I cloned an entirely new GHC repository.
> > Then 'sh validate'.
> > Same result as before: any attempt to run GHCi fails with an unresolved
> symbol.
> >
> > bash$ c:/code/HEAD-1/inplace/bin/ghc-stage2 --interactive
> >
> > GHCi, version 7.11.20151026: http://www.haskell.org/ghc/  :? for help
> >
> > ghc-stage2.exe: unable to load package `ghc-prim-0.4.0.0'
> >
> > ghc-stage2.exe:
> C:\code\HEAD-1\libraries\ghc-prim\dist-install\build\HSghc-prim-0.4.0.0.o:
> unknown symbol `_stg_upd_frame_info'
> >
> > How could I actually find what the problem is? Trying random things
> > and hoping the problem goes away clearly is not working.
> >
> I would first try to find the object file which is supposed to provide
> this symbol and figure out whether the problem is one of the RTL
> (which is what I would put my money on) or some part of the build
> toolchain.
>

 I'm pretty sure this error is being produced by ghc's own runtime linker,
which has a built-in symbol table (essentially just a C array of structs of
{ "foo", &foo }). This array is built from a bunch of macros such as
SymI_HasProto(stg_upd_frame_info), which used to be present in rts/Linker.c
but were moved to rts/RtsSymbols.c in commit abc214b77d. I guess that
commit or a related one was not correct. Windows is the only (major?)
platform on which the ghc executable is built statically by default, and
therefore uses ghc's own runtime linker.

I'll try building a Linux ghc with DYNAMIC_GHC_PROGRAMS=NO and if it exhibits the
same problem I should be able to provide a quick fix.

Regards,
Reid Barton


Re: MIN_VERSION macros

2015-09-25 Thread Reid Barton
On Fri, Sep 25, 2015 at 4:09 PM, Edward Z. Yang  wrote:

> Excerpts from Reid Barton's message of 2015-09-25 12:36:48 -0700:
> > GHC could provide MIN_VERSION_* macros for packages that have had their
> > versions specified with -package or similar flags (which is how Cabal
> > invokes GHC). That would go only a small way towards the original goals
> > though.
>
> This is exactly what the MIN_VERSION_* macros should do, and you can
> generalize it to work even without -package: you get macros for EXPOSED
> packages which are available for import.  This says *nothing* about
> the transitive dependencies of the packages you're depending on, but
> it's more reasonable to have "one package, one version" invariant,
> because having multiple versions of the package exposed would cause
> a module name to be ambiguous (and unusable.)


Oh, I see. I had always assumed that GHC had some kind of solver to try to
pick compatible versions of packages, but having done some experiments, I
see that it always picks the newest exposed version of each direct
dependency. So we can indeed define MIN_VERSION_* macros in accordance with
the newest exposed version of each package.

There are still some edge cases, notably: if package foo reexports the
contents of some modules from package bar, and the API of these modules
changes between two versions of package bar, then you cannot reliably use
MIN_VERSION_bar to detect these API changes in a module that imports the
reexports from package foo (since the newest installed foo might not be
built against the newest installed bar). In the more restrictive Cabal
model, you can reliably do this of course. So it could break in an existing
project. However this kind of situation (where the API of a package depends
on the version of its dependencies) should hopefully be fairly rare in
practice.

Regards,
Reid Barton


Re: MIN_VERSION macros

2015-09-25 Thread Reid Barton
On Fri, Sep 25, 2015 at 12:18 PM, Eric Seidel  wrote:

> I've been meaning to ask about this as well. It also forces tools like
> ghc-mod and hdevtools to be cabal-aware, which is an unnecessary source
> of complexity IMO.
>

This would certainly be nice, but...

GHC certainly has enough information to generate these macros, as it
> knows which packages (and versions) it's compiling against.
>

It knows at some point, but it doesn't necessarily know before parsing the
module, at which point it is too late. I can have two versions of a package
A, and two other packages B and C that depend on different versions of A,
and depending on whether a module M uses package B or package C, M will see
different versions of package A automatically. This is all slightly
magical, and I have to say I don't entirely understand how GHC decides
which versions to expose in general, but that's how GHC works today and
it's quite convenient.

GHC could provide MIN_VERSION_* macros for packages that have had their
versions specified with -package or similar flags (which is how Cabal
invokes GHC). That would go only a small way towards the original goals
though.

(Also, I wonder how MIN_VERSION_* fits into a Backpack world...)

Regards,
Reid Barton


Re: Cannot have GHC in ARMv6 architecture

2015-09-09 Thread Reid Barton
On Wed, Sep 9, 2015 at 11:46 AM, Karel Gardas 
wrote:

> On 09/ 9/15 04:21 PM, jmcf...@openmailbox.org wrote:
>
>> So ghc-stage1 is working. Good! Now just to find why your base is broken,
>>> please rebuild ghc completely and this time does not use any -j 5 option.
>>> It'll use just one core, but will stop on the first error. Let's see how
>>> far
>>> you get.
>>>
>> Ah. Alright, it took a while longer.
>>
>>   $ ./configure --target=arm-linux-gnueabihf
>> --with-gcc=arm-linux-gnueabihf-gcc-sysroot --enable-unregisterised && make
>> (...)
>> "inplace/bin/ghc-stage1" -hisuf hi -osuf  o -hcsuf hc -static  -H64m -O0
>>   -this-package-key ghcpr_8TmvWUcS1U1IKHT0levwg3 -hide-all-packages -i
>> -ilibraries/ghc-prim/. -ilibraries/ghc-prim/dist-install/build
>> -ilibraries/ghc-prim/dist-install/build/autogen
>> -Ilibraries/ghc-prim/dist-install/build
>> -Ilibraries/ghc-prim/dist-install/build/autogen -Ilibraries/ghc-prim/.
>> -optP-include
>> -optPlibraries/ghc-prim/dist-install/build/autogen/cabal_macros.h
>> -package-key rts -this-package-key ghc-prim -XHaskell2010 -O -fllvm
>> -no-user-package-db -rtsopts  -odir
>> libraries/ghc-prim/dist-install/build -hidir
>> libraries/ghc-prim/dist-install/build -stubdir
>> libraries/ghc-prim/dist-install/build   -c
>> libraries/ghc-prim/./GHC/CString.hs -o
>> libraries/ghc-prim/dist-install/build/GHC/CString.o
>> You are using a new version of LLVM that hasn't been tested yet!
>> We will try though...
>>
>
> ^ OK you can see this.
>
> opt: /tmp/ghc23881_0/ghc_1.ll:7:6: error: unexpected type in metadata
>> definition
>> !0 = metadata !{metadata !"top", i8* null}
>>   ^
>> libraries/ghc-prim/ghc.mk:4: recipe for target
>> 'libraries/ghc-prim/dist-install/build/GHC/CString.o' failed
>> make[1]: *** [libraries/ghc-prim/dist-install/build/GHC/CString.o] Error 1
>> Makefile:71: recipe for target 'all' failed
>> make: *** [all] Error 2
>>
>> This is weird, I think I'm not even using LLVM.
>>
>
> This is not weird at all! GHC does not provide ARM NCG and so it is using
> LLVM if you compile ARM registerised build.
>

But "./configure [...] --enable-unregisterised" should mean using the C
backend, not LLVM, right? So this still looks strange. Also there is an
explicit "-fllvm" on the failing ghc-stage1 command line.

What is in your build.mk? Maybe you are using one of the build flavors that
sets -fllvm explicitly?

That said you can also try installing the supported version of LLVM for ghc
7.10, which is LLVM 3.5.
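A quick way to see which LLVM the `-fllvm` backend would pick up (a sketch; GHC runs whatever `opt` and `llc` it finds unless told otherwise with -pgmlo/-pgmlc):

```shell
# GHC 7.10's supported LLVM is 3.5; anything newer triggers the
# "new version of LLVM that hasn't been tested yet" warning seen above.
if command -v opt >/dev/null 2>&1; then
    opt --version | head -n 2
else
    echo "opt (LLVM) not found on PATH"
fi
```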

Regards,
Reid Barton


Re: Shared data type for extension flags

2015-09-03 Thread Reid Barton
On Thu, Sep 3, 2015 at 12:41 PM, Herbert Valerio Riedel 
wrote:

> On 2015-09-02 at 10:00:40 +0200, Matthew Pickering wrote:
> > Surely the easiest way here (including for other tooling - ie
> > haskell-src-exts) is to create a package which just provides this
> > enumeration. GHC, cabal, th, haskell-src-exts and so on then all
> > depend on this package rather than creating their own enumeration.
>
> I'm not sure this is such a good idea having a package many packages
> depend on if `ghc` is one of them, as this forces every install-plan
> which ends up involving the ghc package to be pinned to the very same
> version the `ghc` package was compiled against.
>
> This is a general problem affecting packages `ghc` depends upon (and as
> a side-note starting with GHC 7.10, we were finally able to cut the
> package-dependency between `ghc` and `Cabal`)
>

Surely this argument does not apply to a package created to hold data types
that would otherwise live in the template-haskell or ghc packages.

Regards,
Reid Barton


Re: arc patch

2015-08-05 Thread Reid Barton
It's actually "arc upgrade". I added a mention of this command to the wiki:
https://ghc.haskell.org/trac/ghc/wiki/Phabricator#HelpImgettingastrangeerrorwhenrunningarcthatIdidntgetyesterday

Regards,
Reid Barton

On Wed, Aug 5, 2015 at 12:09 PM, Thomas Miedema 
wrote:

> Try running 'arc update' anytime you get such kind of error.
>
> Austin upgrades GHC's Phabricator instance every now and then. Sometimes
> this requires also an update to `arc` for things to work again.
>
> On Wed, Aug 5, 2015 at 6:02 PM, Simon Peyton Jones 
> wrote:
>
>> Friends
>>
>> I wanted to build a Phab ticket, so I tried
>>
>> arc patch D1069
>>
>> but it failed, as below. What do I do now?
>>
>> Thanks
>>
>> Simon
>>
>> simonpj@cam-05-unx:~/code/HEAD-5$ arc patch D1069
>>
>> You have untracked files in this working copy.
>>
>>
>>
>>   Working copy: /home/simonpj/code/HEAD-5/
>>
>>
>>
>>   Untracked files in working copy:
>>
>> Foo
>>
>> compiler/basicTypes/T7287.stderr
>>
>> foo
>>
>> libraries/integer-gmp2/GNUmakefile
>>
>> libraries/integer-gmp2/ghc.mk
>>
>> spj-patch
>>
>> testsuite/tests/deriving/should_fail/T2604.hs
>>
>> testsuite/tests/deriving/should_fail/T2604.stderr
>>
>> testsuite/tests/deriving/should_fail/T5863a.hs
>>
>> testsuite/tests/deriving/should_fail/T5863a.stderr
>>
>> testsuite/tests/deriving/should_fail/T7800.hs
>>
>> testsuite/tests/deriving/should_fail/T7800.stderr
>>
>> testsuite/tests/typecheck/should_compile/T.hs
>>
>> typeable-msg
>>
>>
>>
>> Since you don't have '.gitignore' rules for these files and have not
>> listed
>>
>> them in '.git/info/exclude', you may have forgotten to 'git add' them to
>> your
>>
>> commit.
>>
>>
>>
>>
>>
>> Do you want to add these files to the commit? [y/N] N
>>
>> N
>>
>>
>>
>> Created and checked out branch arcpatch-D1069.
>>
>> Exception
>>
>> ERR-CONDUIT-CALL: API Method "differential.query" does not define these
>> parameters: 'arcanistProjects'.
>>
>> (Run with --trace for a full exception trace.)
>>


Re: Typechecker / OverloadedStrings question 7.8 vs. 7.10

2015-08-03 Thread Reid Barton
On Mon, Aug 3, 2015 at 12:43 AM, Phil Ruffwind  wrote:

> I think the error message could be made clearer simply by emphasizing the
> fact
> that type ambiguity over the lack of instances.
>
> Ambiguous type variable 't0' arising from a use of
>   elem :: a -> t0 a -> Bool
> caused by the lack of an instance 'Data.String.IsString (t0 Char)'
> Either add a type annotation to dictate what 't0' should be
> based on one of the potential instances:
>   instance Foldable (Either a) -- Defined in ‘Data.Foldable’
>   instance Foldable Data.Proxy.Proxy -- Defined in ‘Data.Foldable’
>   instance GHC.Arr.Ix i => Foldable (GHC.Arr.Array i)
> -- Defined in ‘Data.Foldable’
>   ...plus three others
> or define the required instance 'Data.String.IsString (t0 Char)'.
>

I like this style of error message since it points to the most likely fix
first.

If there are no "potential instances" (instances for specializations of the
type we need an instance for) in scope, then we can produce the old
"No instance for C t0" error, which suggests that the user write (or import)
such an instance. If there is at least one "potential instance" in scope,
then (assuming that the user wants to keep their existing instances,
and not use overlapping instances) they in fact must specify the type
variable somehow.

The only case that may still cause confusion is when there is exactly one
"potential instance" in scope. Then the user is likely to wonder why the
type is ambiguous. It might help to phrase the error message text in a
way that implies that the list of instances it displays is not necessarily
exhaustive.

Regards,
Reid Barton


Re: Typechecker / OverloadedStrings question 7.8 vs. 7.10

2015-08-02 Thread Reid Barton
On Sun, Aug 2, 2015 at 12:58 PM, Daniel Bergey  wrote:

> On 2015-07-31 at 08:59, Simon Peyton Jones  wrote:
> > Daniel Bergey wrote:
> > |  How hard would it be to give a different error message instead of "No
> > |  instance ..." when the type variable is ambiguous?  I always find this
> > |  error slightly misleading, since it seems to me that there are
> > |  multiple valid instances, not that there is "no instance".
> >
> > What would you like it to say?  I think it likely we could make it say
> that!
>
> Great!  I'd like it to say "Multiple instances for ..."  or "No unique
> instance for ...".  I have a slight preference for the former.
>

It may be worth noting that the existing error message is actually
technically
correct, in the sense that what would be needed for the program to compile
is exactly an instance of the form "instance Foldable t where ...". Then the
compiler would know that the ambiguity in the type variable t0 doesn't
matter.
It doesn't make any difference whether there are zero, one, or multiple
instances
of Foldable for more specific types. (Except in that if there is at least
one
such instance, then there can't also be an "instance Foldable t" assuming
that OverlappingInstances is not enabled.) Once you understand this, the
error
message makes perfect sense. But it is often confusing to beginners.

"Multiple instances for (C t)" seems bad because there might not be any
instances for C at all. "No unique instance for (C t)" is better most of
the time,
but it doesn't exactly get to the core of the issue, since there could be
just one
instance of C, for a specific type, and then it is no better than "No
instance for
(C t)". If I were to explain the situation, I would say "there is no single
instance
(C t) that applies for every type t", but it seems a bit wordy for a
compiler error...

Regards,
Reid Barton


Re: build system

2015-07-21 Thread Reid Barton
On Tue, Jul 21, 2015 at 4:21 PM, Simon Peyton Jones 
wrote:

>  Friends
>
> With the new build system I get this kind of output
>
>   HC [stage 1] compiler/stage2/build/SPARC/AddrMode.o
>
>   HC [stage 1] compiler/stage2/build/CmmContFlowOpt.o
>
>   HC [stage 1] compiler/stage2/build/CmmImplementSwitchPlans.o
>
>   AR
> libraries/Cabal/Cabal/dist-install/build/libHSCabal-1.23.0.0-752LrSyTT7YLYxOzpNXfM5.a
>
> C:\fp\HP-2014-2.0.0.0\mingw\bin\ar.exe: creating
> libraries/Cabal/Cabal/dist-install/build/libHSCabal-1.23.0.0-752LrSyTT7YLYxOzpNXfM5.a
>
>   LD
> libraries/Cabal/Cabal/dist-install/build/HSCabal-1.23.0.0-752LrSyTT7YLYxOzpNXfM5.o
>
>   HC [stage 1] utils/ghc-cabal/dist-install/build/Main.o
>
> *WARNING*: file compiler\specialise\Specialise.hs, line 724
>
I assume this is when you run validate?

>  But I have no idea which module caused the WARNING, nor do I have a
> command-line to copy/paste to reproduce it.  (With the old module-at-a-time
> system I could copy/paste the command line for the specific module.)
>
> Is there a way to
>
> ·make things sequential so I can tell which warnings from which
> module
>
make -j1, which is make's default, but validate invokes make with -j2 or
higher (depending on how many CPUs it thinks your system has).

> ·get a command line to copy/paste to compile that module?
>
validate sets the GHC build system variable V=0 in mk/are-validating.mk.
You can override it from the make command line with make V=1.

So, you can run "make V=1" to restart the build serially and with the
command to build each file displayed. Note that serial make may build
modules in a different order than parallel make, so it may take a while for
make to get around to building the module that failed.
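The V switch can be seen in miniature with a toy Makefile (a hedged sketch of the pattern the GHC build system uses, not its actual rules): V=0 hides the command behind a terse "  HC <target>" line, V=1 lets make echo the full command.

```shell
# Reproduce the build system's V=0 "quiet command" trick: the HC variable
# either prefixes the recipe with a suppressed terse echo, or is empty so
# make prints the whole command line.
printf '.PHONY: all\nifeq "$(V)" "0"\nHC = @echo "  HC $@";\nelse\nHC =\nendif\nall:\n\t$(HC) echo ghc -c Foo.hs\n' > /tmp/v-demo.mk
make -f /tmp/v-demo.mk V=0   # terse:  "  HC all" then the command's output
make -f /tmp/v-demo.mk V=1   # verbose: make echoes "echo ghc -c Foo.hs"
```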

Regards,
Reid Barton


Re: Problem about `DerivedConstants.h` and `tmp.c` in `includes/dist-derivedconstants/header`, GHC 7.8.3

2015-06-16 Thread Reid Barton
On Sat, Jun 13, 2015 at 9:44 AM, Zhen Zhang  wrote:

> Hi everyone,
>
> I am having trouble porting some 6.8.2 GHC code to 7.8.3 GHC. The main
> trouble I met is that, in 6.8.2, there is a includes/mkDerivedConstants.c and
> some constants about RTS are declared here.
>
> While in 7.8.3, there is only a
> similar  `includes/dist-derivedconstants/header` directory containing a
> bunch of code. Some seems generated like `DerivedConstants.h`, and it seems
> like `tmp.c` generated this.
>
> However, when I added some entries in `tmp.c` and compiled it, then it
> became the original version ... So I doubted that if there is another file
> which is equivalent to the includes/mkDerivedConstants.c in 6.8.2?
>

Hi Zhen,

Take a look at utils/deriveConstants/DeriveConstants.hs. That program
generates tmp.c, then compiles it with the C compiler and inspects the
sizes of symbols in the resulting object file and writes the information it
gathered to DerivedConstants.h. We do it this way now to support
cross-compilation (in that case, the C compiler generates object files for
the target platform so we can't simply run them on the system that is
building GHC).

Regards,
Reid Barton


Re: ghc-7.10 branch regression

2015-04-15 Thread Reid Barton
On Wed, Apr 15, 2015 at 2:04 AM, Erik de Castro Lopo 
wrote:

>
> At this point we need to decide whether:
>
>a) Require llvm-3.6 for 7.10.2 and later.
>

Surely we're not going to do this.


> or
>
>b) Revert commit 07da52ce2d in the ghc-7.10 branch and continue to use
>   llvm-3.5 for the 7.10 series.
>

Why do we need to revert anything, can't we just make a one-character fix
of 3.6 to 3.5 on the ghc-7.10 branch?

Regards,
Reid Barton


Re: Use of GHC in Open Embedded Environments

2015-04-06 Thread Reid Barton
Hi Dave,

Have you seen https://ghc.haskell.org/trac/ghc/wiki/Building/CrossCompiling?

In GHC's build system the build, host and target are relative to the stage1
compiler only, which will be a cross-compiler when host /= target.

Since the GHC ABI and interface file format can change arbitrarily between
different versions of GHC, programs built by the stage1 compiler must be
linked against libraries that were also built by the stage1 compiler (since
by definition the stage1 compiler is the first build of the new GHC
version). In order to build those libraries, the build system needs to be
able to run the stage1 compiler, which runs on host. So, either build must
equal host (which is what the GHC build system expects), or the build
system would have to somehow communicate with a second system of platform
host to build the libraries there.

However, if you also build the stage2 compiler, since it was built by the
stage1 compiler, which targets target, the stage2 compiler will be a native
compiler that runs on and targets target, and it will be capable of dynamic
code loading (ghci and Template Haskell). This is the most common thing to
want when building a compiler that runs on a platform other than the build
platform, though other configurations are (at least theoretically) possible.

We use haskell in two different ways in this system. We personally want
> to use OpenXT as a research platform and we have special purpose mini
> domains for doing things like measurement and attestation and we have
> components of these written in Haskell to do formal verification of the
> domains. What OpenXT is using Haskell for directly is as part of its
> management engine for the platform. We have a haskell and ocaml(just for
> glue) based versions of metadata storage and platform management APIs.
> Now when we build the platform we want to remove the dependency on the
> host platform GHC version. We try to do this by building what would
> essentially be a stage 1 compiler which will then be used to build the
> runtime and tools used in the final platform VMs. The issue is that the
> GHC build does not recognize the use case of host and build machines
> being different. They expect host and build to be the same and target to
> be different. Because of this even if we specify each component
> individually on the configure line for the base GHC build when the build
> gets to the point of building the libraries it seems to have this
> information completely vanish. I think this is because it is using cabal
> to build the libraries and cabal isn't taking into account that GHC is
> built for a second platform and we want to build those libraries for
> that same platform.
>

I have to say I don't follow what you are trying to do here. If your
question isn't already answered by now, could you be more specific, e.g. "a
stage-n compiler that is built on X, runs on Y and targets Z"? Even if X, Y
and Z are just opaque strings it would be helpful.

Since you mention removing the dependency on the host GHC version, maybe
you want to do an extra bootstrap stage, where instead of building (maybe
cross-compiling) the eventually desired version V with bootstrap compiler
B, you first build a native V compiler, then use that to bootstrap the
cross-compile. However, in theory the stage2 compiler should not depend at
all on the choice of bootstrap compiler.

Regards,
Reid Barton


Re: What `reify` sees in Template Haskell

2015-02-13 Thread Reid Barton
On Fri, Feb 13, 2015 at 12:43 PM, Francesco Mazzoli  wrote:

> Hi Simon,
>
> On 13 February 2015 at 18:12, Simon Peyton Jones 
> wrote:
> > I don’t think it would be difficult to recover the old behaviour, but
> it's not clear to me that it would be a Good Thing.  A program that works
> today, and then does not work tomorrow because of some incidental change to
> the way type inference works, would not be a happy state of affairs.
>
> In much the same way, programs that rely on the memory representation
> that GHC uses for objects can be written with the provided unsafe
> functions.  In my view, if you make this danger clear, having those
> functions is much better than not having them.  And it seems like the
> Haskell environment generally agree with this view.
>

Right, but the users of such features understand that their programs may
break under future versions of GHC, and don't expect to have any particular
recourse if this happens.

And this is essentially what happened here. It doesn't make sense to ask
about the type of a variable in a TH splice when the result of that splice
might affect what type the variable has! Admittedly it was not documented
that the behavior of reify was undefined in this case, but I imagine that's
because nobody had considered this scenario (if they had, we'd have had the
7.8 design from the start).

I don't like it more than anyone else when GHC breaks user programs, but
when those programs were dependent on undefined behavior, I think it's
incumbent on the user to find a way to rewrite their program so as to not
depend on undefined behavior. This might include requesting a new GHC
feature with well-defined semantics. Adding the old undefined behavior
should be a last resort, and then in the future the undefined behavior
might stop giving you the answer you want anyways.


> In any case, if I think of some reasonable but more permissive
> restriction, I'll write it up.
>

Have you tried using Typed TH splices? Those interact differently with the
type checker, because the type of the expression resulting from a splice is
determined by the type of the splice action, and cannot depend upon its
value. So, it seems to me that it would be fine to allow reify to ask about
the type of a local variable from within a typed splice, and it may work
that way already (I haven't tried it).

Regards,
Reid Barton


Re: 7.10.1RC2 compile problem? ghc: internal error: PAP object entered!

2015-01-31 Thread Reid Barton
On Sat, Jan 31, 2015 at 11:09 AM, Herbert Valerio Riedel  wrote:

> On 2015-01-31 at 14:34:35 +0100, George Colpitts wrote:
> > Maybe this is something I shouldn't be doing, but I thought it was worth
> > mentioning in case I have found a compiler bug.
> > Should I file a bug for this?
> >
> > cabal install *--allow-newer=base*  accelerate
> > ...
> > [10 of 10] Compiling Data.Label.Base  ( src/Data/Label/Base.hs,
> > dist/build/Data/Label/Base.o )
> > ghc: internal error: PAP object entered!
> > (GHC version 7.10.0.20150123 for x86_64_apple_darwin)
> > Please report this as a GHC bug:
> http://www.haskell.org/ghc/reportabug
>
> I suspect this has to do with `--allow-newer=base` allowing
> template-haskell-2.9.0.0 to be re-installed (which then becomes a
> build-dependency of `fclabels`). GHC 7.10, however, comes with
> template-haskell-2.10.0.0
>

Ah yes, you're exactly right. I didn't encounter this when I tried to
reproduce the issue because I have a bunch of lines like "constraint:
template-haskell installed" in my .cabal/config file.
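For anyone who wants the same safety net, the lines in question look roughly
like this (illustrative, not a complete list; they go in ~/.cabal/config and
pin each boot package to the version that shipped with the compiler):

```
constraint: template-haskell installed
constraint: base installed
constraint: ghc-prim installed
```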

Regards,
Reid Barton


Re: 7.10.1RC2 compile problem? ghc: internal error: PAP object entered!

2015-01-31 Thread Reid Barton
On Sat, Jan 31, 2015 at 10:47 AM, Reid Barton  wrote:

> On Sat, Jan 31, 2015 at 10:16 AM, Brandon Allbery 
> wrote:
>
>>
>> On Sat, Jan 31, 2015 at 8:34 AM, George Colpitts <
>> george.colpi...@gmail.com> wrote:
>>
>>> cabal install *--allow-newer=base*  accelerate
>>>
>>
>> Never safe, because base contains the runtime and the runtime and the
>> compiler are very tightly tied together. Crashes are not surprising.
>>
>
> Actually it should always be safe: --allow-newer=base is essentially the
> equivalent of removing the upper bound on base from the .cabal file (of
> every package that was installed during that run).
>
> However, I'm quite confused about something, namely that as far as I can
> tell, neither accelerate nor any of its dependencies contain a module
> Data.Label.Base. What package was GHC trying to build when it crashed?
>

Oops, I was running the wrong command: it's in fclabels. Please file a bug
report and attach the output of `cabal install --ghc-options=-v fclabels`,
thanks!

Regards,
Reid Barton


Re: 7.10.1RC2 compile problem? ghc: internal error: PAP object entered!

2015-01-31 Thread Reid Barton
On Sat, Jan 31, 2015 at 10:16 AM, Brandon Allbery 
wrote:

>
> On Sat, Jan 31, 2015 at 8:34 AM, George Colpitts <
> george.colpi...@gmail.com> wrote:
>
>> cabal install *--allow-newer=base*  accelerate
>>
>
> Never safe, because base contains the runtime and the runtime and the
> compiler are very tightly tied together. Crashes are not surprising.
>

Actually it should always be safe: --allow-newer=base is essentially the
equivalent of removing the upper bound on base from the .cabal file (of
every package that was installed during that run).
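In other words (with illustrative bounds, not taken from any real .cabal
file), a dependency written as:

```
build-depends: base >= 4.6 && < 4.8
```

is treated under --allow-newer=base as if it read
`build-depends: base >= 4.6`. It only relaxes upper bounds that package
authors wrote down; it doesn't let cabal substitute a different base package.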

However, I'm quite confused about something, namely that as far as I can
tell, neither accelerate nor any of its dependencies contain a module
Data.Label.Base. What package was GHC trying to build when it crashed?

Regards,
Reid Barton


Re: Failing tests: literals T5681 annotations

2014-12-26 Thread Reid Barton
On Tue, Dec 2, 2014 at 5:58 PM, Joachim Breitner 
wrote:

> Hi,
>
>
> Am Sonntag, den 30.11.2014, 20:01 +0100 schrieb Joachim Breitner:
> > I’m still seeing this failure:
> >
> > Compile failed (status 256) errors were:
> > /tmp/ghc16123_0/ghc16123_5.s: Assembler messages:
> >
> > /tmp/ghc16123_0/ghc16123_5.s:26:0:
> >  Error: can't resolve `.rodata' {.rodata section} -
> `Main_zdwwork_info$def' {.text section}
> >
> > /tmp/ghc16123_0/ghc16123_5.s:46:0:
> >  Error: can't resolve `.rodata' {.rodata section} -
> `Main_work_info$def' {.text section}
> >
> > /tmp/ghc16123_0/ghc16123_5.s:66:0:
> >  Error: can't resolve `.rodata' {.rodata section} -
> `Main_main1_info$def' {.text section}
> >
> > /tmp/ghc16123_0/ghc16123_5.s:86:0:
> >  Error: can't resolve `.rodata' {.rodata section} -
> `Main_main_info$def' {.text section}
> >
> > /tmp/ghc16123_0/ghc16123_5.s:106:0:
> >  Error: can't resolve `.rodata' {.rodata section} -
> `Main_main2_info$def' {.text section}
> >
> > /tmp/ghc16123_0/ghc16123_5.s:126:0:
> >  Error: can't resolve `.rodata' {.rodata section} -
> `ZCMain_main_info$def' {.text section}
> >
> > *** unexpected failure for T5681(optllvm)
> >
> >
> > https://s3.amazonaws.com/archive.travis-ci.org/jobs/42557559/log.txt
> >
> > Any ideas?
>
> is it possible that this is due the llvm version used? Do we support 3.4
> in GHC HEAD?
>
>Using LLVM tools
>   llc   : /usr/local/clang-3.4/bin/llc
>   opt   : /usr/local/clang-3.4/bin/opt
>

This appears to affect all programs built with llvm-3.4. I filed a ticket (
http://ghc.haskell.org/trac/ghc/ticket/9929).

Regards,
Reid Barton


Re: Build time regressions

2014-10-01 Thread Reid Barton
On Tue, Sep 30, 2014 at 7:44 PM, John Lato  wrote:

> Hi Edward,
>
> This is possibly unrelated, but the setup seems almost identical to a very
> similar problem we had in some code, i.e. very long compile times (6+
> minutes for 1 module) and excessive memory usage when compiling generic
> serialization instances for some data structures.
>
> In our case, I also thought that INLINE functions were the cause of the
> problem, but it turns out they were not.  We had a nested data structure,
> e.g.
>
> > data Foo = Foo { fooBar :: !Bar, ... }
>
> with Bar very large (~150 records).
>
> even when we explicitly NOINLINE'd the function that serialized Bar, GHC
> still created a very large helper function of the form:
>
> > serialize_foo :: Int# -> Int#  -> ...
>
> where the arguments were the unboxed fields of the Bar structure, along
> with the other fields within Foo.
>

This sounds very much like the bug Richard fixed in
https://ghc.haskell.org/trac/ghc/ticket/9233. (See "g/F.hs" from my
"minimized.tar.gz".) If so then I think it is actually caused simply by
creating the worker function, and doesn't have to do with unpacking, only
the strictness of the Bar field.
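A small sketch of the shape being described (hypothetical type and field
counts; Bar stands in for a record with ~150 strict fields). With -O, GHC's
worker/wrapper transformation on the strict Bar field can produce a worker
taking all the unboxed fields as separate arguments:

```haskell
-- Hypothetical reduction of the Foo/Bar example from the thread.
data Bar = Bar !Int !Int !Int
data Foo = Foo !Bar !Int

serializeFoo :: Foo -> [Int]
serializeFoo (Foo (Bar a b c) d) = [a, b, c, d]
-- With optimization, GHC may generate a worker roughly of the form
--   $wserializeFoo :: Int# -> Int# -> Int# -> Int# -> [Int]
-- so the worker's arity grows with the number of strict, unboxable fields,
-- regardless of any INLINE/NOINLINE pragmas on serializeFoo itself.

main :: IO ()
main = print (serializeFoo (Foo (Bar 1 2 3) 4))
```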

Regards,
Reid Barton


Re: HEADS UP: Running cabal install with the latest GHC

2014-09-27 Thread Reid Barton
On Fri, Aug 8, 2014 at 8:00 AM, Edward Z. Yang  wrote:

> Hey all,
>
> SPJ pointed out to me today that if you try to run:
>
> cabal install --with-ghc=/path/to/inplace/bin/ghc-stage2
>
> with the latest GHC HEAD, this probably will not actually work, because
> your system installed version of Cabal is probably too old to deal with
> the new package key stuff in HEAD.  So, how do you get a version
> of cabal-install (and Cabal) which is new enough to do what you need
> it to?
>
> The trick is to compile Cabal using your /old/ GHC. Step-by-step, this
> involves cd'ing into libraries/Cabal/Cabal and running `cabal install`
> (or install it in a sandbox, if you like) and then cd'ing to
> libraries/Cabal/cabal-install and cabal install'ing that.
>

Hi all,

The new cabal-install I built last month following the instructions above
started failing with recent GHC HEAD with messages like

 ghc: ghc no longer supports single-file style package databases
(dist/package.conf.inplace) use 'ghc-pkg init' to create the database with
the correct format.

I found that repeating these steps with the latest libraries/Cabal
submodule gave me a cabal-install that, so far, appears to be working with
GHC HEAD. So if your cabal-install has stopped working with HEAD, try
building the latest version as outlined in Edward's email.

Cabal wizards, any gotchas with current Cabal & GHC HEAD I should be aware
of?

Regards,
Reid Barton


Re: Proposal: run GHC API tests on fast

2014-08-23 Thread Reid Barton
I have seen this too, just running "make THREADS=8". Looks like it's because
the other tests in this directory are cleaning too aggressively. From the
Makefile:

...
clean:
	rm -f *.o *.hi

T6145: clean
	'$(TEST_HC)' $(TEST_HC_OPTS) --make -v0 -package ghc T6145
	./T6145 "`'$(TEST_HC)' $(TEST_HC_OPTS) --print-libdir | tr -d '\r'`"
...

so ghcApi.o is getting removed before the final link step, I would guess.
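One way to avoid the interference (a sketch against the excerpt above, not a
tested patch) is to stop sharing a global `clean` prerequisite between tests
and instead scope each test's cleanup to its own files:

```
T6145:
	rm -f T6145.o T6145.hi
	'$(TEST_HC)' $(TEST_HC_OPTS) --make -v0 -package ghc T6145
	./T6145 "`'$(TEST_HC)' $(TEST_HC_OPTS) --print-libdir | tr -d '\r'`"
```

That way a parallel run of another test can no longer delete ghcApi.o between
its compile and link steps.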

Regards,
Reid Barton


Re: HEADS-UP: diagrams does not compile with HEAD (regressions)

2014-06-10 Thread Reid Barton
I expect this is a result of https://ghc.haskell.org/trac/ghc/ticket/8883,
but even so it's not clear to me whether the error is correct. It would be
nice if GHC printed the type it inferred for succ' in this kind of
situation.

Regards,
Reid Barton


On Tue, Jun 10, 2014 at 7:55 AM, Gabor Greif  wrote:

> Devs,
>
> as of recently GHC HEAD stopped building the diagrams library. Several
> prerequisite libs also fail to compile. A specific error message
> appears in an attoparsec issue
> <https://github.com/bos/attoparsec/issues/67>, for which I have
> submitted a workaround. But as Herbert cautiously points out, this
> could be a recent GHC bug surfacing.
>
> The symptoms of the bug seem to be that GHC complains about the usage
> of (~) type equality operation, but there is no reference to that type
> operator in the source code. To work around the problem it suffices to
> add -XGADTs or -XTypeFamilies.
>
> Does this ring any bells? Can automatized tests catch such things in the
> future?
>
> Cheers,
>
> Gabor


Re: segfault in RTS - can anyone help me tracking this bug down?

2014-05-28 Thread Reid Barton
There are a couple of recent GC-related bug fixes (#9045 and #9001). Before
trying to track this down any further I suggest you try using the tip of
the ghc-7.8 branch with commit fc0ed8a730 cherry-picked on top.
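Concretely, that workflow looks something like this (commit hash as given
above; assumes a fresh checkout):

```
git clone --recursive https://git.haskell.org/ghc.git
cd ghc
git checkout ghc-7.8
git cherry-pick fc0ed8a730
```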

Regards,
Reid Barton


On Wed, May 28, 2014 at 6:04 AM, Ömer Sinan Ağacan wrote:

> Hi all,
>
> I'm suffering from an RTS bug (probably GC-related) that makes making
> progress in my GSoC project impossible. I have very limited knowledge
> of GHC internals and I currently have no idea how to produce a minimal
> program that demonstrates the bug. I wrote how to reproduce it and gdb
> backtrace when segfault happens in a short blog post:
> http://osa1.net/posts/2014-05-27-worst-bug.html . As also written in
> the blog post, changing generation count of generational GC will makes
> the bug disappear in some cases, but it's not a solution.
>
> I also pasted backtrace output below for those who don't want to click
> links.
>
> GHC version used is 7.8.2.
>
> If anyone give me some pointers to understand what's going wrong or
> how can I produce a simple program that demonstrates the bug, I'd like
> to work on that. I'm basically stuck and I can't make any progress
> with this bug.
>
> Thanks,
> Ömer
>
> [  5 of 202] Compiling GHC.Unicode[boot] ( GHC/Unicode.hs-boot,
> dist/build/GHC/Unicode.js_p_o-boot )
> Detaching after fork from child process 3382.
> [  6 of 202] Compiling GHC.IO[boot] ( GHC/IO.hs-boot,
> dist/build/GHC/IO.js_p_o-boot )
> Detaching after fork from child process 3383.
> [  7 of 202] Compiling GHC.Exception[boot] ( GHC/Exception.lhs-boot,
> dist/build/GHC/Exception.js_p_o-boot )
> Detaching after fork from child process 3384.
> [ 51 of 202] Compiling GHC.Fingerprint[boot] (
> GHC/Fingerprint.hs-boot, dist/build/GHC/Fingerprint.js_p_o-boot )
> Detaching after fork from child process 3385.
> [ 55 of 202] Compiling GHC.IO.Exception[boot] (
> GHC/IO/Exception.hs-boot, dist/build/GHC/IO/Exception.js_p_o-boot )
> Detaching after fork from child process 3386.
> [ 75 of 202] Compiling Foreign.C.Types  ( Foreign/C/Types.hs,
> dist/build/Foreign/C/Types.js_p_o )
>
> Program received signal SIGSEGV, Segmentation fault.
> 0x0425d5c4 in LOOKS_LIKE_CLOSURE_PTR (p=0x0) at
> includes/rts/storage/ClosureMacros.h:258
> 258 includes/rts/storage/ClosureMacros.h: No such file or directory.
> (gdb) bt
> #0  0x0425d5c4 in LOOKS_LIKE_CLOSURE_PTR (p=0x0) at
> includes/rts/storage/ClosureMacros.h:258
> #1  0x0425f776 in scavenge_mutable_list1 (bd=0x7fffe5c02a00,
> gen=0x4d1fd48) at rts/sm/Scav.c:1400
> #2  0x0425fa13 in scavenge_capability_mut_Lists1
> (cap=0x4cfe5c0 ) at rts/sm/Scav.c:1493
> #3  0x04256b66 in GarbageCollect (collect_gen=0,
> do_heap_census=rtsFalse, gc_type=2,
> cap=0x4cfe5c0 ) at rts/sm/GC.c:342
> #4  0x042454a3 in scheduleDoGC (pcap=0x7fffc198,
> task=0x4d32b60, force_major=rtsFalse)
> at rts/Schedule.c:1650
> #5  0x04243de4 in schedule (initialCapability=0x4cfe5c0
> , task=0x4d32b60)
> at rts/Schedule.c:553
> #6  0x04246436 in scheduleWaitThread (tso=0x76708d60,
> ret=0x0, pcap=0x7fffc2c0) at rts/Schedule.c:2346
> #7  0x0423e9b4 in rts_evalLazyIO (cap=0x7fffc2c0,
> p=0x477f850, ret=0x0) at rts/RtsAPI.c:500
> #8  0x04241666 in real_main () at rts/RtsMain.c:63
> #9  0x04241759 in hs_main (argc=237, argv=0x7fffc448,
> main_closure=0x477f850, rts_config=...)
> at rts/RtsMain.c:114
> #10 0x00408ea7 in main ()


Re: Releasing containers 0.5.3.2 -- before GHC 7.8?

2014-01-14 Thread Reid Barton
On Tue, Jan 14, 2014 at 2:19 PM, Ryan Newton  wrote:

> On Tue, Jan 14, 2014 at 12:01 PM, Roman Cheplyaka wrote:
>
>> * Ryan Newton  [2014-01-14 11:41:48-0500]
>> > Replacing containers seems like a real pain for end users
>>
>> Is it a real pain? Why?
>>
>
> One thing I ran into is that cabal sandboxes want consistent dependencies.
>  And when users get to this point where they need to grab our latest
> containers, they've got a bunch of core/haskell platform packages that
> depend on the old containers.
>
> I didn't mean that there was anything difficult about containers itself,
> just that almost everything else depends on it.
>

In addition to the general pain of updating packages at the base of the
dependency hierarchy, there is also the fact that the template-haskell
package depends on containers. As far as I know upgrading template-haskell
is impossible, or at least a Very Bad Idea, so any library that wants to
use an updated version of containers can't use template-haskell, or even be
linked into an application that uses template-haskell directly or through
another library.

As far as I am concerned as a GHC user, versions of containers that aren't
the one that came with my GHC might as well not exist. For example if I see
that a package has a constraint "containers >= 0.10", I just assume I
cannot use the library with GHC 7.4. Thus I'm strongly in favor of
synchronizing containers releases with releases of GHC.

Regards,
Reid Barton


Re: Windows build failures in FD.hs

2013-10-01 Thread Reid Barton
Looks like this is caused by the addition of "default-language:
Haskell2010" (which is not the default default, apparently!) in base.cabal
in commit dfb52c3d58 from Saturday.

See
http://www.haskell.org/ghc/docs/latest/html/users_guide/bugs-and-infelicities.html#haskell-standards-divergence,
section 14.1.1.2.
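
The "Empty 'do' block" errors match GHC's nondecreasing-indentation
relaxation of the layout rule: each nested do in FD.hs starts at the same
column as its parent, which GHC accepts by default but strict Haskell2010
rejects. A hypothetical reduction (not the real FD.hs code):

```haskell
{-# LANGUAGE NondecreasingIndentation #-}
-- Without this extension (e.g. under plain Haskell2010), the inner 'do'
-- blocks below are parsed as empty, producing errors like FD.hs's
-- "Empty 'do' block" and out-of-scope variables.
withThing :: Int -> (Int -> IO a) -> IO a
withThing n k = k n

main :: IO ()
main = do
  withThing 0 $ \devptr -> do
  withThing 1 $ \inoptr -> do
  print (devptr + inoptr)
```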

Regards,
Reid Barton


On Tue, Oct 1, 2013 at 6:08 PM, Simon Peyton-Jones wrote:

> I have not seen that and I build on Windows all the time.  The relevant
> bit in FD.hs is
>
> #ifndef mingw32_HOST_OS
> getUniqueFileInfo _ dev ino = return (fromIntegral dev, fromIntegral ino)
> #else
> getUniqueFileInfo fd _ _ = do
>   with 0 $ \devptr -> do
>   with 0 $ \inoptr -> do
>   c_getUniqueFileInfo fd devptr inoptr
>   liftM2 (,) (peek devptr) (peek inoptr)
> #endif
>
>
> Maybe copy/paste the command that compiles FD.hs and use -E to see the
> output of the C pre-processor?
>
> S
>
> | -Original Message-
> | From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of Edsko
> | de Vries
> | Sent: 01 October 2013 17:19
> | To: ghc-devs@haskell.org
> | Subject: Windows build failures in FD.hs
> |
> | Hi all,
> |
> | I'm still trying to get a Windows build (Win 7, tried both 32-bit and
> | 64-bit, same result) -- using THREADS=1. When running 'sh validate',
> | compilation eventually stops with
> |
> | "inplace/bin/ghc-stage1.exe" -hisuf hi -osuf  o -hcsuf hc -static
> | -H32m -O -Werror -Wall -H64m -O0-package-name base-4.7.0.0
> | -hide-all-packages -i -ilibraries/base/.
> | -ilibraries/base/dist-install/build
> | -ilibraries/base/dist-install/build/autogen
> | -Ilibraries/base/dist-install/build
> | -Ilibraries/base/dist-install/build/autogen -Ilibraries/base/include
> | -optP-DOPTIMISE_INTEGER_GCD_LCM -optP-include
> | -optPlibraries/base/dist-install/build/autogen/cabal_macros.h -package
> | ghc-prim-0.3.1.0 -package integer-gmp-0.5.1.0 -package rts-1.0
> | -package-name base -XHaskell2010 -O2 -O -dcore-lint -fno-warn-amp
> | -fno-warn-deprecated-flags  -no-user-package-db -rtsopts  -odir
> | libraries/base/dist-install/build -hidir
> | libraries/base/dist-install/build -stubdir
> | libraries/base/dist-install/build  -dynamic-too -c
> | libraries/base/./GHC/IO/FD.hs -o
> | libraries/base/dist-install/build/GHC/IO/FD.o -dyno
> | libraries/base/dist-install/build/GHC/IO/FD.dyn_o
> |
> | libraries\base\GHC\IO\FD.hs:281:23: Empty 'do' block
> |
> | libraries\base\GHC\IO\FD.hs:282:23: Empty 'do' block
> |
> | libraries\base\GHC\IO\FD.hs:283:26: Not in scope: `devptr'
> |
> | libraries\base\GHC\IO\FD.hs:283:33: Not in scope: `inoptr'
> |
> | libraries\base\GHC\IO\FD.hs:284:20: Not in scope: `devptr'
> |
> | libraries\base\GHC\IO\FD.hs:284:34: Not in scope: `inoptr'
> | make[1]: *** [libraries/base/dist-install/build/GHC/IO/FD.o] Error 1
> | make: *** [all] Error 2
> |
> | Since nobody else seems to be experiencing this, there must be
> | something wrong with my setup? I've tried to follow
> | http://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows best
> | I could, but I'm not a Windows user so I might have got something
> | wrong.
> |
> | Any suggestions welcome!
> |
> | Edsko