Re: Call for help on testing integer-gmp2 on non-Linux archs

2014-07-22 Thread Niklas Larsson
I can do both 32- and 64-bit builds. I started with 32 bits.

I got
"inplace/bin/ghc-stage2.exe" -hisuf hi -osuf  o -hcsuf hc -static  -H64m
-O0 -fasm -package-name vector-0.10.9.1 -hide-all-packages
-i -ilibraries/vector/. -ilibraries/vector/dist-install/build
-ilibraries/vector/dist-install/build/autogen
-Ilibraries/vector/dist-install/build
-Ilibraries/vector/dist-install/build/autogen
-Ilibraries/vector/include -Ilibraries/vector/internal
-optP-DVECTOR_BOUNDS_CHECKS -optP-include
-optPlibraries/vector/dist-install/build/autogen/cabal_macros.h -package
base-4.7.1.0 -package deepseq-1.3.0.2 -package ghc-prim-0.3.1.0
-package primitive-0.5.2.1 -O2 -XHaskell98 -XCPP -XDeriveDataTypeable
-O -fasm  -no-user-package-db -rtsopts  -odir
libraries/vector/dist-install/build -hidir
libraries/vector/dist-install/build -stubdir
libraries/vector/dist-install/build   -c
libraries/vector/./Data/Vector/Fusion/Stream/Monadic.hs -o
libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o
"/usr/bin/ar" q
libraries/primitive/dist-install/build/libHSprimitive-0.5.2.1.a
@libraries/primitive/dist-install/build/libHSprimitive-0.5.2.1.a.contents
/usr/bin/ar: creating
libraries/primitive/dist-install/build/libHSprimitive-0.5.2.1.a
"rm" -f
libraries/primitive/dist-install/build/libHSprimitive-0.5.2.1.a.contents
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp2 ... linking ... ghc-stage2.exe: unable to load
package `integer-gmp2'
ghc-stage2.exe:
D:\Niklas\scratch\ghc-build\msys\home\niklas\ghc\libraries\integer-gmp2\dist-install\build\HSinteger-gmp2-0.0.1.0.o: unknown symbol `_scalbn'

I built it with gmp-6.0.0.



2014-07-22 14:02 GMT+02:00 Herbert Valerio Riedel :

> On 2014-07-22 at 13:33:04 +0200, Niklas Larsson wrote:
> > I can test on Windows.
>
> great! Are you using the 32bit or 64bit compiler?
>
> All you'd need to do is 'git checkout' the wip/T9281 branch, add the line
>
>   INTEGER_LIBRARY=integer-gmp2
>
> at the end of mk/build.mk (and 'BuildFlavour=quick' should suffice) and
> try to build GHC with that. If you end up with a working stage2
> compiler, and 'inplace/bin/ghc-stage2 --interactive' reports loading the
> package 'integer-gmp2' then everything went better than expected :)
>
> Then running the testsuite via
>
>   cd testsuite/ && make WAY=normal SKIP_PERF_TESTS=YES
>
> should only fail with a few testcases due to the strings "integer-gmp2"
> vs. "integer-gmp" being different in the output.
>
> Thanks,
>   hvr
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Call for help on testing integer-gmp2 on non-Linux archs

2014-07-22 Thread Christiaan Baaij
The testsuite results are here: http://paste.ubuntu.com/7836630/

On Jul 22, 2014, at 2:11 PM, Christiaan Baaij  wrote:

> Starting a build on my Mac:
> 
> OS: 10.8.5
> Xcode: Xcode 4 CLI-only (so _no_ full Xcode, that is, xcode-select fails)
> GCC: i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 
> 5658) (LLVM build 2336.11.00)
> GHC: 7.8.3
> 
> On Jul 22, 2014, at 2:02 PM, Herbert Valerio Riedel  wrote:
> 
>> On 2014-07-22 at 13:33:04 +0200, Niklas Larsson wrote:
>>> I can test on Windows.
>> 
>> great! Are you using the 32bit or 64bit compiler?
>> 
>> All you'd need to do is 'git checkout' the wip/T9281 branch, add the line
>> 
>> INTEGER_LIBRARY=integer-gmp2
>> 
>> at the end of mk/build.mk (and 'BuildFlavour=quick' should suffice) and
>> try to build GHC with that. If you end up with a working stage2
>> compiler, and 'inplace/bin/ghc-stage2 --interactive' reports loading the
>> package 'integer-gmp2' then everything went better than expected :)
>> 
>> Then running the testsuite via
>> 
>> cd testsuite/ && make WAY=normal SKIP_PERF_TESTS=YES 
>> 
>> should only fail with a few testcases due to the strings "integer-gmp2"
>> vs. "integer-gmp" being different in the output.
>> 
>> Thanks,
>> hvr
> 



Re: GHC contribution guidelines and infrastructure talk on 6th September at HIW?

2014-07-22 Thread Joachim Breitner
Hi,


Am Dienstag, den 22.07.2014, 12:12 +0200 schrieb Jost Berthold:
> (Sorry for joining this late... I figured we would be in dialogue off 
> the list eventually)
> 
> Joachim wrote and posted a proposal, and I think this proposal is indeed 
> a good idea (and one of the purposes of HIW, definite yes).
> 
> We shall make room for it in the programme, possibly in the last 
> session, which can turn into the "Haskell release discussion evening".

Jost added my proposal to EasyChair, but it turns out that I scheduled
my return flight a bit too early (taking off at 19:10) and might not be
able to attend the last session.

Who is able to fill in for me if the infrastructure talk is scheduled
there? Maybe Simon M, or Austin, or Herbert? Or some coalition thereof?

Thanks,
Joachim


-- 
Joachim Breitner
  e-Mail: m...@joachim-breitner.de
  Homepage: http://www.joachim-breitner.de
  Jabber-ID: nome...@joachim-breitner.de





Re: a little phrustrated

2014-07-22 Thread Richard Eisenberg

On Jul 22, 2014, at 9:58 AM, Austin Seipp  wrote:

> Hi Richard,
> 
> Sorry for missing this email - it slid out of my queue...

No worries on the delay. I wouldn't be surprised if there is a Best Practices 
document somewhere which advises waiting at least several days to respond to a 
work-related email with a human emotion in the subject line. :) I appreciate 
your thorough answers below.

> 
>> 2) I develop and build in the same tree. This means that I often have a few
>> untracked files in the outer, ghc.git repo that someone hasn't yet added to
>> .gitignore. Thus, I need to say `--allow-untracked` to get `arc diff` to
>> work. I will likely always need `--allow-untracked`, so I looked for a way
>> to get this to be configured automatically. I found
>> https://secure.phabricator.com/book/phabricator/article/arcanist/#configuration
>> , but the details there are sparse. Any advice?
> 
> No, it doesn't look like it I'm afraid. I asked upstream about it this
> morning (it was very easy to write a patch for), and unfortunately
> they do not want to allow this feature (it's very easy to add it as a
> config option, but I digress).
> 
> In the mean time, you can use 'arc alias' to create a version of 'arc
> diff' like what you want:
> 
> $ arc alias udiff diff -- --allow-untracked
> 
> Then run:
> 
> $ arc udiff
> 
> instead.
> 
> I think this is really a short-term solution; in the long run we
> should commit .gitignore entries for everything since the reason for
> this is that having untracked files is generally a liability that
> should be caught.

Thanks for the `alias` tip. I think that having an always-updated .gitignore 
might be difficult from a practical standpoint, because each different 
architecture might produce different files. Of course, I could add entries 
myself, but I'm always quite scared of touching anything interacting with the 
build system.

> [snip]
> I personally suggest that we take the pain on these as an opportunity
> to remove things, per recent discussions. We can't remove it all in
> one swoop, but we should start being aggressive about enforcing style
> errors.
> 
> In short, I'd suggest you
> 
> - Add silly excuses for now
> - Land your changes
> - Commit fixes for the lint errors *after* that.
> - Commit lint fixes one file at a time.
> 
> If we keep doing this, we'll begin making a lot of headway on this,
> I'm sure. (The nice thing is that now, you can be lazy and fix
> violations, then let Phabricator or Travis-CI do builds for you.)

Not a bad plan. I've personally come around to the "let's just de-tab now and 
get on with it" camp, even though it will give me a painful merge. I think my 
(and others') painful merge is less painful than the status quo.

> 
>> 6) When I looked at my posted revision, it said that the revision was
>> "closed"... and that I had done it! slyfox on IRC informed me that this was
>> likely because I had pushed my commits to a wip/... branch. Is using wip
>> branches with Phab not recommended? Or, can Phab be configured not to close
>> revisions if the commit appears only in wip/... branches?
> 
> Joachim ran into this today.
> 
> In short, I fixed this by tweaking the repository settings.
> Phabricator will now autoclose commits ONLY if they occur on the
> master branch.
> 
> This means you should feel free to push to wip/* branches as much as
> you want without fear now. Sorry!

Great. Thanks!

> 
>> 7) How can I "re-open" my revision?
> 
> I'm afraid you can't.

Is this worth pushing upstream as a feature request? Even absent technical 
glitches like the wip/* stuff, I could see wanting to do this. Say there is a 
subtle revision that accumulates a bit of commentary. It lands after general 
consensus that the revision is good. Then, someone discovers that it was wrong, 
after all. It would be nice to continue the original conversation instead of 
starting afresh, I would think.

> 
>> 8) Some time after posting, phaskell tells me that my build failed. OK. This
>> is despite the fact that Travis was able to build the same commit
>> (https://travis-ci.org/ghc/ghc/builds/30066130). I go to find out why it
>> failed, and am directed to build log F3870
>> (https://phabricator.haskell.org/file/info/PHID-FILE-hz2r4sjamkkrbf7nsz6b/).
>> I can't view the file online, but instead have to download and then ungzip
>> it. Is it possible to view this file directly? Or not have it be compressed?
> 
> This is a bug in my script because it's a piece of crap, both the
> failure and the build logging. I'm working on a Much Better Version™
> not written in Shell script but Haskell that should fix all this,
> hopefully I can deploy it soon. It will also include more features
> that may or may not actually work. :)
> 
> I'd prefer to keep the log files compressed if that's OK. An
> uncompressed log from ./validate is over *ten* megabytes already, and
> it doesn't even correctly capture *all* of the logs! In comparison,
> the .gz version is a short 300kb. That's

Re: a little phrustrated

2014-07-22 Thread Austin Seipp
Hi Richard,

Sorry for missing this email - it slid out of my queue...

On Wed, Jul 16, 2014 at 8:54 AM, Richard Eisenberg  wrote:
> Hi all,
>
> I'm trying to use Phab for the first time this morning, and hitting a fair
> number of obstacles. I'm writing up my experiences here in order to figure
> out which of these are my fault, which can be fixed, and which are just
> things to live with; and also to help others who may go down the same path.
> If relevant, my diff is at https://phabricator.haskell.org/D73

> 1) I had some untracked files in a submodule repo. I couldn't find a way to
> get `arc diff` to ignore these, as they appeared to git to be a change in a
> tracked file (that is, a change to a submodule, which is considered
> tracked). `git stash` offered no help, so I had to delete the untracked
> files. This didn't cause real pain (the files were there in error), but it
> seems a weakness of the system if I can't make progress otherwise.

Yes, you can use:

$ git config --global diff.ignoreSubmodules dirty

to ignore this. If you don't pass --global, it will only take effect
in the repository you perform it in.

This should fix this problem.

> 2) I develop and build in the same tree. This means that I often have a few
> untracked files in the outer, ghc.git repo that someone hasn't yet added to
> .gitignore. Thus, I need to say `--allow-untracked` to get `arc diff` to
> work. I will likely always need `--allow-untracked`, so I looked for a way
> to get this to be configured automatically. I found
> https://secure.phabricator.com/book/phabricator/article/arcanist/#configuration
> , but the details there are sparse. Any advice?

No, it doesn't look like it I'm afraid. I asked upstream about it this
morning (it was very easy to write a patch for), and unfortunately
they do not want to allow this feature (it's very easy to add it as a
config option, but I digress).

In the mean time, you can use 'arc alias' to create a version of 'arc
diff' like what you want:

$ arc alias udiff diff -- --allow-untracked

Then run:

$ arc udiff

instead.

I think this is really a short-term solution; in the long run we
should commit .gitignore entries for everything since the reason for
this is that having untracked files is generally a liability that
should be caught.

> 3) The linter picks up and complains about tabs in any of my touched files.
> I can then write an excuse for every `arc diff` I do, or de-tab the files.
> In one case, I changed roughly one line in the file (MkCore.lhs) and didn't
> think it right to de-tab the whole file. Even if I did de-tab the whole
> file, then my eventual `arc land` would squash the whitespace commit in with
> my substantive commits, which we expressly don't want. I can imagine a fair
> amount of git fiddling which would push the whitespace commit to master and
> then rebase my substantive work on top so that the final, landed, squashed
> patch would avoid the whitespace changes, but this is painful. And advice on
> this? Just ignore the lint errors and write silly excuses? Or, is there a
> way Phab/arc can be smart enough to keep whitespace-only commits (perhaps
> tagged with the words "whitespace only" in the commit message) separate from
> other commits when squashing in `arc land`?

I'm afraid right now I don't have some fancy stuff to help automate
this or alleviate it.

I personally suggest that we take the pain on these as an opportunity
to remove things, per recent discussions. We can't remove it all in
one swoop, but we should start being aggressive about enforcing style
errors.

In short, I'd suggest you

 - Add silly excuses for now
 - Land your changes
 - Commit fixes for the lint errors *after* that.
 - Commit lint fixes one file at a time.

If we keep doing this, we'll begin making a lot of headway on this,
I'm sure. (The nice thing is that now, you can be lazy and fix
violations, then let Phabricator or Travis-CI do builds for you.)

> 4) For better or worse, we don't currently require every file to be
> tab-free, just some of them. Could this be reflected in Phab's lint settings
> to avoid the problem in (3)? (Of course, a way to de-tab and keep the
> history nice would be much better!)

We could exclude all the files that have tabs, but it would be a lot
still. See above though - I suggest we use this as an opportunity to
remove this stuff. Just be aggressive about cleaning it up after it
lands.

The average lifespan of a review is fairly short in practice. I think
it should be pretty easy to keep up.

The lint rules probably do still need some tweaking, though, so if you
do see something bogus, please do report it.

> 5) In writing my revision description, I had to add reviewers. I assumed
> these should be comma-separated. This worked and I have updated the Wiki.
> Please advise if I am wrong.

That's correct, but separated by spaces should work too - thanks!

> 6) When I looked at my posted revision, it said that the revision was
> "closed"... and that I had done i

RE: tcInferRho

2014-07-22 Thread Simon Peyton Jones
Yes that comment is a lie!

I would welcome a way to tighten this up.

Unifying with foralls is just fine, provided they behave rigidly like type 
constructors.  The unifier can even unify two foralls, and generate evidence.  
All good. 

BUT foralls are implicitly instantiated, and it is the implicitly-instantiated 
ones that must not be hidden.

One possibility, pioneered by QML 
(http://research.microsoft.com/en-us/um/people/crusso/qml/) is to have two 
kinds of foralls, implicitly instantiated and explicitly instantiated. GHC has 
been moving in that direction but only fitfully.  That's one reason that the 
entire ImpredicativeTypes extensions is currently in limbo.

Simon

| -Original Message-
| From: Richard Eisenberg [mailto:e...@cis.upenn.edu]
| Sent: 22 July 2014 14:27
| To: Simon Peyton Jones
| Cc: ghc-devs@haskell.org
| Subject: Re: tcInferRho
| 
| Ah -- it's all clear to me now.
| 
| To summarize: a TauTv *can* become a poly-type, but the solver won't
| ever discover so.
| 
| That would seem to contradict
| 
| >= TauTv -- This MetaTv is an ordinary unification variable
| >-- A TauTv is always filled in with a tau-type, which
| >-- never contains any ForAlls
| >
| 
| which appears in the declaration for MetaInfo in TcType.
| 
| Is that an accurate summary?
| 
| Thanks for helping to clear this up!
| Richard
| 
| 
| On Jul 22, 2014, at 9:19 AM, Simon Peyton Jones 
| wrote:
| 
| > Indeed.
| >
| > Unification variables *can* unify with polytypes, as you see.
| >
| > GHC does "on the fly" unification with in-place update, and only
| defers to the constraint solver if it can't readily unify on the fly.
| The squishiness is precisely that for this setting we *must* unify on
| the fly, so the "it's always ok to defer" rule doesn't hold.
| >
| > Simon
| >
| > | -Original Message-
| > | From: Richard Eisenberg [mailto:e...@cis.upenn.edu]
| > | Sent: 22 July 2014 13:22
| > | To: Simon Peyton Jones
| > | Cc: ghc-devs@haskell.org
| > | Subject: Re: tcInferRho
| > |
| > | OK -- that all makes sense.
| > |
| > | But why does it actually work, I wonder? It seems that to get the
| > | behavior that you describe below, and the behavior that we see in
| > | practice, a unification variable *does* have to unify with a
| > | non-tau- type, like (forall a. a -> a) -> Int. But doesn't defer_me
| > | in TcUnify.checkTauTvUpdate prevent such a thing from happening?
| > |
| > | To learn more, I tried compiling this code:
| > |
| > | > f :: Bool -> Bool -> (forall a. a -> a) -> () f = undefined
| > | >
| > | > g = (True `f` False) id
| > |
| > | I use infix application to avoid tcInferRho.
| > |
| > | With -ddump-tc-trace -dppr-debug, I see the following bit:
| > |
| > | > Scratch.hs:18:6:
| > | > u_tys
| > | >   untch 0
| > | >   (forall a{tv apE} [sk]. a{tv apE} [sk] -> a{tv apE} [sk]) -> ()
| > | >   ~
| > | >   t_aHO{tv} [tau[0]]
| > | >   a type equality (forall a{tv apE} [sk].
| > | >a{tv apE} [sk] -> a{tv apE} [sk])
| > | >   -> ()
| > | >   ~
| > | >   t_aHO{tv} [tau[0]]
| > | > Scratch.hs:18:6:
| > | > writeMetaTyVar
| > | >   t_aHO{tv} [tau[0]] := (forall a{tv apE} [sk].
| > | >  a{tv apE} [sk] -> a{tv apE} [sk])
| > | > -> ()
| > | >
| > |
| > | What's very strange to me here is that we see t_aHO, a **tau** type,
| > | being rewritten to a poly-type. I could clearly throw in more
| > | printing statements to see what is going on, but I wanted to check
| > | if this looks strange to you, too.
| > |
| > | Thanks,
| > | Richard
| > |
| > | On Jul 22, 2014, at 6:28 AM, Simon Peyton Jones
| > | 
| > | wrote:
| > |
| > | > Richard
| > | >
| > | > You are right; there is something squishy here.
| > | >
| > | > The original idea was that a unification variable only stands for
| > | > a
| > | *monotype* (with no for-alls).  But our basic story for the type
| > | inference engine is
| > | > tcExpr :: HsExpr -> TcType -> TcM HsExpr'
| > | > which checks that the expression has the given expected type. To
| > | > do
| > | inference we pass in a unification variable as the "expected type".
| > | BUT if the expression actually has a type like (forall a. a->a) ->
| > | Int, then the unification variable clearly isn't being unified with
| > | a monotype.  There are a couple of places where we must "zonk" the
| > | expected type, after calling tcExpr, to expose the foralls.  A major
| > | example is TcExpr.tcInferFun.
| > | >
| > | > I say this is squishy because *in principle* we could replace
| > | > every
| > | unification with generating an equality constraint, for later solving.
| > | (This does often happen, see TcUnify.uType_defer.)  BUT if we
| > | generate an equality constraint, the zonking won't work, and the
| > | foralls won't be exposed early enough.  I wish 

Re: tcInferRho

2014-07-22 Thread Richard Eisenberg
Ah -- it's all clear to me now.

To summarize: a TauTv *can* become a poly-type, but the solver won't ever 
discover so.

That would seem to contradict

>= TauTv -- This MetaTv is an ordinary unification variable
>-- A TauTv is always filled in with a tau-type, which
>-- never contains any ForAlls
> 

which appears in the declaration for MetaInfo in TcType.

Is that an accurate summary?

Thanks for helping to clear this up!
Richard


On Jul 22, 2014, at 9:19 AM, Simon Peyton Jones  wrote:

> Indeed.
> 
> Unification variables *can* unify with polytypes, as you see.
> 
> GHC does "on the fly" unification with in-place update, and only defers to 
> the constraint solver if it can't readily unify on the fly.  The squishiness 
> is precisely that for this setting we *must* unify on the fly, so the "it's 
> always ok to defer" rule doesn't hold.
> 
> Simon
> 
> | -Original Message-
> | From: Richard Eisenberg [mailto:e...@cis.upenn.edu]
> | Sent: 22 July 2014 13:22
> | To: Simon Peyton Jones
> | Cc: ghc-devs@haskell.org
> | Subject: Re: tcInferRho
> | 
> | OK -- that all makes sense.
> | 
> | But why does it actually work, I wonder? It seems that to get the
> | behavior that you describe below, and the behavior that we see in
> | practice, a unification variable *does* have to unify with a non-tau-
> | type, like (forall a. a -> a) -> Int. But doesn't defer_me in
> | TcUnify.checkTauTvUpdate prevent such a thing from happening?
> | 
> | To learn more, I tried compiling this code:
> | 
> | > f :: Bool -> Bool -> (forall a. a -> a) -> () f = undefined
> | >
> | > g = (True `f` False) id
> | 
> | I use infix application to avoid tcInferRho.
> | 
> | With -ddump-tc-trace -dppr-debug, I see the following bit:
> | 
> | > Scratch.hs:18:6:
> | > u_tys
> | >   untch 0
> | >   (forall a{tv apE} [sk]. a{tv apE} [sk] -> a{tv apE} [sk]) -> ()
> | >   ~
> | >   t_aHO{tv} [tau[0]]
> | >   a type equality (forall a{tv apE} [sk].
> | >a{tv apE} [sk] -> a{tv apE} [sk])
> | >   -> ()
> | >   ~
> | >   t_aHO{tv} [tau[0]]
> | > Scratch.hs:18:6:
> | > writeMetaTyVar
> | >   t_aHO{tv} [tau[0]] := (forall a{tv apE} [sk].
> | >  a{tv apE} [sk] -> a{tv apE} [sk])
> | > -> ()
> | >
> | 
> | What's very strange to me here is that we see t_aHO, a **tau** type,
> | being rewritten to a poly-type. I could clearly throw in more printing
> | statements to see what is going on, but I wanted to check if this looks
> | strange to you, too.
> | 
> | Thanks,
> | Richard
> | 
> | On Jul 22, 2014, at 6:28 AM, Simon Peyton Jones 
> | wrote:
> | 
> | > Richard
> | >
> | > You are right; there is something squishy here.
> | >
> | > The original idea was that a unification variable only stands for a
> | *monotype* (with no for-alls).  But our basic story for the type
> | inference engine is
> | >   tcExpr :: HsExpr -> TcType -> TcM HsExpr'
> | > which checks that the expression has the given expected type. To do
> | inference we pass in a unification variable as the "expected type".
> | BUT if the expression actually has a type like (forall a. a->a) -> Int,
> | then the unification variable clearly isn't being unified with a
> | monotype.  There are a couple of places where we must "zonk" the
> | expected type, after calling tcExpr, to expose the foralls.  A major
> | example is TcExpr.tcInferFun.
> | >
> | > I say this is squishy because *in principle* we could replace every
> | unification with generating an equality constraint, for later solving.
> | (This does often happen, see TcUnify.uType_defer.)  BUT if we generate
> | an equality constraint, the zonking won't work, and the foralls won't
> | be exposed early enough.  I wish that the story here was more solid.
> | >
> | > The original idea of tcInferRho was to have some special cases that
> | did not rely on this squishy "unify with polytype" story. It had a
> | number of special cases, perhaps not enough as you observe.  But it
> | does look as if the original goal (which I think was to deal with
> | function applications) doesn't even use it -- it uses tcInferFun
> | instead.
> | >
> | > So I think you may be right: tcInferRho may not be important.  There
> | is a perhaps-significant efficiency question though: it avoids
| > | allocating and unifying a fresh unification variable each time.
> | >
> | > Simon
> | >
> | > | -Original Message-
> | > | From: Richard Eisenberg [mailto:e...@cis.upenn.edu]
> | > | Sent: 18 July 2014 22:00
> | > | To: Simon Peyton Jones
> | > | Subject: Re: tcInferRho
> | > |
> | > | I thought as much, but I can't seem to tickle the bug. For example:
> | > |
> | > | > {-# LANGUAGE RankNTypes #-}
> | > | >
> | > | > f :: Int -> Bool -> (forall a. a -> a) -> Int f = undefined
> | > | >
> | > | > x = (3 `f` True)
> | > | >
> | > |
> | > |
> 

RE: tcInferRho

2014-07-22 Thread Simon Peyton Jones
Indeed.

Unification variables *can* unify with polytypes, as you see.

GHC does "on the fly" unification with in-place update, and only defers to the 
constraint solver if it can't readily unify on the fly.  The squishiness is 
precisely that for this setting we *must* unify on the fly, so the "it's always 
ok to defer" rule doesn't hold.

Simon

| -Original Message-
| From: Richard Eisenberg [mailto:e...@cis.upenn.edu]
| Sent: 22 July 2014 13:22
| To: Simon Peyton Jones
| Cc: ghc-devs@haskell.org
| Subject: Re: tcInferRho
| 
| OK -- that all makes sense.
| 
| But why does it actually work, I wonder? It seems that to get the
| behavior that you describe below, and the behavior that we see in
| practice, a unification variable *does* have to unify with a non-tau-
| type, like (forall a. a -> a) -> Int. But doesn't defer_me in
| TcUnify.checkTauTvUpdate prevent such a thing from happening?
| 
| To learn more, I tried compiling this code:
| 
| > f :: Bool -> Bool -> (forall a. a -> a) -> () f = undefined
| >
| > g = (True `f` False) id
| 
| I use infix application to avoid tcInferRho.
| 
| With -ddump-tc-trace -dppr-debug, I see the following bit:
| 
| > Scratch.hs:18:6:
| > u_tys
| >   untch 0
| >   (forall a{tv apE} [sk]. a{tv apE} [sk] -> a{tv apE} [sk]) -> ()
| >   ~
| >   t_aHO{tv} [tau[0]]
| >   a type equality (forall a{tv apE} [sk].
| >a{tv apE} [sk] -> a{tv apE} [sk])
| >   -> ()
| >   ~
| >   t_aHO{tv} [tau[0]]
| > Scratch.hs:18:6:
| > writeMetaTyVar
| >   t_aHO{tv} [tau[0]] := (forall a{tv apE} [sk].
| >  a{tv apE} [sk] -> a{tv apE} [sk])
| > -> ()
| >
| 
| What's very strange to me here is that we see t_aHO, a **tau** type,
| being rewritten to a poly-type. I could clearly throw in more printing
| statements to see what is going on, but I wanted to check if this looks
| strange to you, too.
| 
| Thanks,
| Richard
| 
| On Jul 22, 2014, at 6:28 AM, Simon Peyton Jones 
| wrote:
| 
| > Richard
| >
| > You are right; there is something squishy here.
| >
| > The original idea was that a unification variable only stands for a
| *monotype* (with no for-alls).  But our basic story for the type
| inference engine is
| > tcExpr :: HsExpr -> TcType -> TcM HsExpr'
| > which checks that the expression has the given expected type. To do
| inference we pass in a unification variable as the "expected type".
| BUT if the expression actually has a type like (forall a. a->a) -> Int,
| then the unification variable clearly isn't being unified with a
| monotype.  There are a couple of places where we must "zonk" the
| expected type, after calling tcExpr, to expose the foralls.  A major
| example is TcExpr.tcInferFun.
| >
| > I say this is squishy because *in principle* we could replace every
| unification with generating an equality constraint, for later solving.
| (This does often happen, see TcUnify.uType_defer.)  BUT if we generate
| an equality constraint, the zonking won't work, and the foralls won't
| be exposed early enough.  I wish that the story here was more solid.
| >
| > The original idea of tcInferRho was to have some special cases that
| did not rely on this squishy "unify with polytype" story. It had a
| number of special cases, perhaps not enough as you observe.  But it
| does look as if the original goal (which I think was to deal with
| function applications) doesn't even use it -- it uses tcInferFun
| instead.
| >
| > So I think you may be right: tcInferRho may not be important.  There
| is a perhaps-significant efficiency question though: it avoids
| allocating and unifying a fresh unification variable each time.
| >
| > Simon
| >
| > | -Original Message-
| > | From: Richard Eisenberg [mailto:e...@cis.upenn.edu]
| > | Sent: 18 July 2014 22:00
| > | To: Simon Peyton Jones
| > | Subject: Re: tcInferRho
| > |
| > | I thought as much, but I can't seem to tickle the bug. For example:
| > |
| > | > {-# LANGUAGE RankNTypes #-}
| > | >
| > | > f :: Int -> Bool -> (forall a. a -> a) -> Int f = undefined
| > | >
| > | > x = (3 `f` True)
| > | >
| > |
| > |
| > | GHCi tells me that x's type is `x :: (forall a. a -> a) -> Int`, as
| > | we would hope. If we were somehow losing the higher-rank
| > | polymorphism without tcInferRho, then I would expect something like
| > | `(3 `f` True) $ not` to succeed (or behave bizarrely), but we get a
| > | very sensible type error
| > |
| > | Couldn't match type 'a' with 'Bool'
| > |   'a' is a rigid type variable bound by
| > |   a type expected by the context: a -> a
| > |   at /Users/rae/temp/Bug.hs:6:5
| > | Expected type: a -> a
| > |   Actual type: Bool -> Bool
| > | In the second argument of '($)', namely 'not'
| > | In the expression: (3 `f` True) $ not
| > |
| > | So, instead of just adding more cases, I wonder if we can't *remove*

Re: [QuickCheck] Status of Haskell Platform 2014.2.0.0

2014-07-22 Thread Johan Tibell
On Mon, Jul 21, 2014 at 1:30 AM, Nick Smallbone  wrote:
> 1. We make sure that tf-random becomes stable and hope it can be
>included in the next version of the platform.
>
> 2. We add a simple TFGen-inspired generator directly to QuickCheck.
>
> 3. We fix StdGen by replacing it with a TFGen-inspired implementation.
>
> Number 3 would be best for everyone, but if it doesn't happen maybe
> option 2 is the most pragmatic one.

I agree that (2) looks like the most pragmatic one.


Re: Multi-instance packages status report

2014-07-22 Thread Simon Marlow

On 22/07/14 13:17, Edward Z. Yang wrote:

Excerpts from Simon Marlow's message of 2014-07-22 12:27:46 +0100:

(Replying to Edward)

It's not clear to me why identical IPID would imply identical package
key.  Can't two instances of a package compiled against different
dependencies still have identical ABIs?


No, because the package key is baked into the linker symbols
(and thus the ABI).  I guess maybe if you had a completely empty
package, the ABIs would be the same.


Aha, I see.  Thanks!

Simon



Re: tcInferRho

2014-07-22 Thread Richard Eisenberg
OK -- that all makes sense.

But why does it actually work, I wonder? It seems that to get the behavior that 
you describe below, and the behavior that we see in practice, a unification 
variable *does* have to unify with a non-tau-type, like (forall a. a -> a) -> 
Int. But doesn't defer_me in TcUnify.checkTauTvUpdate prevent such a thing from 
happening?

To learn more, I tried compiling this code:

> f :: Bool -> Bool -> (forall a. a -> a) -> ()
> f = undefined
> 
> g = (True `f` False) id

I use infix application to avoid tcInferRho.

With -ddump-tc-trace -dppr-debug, I see the following bit:

> Scratch.hs:18:6:
> u_tys 
>   untch 0
>   (forall a{tv apE} [sk]. a{tv apE} [sk] -> a{tv apE} [sk]) -> ()
>   ~
>   t_aHO{tv} [tau[0]]
>   a type equality (forall a{tv apE} [sk].
>a{tv apE} [sk] -> a{tv apE} [sk])
>   -> ()
>   ~
>   t_aHO{tv} [tau[0]]
> Scratch.hs:18:6:
> writeMetaTyVar
>   t_aHO{tv} [tau[0]] := (forall a{tv apE} [sk].
>  a{tv apE} [sk] -> a{tv apE} [sk])
> -> ()
> 

What's very strange to me here is that we see t_aHO, a **tau** type, being 
rewritten to a poly-type. I could clearly throw in more printing statements to 
see what is going on, but I wanted to check if this looks strange to you, too.

Thanks,
Richard

On Jul 22, 2014, at 6:28 AM, Simon Peyton Jones  wrote:

> Richard
> 
> You are right; there is something squishy here.
> 
> The original idea was that a unification variable only stands for a 
> *monotype* (with no for-alls).  But our basic story for the type inference 
> engine is
>   tcExpr :: HsExpr -> TcType -> TcM HsExpr'
> which checks that the expression has the given expected type. To do inference 
> we pass in a unification variable as the "expected type".  BUT if the 
> expression actually has a type like (forall a. a->a) -> Int, then the 
> unification variable clearly isn't being unified with a monotype.  There are 
> a couple of places where we must "zonk" the expected type, after calling 
> tcExpr, to expose the foralls.  A major example is TcExpr.tcInferFun.
> 
> I say this is squishy because *in principle* we could replace every 
> unification with generating an equality constraint, for later solving.  (This 
> does often happen, see TcUnify.uType_defer.)  BUT if we generate an equality 
> constraint, the zonking won't work, and the foralls won't be exposed early 
> enough.  I wish that the story here was more solid.
> 
> The original idea of tcInferRho was to have some special cases that did not 
> rely on this squishy "unify with polytype" story. It had a number of special 
> cases, perhaps not enough as you observe.  But it does look as if the 
> original goal (which I think was to deal with function applications) doesn't 
> even use it -- it uses tcInferFun instead.
> 
> So I think you may be right: tcInferRho may not be important.  There is a 
> perhaps-significant efficiency question though: it avoids allocating and 
> unifying a fresh unification variable each time.
> 
> Simon
> 
> | -Original Message-
> | From: Richard Eisenberg [mailto:e...@cis.upenn.edu]
> | Sent: 18 July 2014 22:00
> | To: Simon Peyton Jones
> | Subject: Re: tcInferRho
> | 
> | I thought as much, but I can't seem to tickle the bug. For example:
> | 
> | > {-# LANGUAGE RankNTypes #-}
> | >
> | > f :: Int -> Bool -> (forall a. a -> a) -> Int
> | > f = undefined
> | >
> | > x = (3 `f` True)
> | >
> | 
> | 
> | GHCi tells me that x's type is `x :: (forall a. a -> a) -> Int`, as we
> | would hope. If we were somehow losing the higher-rank polymorphism
> | without tcInferRho, then I would expect something like `(3 `f` True) $
> | not)` to succeed (or behave bizarrely), but we get a very sensible type
> | error
> | 
> | Couldn't match type 'a' with 'Bool'
> |   'a' is a rigid type variable bound by
> |   a type expected by the context: a -> a
> |   at /Users/rae/temp/Bug.hs:6:5
> | Expected type: a -> a
> |   Actual type: Bool -> Bool
> | In the second argument of '($)', namely 'not'
> | In the expression: (3 `f` True) $ not
> | 
> | So, instead of just adding more cases, I wonder if we can't *remove*
> | cases, as it seems that the gears turn fine without this function. This
> | continues to surprise me, but it's what the evidence indicates. Can you
> | make any sense of this?
> | 
> | Thanks,
> | Richard
> | 
> | 
> | On Jul 18, 2014, at 12:49 PM, Simon Peyton Jones 
> | wrote:
> | 
> | > You're right, it's an omission.  The reason for the special case is
> | described in the comment on tcInferRho.  Adding OpApp would be a Good
> | Thing.  A bit tiresome because we'd need to pass to tcInferApp the
> | function to use to reconstruct the result HsExpr (currently foldl
> | mkHsApp, in tcInferApp), so that in the OpApp case it'd reconstruct an
> | OpApp.
> | >
> | > Go ahead and 

Re: Multi-instance packages status report

2014-07-22 Thread Edward Z . Yang
Excerpts from Simon Marlow's message of 2014-07-22 12:27:46 +0100:
> (Replying to Edward)
> 
> It's not clear to me why identical IPID would imply identical package 
> key.  Can't two instances of a package compiled against different 
> dependencies still have identical ABIs?

No, because the package key is baked into the linker symbols
(and thus the ABI).  I guess maybe if you had a completely empty
package, the ABIs would be the same.

Cheers,
Edward


Re: Call for help on testing integer-gmp2 on non-Linux archs

2014-07-22 Thread Christiaan Baaij
Starting a build on my Mac:

OS: 10.8.5
XCode: XCode 4 CLI-only (so _no_ full Xcode, that is, xcode-select fails)
GCC: i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 
5658) (LLVM build 2336.11.00)
GHC: 7.8.3

On Jul 22, 2014, at 2:02 PM, Herbert Valerio Riedel  wrote:

> On 2014-07-22 at 13:33:04 +0200, Niklas Larsson wrote:
>> I can test on Windows.
> 
> great! Are you using the 32bit or 64bit compiler?
> 
> All you'd need to do is 'git checkout' the wip/T9281 branch, add the line
> 
>  INTEGER_LIBRARY=integer-gmp2
> 
> at the end of mk/build.mk (and 'BuildFlavour=quick' should suffice) and
> try to build GHC with that. If you end up with a working stage2
> compiler, and 'inplace/bin/ghc-stage2 --interactive' reports loading the
> package 'integer-gmp2' then everything went better than expected :)
> 
> Then running the testsuite via
> 
>  cd testsuite/ && make WAY=normal SKIP_PERF_TESTS=YES 
> 
> should only fail with a few testcases due to the strings "integer-gmp2"
> vs. "integer-gmp" being different in the output.
> 
> Thanks,
>  hvr



Re: Call for help on testing integer-gmp2 on non-Linux archs

2014-07-22 Thread Herbert Valerio Riedel
On 2014-07-22 at 13:33:04 +0200, Niklas Larsson wrote:
> I can test on Windows.

great! Are you using the 32bit or 64bit compiler?

All you'd need to do is 'git checkout' the wip/T9281 branch, add the line
 
  INTEGER_LIBRARY=integer-gmp2

at the end of mk/build.mk (and 'BuildFlavour=quick' should suffice) and
try to build GHC with that. If you end up with a working stage2
compiler, and 'inplace/bin/ghc-stage2 --interactive' reports loading the
package 'integer-gmp2' then everything went better than expected :)

Then running the testsuite via

  cd testsuite/ && make WAY=normal SKIP_PERF_TESTS=YES 

should only fail with a few testcases due to the strings "integer-gmp2"
vs. "integer-gmp" being different in the output.

Thanks,
  hvr


Re: Windows breakage -- again

2014-07-22 Thread Johan Tibell
I suggest we continue the discussion on the ticket:
https://ghc.haskell.org/trac/ghc/ticket/9346

Summary so far is that LOCK is not a valid prefix to MOV, but the x86
code generator doesn't emit any LOCKs before MOVs so I'm not sure how
that instruction got there.

On Tue, Jul 22, 2014 at 12:41 PM, Niklas Larsson  wrote:
> That's true, I used mingw.
>
> I have created a ticket https://ghc.haskell.org/trac/ghc/ticket/9346#ticket.
>
>
> 2014-07-22 12:22 GMT+02:00 Páli Gábor János :
>
>> 2014-07-22 11:49 GMT+02:00 Johan Tibell :
>> > Is this on FreeBSD only or does it happen elsewhere?
>>
>> I would say it happens everywhere (on 32 bits).  I guess Niklas was
>> debugging the mingw32 version.
>
>


Re: Call for help on testing integer-gmp2 on non-Linux archs

2014-07-22 Thread Niklas Larsson
I can test on Windows.

Niklas


2014-07-22 10:07 GMT+02:00 Herbert Valerio Riedel :

> Hello *,
>
> As some of you may have already noticed, there's an attempt[1] in the
> works to reimplement integer-gmp in such a way to avoid overriding GMP's
> internal memory allocator functions, and thus make it possible to link
> GHC/integer-gmp compiled programs with other components linked to libgmp
> which break if GMP's memory allocation goes via GHC's GC.  I also hope
> this will facilitate shipping GHC bindists for Windows with a dynamically
> linked (& unpatched!) GMP library, to reduce LGPL licensing concerns for
> resulting GHC compiled programs.
>
> So far, I've only been able to test the code on Linux/i386 and
> Linux/amd64 where it works correctly. Now it'd be interesting to know if
> integer-gmp2 in its current form works also on non-Linux archs, and if
> not, what's needed to make it work. Fwiw, I mostly suspect
> linker-related issues.
>
> Therefore, is anyone here interested to help out with making sure
> GHC+integer-gmp2 builds on Windows, OSX and so on? If so, please get
> into contact with me!
>
> Cheers,
>   hvr
>
>  [1]: https://ghc.haskell.org/trac/ghc/ticket/9281
>   https://phabricator.haskell.org/D82
>


Re: Multi-instance packages status report

2014-07-22 Thread Simon Marlow

On 22/07/14 08:23, Joachim Breitner wrote:

[Replying to the list, in case it was sent to me in private by accident]


Hi Edward,

On Monday, 21.07.2014 at 23:25 +0100, Edward Z. Yang wrote:

Excerpts from Joachim Breitner's message of 2014-07-21 21:06:49 +0100:

maybe a stupid question, but how does the package key relate to the hash
that "ghc-pkg" shows for package?


Fine question---this is definitely something that is different from the
GSoC project.  The short answer is, the current hash shown in ghc-pkg is
the ABI hash associated with the InstalledPackageId, which is computed
after GHC is done compiling your code; whereas the package key is a
hash of the dependency graph, which can be done before compilation.

The longer answer is we now have three ID-like things, in order of
increasing specificity:

Package IDs: containers-0.9
 These are the "user visible" things that we expect users to talk
 about in Cabal file
Package Keys: md5("containers-0.9" + transitive deps)
 These are the identifiers the compiler cares about: they are used
 for type equality, and contain a bit more detail than we expect
 a user to normally need---however, a user might need to refer to
 this to disambiguate in some situations.
Installed Package IDs: ABI hash of compiled code
 This uniquely identifies an installed package in the database, up
 to ABI.

So, if two packages have the same IPID, their package keys are
guaranteed to be the same, but not vice versa. (And likewise for package
IDs.)
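
The three-level taxonomy above can be sketched as a toy Haskell model. All type names and the stand-in "hash" below are made up for illustration and do not match GHC's real internals; the point is only that the package key, unlike the installed package ID, never looks at compiled output, so it is computable before the build runs:

```haskell
-- Toy model of the three identifier levels; every name here is
-- illustrative only, not GHC's actual representation.

-- User-visible: what appears in a .cabal file.
newtype PackageId = PackageId String deriving (Eq, Show)

-- Compiler-visible: derived from the package id plus the keys of the
-- transitive dependencies, so it is computable before compilation.
-- GHC uses an MD5 hash; a readable string stands in for it here.
newtype PackageKey = PackageKey String deriving (Eq, Show)

packageKey :: PackageId -> [PackageKey] -> PackageKey
packageKey (PackageId pid) deps =
  PackageKey ("md5:" ++ pid ++ concatMap (\(PackageKey k) -> "," ++ k) deps)

base, k1, k2 :: PackageKey
base = packageKey (PackageId "base-4.7.1.0") []
k1   = packageKey (PackageId "containers-0.9") [base]  -- built against base
k2   = packageKey (PackageId "containers-0.9") []      -- different deps

main :: IO ()
main = print (k1 == k2)  -- prints False: same package id, different keys
```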


(Replying to Edward)

It's not clear to me why identical IPID would imply identical package 
key.  Can't two instances of a package compiled against different 
dependencies still have identical ABIs?


Reviewing your patches is next on my queue...

Cheers,
Simon



RE: a little phrustrated

2014-07-22 Thread Simon Peyton Jones
Maybe add this useful lore to Git guidance or Phabricator guidance?

S

| -Original Message-
| From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of Simon
| Marlow
| Sent: 22 July 2014 12:18
| To: Edward Z. Yang; Richard Eisenberg
| Cc: ghc-devs@haskell.org
| Subject: Re: a little phrustrated
| 
| On 16/07/14 20:02, Edward Z. Yang wrote:
| > Hello Richard,
| >
| >> 1) I had some untracked files in a submodule repo. I couldn't find a
| way to get `arc diff` to ignore these, as they appeared to git to be a
| change in a tracked file (that is, a change to a submodule, which is
| considered tracked). `git stash` offered no help, so I had to delete
| the untracked files. This didn't cause real pain (the files were there
| in error), but it seems a weakness of the system if I can't make
| progress otherwise.
| >
| > Yes, this was fairly painful for me as well.  One way to make the
| pain
| > go away and help others out is improve the .gitignore files so these
| > files are not considered tracked.  Here is another thread discussing
| > this problem:
| >
| >  http://comments.gmane.org/gmane.comp.version-control.git/238173
| >
| > though I haven't read through it fully yet.
| 
| If you go into your .git/config file in the GHC repo, and add "ignore =
| untracked", like this:
| 
| [submodule "nofib"]
|   url = /home/simon/ghc-mirror/nofib.git
|  ignore = untracked
| 
| Then git won't consider untracked files in that submodule as making
| that submodule dirty, and you'll be able to happily "arc diff".
| 
| Cheers,
| Simon
| 


Re: a little phrustrated

2014-07-22 Thread Simon Marlow

On 16/07/14 20:02, Edward Z. Yang wrote:

Hello Richard,


1) I had some untracked files in a submodule repo. I couldn't find a way to get 
`arc diff` to ignore these, as they appeared to git to be a change in a tracked 
file (that is, a change to a submodule, which is considered tracked). `git 
stash` offered no help, so I had to delete the untracked files. This didn't 
cause real pain (the files were there in error), but it seems a weakness of the 
system if I can't make progress otherwise.


Yes, this was fairly painful for me as well.  One way to make the pain
go away and help others out is improve the .gitignore files so these
files are not considered tracked.  Here is another thread discussing
this problem:

 http://comments.gmane.org/gmane.comp.version-control.git/238173

though I haven't read through it fully yet.


If you go into your .git/config file in the GHC repo, and add "ignore = 
untracked", like this:


[submodule "nofib"]
url = /home/simon/ghc-mirror/nofib.git
ignore = untracked

Then git won't consider untracked files in that submodule as making that 
submodule dirty, and you'll be able to happily "arc diff".


Cheers,
Simon



Re: Windows breakage -- again

2014-07-22 Thread Niklas Larsson
That's true, I used mingw.

I have created a ticket https://ghc.haskell.org/trac/ghc/ticket/9346#ticket.


2014-07-22 12:22 GMT+02:00 Páli Gábor János :

> 2014-07-22 11:49 GMT+02:00 Johan Tibell :
> > Is this on FreeBSD only or does it happen elsewhere?
>
> I would say it happens everywhere (on 32 bits).  I guess Niklas was
> debugging the mingw32 version.
>


RE: tcInferRho

2014-07-22 Thread Simon Peyton Jones
Richard

You are right; there is something squishy here.

The original idea was that a unification variable only stands for a *monotype* 
(with no for-alls).  But our basic story for the type inference engine is
tcExpr :: HsExpr -> TcType -> TcM HsExpr'
which checks that the expression has the given expected type. To do inference 
we pass in a unification variable as the "expected type".  BUT if the 
expression actually has a type like (forall a. a->a) -> Int, then the 
unification variable clearly isn't being unified with a monotype.  There are a 
couple of places where we must "zonk" the expected type, after calling tcExpr, 
to expose the foralls.  A major example is TcExpr.tcInferFun.

I say this is squishy because *in principle* we could replace every unification 
with generating an equality constraint, for later solving.  (This does often 
happen, see TcUnify.uType_defer.)  BUT if we generate an equality constraint, 
the zonking won't work, and the foralls won't be exposed early enough.  I wish 
that the story here was more solid.

The original idea of tcInferRho was to have some special cases that did not 
rely on this squishy "unify with polytype" story. It had a number of special 
cases, perhaps not enough as you observe.  But it does look as if the original 
goal (which I think was to deal with function applications) doesn't even use it 
-- it uses tcInferFun instead.

So I think you may be right: tcInferRho may not be important.  There is a 
perhaps-significant efficiency question though: it avoids allocating and 
unifying a fresh unification variable each time.
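
A minimal standalone illustration of the checking-vs-inference distinction discussed above — this is ordinary RankNTypes code, not tied to any particular GHC internals:

```haskell
{-# LANGUAGE RankNTypes #-}

-- With a signature, the type checker runs in checking mode: the
-- expected type is pushed inward and the forall is visible up front.
g :: (forall a. a -> a) -> Int
g k = k 3

-- Without a signature, inference hands the checker a unification
-- variable as the "expected type"; exposing the forall afterwards is
-- the "zonking" step described above.
h = g  -- inferred: h :: (forall a. a -> a) -> Int

main :: IO ()
main = print (h id)  -- prints 3
```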

Simon

| -Original Message-
| From: Richard Eisenberg [mailto:e...@cis.upenn.edu]
| Sent: 18 July 2014 22:00
| To: Simon Peyton Jones
| Subject: Re: tcInferRho
| 
| I thought as much, but I can't seem to tickle the bug. For example:
| 
| > {-# LANGUAGE RankNTypes #-}
| >
| > f :: Int -> Bool -> (forall a. a -> a) -> Int
| > f = undefined
| >
| > x = (3 `f` True)
| >
| 
| 
| GHCi tells me that x's type is `x :: (forall a. a -> a) -> Int`, as we
| would hope. If we were somehow losing the higher-rank polymorphism
| without tcInferRho, then I would expect something like `(3 `f` True) $
| not)` to succeed (or behave bizarrely), but we get a very sensible type
| error
| 
| Couldn't match type 'a' with 'Bool'
|   'a' is a rigid type variable bound by
|   a type expected by the context: a -> a
|   at /Users/rae/temp/Bug.hs:6:5
| Expected type: a -> a
|   Actual type: Bool -> Bool
| In the second argument of '($)', namely 'not'
| In the expression: (3 `f` True) $ not
| 
| So, instead of just adding more cases, I wonder if we can't *remove*
| cases, as it seems that the gears turn fine without this function. This
| continues to surprise me, but it's what the evidence indicates. Can you
| make any sense of this?
| 
| Thanks,
| Richard
| 
| 
| On Jul 18, 2014, at 12:49 PM, Simon Peyton Jones 
| wrote:
| 
| > You're right, it's an omission.  The reason for the special case is
| described in the comment on tcInferRho.  Adding OpApp would be a Good
| Thing.  A bit tiresome because we'd need to pass to tcInferApp the
| function to use to reconstruct the result HsExpr (currently foldl
| mkHsApp, in tcInferApp), so that in the OpApp case it'd reconstruct an
| OpApp.
| >
| > Go ahead and do this if you like
| >
| > S
| >
| > | -Original Message-
| > | From: Richard Eisenberg [mailto:e...@cis.upenn.edu]
| > | Sent: 17 July 2014 18:48
| > | To: Simon Peyton Jones
| > | Subject: tcInferRho
| > |
| > | Hi Simon,
| > |
| > | I'm in the process of rejiggering the functions in TcHsType to be
| more
| > | like those in TcExpr, in order to handle the richer type/kind
| language
| > | of my branch.
| > |
| > | I have a question about tcInferRho (TcExpr.lhs:115). It calls
| > | tcInfExpr, which handles three special cases of HsExpr, before
| > | deferring to tcExpr. The three cases are HsVar, HsPar, and HsApp.
| > | What's odd about this is that there are other cases that seem to
| belong
| > | in this group, like OpApp. After all, (x + y) and ((+) x y) should
| > | behave the same in all circumstances, right? I can't find a way to
| > | tickle the omission here, so there may not be a bug, but it certainly
| > | is a little strange. Can you shed any light?
| > |
| > | Thanks!
| > | Richard
| >



Re: Windows breakage -- again

2014-07-22 Thread Páli Gábor János
2014-07-22 11:49 GMT+02:00 Johan Tibell :
> Is this on FreeBSD only or does it happen elsewhere?

I would say it happens everywhere (on 32 bits).  I guess Niklas was
debugging the mingw32 version.


Re: GHC contribution guidelines and infrastructure talk on 6th September at HIW?

2014-07-22 Thread Rob Stewart
On 22 July 2014 11:12, Jost Berthold  wrote:

> We shall make room for it in the programme, possibly in the last session,
> which can turn into the "Haskell release discussion evening".

Fantastic, thank you Jost. Are HIW talks to be recorded? For those
budding GHC to-be contributors unable to attend Gothenburg, a
recording online would be very helpful. Malcolm Wallace has been very
helpful with this in the past.

--
Rob


RE: GHC contribution guidelines and infrastructure talk on 6th September at HIW?

2014-07-22 Thread Jost Berthold
(Sorry for joining this late... I figured we would be in dialogue off 
the list eventually)


Joachim wrote and posted a proposal, and I think this proposal is indeed 
a good idea (and one of the purposes of HIW, definite yes).


We shall make room for it in the programme, possibly in the last 
session, which can turn into the "Haskell release discussion evening".


Best regards
Jost

On 07/22/2014 11:06 AM, ghc-devs-requ...@haskell.org wrote:

Date: Tue, 22 Jul 2014 08:38:22 +
From: Simon Peyton Jones 
To: Mark Lentczner , "ghc-devs@haskell.org"

Subject: RE: GHC contribution guidelines and infrastructure talk on
6th September at HIW?

I think such a discussion would be a Good Thing, and just what HIW is for.

Simon

From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of Mark Lentczner
Sent: 22 July 2014 02:03
To: ghc-devs@haskell.org
Subject: Re: GHC contribution guidelines and infrastructure talk on 6th 
September at HIW?

On a related front... I don't have a talk to give (hence I didn't submit a proposal)... 
But I'd love it if some of us could have a group discussion about coordinating releases, 
and our approach to putting out "Haskell":

In short, we see it as several related pieces (GHC, Cabal, Haddock, core libs, platform, etc...) 
but my guess is that most developers considering using Haskell see it as one thing: "Can I haz 
the Haskellz on my machine? kthxbai?" Therefore, I think we could put some thought into how we 
manage these pieces into a cohesive whole whose release more or less "just works".

Not sure if this should be a "session", a "workshop", a long hallway 
discussion, a night of good food and beer, or what. I'm happy to put some effort into organizing, 
and setting the context for the discussion.

- Mark




Re: Windows breakage -- again

2014-07-22 Thread Johan Tibell
On Tue, Jul 22, 2014 at 9:50 AM, Niklas Larsson  wrote:
> AtomicPrimOps.hs flakes out for:
> fetchAndTest
> fetchNandTest
> fetchOrTest
> fetchXorTest
> casTest
>
> but not for fetchAddSubTest and readWriteTest.
>
> If I step through it, the segfault comes at line 166, it doesn't reach the
> .fetchXXXIntArray function that was called from the thread (at least ghci
> doesn't hit a breakpoint set at it).
>
> GDB says the bad instruction is:
> 4475:f0 8b 4c 24 40   lock mov 0x40(%esp),%ecx

Is this on FreeBSD only or does it happen elsewhere?


Re: RFC: unsafeShrinkMutableByteArray#

2014-07-22 Thread Simon Marlow

On 13/07/14 14:15, Herbert Valerio Riedel wrote:

On 2014-07-12 at 17:40:07 +0200, Simon Marlow wrote:

Yes, this will cause problems in some modes, namely -debug and -prof
that need to be able to scan the heap linearly.


...and I assume we don't want to fall back to a non-zerocopy mode for
-debug & -prof in order to avoid distorting the profiling measurements
either?


I suppose that would be doable.  Not ideal, but doable.  In profiling 
mode you could arrange for the extra allocation to be assigned to 
CCS_OVERHEAD, so that it gets counted as profiling overhead.  You'd 
still have the time overhead of the copy though.



Usually we invoke the
OVERWRITING_CLOSURE() macro which overwrites the original closure with
zero words, but this won't work in your case because you want to keep
the original contents.  So you'll need a version of
OVERWRITING_CLOSURE() that takes the size that you want to retain, and
doesn't overwrite that part of the closure.  This is probably a good
idea anyway, because it might save some work in other places where we
use OVERWRITING_CLOSURE().


I'm not sure I follow. What's the purpose of overwriting the original
closure payload with zeros while in debug/profile mode? (and on what
occasions that would be problematic for a MutableByteArray does it
happen?)


Certain features of the RTS need to be able to scan the contents of the 
heap by linearly traversing the memory.  When there are gaps between 
heap objects, there needs to be a way to find the start of the next heap 
object, so currently when we overwrite an object with a smaller one we 
clear the payload with zeroes.  There are more efficient ways, such as 
overwriting with a special "gap" object, but since the times we need to 
do this are not performance critical, we haven't optimised it. 
Currently we need to do this


 * in debug mode, for heap sanity checking
 * in profiling mode, for biographical profiling

The macro that does this, OVERWRITING_CLOSURE() currently overwrites the 
whole payload of the closure with zeroes, whereas you want to retain 
part of the closure, so you would need a different version of this macro.



I am worried about sizeofMutableByteArray# though.  It wouldn't be
safe to call sizeofMutableByteArray# on the original array, just in
case it was evaluated after the shrink.  You could make things
slightly safer by having unsafeShrinkMutableByteArray# return the new
array, so that you have a safe way to call sizeofMutableByteArray#
after the shrink.  This still doesn't seem very satisfactory to me
though.


...as a somewhat drastic obvious measure, one could change the type-sig
of sizeofMutableByteArray# to

   ::  MutableByteArray# s -> State# s -> (# State# s, Int# #)

and fwiw, I could find only one use-site of sizeofMutableByteArray#
inside ghc.git, so I'm wondering if that primitive is used much anyway.


I think that would definitely be better, if it is possible without too 
much breakage.  Once we have operations that change the size of an 
array, the operation that reads the size should be stateful.
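
The hazard can be simulated in plain Haskell with an IORef — the real discussion is about GHC.Prim primops, which this sketch only imitates; `shrink` and `getSize` are made-up stand-ins:

```haskell
import Data.IORef

-- Model a shrinkable mutable array by its current size only. `shrink`
-- stands in for the proposed unsafeShrinkMutableByteArray#, and
-- `getSize` for a State#-threaded (stateful) size query.
shrink :: IORef Int -> Int -> IO ()
shrink ref n = modifyIORef' ref (min n)

getSize :: IORef Int -> IO Int
getSize = readIORef

main :: IO ()
main = do
  arr <- newIORef 1024
  before <- getSize arr   -- size observed before the shrink
  shrink arr 100
  after  <- getSize arr   -- a stateful read sees the new size
  -- A *pure* size function would let `before` leak past the shrink,
  -- which is exactly why sizeofMutableByteArray# becomes dangerous.
  print (before, after)   -- prints (1024,100)
```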



btw, is it currently safe to call/evaluate sizeofMutableByteArray# on
the original MBA after a unsafeFreezeByteArray# was performed?


Probably safe, but better to avoid doing it if you can.


Otoh, if we are to thread a MutableByteArray# through the call anyway,
can't we just combine shrinking and freezing in one primop (as suggested
below)?


I don't think this makes anything easier.  You still need to overwrite 
the unused part of the array, and sizeofMutableByteArray# is still 
dangerous.


Cheers,
Simon



[...]


PS: maybe unsafeShrinkMutableByteArray# could unsafe-freeze the
  ByteArray# while at it (thus be called something like
  unsafeShrinkAndFreezeMutableByteArray#), as once I know the final
  smaller size I would freeze it anyway right after shrinking.




RE: GHC contribution guidelines and infrastructure talk on 6th September at HIW?

2014-07-22 Thread Simon Peyton Jones
I think such a discussion would be a Good Thing, and just what HIW is for.

Simon

From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of Mark Lentczner
Sent: 22 July 2014 02:03
To: ghc-devs@haskell.org
Subject: Re: GHC contribution guidelines and infrastructure talk on 6th 
September at HIW?

On a related front... I don't have a talk to give (hence I didn't submit a 
proposal)... But I'd love it if some of us could have a group discussion about 
coordinating releases, and our approach to putting out "Haskell":

In short, we see it as several related pieces (GHC, Cabal, Haddock, core libs, 
platform, etc...) but my guess is that most developers considering using 
Haskell see it as one thing: "Can I haz the Haskellz on my machine? kthxbai?" 
Therefore, I think we could put some thought into how we manage these pieces 
into a cohesive whole whose release more or less "just works".

Not sure if this should be a "session", a "workshop", a long hallway 
discussion, a night of good food and beer, or what. I'm happy to put some 
effort into organizing, and setting the context for the discussion.

- Mark


Re: Multi-instance packages status report

2014-07-22 Thread Edward Z . Yang
Excerpts from Joachim Breitner's message of 2014-07-22 08:23:22 +0100:
> [Replying to the list, in case it was sent to me in private by accident]

Oops, thanks.

> thanks for the explanations, it makes it clear to me.
> 
> Does the package key contain the flags used to compile dependencies? In
> the example where it could matter, the flag would change that package’s
> key, so maybe it is redundant.

That is a good question. At the moment, flags are not incorporated, but
they could be.  I think it probably makes more sense to include them,
but it does require accommodation from the dependency solver which
doesn't exist at the moment.

> And just to confirm my understanding: If we had a completely reproducible
> environment, the same key would (conceptually, not practically) imply
> the same IPID, right?

I don't even think that's necessarily conceptually true.  If I am working
on a package in development and I modify the type of one file, the
package key (as currently described) stays the same, but the ABI hash
changes.  I think the overwrite behavior can still be handy in
development situations since it avoids the "lots of old packages"
problem that the GSoC project had to deal with.  Conversely, I don't
think the package key should include something like the hash of the
sources of the source tree, because it is totally possible for differing
sources to be ABI compatible (and thus have the same IPID).  But what
this points to is the need to differentiate between ABI (not unique)
and "true IPID" (which is absolutely, completely unique).

Edward


Call for help on testing integer-gmp2 on non-Linux archs

2014-07-22 Thread Herbert Valerio Riedel
Hello *,

As some of you may have already noticed, there's an attempt[1] in the
works to reimplement integer-gmp in such a way to avoid overriding GMP's
internal memory allocator functions, and thus make it possible to link
GHC/integer-gmp compiled programs with other components linked to libgmp
which break if GMP's memory allocation goes via GHC's GC.  I also hope
this will facilitate shipping GHC bindists for Windows with a dynamically
linked (& unpatched!) GMP library, to reduce LGPL licensing concerns for
resulting GHC compiled programs.

So far, I've only been able to test the code on Linux/i386 and
Linux/amd64 where it works correctly. Now it'd be interesting to know if
integer-gmp2 in its current form works also on non-Linux archs, and if
not, what's needed to make it work. Fwiw, I mostly suspect
linker-related issues.

Therefore, is anyone here interested to help out with making sure
GHC+integer-gmp2 builds on Windows, OSX and so on? If so, please get
into contact with me!

Cheers,
  hvr

 [1]: https://ghc.haskell.org/trac/ghc/ticket/9281
  https://phabricator.haskell.org/D82


Re: Windows breakage -- again

2014-07-22 Thread Niklas Larsson
AtomicPrimOps.hs flakes out for:
fetchAndTest
fetchNandTest
fetchOrTest
fetchXorTest
casTest

but not for fetchAddSubTest and readWriteTest.

If I step through it, the segfault comes at line 166; it doesn't reach
the fetchXXXIntArray function that was called from the thread (at least
ghci doesn't hit a breakpoint set on it).

GDB says the bad instruction is:
4475:f0 8b 4c 24 40   lock mov 0x40(%esp),%ecx
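For what it's worth, that decoding explains the SIGILL: the x86 LOCK
prefix is only defined for a small set of read-modify-write
instructions (ADD, OR, XADD, CMPXCHG, etc.), so a `lock mov` raises an
invalid-opcode exception, which the OS reports as an illegal
instruction. The semantics the failing fetch-or test expects can be
modelled at a high level (an illustrative sketch only, not the
byte-array primop the test actually exercises) with
`atomicModifyIORef'`:

```haskell
import Data.Bits ((.|.))
import Data.IORef

-- A high-level model of a fetch-or operation: atomically OR a value
-- into a cell and return the *old* contents.
fetchOr :: IORef Int -> Int -> IO Int
fetchOr ref x = atomicModifyIORef' ref (\old -> (old .|. x, old))

main :: IO ()
main = do
  ref <- newIORef 0x0F
  old <- fetchOr ref 0xF0
  new <- readIORef ref
  print (old, new)  -- prints (15,255): old value returned, bits OR'd in
```

On 32-bit x86 the real primop must compile down to a LOCK-prefixed
read-modify-write instruction (e.g. `lock or`); a LOCK-prefixed plain
`mov` is never a valid encoding.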


Niklas


2014-07-22 6:53 GMT+02:00 Páli Gábor János :

> 2014-07-21 21:31 GMT+02:00 Johan Tibell :
> > Great. Thanks all for your help!
>
> I am afraid we are not done with this yet.  Yesterday I also
> committed the fix for the FreeBSD platform, but today I noticed that
> the corresponding test case ("AtomicPrimops") is failing due to
> SIGILL, that is, an illegal instruction.  And it has been happening on
> all the 32-bit platforms, including Linux [1], SmartOS [2], and
> Solaris [3].
>
> I do not know yet why it goes wrong.
>
> [1]
> http://haskell.inf.elte.hu/builders/validator1-linux-x86-head/34/10.html
> [2] http://haskell.inf.elte.hu/builders/smartos-x86-head/73/21.html
> [3] http://haskell.inf.elte.hu/builders/solaris-x86-head/116/21.html


Re: GHC contribution guidelines and infrastructure talk on 6th September at HIW?

2014-07-22 Thread Joachim Breitner
Hi Rob,

Am Montag, den 21.07.2014, 22:55 +0100 schrieb Rob Stewart:
> On 18 July 2014 09:01, Joachim Breitner  wrote:
> > Am Freitag, den 18.07.2014, 07:25 + schrieb Simon Peyton Jones:
> >> | On Saturday 6th September is the Haskell Implementers Workshop. There
> >> | has been plenty of discussion over the last 12 months about making
> >> | contributions to GHC less formidable. Is this story going to be told at
> >> | HIW? A talk about revised contribution guidelines and helpful tool
> >> | support might engage those sat on, or peering over, the fence.
> >>
> >> I think that's a great idea.  Maybe Simon M, or Joachim, or Austin,
> >> or Herbert?  Of some coalition thereof
> >
> > I agree, and I’d be available for it, or for joining a coalition.
> 
> A gentle nudge about the idea of a HIW talk on contributing to GHC
> development. I'm glad some people think that this is a good idea.
> However, given that the official deadline for talk proposals has
> already passed, at least an abstract would have to be submitted to the
> HIW committee very soon to be considered. The presentation content can
> of course be put together much closer to the time.

for some reason I assumed you were part of the committee (your mail
sounded to me like “I’m responsible for this event, and would like to
see such a talk”), so I wasn’t paying close attention to the deadline.
But I see that’s not the case...

Registration on EasyChair is closed, so I’ll make a submission
directly to Carter (who spoke in favor of this last Thursday – past the
deadline) and Jost (the chair). Maybe there is still a slot left.


==
Desperately late submission to HIW:

Contributing to GHC
~~~

The core component of the Haskell ecosystem, the Glasgow Haskell
Compiler (GHC) is not only open source, it is also a proper open-source
project relying on the work of volunteers. Despite its age and its
apparent complexity, new contributors are not only welcome but
genuinely useful to the project.

Recently, the project has seen some changes that make it even easier
for you to start hacking on it, more convenient to get your changes
reviewed and harder to break anything: Our repositories have a less
custom setup; a tool called Phabricator is used for efficient and
meme-ridden code review; various quality assurances services detect
breakage and performance regressions early. This extends our existing
tools (trac, the mailing lists) and practices (notes, an extensive test
suite) that keep working on GHC manageable.

In this talk we give an overview of old and new practices and tools,
especially aiming at interested newcomers, lowering the entry barrier to
contributing to GHC.
==

(Side remark: I would have liked to start the summary with “GHC is
attracting ever more contributors”, but according to the graph at
https://github.com/ghc/ghc/graphs/contributors this is not obvious.
There have been higher spikes some years ago. But at least we seem to
have a higher stable base, although lower than in 2011 and 2012.)

 
Greetings,
Joachim


-- 
Joachim “nomeata” Breitner
  m...@joachim-breitner.de • http://www.joachim-breitner.de/
  Jabber: nome...@joachim-breitner.de  • GPG-Key: 0xF0FBF51F
  Debian Developer: nome...@debian.org





Re: Multi-instance packages status report

2014-07-22 Thread Joachim Breitner
[Replying to the list, in case it was sent to me in private by accident]


Hi Edward,

Am Montag, den 21.07.2014, 23:25 +0100 schrieb Edward Z.Yang:
> Excerpts from Joachim Breitner's message of 2014-07-21 21:06:49 +0100:
> > maybe a stupid question, but how does the package key relate to the hash
> > that "ghc-pkg" shows for package?
> 
> Fine question---this is definitely something that is different from the
> GSoC project.  The short answer is, the current hash shown in ghc-pkg is
> the ABI hash associated with the InstalledPackageId, which is computed
> after GHC is done compiling your code; whereas the package key is a
> hash of the dependency graph, which can be done before compilation.
> 
> The longer answer is we now have three ID-like things, in order of
> increasing specificity:
> 
> Package IDs: containers-0.9
> These are the "user visible" things that we expect users to talk
> about in a Cabal file
> Package Keys: md5("containers-0.9" + transitive deps)
> These are the identifiers the compiler cares about: they are used
> for type equality, and contain a bit more detail than we expect
> a user to normally need---however, a user might need to refer to
> this to disambiguate in some situations.
> Installed Package IDs: ABI hash of compiled code
> This uniquely identifies an installed package in the database, up
> to ABI.
> 
> So, if two packages have the same IPID, their package keys are
> guaranteed to be the same, but not vice versa. (And likewise for package
> IDs.)
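To make the scheme concrete, here is a toy Haskell model of the
package-key idea described above. Everything here is hypothetical
illustration: the real implementation lives in Cabal/GHC and uses MD5,
whereas this sketch substitutes a simple FNV-style stand-in hash.

```haskell
import Data.Bits (xor)
import Data.Char (ord)
import Data.List (sort)
import Numeric (showHex)

type PackageId  = String
type PackageKey = String

-- Toy stand-in for MD5: an FNV-1a-style fold over the input string.
toyHash :: String -> String
toyHash = flip showHex "" . foldl step 0xcbf29ce484222325
  where
    step :: Integer -> Char -> Integer
    step h c = (h `xor` fromIntegral (ord c)) * 0x100000001b3 `mod` 2^64

-- A package key hashes the package ID together with the keys of its
-- (sorted) transitive dependencies, so it can be computed *before*
-- compilation, unlike the ABI-based InstalledPackageId.
packageKey :: PackageId -> [PackageKey] -> PackageKey
packageKey pid depKeys = toyHash (pid ++ concat (sort depKeys))

main :: IO ()
main = do
  let baseKey = packageKey "base-4.7.1.0" []
      contA   = packageKey "containers-0.9" [baseKey]
      -- same package ID, but built against a different dependency:
      contB   = packageKey "containers-0.9" [packageKey "base-4.7.1.0-alt" []]
  print (contA /= contB)  -- True: differing dep graphs yield differing keys
```

The point the sketch makes is the one from the email: two builds of the
same package ID get different keys as soon as their dependency graphs
differ, without the compiler having to produce (or hash) any ABI first.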


thanks for the explanations, that makes it clear to me.

Does the package key contain the flags used to compile dependencies?
In the example where it could matter, the flag would change that
package’s key, so maybe it is redundant.

And just to confirm my understanding: if we had a completely
reproducible environment, the same key would (conceptually, not
practically) imply the same IPID, right?

Greetings,
Joachim



-- 
Joachim “nomeata” Breitner
  m...@joachim-breitner.de • http://www.joachim-breitner.de/
  Jabber: nome...@joachim-breitner.de  • GPG-Key: 0xF0FBF51F
  Debian Developer: nome...@debian.org


