Re: Broken Data.Data instances

2014-08-13 Thread Alan Kim Zimmerman
And I dipped my toes into the phabricator water, and uploaded a diff to
https://phabricator.haskell.org/D153

I left the lines long for now, so that it is clear that I simply added
parameters to existing type signatures.


On Tue, Aug 12, 2014 at 10:51 PM, Alan  Kim Zimmerman alan.z...@gmail.com
wrote:

 Status update

 I have worked through a proof-of-concept update to the GHC AST whereby the
 type is provided as a parameter to each data type. This was basically a
 mechanical process of changing type signatures, and required very few
 actual code changes, the only ones being to initialise the placeholder types.

 The enabling types are


 type PostTcType = Type  -- Used for slots in the abstract syntax
                         -- where we want to keep a slot for a type
                         -- to be added by the type checker... but
                         -- before typechecking it's just bogus
 type PreTcType = ()     -- used before typechecking


 class PlaceHolderType a where
   placeHolderType :: a

 instance PlaceHolderType PostTcType where
   placeHolderType = panic "Evaluated the place holder for a PostTcType"

 instance PlaceHolderType PreTcType where
   placeHolderType = ()

 These are used to replace all instances of PostTcType in the hsSyn types.
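 As a sketch of what the parameterization looks like (on a made-up miniature AST, not GHC's real types; HsExpr and its fields here are illustrative only), every former PostTcType field becomes a type parameter: () before typechecking, Type after:

```haskell
{-# LANGUAGE DeriveDataTypeable #-}
import Data.Data (Data)

-- Illustrative miniature AST; GHC's real HsExpr is much larger.
-- The type-checker slot is a parameter instead of a hard-wired
-- PostTcType, so pre-typechecking trees can put () there.
data HsExpr ty
  = HsVar String
  | HsApp (HsExpr ty) (HsExpr ty) ty  -- slot filled in by the TC
  deriving (Show, Data)

-- Before typechecking the slot is (), and SYB traversals stay safe
-- because () has a perfectly good Data instance.
parsed :: HsExpr ()
parsed = HsApp (HsVar "f") (HsVar "x") ()

main :: IO ()
main = print parsed
```

 The same tree after typechecking would simply be an `HsExpr Type`, with no bottoms hiding in the slots.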

 The change was applied against HEAD as of last Friday, and can be found
 here:

 https://github.com/alanz/ghc/tree/wip/landmine-param
 https://github.com/alanz/haddock/tree/wip/landmine-param

 They pass 'sh validate' with GHC 7.6.3, and compile against GHC 7.8.3. I
 have not tried to validate the latter yet, but have no reason to expect failure.


 Can I please get some feedback as to whether this is a worthwhile change?

 It is the first step toward a generic-traversal-safe AST.

 Regards
   Alan


 On Mon, Jul 28, 2014 at 5:45 PM, Alan  Kim Zimmerman alan.z...@gmail.com
  wrote:

 FYI I edited the paste at http://lpaste.net/108262 to show the problem


 On Mon, Jul 28, 2014 at 5:41 PM, Alan  Kim Zimmerman 
 alan.z...@gmail.com wrote:

 I already tried that; the syntax does not seem to allow it.

 I suspect some higher form of sorcery will be required, as alluded to
 here
 http://stackoverflow.com/questions/14133121/can-i-constrain-a-type-family

 Alan


 On Mon, Jul 28, 2014 at 4:55 PM, p.k.f.holzensp...@utwente.nl wrote:

  Dear Alan,



 I would think you would want to constrain the result, i.e.



 type family (Data (PostTcType a)) = PostTcType a where …



 The Data-instance of ‘a’ doesn’t give you much if you have a
 ‘PostTcType a’.



 Your point about SYB-recognition of WrongPhase is, of course, a good
 one ;)



 Regards,

 Philip







 *From:* Alan  Kim Zimmerman [mailto:alan.z...@gmail.com]
 *Sent:* maandag 28 juli 2014 14:10
 *To:* Holzenspies, P.K.F. (EWI)
 *Cc:* Simon Peyton Jones; Edward Kmett; ghc-devs@haskell.org

 *Subject:* Re: Broken Data.Data instances



 Philip

 I think the main reason for the WrongPhase thing is to have something
 that explicitly has a Data and Typeable instance, to allow generic (SYB)
 traversal. If we can get by without this, so much the better.

 On a related note, is there any way to constrain the 'a' in

 type family PostTcType a where
   PostTcType Id    = TcType
   PostTcType other = WrongPhaseType

   to have an instance of Data?

 I am experimenting with traversals over my earlier paste, and got stuck
 here (which is the reason the Show instances were commented out in the
 original).
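 One way around this sticking point (a sketch using ConstraintKinds; the toy types and the DataId synonym here are illustrative, though GHC later adopted a similar constraint synonym) is to leave the family unconstrained and instead demand Data at the use sites:

```haskell
{-# LANGUAGE TypeFamilies, ConstraintKinds, FlexibleContexts,
             DeriveDataTypeable #-}
import Data.Data  (Data, toConstr)
import Data.Proxy (Proxy (..))

-- Toy stand-ins for GHC's Id and TcType, just for this sketch.
data Id             = MkId           deriving Data
data TcType         = TcType         deriving Data
data WrongPhaseType = WrongPhaseType deriving Data

type family PostTcType a where
  PostTcType Id    = TcType
  PostTcType other = WrongPhaseType

-- The family itself cannot carry a Data constraint, but a
-- constraint synonym lets every generic traversal demand one:
type DataId id = (Data id, Data (PostTcType id))

-- A tiny "traversal": show the constructor of the phase-indexed slot.
constrName :: DataId id => Proxy id -> PostTcType id -> String
constrName _ = show . toConstr

main :: IO ()
main = putStrLn (constrName (Proxy :: Proxy Id) TcType)
```

 The Proxy is needed because PostTcType is not injective, so the phase index cannot be inferred from the slot's value alone.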

 Alan





 On Mon, Jul 28, 2014 at 12:30 PM, p.k.f.holzensp...@utwente.nl wrote:

 Sorry about that… I’m having it out with my terminal server and the
 server seems to be winning. Here’s another go:



 I always read the () as “there’s nothing meaningful to stick in here,
 but I have to stick in something” so I don’t necessarily want the
 WrongPhase-thing. There is very old commentary stating it would be lovely
 if someone could expose the PostTcType as a parameter of the AST-types, but
 that there are so many types and constructors, that it’s a boring chore to
 do. Actually, I was hoping haRe would come up to speed to be able to do
 this. That being said, I think Simon’s idea to turn PostTcType into a
 type-family is a better way altogether; it also documents intent, i.e. ()
 may not say so much, but PostTcType RdrName says quite a lot.



 Simon commented that a lot of the internal structures aren’t trees, but
 cyclic graphs, e.g. the TyCon for Maybe references the DataCons for Just
 and Nothing, which again refer to the TyCon for Maybe. I was wondering
 whether it would be possible to make stateful lenses for this. Of course,
 for specific cases, we could do this, but I wonder if it is also possible
 to have lenses remember the things they visited and not visit them twice.
 Any ideas on this, Edward?



 Regards,

 Philip











 *From:* Alan  Kim Zimmerman [mailto:alan.z...@gmail.com]

 *Sent:* maandag 28 juli 2014 11:14

 *To:* Simon Peyton Jones
 *Cc:* Edward Kmett; Holzenspies, 

RE: Broken Data.Data instances

2014-08-13 Thread p.k.f.holzenspies
Dear Alan,

I’ve had a look at the diffs on Phabricator. They’re looking good. I have a few 
comments / questions:

1) As you said, the renamer and typechecker are heavily interwoven, but when 
you *know* that you’re between renamer and typechecker (i.e. when things have 
‘Name’s, but not ‘Id’s), isn’t it better to choose the PreTcType as argument? 
(Basically, look for any occurrence of “Name PostTcType” and replace with Pre.)

2) I saw your point about being able to distinguish PreTcType from () in 
SYB-traversals, but you have now defined PreTcType as a synonym for (). With an 
eye on the maximum line-width of 80 characters and these things being explicit 
everywhere as a type parameter (as opposed to a type family over the exposed 
id-parameter), how much added value is there still in having the names 
PreTcType and PostTcType? Would “()” and “Type” not be as clear? I ask, because 
when I started looking at GHC, I was overwhelmed with all the names for things 
in there, most of which then turn out to be different names for the same thing. 
The main reason to call the thing PostTcType in the first place was to give 
some kind of warning that there would be nothing there before TC.

3) The variable name “ptt” is a bit misleading to me. I would use “ty”.

4) In the cases of the types that have recently been parameterized in what they 
contain, is there a reason to have the ty-argument *after* the 
content-argument? E.g. why is it “LGRHS RdrName (LHsExpr RdrName PreTcType) 
PreTcType” instead of “LGRHS RdrName PreTcType (LHsExpr RdrName PreTcType)”? 
This may very well be a tiny stylistic thing, but it’s worth thinking about.

5) I much prefer deleting code over commenting it out. I understand the urge, 
but if you don’t remove these lines before your final commit, they will become 
noise in the long term. Versioning systems preserve the code for you. (Example: 
Convert.void)

Regards,
Philip






From: Alan  Kim Zimmerman [mailto:alan.z...@gmail.com]
Sent: woensdag 13 augustus 2014 8:50
To: Holzenspies, P.K.F. (EWI)
Cc: Simon Peyton Jones; Edward Kmett; ghc-devs@haskell.org
Subject: Re: Broken Data.Data instances

[...]

Re: Broken Data.Data instances

2014-08-13 Thread Alan Kim Zimmerman
Hi Philip

Thanks for the feedback.

Firstly, I see this as a draft change, a proof of concept, and as such I
deliberately tried to keep things obvious until it had been fully worked
through. It helped in managing my own confusion to limit the changes to
things that either HAD to change (PostTcType), or the introduction of new
things that did not previously exist (ptt, PreTcType). Naming them the way
I did let me make sure that I did not end up making cascading changes to
currently good code whenever I hit a sticky point.

This definitely helped in the renamer code.

It also makes it clearer to current reviewers that this is in fact a
straightforward change.

If there is a consensus that this is something worth doing, then I agree on
your proposed changes and will work them through.

On the void thing: I only realised afterwards what was happening, and I am
now not sure whether it is better to keep the new placeHolderType values or
restore void as a synonym for them. It must definitely go if it is not used,
though.

Alan


On Wed, Aug 13, 2014 at 12:58 PM, p.k.f.holzensp...@utwente.nl wrote:

 [...]

Re: HEADS UP: Running cabal install with the latest GHC

2014-08-13 Thread Johan Tibell
Edward made some changes so that GHC 7.10 is backwards compatible with
older cabals (older cabals just can't use the new goodies, that's all),
which means that we won't need an earlier release. I'm still aiming for
another major release before 7.10. When is 7.10 scheduled for?


On Fri, Aug 8, 2014 at 11:17 PM, Edward Z. Yang ezy...@mit.edu wrote:

 They would be:

 2b50d0a Fix regression for V09 test library handling.
 d3a696a Disable reinstalls with distinct package keys for now.
 1d33c8f Add $pkgkey template variable, and use it for install paths.
 41610a0 Implement package keys, distinguishing packages built with
 different deps/flags

 Unfortunately, these patches fuzz a bit without this next patch:

 62450f9 Implement reexported-modules field, towards fixing GHC bug
 #8407.

 When you include that patch, there is only one piece of fuzz from
 41610a0.

 One important caveat is that these patches do rearrange some of the API,
 so you wouldn't be able to build GHC 7.8 against these patches.  So
 maybe we don't want to.

 If we had a way of releasing experimental, non-default picked up
 versions, that would be nice (i.e. Cabal 1.21). No warranty, but
 easy enough for GHC devs to say 'cabal install Cabal-1.21
 cabal-install-1.21' or something.

 Edward

 Excerpts from Johan Tibell's message of 2014-08-08 22:02:25 +0100:
  I'm not against putting out another release, but I'd prefer to make it on
  top of 1.20 if possible. Making a 1.22 release takes much more work (RC
  time, etc). Which are the patches in question? Can they easily be
  cherry-picked onto the 1.20 branch? Is there any risk of breakage?
 
  On Fri, Aug 8, 2014 at 2:00 PM, Edward Z. Yang ezy...@mit.edu wrote:
 
   Hey all,
  
   SPJ pointed out to me today that if you try to run:
  
   cabal install --with-ghc=/path/to/inplace/bin/ghc-stage2
  
   with the latest GHC HEAD, this probably will not actually work, because
   your system installed version of Cabal is probably too old to deal with
   the new package key stuff in HEAD.  So, how do you get a version
   of cabal-install (and Cabal) which is new enough to do what you need
   it to?
  
   The trick is to compile Cabal using your /old/ GHC. Step-by-step, this
   involves cd'ing into libraries/Cabal/Cabal and running `cabal install`
   (or install it in a sandbox, if you like) and then cd'ing to
   libraries/Cabal/cabal-install and cabal install'ing that.
  
   Cabal devs, is cutting a new release of Cabal and cabal-install in the
   near future possible? In that case, users can just cabal update; cabal
   install cabal-install and get a version of Cabal which will work for
   them.
  
   Cheers,
   Edward
   ___
   cabal-devel mailing list
   cabal-de...@haskell.org
   http://www.haskell.org/mailman/listinfo/cabal-devel
  

___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: HEADS UP: Running cabal install with the latest GHC

2014-08-13 Thread Mikhail Glushenkov
Hi,

On 13 August 2014 16:12, Johan Tibell johan.tib...@gmail.com wrote:
 I'm still aiming for another
 major release before 7.10? When's 7.10 scheduled before?

End of the year, I think.


Re: HEADS UP: Running cabal install with the latest GHC

2014-08-13 Thread Mikhail Glushenkov
Hi,

On 13 August 2014 16:22, Mikhail Glushenkov
the.dead.shall.r...@gmail.com wrote:
 End of the year, I think.

Correction: https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.10.1
says February 2015.


How's the integration of DWARF support coming along?

2014-08-13 Thread Johan Tibell
Hi,

How's the integration of DWARF support coming along? It's probably one of
the most important improvements to the runtime in quite some time, since it
unlocks *two* important features, namely

 * trustworthy profiling (using e.g. Linux perf events and other
low-overhead, code preserving, sampling profilers), and
 * stack traces.

The former is really important to move our core libraries performance up a
notch. Right now -prof is too invasive for it to be useful when evaluating
the hotspots in these libraries (which are already often heavily tuned).

The latter one is really important for real-life Haskell on the server,
where you can sometimes get a crash that only happens once a day
under very specific conditions. Knowing where the crash happens is then
*very* useful.

-- Johan


Re: How's the integration of DWARF support coming along?

2014-08-13 Thread Johan Tibell
On Wed, Aug 13, 2014 at 5:07 PM, Tuncer Ayaz tuncer.a...@gmail.com wrote:

 On Wed, Aug 13, 2014 at 5:02 PM, Johan Tibell wrote:
  [...]

 Doesn't it also enable using gdb and lldb, or is there another missing
 piece?


No, those should also work. It enables *a lot* of generic infrastructure
that programmers have written over the years.


Re: How's the integration of DWARF support coming along?

2014-08-13 Thread Ömer Sinan Ağacan
Is this stack trace support different from what we have currently?
(e.g. the one implemented with GHC.Stack and cost centres)

---
Ömer Sinan Ağacan
http://osa1.net
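For reference, the existing cost-centre mechanism Ömer mentions looks like this; the stack comes back empty unless the program is compiled with -prof -fprof-auto, which is exactly the limitation the DWARF work would lift (a minimal sketch against base's GHC.Stack):

```haskell
import GHC.Stack (currentCallStack)

main :: IO ()
main = do
  -- Cost-centre call stack of the caller; this is [] unless the
  -- program was built with -prof -fprof-auto.
  stack <- currentCallStack
  print stack
```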


2014-08-13 18:02 GMT+03:00 Johan Tibell johan.tib...@gmail.com:
 [...]



Re: How's the integration of DWARF support coming along?

2014-08-13 Thread Ömer Sinan Ağacan
Will generated stack traces be different that

---
Ömer Sinan Ağacan
http://osa1.net


2014-08-13 19:56 GMT+03:00 Johan Tibell johan.tib...@gmail.com:
 Yes, it doesn't use any code modification, so it doesn't have runtime
 overhead (except when generating the actual trace) or interfere with
 compiler optimizations. In other words you can actually have it enabled at
 all times. It only requires that you compile with -g, just like with a C
 compiler.


 On Wed, Aug 13, 2014 at 6:45 PM, Ömer Sinan Ağacan omeraga...@gmail.com
 wrote:

 [...]




Re: How's the integration of DWARF support coming along?

2014-08-13 Thread Ömer Sinan Ağacan
Sorry for my previous email. (used a gmail shortcut by mistake)

We won't have stacks as we have in imperative (without TCO) and strict
languages, so we still need some kind of emulation, and I think this
means some extra run-time operations. I'm wondering about two things:

1) Do we still get the same traces as we get using GHC.Stack right now?
2) If yes, then how can we have that without any runtime cost?

Thanks and sorry again for my previous email.

---
Ömer Sinan Ağacan
http://osa1.net


2014-08-13 20:08 GMT+03:00 Ömer Sinan Ağacan omeraga...@gmail.com:
 [...]




Re: How's the integration of DWARF support coming along?

2014-08-13 Thread Johan Tibell
Without any overhead we'll get the runtime stack trace, which isn't exactly
the same as what we can get with emulation, but has the benefit that we can
leave it on in all of our shipped code if we like. The latter is a really
crucial property for stack traces to be widely useful.


On Wed, Aug 13, 2014 at 7:13 PM, Ömer Sinan Ağacan omeraga...@gmail.com
wrote:

 [...]
 



Re: How's the integration of DWARF support coming along?

2014-08-13 Thread Arash Rouhani

Hi Johan!

I haven't done much lately (just been lazy); I've tried to benchmark my 
results but I don't get any sensible results at all yet.


Last time Peter said he's working on a more portable way to read DWARF 
information that doesn't require Linux. But I'm sure he'll give a more 
accurate update than me soon in this mail thread.


As for stack traces, I don't think there are any big tasks left, but I'll 
summarize what I have in mind:


 * The Haskell interface is done and I've iterated on it a bit, so it's
   in a decent shape at least. Some parts still need testing.
 * I wish I could implement the `forceCaseContinuation` that I've
   described in my thesis. If someone is good with code generation (I
   just suck at it, it's probably simple) and is willing to assist me a
   bit, please say so. :)
 * I tried benchmarking; I gave up after not getting any useful results.
 * I'm unfortunately totally incapable of helping out with DWARF debug
   data generation; only Peter knows that part, particularly since I never
   grasped his theoretical framework of causality in Haskell.
 * Peter and I have finally agreed on a simple and sensible way to
   implement `catchWithStack` that has almost all the good properties you
   would like. I just need to implement it and test it. I can definitely
   man up and implement this. :)
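A minimal sketch of the shape a `catchWithStack` could take (hypothetical: the Stack type here is just a list of rendered frames and is stubbed as empty, since the real representation is whatever Peter and Arash agreed on, and the handler is specialised to ArithException for simplicity):

```haskell
import Control.Exception (ArithException, catch, evaluate)

-- Hypothetical stack representation: frames rendered to strings.
type Stack = [String]

-- Run an action; on exception, hand the handler the exception
-- together with the captured stack (stubbed as [] in this sketch).
catchWithStack :: IO a -> (ArithException -> Stack -> IO a) -> IO a
catchWithStack act handler = act `catch` \e -> handler e []

main :: IO ()
main = do
  r <- catchWithStack (evaluate (1 `div` 0 :: Int)) $ \e stack -> do
    putStrLn ("caught: " ++ show e)
    print stack
    return 0
  print r
```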

Here's my master's thesis btw [1]; it should answer Ömer's question of how 
we retrieve a stack from a language you'd think won't have a stack. :)


Cheers,
Arash

[1]: http://arashrouhani.com/papers/master-thesis.pdf




On 2014-08-13 17:02, Johan Tibell wrote:

[...]




Re: How's the integration of DWARF support coming along?

2014-08-13 Thread Johan Tibell
What's the minimal amount of work we need to do to just get the DWARF data
in the codegen by 7.10 (RC late December), so we can start using e.g. Linux
perf events to profile Haskell programs?


On Wed, Aug 13, 2014 at 7:31 PM, Arash Rouhani rar...@student.chalmers.se
wrote:

 [snip]


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: making ./validate run tests on all CPUs by default

2014-08-13 Thread Sergei Trofimovich
On Wed, 13 Aug 2014 11:39:56 +0200
Tuncer Ayaz tuncer.a...@gmail.com wrote:

 On Tue, Aug 12, 2014 at 10:31 PM, Sergei Trofimovich wrote:
  Good evening all!
 
  Currently, when run without any parameters, the './validate' script:
  - builds ghc using 2 parallel jobs
  - runs the testsuite using 2 parallel jobs
 
  I propose to change the default value to the number of available CPUs:
  - build ghc using N+1 parallel jobs
  - run testsuite using N+1 parallel jobs
 
  Pros:
  - first-time users will get faster ./validate
  - seasoned users will need less tweaking for buildbots
 
  Cons:
  - for imbalanced boxes (32 cores, 8GB RAM) it might
 be quite painful to drag the box out of swap
 
  What do you think about it?
 
 Isn't the memory use also a problem on boxes with a much lower
 number of cores (e.g. 7.8 space leak(s))?
 
 On one machine with 1GB per core, I had to limit cabal install's
 parallelism when using 7.8.

It's true in general, but I would not expect such massive growth
on the ghc source. Currently -Rghc-timing shows ~300MB per ghc process
on amd64.

The fallout examples are HsSyn and cabal's PackageDescription modules.

ghc's build system is a bit different from Cabal's:
- Cabal runs one 'ghc --make' instance for a whole package.
  It leads to massive RAM usage in case of a multitude of modules
  (highlighting-kate and qthaskell come to mind).
- ghc's build system uses one 'ghc -c' execution per .hs file (roughly)

 Assuming the patch goes in, is there a way to limit parallel jobs
 on the command line?

The mechanism to set the limit manually is the same as before:
CPUS=8 ./validate

It's the default that is proposed to be changed.
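
As a rough sketch, and assuming `getconf` is available (this is not validate's actual code), the proposed N+1 default could coexist with the manual override like this:

```shell
#!/bin/sh
# Sketch of the proposed default (not the real validate script):
# honour an explicit CPUS=... override, otherwise use detected CPUs + 1.
detected=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
threads=${CPUS:-$((detected + 1))}
echo "using $threads parallel jobs"
```

An explicit `CPUS=8 ./validate` would then keep behaving exactly as today.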

-- 

  Sergei


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: How's the integration of DWARF support coming along?

2014-08-13 Thread Arash Rouhani
Peter will have to answer that. But it seemed to me that it has been 
working fine all the time. I suppose it's just a matter of resolving merge 
conflicts. There were some refactorings he wanted to do. In addition to 
this there will also be some packaging issues, I suppose. I'm hoping Peter 
will answer in this mail thread soon, since he knows this much better.


/Arash

On 2014-08-13 20:01, Johan Tibell wrote:
What's the minimal amount of work we need to do to just get the DWARF 
data in the codegen by 7.10 (RC late December), so we can start using 
e.g. Linux perf events to profile Haskell programs?



On Wed, Aug 13, 2014 at 7:31 PM, Arash Rouhani 
rar...@student.chalmers.se wrote:

[snip]




___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: How's the integration of DWARF support coming along?

2014-08-13 Thread Peter Wortmann


At this point I have a bit more time on my hands again (modulo post-thesis 
vacations), but we are basically still in “review hell”.

I think “just” for perf_events support we’d need the following patches[1]:
1. Source notes (Core support)
2. Source notes (CorePrep & Stg support)
3. Source notes (Cmm support)
4. Tick scopes
5. Debug data extraction (NCG support)
6. Generate .loc/.file directives
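
For reference, the directives in item 6 are the standard GAS debug annotations from which the assembler builds the DWARF line table; roughly (illustrative file/line numbers):

```
.file 1 "Fib.hs"     # declare source file number 1
.loc  1 42 0         # instructions that follow map to Fib.hs, line 42
                     # ...generated code for that source line...
```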

We have a basic “okay” from the Simons up to number 2 (conditional on better 
documentation). Number 4 sticks out because Simon Marlow wanted to have a 
closer look at it - this is basically about how to maintain source ticks in a 
robust fashion on the Cmm level (see also section 5.5 of my thesis[2]).

Meanwhile I have ported NCG DWARF generation over to Mac OS, and am working on 
reviving LLVM support. My plan was to check that I didn’t accidentally break 
Linux support, then push for review again in a week or so (Phab?).

Greetings,
  Peter

[1] https://github.com/scpmw/ghc/commits/profiling-import
[2] http://www.personal.leeds.ac.uk/~scpmw/static/thesis.pdf

On 13 Aug 2014, at 20:01, Johan Tibell johan.tib...@gmail.com wrote:

What's the minimal amount of work we need to do to just get the DWARF data in 
the codegen by 7.10 (RC late December), so we can start using e.g. Linux perf 
events to profile Haskell programs?


On Wed, Aug 13, 2014 at 7:31 PM, Arash Rouhani 
rar...@student.chalmers.se wrote:

[snip]



___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: How's the integration of DWARF support coming along?

2014-08-13 Thread Peter Wortmann

Johan Tibell wrote:
Do you mind expanding on what tick scopes are? It sounds scarily like something 
that happens at runtime. :)

It’s a pretty basic problem - for Core we can always walk the tree upwards to 
find some source ticks that might be useful. Cmm on the other hand is flat: 
Given one block without any annotations on its own, there is no robust way we 
could look around for debugging information.

This is especially tricky because Cmm stages want to be able to liberally add 
or remove blocks. So let’s say we have an extra GC block added: Which source 
location should we see as associated with it? And if two blocks are combined 
using common block elimination: What is now the best source location? And how 
do we express all this in a way that won’t make code generation more 
complicated? The latter is an important consideration, because code generation 
is very irregular in how it treats code - often alternating between 
accumulating it in a monad and passing it around by hand.

I have found it quite tricky to find a good solution in this design space - the 
current idea is that we associate every piece of generated Cmm with a “tick 
scope”, which decides how far a tick will “apply”. So for example a GC block 
would be generated using the same tick scope as the function’s entry block, and 
therefore will get all ticks associated with the function’s top level, which is 
probably the best choice. On the other hand, for merging blocks we can 
“combine” the scopes in a way that guarantees that we find (at least) the same 
ticks as before, therefore losing no information.
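
As a rough illustration, the two operations described above might look like the following toy model (my own sketch; GHC's actual representation differs):

```haskell
import Data.List (union)

-- Toy model of tick scopes: a scope is just the set of source ticks
-- that "apply" to a Cmm block.
newtype TickScope = TickScope [String]
  deriving (Eq, Show)

-- A generated block (e.g. a GC check) reuses the scope of the
-- function's entry block, so it inherits the top-level ticks.
inheritScope :: TickScope -> TickScope
inheritScope = id

-- Common block elimination: the merged block's scope must find
-- (at least) the same ticks as either original block did.
combineScopes :: TickScope -> TickScope -> TickScope
combineScopes (TickScope a) (TickScope b) = TickScope (a `union` b)

main :: IO ()
main = print (combineScopes (TickScope ["Fib.hs:3"]) (TickScope ["Fib.hs:7"]))
```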

And yes, this design could be simplified somewhat for pure DWARF generation. 
After all, for that particular purpose every tick scope will just boil down to 
a single source location anyway. So we could simply replace scopes with the 
source link right away. But I think it would come down to about the same code 
complexity, plus having a robust structure around makes it easier to carry 
along extra information such as unwind information, extra source ticks or the 
generating Core.

Greetings,
  Peter

On Wed, Aug 13, 2014 at 8:49 PM, Peter Wortmann sc...@leeds.ac.uk wrote:

[snip]

Building HEAD (e83e873d) on mips64el: unknown package: old-locale-1.0.0.6

2014-08-13 Thread Nikita Karetnikov
$ git clone git://github.com/ghc/ghc.git ghc-github
$ cd ghc-github
$ ./sync-all get
$ perl boot
$ ./configure
$ make

[…]

inplace/bin/ghc-stage1 -this-package-key rts -shared -dynamic -dynload deploy 
-no-auto-link-packages -Lrts/dist/build -lffi -optl-Wl,-rpath 
-optl-Wl,'$ORIGIN' -optl-Wl,-zorigin `cat rts/dist/libs.depend` 
rts/dist/build/Adjustor.dyn_o rts/dist/build/Arena.dyn_o 
rts/dist/build/Capability.dyn_o rts/dist/build/CheckUnload.dyn_o 
rts/dist/build/ClosureFlags.dyn_o rts/dist/build/Disassembler.dyn_o 
rts/dist/build/FileLock.dyn_o rts/dist/build/Globals.dyn_o 
rts/dist/build/Hash.dyn_o rts/dist/build/Hpc.dyn_o rts/dist/build/HsFFI.dyn_o 
rts/dist/build/Inlines.dyn_o rts/dist/build/Interpreter.dyn_o 
rts/dist/build/LdvProfile.dyn_o rts/dist/build/Linker.dyn_o 
rts/dist/build/Messages.dyn_o rts/dist/build/OldARMAtomic.dyn_o 
rts/dist/build/Papi.dyn_o rts/dist/build/Printer.dyn_o 
rts/dist/build/ProfHeap.dyn_o rts/dist/build/Profiling.dyn_o 
rts/dist/build/Proftimer.dyn_o rts/dist/build/RaiseAsync.dyn_o 
rts/dist/build/RetainerProfile.dyn_o rts/dist/build/RetainerSet.dyn_o 
rts/dist/build/RtsAPI.dyn_o rts/dist/build/RtsDllMain.dyn_o 
rts/dist/build/RtsFlags.dyn_o rts/dist/build/RtsMain.dyn_o 
rts/dist/build/RtsMessages.dyn_o rts/dist/build/RtsStartup.dyn_o 
rts/dist/build/RtsUtils.dyn_o rts/dist/build/STM.dyn_o 
rts/dist/build/Schedule.dyn_o rts/dist/build/Sparks.dyn_o 
rts/dist/build/Stable.dyn_o rts/dist/build/Stats.dyn_o 
rts/dist/build/StgCRun.dyn_o rts/dist/build/StgPrimFloat.dyn_o 
rts/dist/build/Task.dyn_o rts/dist/build/ThreadLabels.dyn_o 
rts/dist/build/ThreadPaused.dyn_o rts/dist/build/Threads.dyn_o 
rts/dist/build/Ticky.dyn_o rts/dist/build/Timer.dyn_o 
rts/dist/build/Trace.dyn_o rts/dist/build/WSDeque.dyn_o 
rts/dist/build/Weak.dyn_o rts/dist/build/hooks/FlagDefaults.dyn_o 
rts/dist/build/hooks/MallocFail.dyn_o rts/dist/build/hooks/OnExit.dyn_o 
rts/dist/build/hooks/OutOfHeap.dyn_o rts/dist/build/hooks/StackOverflow.dyn_o 
rts/dist/build/sm/BlockAlloc.dyn_o rts/dist/build/sm/Compact.dyn_o 
rts/dist/build/sm/Evac.dyn_o rts/dist/build/sm/GC.dyn_o 
rts/dist/build/sm/GCAux.dyn_o rts/dist/build/sm/GCUtils.dyn_o 
rts/dist/build/sm/MBlock.dyn_o rts/dist/build/sm/MarkWeak.dyn_o 
rts/dist/build/sm/Sanity.dyn_o rts/dist/build/sm/Scav.dyn_o 
rts/dist/build/sm/Storage.dyn_o rts/dist/build/sm/Sweep.dyn_o 
rts/dist/build/eventlog/EventLog.dyn_o rts/dist/build/posix/GetEnv.dyn_o 
rts/dist/build/posix/GetTime.dyn_o rts/dist/build/posix/Itimer.dyn_o 
rts/dist/build/posix/OSMem.dyn_o rts/dist/build/posix/OSThreads.dyn_o 
rts/dist/build/posix/Select.dyn_o rts/dist/build/posix/Signals.dyn_o 
rts/dist/build/posix/TTY.dyn_o   rts/dist/build/Apply.dyn_o 
rts/dist/build/Exception.dyn_o rts/dist/build/HeapStackCheck.dyn_o 
rts/dist/build/PrimOps.dyn_o rts/dist/build/StgMiscClosures.dyn_o 
rts/dist/build/StgStartup.dyn_o rts/dist/build/StgStdThunks.dyn_o 
rts/dist/build/Updates.dyn_o rts/dist/build/AutoApply.dyn_o  -fPIC -dynamic  
-H32m -O -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header 
-Iincludes/dist-ghcconstants/header -Irts -Irts/dist/build -DCOMPILING_RTS 
-this-package-key rts -optc-DNOSMP -dcmm-lint  -i -irts -irts/dist/build 
-irts/dist/build/autogen -Irts/dist/build -Irts/dist/build/autogen   
-O2-fno-use-rpaths  -optl-Wl,-zorigin  -o 
rts/dist/build/libHSrts-ghc7.9.20140809.so
/usr/bin/ld: rts/dist/build/Adjustor.dyn_o: relocation R_MIPS_HI16 against 
`__gnu_local_gp' can not be used when making a shared object; recompile with 
-fPIC
rts/dist/build/Adjustor.dyn_o: could not read symbols: Bad value
collect2: ld returned 1 exit status
make[1]: *** [rts/dist/build/libHSrts-ghc7.9.20140809.so] Error 1
make[1]: *** Waiting for unfinished jobs
make: *** [all] Error 2

After making this change (see #8857)

$ diff -Nru config.mk.in-orig config.mk.in
--- config.mk.in-orig   2014-08-11 04:39:24.257232224 +
+++ config.mk.in2014-08-11 04:41:50.666057938 +
@@ -99,7 +99,8 @@
x86_64-unknown-mingw32 \
i386-unknown-mingw32 \
sparc-sun-solaris2 \
-   sparc-unknown-linux
+   sparc-unknown-linux \
+mipsel-unknown-linux
 
 ifeq "$(SOLARIS_BROKEN_SHLD)" "YES"
 NoSharedLibsPlatformList += i386-unknown-solaris2

and running

$ make distclean
$ ./configure
$ make

it failed with a different error:

inplace/bin/ghc-stage1 -hisuf hi -osuf  o -hcsuf hc -static  -H32m -O
-this-package-key time_KUji6QoLFw0LtcZkg4b7t4 -hide-all-packages -i 
-ilibraries/time/. -ilibraries/time/dist-install/build 
-ilibraries/time/dist-install/build/autogen -Ilibraries/time/dist-install/build 
-Ilibraries/time/dist-install/build/autogen -Ilibraries/time/include   
-optP-DLANGUAGE_Rank2Types -optP-DLANGUAGE_DeriveDataTypeable 
-optP-DLANGUAGE_StandaloneDeriving -optP-include 
-optPlibraries/time/dist-install/build/autogen/cabal_macros.h -package-key 
base_DiPQ1siqG3SBjHauL3L03p -package-key deeps_L0rJEVU1Zgn8x0Qs5aTOsU 
-package-key 

Re: Moving Haddock *development* out of GHC tree

2014-08-13 Thread Mateusz Kowalczyk
On 08/08/2014 06:25 AM, Mateusz Kowalczyk wrote:
 Hello,
 
 [snip]
 
 Transition from current setup:
 If I receive some patches I was promised then I will then make a 2.14.4
 bugfix/compat release make sure that master is up to date and then
 create something like GHC-tracking branch from master and track that. I
 will then abandon that branch and not push to it unless it is GHC
 release time. The next commit in master will bring Haddock to a state
 where it works with 7.8.3: yes, this means removing all new API stuff
 until 7.10 or 7.8.4 or whatever. GHC API changes go onto GHC-tracking
 while all the stuff I write goes master. When GHC makes a release or is
 about to, I make master work with that and make GHC-tracking point to
 that instead.
 
 
 Thanks!
 

So it is now close to a week gone and I have received many positive
replies and no negative ones. I will probably execute what I stated
initially at about this time tomorrow.

To reiterate in short:

1. I make sure what we have now compiles with GHC HEAD and I stick it in a
separate branch which GHC folk will now track and apply any API patches
to. Unless something changes by tomorrow, this will most likely be what
master is at right now, perhaps with a single change to the version in
cabal file.

2. I make the master branch work with 7.8.3 (and possibly 7.8.x) and do
development without worrying about any API changes in HEAD, releasing as
often as I need to.

3. At GHC release time, I update master with API changes so that
up-to-date Haddock is ready to be used to generate the docs and ship
with the compiler.

I don't know what the GHC branch name will be yet. ‘ghc-head’ makes most
sense but IIRC Herbert had some objections as it had been used in the
past for something else, but maybe he can pitch in.

The only thing I require from GHC folk is to simply use that branch and
not push/pull to/from master unless contributing feature patches or
trying to port some fixes into HEAD version for whatever reason.

Thanks!

-- 
Mateusz K.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: ARM64 Task Force

2014-08-13 Thread Luke Iannini
Indeed, the float register stuff was a red herring -- restoring it caused no
problems and all my tests are working great. So yahoo!! We've got ARM64
support.

I'll tidy up the patches and create a ticket for review and merge.

Luke


On Tue, Aug 12, 2014 at 4:47 PM, Luke Iannini lukex...@gmail.com wrote:

 Hi all,
 Yahoo, happy news --  I think I've got it. Studying enough of the
 non-handwritten ASM that I was stepping through led me to make this change:

 https://github.com/lukexi/ghc/commit/1140e11db07817fcfc12446c74cd5a2bcdf92781
 (I think disabling the floating point registers was just a red herring;
 I'll confirm that next)

 And I can now call this fib code happily via the FFI:
 fibs :: [Int]
 fibs = 1:1:zipWith (+) fibs (tail fibs)

 foreign export ccall fib :: Int -> Int
 fib :: Int -> Int
 fib = (fibs !!)

 For posterity, more detail on the crashing case earlier (this is fixed now
 with proper storage and updating of the frame pointer):
 Calling fib(1) or fib(2) worked, but calling fib(3) triggered the crash.
 This was the backtrace, where you can see the errant 0x000100f05110
 frame values.
 (lldb) bt
 * thread #1: tid = 0xac6ed, 0x000100f05110, queue =
 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=2,
 address=0x100f05110)
 frame #0: 0x000100f05110
 frame #1: 0x000100f05110
   * frame #2: 0x0001000ffc9c HelloHaskell`-[SPJViewController
 viewDidLoad](self=0x000144e0cf10, _cmd=0x000186ae429a) + 76 at
 SPJViewController.m:22
 frame #3: 0x0001862f8b70 UIKit`-[UIViewController
 loadViewIfRequired] + 692
 frame #4: 0x0001862f8880 UIKit`-[UIViewController view] + 32
 frame #5: 0x0001862feeb0 UIKit`-[UIWindow
 addRootViewControllerViewIfPossible] + 72
 frame #6: 0x0001862fc6d4 UIKit`-[UIWindow _setHidden:forced:] + 296
 frame #7: 0x00018636d2bc UIKit`-[UIWindow makeKeyAndVisible] + 56
 frame #8: 0x00018657ff74 UIKit`-[UIApplication
 _callInitializationDelegatesForMainScene:transitionContext:] + 2804
 frame #9: 0x0001865824ec UIKit`-[UIApplication
 _runWithMainScene:transitionContext:completion:] + 1480
 frame #10: 0x000186580b84 UIKit`-[UIApplication
 workspaceDidEndTransaction:] + 184
 frame #11: 0x000189d846ac FrontBoardServices`__31-[FBSSerialQueue
 performAsync:]_block_invoke + 28
 frame #12: 0x000181c7a360
 CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_A_BLOCK__ + 20
 frame #13: 0x000181c79468 CoreFoundation`__CFRunLoopDoBlocks + 312
 frame #14: 0x000181c77a8c CoreFoundation`__CFRunLoopRun + 1756
 frame #15: 0x000181ba5664 CoreFoundation`CFRunLoopRunSpecific + 396
 frame #16: 0x000186363140 UIKit`-[UIApplication _run] + 552
 frame #17: 0x00018635e164 UIKit`UIApplicationMain + 1488
 frame #18: 0x000100100268 HelloHaskell`main(argc=1,
 argv=0x00016fd07a58) + 204 at main.m:24
 frame #19: 0x0001921eea08 libdyld.dylib`start + 4



 On Tue, Aug 12, 2014 at 11:24 AM, Karel Gardas karel.gar...@centrum.cz
 wrote:

 On 08/12/14 11:03 AM, Luke Iannini wrote:

 It looks like it's jumping somewhere strange; lldb tells me it's to
 0x100e05110: .long 0x ; unknown opcode
 0x100e05114: .long 0x ; unknown opcode
 0x100e05118: .long 0x ; unknown opcode
 0x100e0511c: .long 0x ; unknown opcode
 0x100e05120: .long 0x ; unknown opcode
 0x100e05124: .long 0x ; unknown opcode
 0x100e05128: .long 0x ; unknown opcode
 0x100e0512c: .long 0x ; unknown opcode

 If I put a breakpoint on StgRun and step by instruction, I seem to make
 it to about:
 https://github.com/lukexi/ghc/blob/e99b7a41e64f3ddb9bb420c0d5583f
 0e302e321e/rts/StgCRun.c#L770
 (give or take a line)


  strange that it's in the middle of the stp insns block. Anyway, this looks
  like a CPU exception, doesn't it? You will need to find out the reg which
  holds the exception reason value and then look at it in your debugger to
  find out what's going wrong there.

 Karel



___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Moving Haddock *development* out of GHC tree

2014-08-13 Thread Carter Schonwald
One thing I wonder about: when there's a new language construct, we should
figure out a good way to present it in Haddock under this workflow,
because the initial Haddock presentation might just be a strawman until
someone thinks about it carefully, right?


On Wed, Aug 13, 2014 at 6:30 PM, Herbert Valerio Riedel hvrie...@gmail.com
wrote:

 On 2014-08-14 at 00:09:40 +0200, Mateusz Kowalczyk wrote:

 [...]

  I don't know what the GHC branch name will be yet. ‘ghc-head’ makes most
  sense but IIRC Herbert had some objections as it had been used in the
  past for something else, but maybe he can pitch in.

 I had no objections at all to that name, 'ghc-head' is fine with me :-)
 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs

___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: ARM64 Task Force

2014-08-13 Thread Ben Gamari
Luke Iannini lukex...@gmail.com writes:

 Indeed, the float register stuff was a red herring -- restoring it caused no
 problems and all my tests are working great. So yahoo!! We've got ARM64
 support.

Yay! Awesome work!

Cheers,

- Ben



___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs