Re: New type of ($) operator in GHC 8.0 is problematic

2016-02-17 Thread Herbert Valerio Riedel
On 2016-02-18 at 04:02:24 +0100, Eric Seidel wrote:
> On Wed, Feb 17, 2016, at 08:09, Christopher Allen wrote:
>> I have tried a beginner's Prelude with people. I don't have a lot of data
>> because it was clearly a failure early on so I bailed them out into the
>> usual thing. It's just not worth it and it deprives them of the
>> preparedness to go write real Haskell code. That's not something I'm
>> willing to give up just so I can teach _less_.
>
> Chris, have you written about your experiences teaching with a
> beginner's Prelude? I'd be quite interested to read about it, as (1) it
> seems like a natural thing to do and (2) the Racket folks seem to have
> had good success with their staged teaching languages.
>
> In particular, I'm curious if your experience is in the context of
> teaching people with no experience programming at all, vs programming
> experience but no Haskell (or generally FP) experience. The Racket "How
> to Design Programs" curriculum seems very much geared towards absolute
> beginners, and that could be a relevant distinction.

Btw, IMHO it's also interesting to distinguish between teaching
functional programming vs teaching Haskell.

I've noticed that in the former case, instructors often prefer a
radically slimmed-down standard library and conceal some of Haskell's
language features that are not pertinent to their FP curriculum
(e.g. typeclasses or record syntax).
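As a concrete illustration of such a slimmed-down teaching Prelude (a hypothetical sketch: the names and the choice of functions are assumptions, not any published course library), the idea is to re-expose overloaded Prelude functions at fixed monomorphic types, so beginners never see `Foldable`/`Num`/`Show` constraints in `:t` output or in error messages:

```haskell
-- Hypothetical teaching-Prelude fragment (illustration only).
-- Each overloaded Prelude function is pinned to one monomorphic type.
import qualified Prelude
import Prelude hiding (length, show, sum)

sum :: [Int] -> Int
sum = Prelude.sum

length :: [a] -> Int
length = Prelude.length

show :: Int -> String
show = Prelude.show

main :: IO ()
main = do
  Prelude.print (sum [1, 2, 3])  -- 6
  Prelude.print (length "abc")   -- 3
  Prelude.putStrLn (show 42)     -- 42
```

In a real course this would live in a module (say `CoursePrelude`) imported with `NoImplicitPrelude`, so `:t sum` reports `[Int] -> Int` rather than the class-polymorphic type.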

-- 
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: New type of ($) operator in GHC 8.0 is problematic

2016-02-17 Thread Eric Seidel
On Wed, Feb 17, 2016, at 08:09, Christopher Allen wrote:
> I have tried a beginner's Prelude with people. I don't have a lot of data
> because it was clearly a failure early on so I bailed them out into the
> usual thing. It's just not worth it and it deprives them of the
> preparedness to go write real Haskell code. That's not something I'm
> willing to give up just so I can teach _less_.

Chris, have you written about your experiences teaching with a
beginner's Prelude? I'd be quite interested to read about it, as (1) it
seems like a natural thing to do and (2) the Racket folks seem to have
had good success with their staged teaching languages.

In particular, I'm curious if your experience is in the context of
teaching people with no experience programming at all, vs programming
experience but no Haskell (or generally FP) experience. The Racket "How
to Design Programs" curriculum seems very much geared towards absolute
beginners, and that could be a relevant distinction.

Thanks!
Eric


Re: New type of ($) operator in GHC 8.0 is problematic

2016-02-17 Thread Manuel M T Chakravarty
Hi Takenobu,

> Takenobu Tani :
> 
> Hi Manuel,
> 
> > * Introduce the concept of overloading right away. People get that easily,
> > because they use overloaded arithmetic functions in math, too.
> > (Num and Show are the typical classes to explain it at.)
> > As an example, see how we do that in the first chapter of our new Haskell
> > tutorial: http://learn.hfm.io/first_steps.html 
> > 
> 
> This is a straightforward and elegant tutorial! I like it.

You are very kind. Thank you.

> I'll share one point with the list.
> 
> It's better to assume that "beginners are not only students in universities".
> Beginners are also engineers, hobbyists, teenagers, your friends, and your
> family.
> Some of them will come to Haskell on their own and learn by self-study,
> with only a little background knowledge.
> 
> If Haskell feels complex in their first week,
> they may give up before they realize the beauty of Haskell.
> 
> Sometimes I worry about this.

I do worry about the same thing. The Haskell ecosystem is very much geared
towards experts and tinkerers (with laudable exceptions, such as, for example,
the great work done by Chris Allen). As an expert and tinkerer myself, that
didn't worry me too much, but lately I have been trying to make functional
programming and Haskell accessible to a broader audience, and it is an uphill
battle. Many professional software developers are put off from even trying to
install the toolchain. It is not that they wouldn't be able to do it if they
wanted to; they just can't be bothered, because they are not convinced of the
value of doing so at this stage, exactly as you are saying.

We should make it easier to get started, not harder.

Manuel



Re: New type of ($) operator in GHC 8.0 is problematic

2016-02-17 Thread Manuel M T Chakravarty
> Christopher Allen :
> > Sorry for the mostly off-topic post, but since a beginner’s Prelude was 
> > mentioned here multiple times recently as a countermeasure to making the 
> > real Prelude more complicated, I just want to say, don’t put too much hope 
> > into a ”solution” you never actually tried.
> 
> I have tried a beginner's Prelude with people. I don't have a lot of data 
> because it was clearly a failure early on so I bailed them out into the usual 
> thing. It's just not worth it and it deprives them of the preparedness to go 
> write real Haskell code. That’s not something I'm willing to give up just so 
> I can teach _less_.

That is a valuable data point. Thanks!

Manuel


Re: Fwd: Is anything being done to remedy the soul crushing compile times of GHC?

2016-02-17 Thread Thomas Tuegel
On Wed, Feb 17, 2016 at 2:21 PM, Ben Gamari  wrote:
> Thomas Tuegel  writes:
>> I have contributed several performance patches to Cabal in the past,
>> so I feel somewhat qualified to speak here. The remaining slowness in
>> `cabal build` is mostly due to the pre-process phase. There is work in
>> progress which may improve this situation. We could also look at
>> separating the pre-process phase from the build phase. (It seems odd
>> to call it `pre-process` when it is *always* run during the build
>> phase, doesn't it?) This has the advantage of sidestepping the
>> slowness issue entirely, but it may break some users' workflows. Is
>> that trade-off worth it? We could use user feedback here.
>>
> What exactly does the pre-process phase do, anyways?

It runs the appropriate pre-processor (Alex, Happy, c2hs, etc.) for
modules that require it. It's slow because of the way the process is
carried out: For each module in the package description, Cabal tries
to find an associated .hs source file in the hs-source-dirs. If it
cannot, it looks for a file with an extension matching one of the
pre-processors it knows about. If it finds one, it runs the
corresponding program if the output files are missing or outdated.

If this doesn't sound TOO bad, consider: how many modules on Hackage
actually use pre-processors? Certainly fewer than 5%, maybe even fewer
than 1%. Yet the search runs for every module, so that's a LOT of wasted
work every time you call `cabal build`.
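The per-module search Tom describes can be sketched as follows. This is an illustration only, not Cabal's actual code: the extension list, directory layout, and function names are all assumptions.

```haskell
-- Sketch of Cabal's per-module source lookup: prefer a plain .hs
-- file in some hs-source-dir, otherwise try every known
-- pre-processor extension in turn.
import Data.List (find)

-- Known pre-processor source extensions (Alex, Happy, c2hs, hsc2hs).
knownPPExts :: [String]
knownPPExts = ["x", "y", "chs", "hsc"]

-- Given the files that exist and the hs-source-dirs, find the source
-- file for a module name.
findSource :: [FilePath] -> [FilePath] -> String -> Maybe FilePath
findSource existing srcDirs modName = find (`elem` existing) candidates
  where
    candidates =
      [ dir ++ "/" ++ modName ++ "." ++ ext
      | ext <- "hs" : knownPPExts
      , dir <- srcDirs
      ]

main :: IO ()
main = do
  let files = ["src/Main.hs", "src/Lexer.x"]
  print (findSource files ["src"] "Main")     -- Just "src/Main.hs"
  print (findSource files ["src"] "Lexer")    -- Just "src/Lexer.x"
  print (findSource files ["src"] "Missing")  -- Nothing
```

The cost Tom points out is that this candidate enumeration runs for every module on every `cabal build`, even though almost no module ever matches a pre-processor extension.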

- Tom

P.S. If I may get a little philosophical, this is representative of
the problems we have in Cabal. Cabal tries to be very automagical, at
the cost of being sometimes slow and always opaque when things break!


Re: Fwd: Is anything being done to remedy the soul crushing compile times of GHC?

2016-02-17 Thread Ben Gamari
Thomas Tuegel  writes:

> On Wed, Feb 17, 2016 at 2:58 AM, Ben Gamari  wrote:
>> Yes, it would be great if someone could step up to look at Cabal's
>> performance. Running `cabal build` on an up-to-date tree of a
>> moderately-sized (10 kLoC, 8 components, 60 modules) Haskell project I
>> have laying around takes over 5 seconds from start-to-finish.
>>
>> `cabal build`ing just a single executable component takes 4 seconds.
>> This same executable takes 48 seconds for GHC to build from scratch with
>> optimization and 12 seconds without.
>
> I have contributed several performance patches to Cabal in the past,
> so I feel somewhat qualified to speak here. The remaining slowness in
> `cabal build` is mostly due to the pre-process phase. There is work in
> progress which may improve this situation. We could also look at
> separating the pre-process phase from the build phase. (It seems odd
> to call it `pre-process` when it is *always* run during the build
> phase, doesn't it?) This has the advantage of sidestepping the
> slowness issue entirely, but it may break some users' workflows. Is
> that trade-off worth it? We could use user feedback here.
>
What exactly does the pre-process phase do, anyways?

Cheers,

- Ben





Re: Fwd: Is anything being done to remedy the soul crushing compile times of GHC?

2016-02-17 Thread Tuncer Ayaz
On 17 February 2016 at 15:47, Austin Seipp  wrote:
> The better approach, I think, might be to section off certain times
> in a release period where we only allow such changes. Only for a
> month or so, for example, and you're just encouraged to park your
> current work for a little while, during that time, and just improve
> things.
>
> The only problem is, it's not clear if people will want to commit as
> much if the hard rule is just to fix bugs/improve performance for a
> select time. Nobody is obligated to contribute, so it could easily
> fall into a lull period if people get tired of it. But maybe the
> shared sense of community in doing it would help.
>
> Whatever we do, it has to be strict in these times, because in
> practice we have a policy like this ("just bug/perf fixes") during
> the time leading up to the RC, but we always slip and merge other
> things regardless. So, if we do this, we must be quite strict about
> it in practice and police ourselves better, I think.

Exactly; the time-boxing aspect is what I tried to express with my
concern about the even/odd branching model, which failed for Linux
pre-BitKeeper. So maybe a model like the one Linus uses, with a two-week
merge window followed by fixes only, but that would require a ghc-next
branch with a maintainer, so it's probably not feasible with the
resources available right now.

> On Wed, Feb 17, 2016 at 7:35 AM, Tuncer Ayaz  wrote:
> > Here's a thought: has anyone experience with limiting a certain
> > major release to just bug fixes and perf regression fixes, while
> > postponing all feature patches? It sounds like a good idea on
> > paper, but has anyone seen it work, and would this be something to
> > consider for GHC? I'm not suggesting the even/odd versioning
> > scheme, if anyone wonders. These don't work so well and nobody
> > tests odd versions.


Re: Fwd: Is anything being done to remedy the soul crushing compile times of GHC?

2016-02-17 Thread Evan Laforge
On Wed, Feb 17, 2016 at 2:14 AM, Edward Z. Yang  wrote:
> Another large culprit for performance is the fact that ghc --make
> must preprocess and parse the header of every local Haskell file:
> https://ghc.haskell.org/trac/ghc/ticket/618 (as well
> as https://ghc.haskell.org/trac/ghc/ticket/1290).  Neil and I
> have observed that when you use something better (like Shake)
> recompilation performance gets a lot better, esp. when you
> have a lot of modules.

I can second this, and I suspect the reason I've never had slowness
problems is that I use shake exclusively.

I'm looking forward to your work with integrating shake into ghc.  If
it means we can get both shake's speed and parallelism as well as
--make's ability to retain hi files in memory, it should give a really
nice speed boost.  I see a lot of "compilation IS NOT required",
presumably that would be even faster if it didn't have to start a new
ghc and inspect all the hi files again before finding that out!


Re: Fwd: Is anything being done to remedy the soul crushing compile times of GHC?

2016-02-17 Thread Tuncer Ayaz
On 17 February 2016 at 14:31, Tuncer Ayaz  wrote:

> On 17 February 2016 at 07:40, Evan Laforge  wrote:
>
> > My impression from the reddit thread is that three things are going
> > on:
> >
> > 1 - cabal has quite a bit of startup overhead
> > 2 - ghc takes a long time on certain inputs, e.g. long list literals.
> > There are probably already tickets for these.
>
> In my experience GHC startup overhead (time) has increased quite a
> lot somewhere in 7.x. I don't know if it's the cause, but perhaps
> dyn libs may be part of the reason. I'm not sure because I once (7.8
> I believe) tried to build without dynlink support and couldn't
> measure a substantial improvement.
>
> So, if you start ghc(i) for the first time from a spinning disk,
> it's very noticeable and quite a delay. Once it's cached, it's fast,
> so I think it's primarily due to reading stuff from disk.
>
> Just to mention the ideal overhead: anything below 400ms is small
> enough to not disrupt the flow and feels responsive. Go over 1s and
> it breaks.

A freshly booted machine with an SSD still needed 2 seconds to start
GHCi, so maybe it's just that there is a lot more stuff to load. That
leads me to the next question: would (better) dead code elimination
help? At a minimum it could link in only the modules actually used, with
a future improvement of skipping unused functions as well, but I don't
know enough about GHC's DCE to say more.


Re: Fwd: Is anything being done to remedy the soul crushing compile times of GHC?

2016-02-17 Thread Thomas Tuegel
On Wed, Feb 17, 2016 at 2:58 AM, Ben Gamari  wrote:
> Yes, it would be great if someone could step up to look at Cabal's
> performance. Running `cabal build` on an up-to-date tree of a
> moderately-sized (10 kLoC, 8 components, 60 modules) Haskell project I
> have laying around takes over 5 seconds from start-to-finish.
>
> `cabal build`ing just a single executable component takes 4 seconds.
> This same executable takes 48 seconds for GHC to build from scratch with
> optimization and 12 seconds without.

I have contributed several performance patches to Cabal in the past,
so I feel somewhat qualified to speak here. The remaining slowness in
`cabal build` is mostly due to the pre-process phase. There is work in
progress which may improve this situation. We could also look at
separating the pre-process phase from the build phase. (It seems odd
to call it `pre-process` when it is *always* run during the build
phase, doesn't it?) This has the advantage of sidestepping the
slowness issue entirely, but it may break some users' workflows. Is
that trade-off worth it? We could use user feedback here.

Regards,
Tom


Re: Fwd: Is anything being done to remedy the soul crushing compile times of GHC?

2016-02-17 Thread Ben Gamari
Tuncer Ayaz  writes:

> On 17 February 2016 at 07:40, Evan Laforge  wrote:
>
>> My impression from the reddit thread is that three things are going
>> on:
>>
>> 1 - cabal has quite a bit of startup overhead
>> 2 - ghc takes a long time on certain inputs, e.g. long list literals.
>> There are probably already tickets for these.
>
> In my experience GHC startup overhead (time) has increased quite a lot
> somewhere in 7.x. I don't know if it's the cause, but perhaps dyn libs
> may be part of the reason. I'm not sure because I once (7.8 I believe)
> tried to build without dynlink support and couldn't measure a
> substantial improvement.
>
I'm not sure this is going to be a significant effect, but last night
thomie, rwbarton, and I discovered that the way we structure the Haskell
library directory makes the dynamic linker do significantly more work than
necessary. Merely compiling "Hello world" requires 800 `open` system
calls with a dynamically linked compiler. Seems like we should really
try to fix this. See #11587.

Cheers,

- Ben




Re: New type of ($) operator in GHC 8.0 is problematic

2016-02-17 Thread Christopher Allen
I agree 100% with Manuel here. My N is smaller (100s rather than thousands)
but this is what I've seen working with self-learners, programmers and
non-programmers alike. I don't have anything further to add because I
couldn't find a point in his listing that didn't match my experience.

> Sorry for the mostly off-topic post, but since a beginner’s Prelude was
> mentioned here multiple times recently as a countermeasure to making the
> real Prelude more complicated, I just want to say, don’t put too much hope
> into a ”solution” you never actually tried.

I have tried a beginner's Prelude with people. I don't have a lot of data
because it was clearly a failure early on so I bailed them out into the
usual thing. It's just not worth it and it deprives them of the
preparedness to go write real Haskell code. That's not something I'm
willing to give up just so I can teach _less_.


On Tue, Feb 16, 2016 at 5:05 PM, Manuel M T Chakravarty <
c...@justtesting.org> wrote:

> > Richard Eisenberg :
> >
> > On Feb 16, 2016, at 5:35 AM, Ben Gamari  wrote:
> >> Indeed. I'll just say that I envision that a beginner's prelude would be
> >> for learning and nothing more. The goal is that it would be used in the
> >> first dozen hours of picking up the language and nothing more.
> >>
> >> I'd be interested to hear what Richard had in mind when he has time
> >> again.
> >
> > Yes, that's right. And, of course, it doesn't even need to be something
> released with GHC.
> >
> > I hope to have the opportunity to teach intro Haskell in the
> not-too-distant future. Maybe even this coming fall. Though I have yet to
> look closely at Chris's book, which will be one of the first things I do to
> prep, I suspect I'll be writing a custom Prelude for my first few weeks of
> the class, eliminating all type-classes. No Foldable, no Monoid, no Num.
> And then, by week 3 or 4, bring all that back in.
>
> As somebody who has taught Haskell to hordes of students (literally,
> 1000s), I am not at all convinced that this is going to be helpful. This is
> for the following reasons:
>
> * Students understand the basic idea of overloading for Num, Show, etc
> easily (week 1 or 2). We actually tried both doing basic overloading very
> early or delaying it until later. The former worked much better.
>
> * You are not in a position to explain Foldable and probably not Monoid by
> week 3 or 4. So, either you need to have more than one beginner Prelude,
> delay overloading until much later (awkward), or we are really only talking
> about the impact of Show, Num, etc here.
>
> * Changing things (i.e., one Prelude to another) always confuses some
> people — i.e., there is an intellectual cost to all this.
>
> * Students will want to use Haskell on their own computers. Then, you need
> to make sure, they import the right Prelude and that they stop importing
> the beginner Prelude when you switch back to the normal one. A certain
> percentage of students will mess that up and be confused.
>
> What we found works best is the following:
>
> * Introduce the concept of overloading right away. People get that easily,
> because they use overloaded arithmetic functions in math, too. (Num and
> Show are the typical classes to explain it at.) As an example, see how we
> do that in the first chapter of our new Haskell tutorial:
> http://learn.hfm.io/first_steps.html
>
> * Be careful in designing your exercises/assignments (esp early ones), to
> make it unlikely for students to run into class related errors.
>
> Sorry for the mostly off-topic post, but since a beginner’s Prelude was
> mentioned here multiple times recently as a countermeasure to making the
> real Prelude more complicated, I just want to say, don’t put too much hope
> into a ”solution” you never actually tried.
>
> Manuel
>



-- 
Chris Allen
Currently working on http://haskellbook.com


Re: Fwd: Is anything being done to remedy the soul crushing compile times of GHC?

2016-02-17 Thread Austin Seipp
The better approach, I think, might be to section off certain times in
a release period where we only allow such changes. Only for a month or
so, for example, and you're just encouraged to park your current work
for a little while, during that time, and just improve things.

The only problem is, it's not clear if people will want to commit as
much if the hard rule is just to fix bugs/improve performance for a
select time. Nobody is obligated to contribute, so it could easily
fall into a lull period if people get tired of it. But maybe the
shared sense of community in doing it would help.

Whatever we do, it has to be strict in these times, because in
practice we have a policy like this ("just bug/perf fixes") during the
time leading up to the RC, but we always slip and merge other things
regardless. So, if we do this, we must be quite strict about it in
practice and police ourselves better, I think.

On Wed, Feb 17, 2016 at 7:35 AM, Tuncer Ayaz  wrote:
> Here's a thought: has anyone experience with limiting a certain major
> release to just bug fixes and perf regression fixes, while postponing
> all feature patches? It sounds like a good idea on paper, but has
> anyone seen it work, and would this be something to consider for GHC?
> I'm not suggesting the even/odd versioning scheme, if anyone wonders.
> These don't work so well and nobody tests odd versions.



-- 
Regards,

Austin Seipp, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/


RE: Generalising the demand analysis to sum types

2016-02-17 Thread Simon Peyton Jones
I think you could do that.  The key question is (as someone asked) what use you 
would make of the information.  That is, what's the payoff?

Simon

|  -Original Message-
|  From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of José
|  Manuel Calderón Trilla
|  Sent: 17 February 2016 04:50
|  To: ghc-devs@haskell.org
|  Subject: Generalising the demand analysis to sum types
|  
|  Hello ghc-devs,
|  
|  Apologies for the longish email.
|  
|  I'm looking to generalise GHC's demand analysis to work on sum types.
|  This is in connection with the work on unboxed sums. The idea is that
|  if the strictness analysis tells us that a sum type is strict in all
|  the fields of all of its constructors, it is safe to unbox that sum
|  automatically. We hope for this to feed into a worker/wrapper like
|  transformation for sum types.
|  
|  I am new to the GHC codebase and wanted to make sure that my plan of
|  attack seemed sensible to you all.
|  
|  GHC's current type representing demand is StrDmd, which is defined as
|  follows:
|  
|  data StrDmd
|  = HyperStrict
|  | SCall StrDmd
|  | SProd [ArgStr]
|  | HeadStr
|  
|  I believe that adding sum types will be as simple as adding a single
|  new constructor for Sums:
|  
|  data StrDmd
|  = HyperStrict
|  | SCall StrDmd
|  | SProd [ArgStr]
|  | SSum [(Tag, StrDmd)]
|  | HeadStr
|  
|  We need the constructor tag so that we can analyze each branch of a
|  case expression independently before combining the results of each
|  branch. Since we only care if all fields are strict for all
|  constructors, we won't actually use the tag information except in the
|  analysis itself.
|  
|  Projection-based strictness analysis on sum types is not new (though
|  sum types + higher-order seems to be). That being said, all previous
|  treatments of the subject that I'm aware of forbid polymorphic
|  recursion. Luckily for us we aren't trying to unbox recursive types, so
|  for now [1] we will not attempt to analyze recursive types, hence no
|  SMu or SRec constructor above.
|  
|  With the analysis accepting sum types we will be able to analyze
|  functions with sum types as parameters, as a trivial example:
|  
|  fromMaybe2 x y = case x of
|  Just v -> v
|  Nothing -> y
|  
|  You would get a demand signature like:
|  
|  Str=DmdType <SSum [("Just", <S,U>), ("Nothing", <>)],U>
|  
|  Which says that the function is strict in its first argument and that
|  if the value is a 'Just' then its field is used strictly, if the value
|  is a 'Nothing' then there is no further demand (a nullary product).
|  
|  For those with more experience in these matter, does this seem to be a
|  sensible approach?
|  
|  Thanks in advance for any insight,
|  
|  Jose
|  
|  
|  [1]: Those of you who saw my Haskell Symposium talk on implicit
|  parallelism last year might remember that we will need the analysis to
|  work on recursive types in order to automatically generate parallel
|  strategies, but that's for later (I've spoken with Ilya Sergey about
|  working on that).
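José's proposed `SSum` extension can be played with as a small standalone model. The following is an illustration only: `Tag`, the helper names, and the simplified lattice (with `ArgStr` collapsed into `StrDmd`) are assumptions, not GHC's actual definitions.

```haskell
-- Toy model of the proposed strictness lattice with sum demands.
type Tag = String

data StrDmd
  = HyperStrict
  | SCall StrDmd
  | SProd [StrDmd]
  | SSum [(Tag, StrDmd)]
  | HeadStr
  deriving (Eq, Show)

-- The unboxing condition from the proposal: every field of every
-- constructor must be demanded strictly.  HeadStr ("evaluated to
-- WHNF only") does not count as strict in the fields.
unboxableSum :: StrDmd -> Bool
unboxableSum (SSum alts) = all (fieldsStrict . snd) alts
  where
    fieldsStrict (SProd fields) = all strictField fields
    fieldsStrict d              = strictField d
    strictField HeadStr = False
    strictField _       = True
unboxableSum _ = False

main :: IO ()
main = do
  -- A Maybe-like sum, strict in the Just field: unboxable.
  print (unboxableSum (SSum [ ("Just", SProd [HyperStrict])
                            , ("Nothing", SProd []) ]))      -- True
  -- Only head-strict in the Just field: not unboxable.
  print (unboxableSum (SSum [("Just", SProd [HeadStr])]))    -- False
```

This matches the `fromMaybe2` example in the email: the `Nothing` alternative contributes a nullary product, which is trivially all-strict.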


Re: Generalising the demand analysis to sum types

2016-02-17 Thread Ömer Sinan Ağacan
2016-02-17 3:15 GMT-05:00 Joachim Breitner :
> BTW, what is the status of
> https://ghc.haskell.org/trac/ghc/wiki/UnpackedSumTypes ?

It's mostly done, you can try it here:
https://github.com/osa1/ghc/tree/unboxed-sums-stg


Re: Fwd: Is anything being done to remedy the soul crushing compile times of GHC?

2016-02-17 Thread Tuncer Ayaz
Here's a thought: has anyone experience with limiting a certain major
release to just bug fixes and perf regression fixes, while postponing
all feature patches? It sounds like a good idea on paper, but has
anyone seen it work, and would this be something to consider for GHC?
I'm not suggesting the even/odd versioning scheme, if anyone wonders.
These don't work so well and nobody tests odd versions.


Re: Fwd: Is anything being done to remedy the soul crushing compile times of GHC?

2016-02-17 Thread Tuncer Ayaz
On 17 February 2016 at 07:40, Evan Laforge  wrote:

> My impression from the reddit thread is that three things are going
> on:
>
> 1 - cabal has quite a bit of startup overhead
> 2 - ghc takes a long time on certain inputs, e.g. long list literals.
> There are probably already tickets for these.

In my experience GHC startup overhead (time) has increased quite a lot
somewhere in 7.x. I don't know if it's the cause, but perhaps dyn libs
may be part of the reason. I'm not sure because I once (7.8 I believe)
tried to build without dynlink support and couldn't measure a
substantial improvement.

So, if you start ghc(i) for the first time from a spinning disk, it's
very noticeable and quite a delay. Once it's cached, it's fast, so I
think it's primarily due to reading stuff from disk.

Just to mention the ideal overhead: anything below 400ms is small
enough to not disrupt the flow and feels responsive. Go over 1s and it
breaks.


Re: New type of ($) operator in GHC 8.0 is problematic

2016-02-17 Thread Takenobu Tani
Hi Manuel,

> * Introduce the concept of overloading right away. People get that easily,
> because they use overloaded arithmetic functions in math, too.
> (Num and Show are the typical classes to explain it at.)
> As an example, see how we do that in the first chapter of our new Haskell
> tutorial: http://learn.hfm.io/first_steps.html

This is a straightforward and elegant tutorial! I like it.


I'll share one point with the list.

It's better to assume that "beginners are not only students in
universities".
Beginners are also engineers, hobbyists, teenagers, your friends, and
your family.
Some of them will come to Haskell on their own and learn by self-study,
with only a little background knowledge.

If Haskell feels complex in their first week,
they may give up before they realize the beauty of Haskell.

Sometimes I worry about this.

Regards,
Takenobu


2016-02-17 8:05 GMT+09:00 Manuel M T Chakravarty :

> > Richard Eisenberg :
> >
> > On Feb 16, 2016, at 5:35 AM, Ben Gamari  wrote:
> >> Indeed. I'll just say that I envision that a beginner's prelude would be
> >> for learning and nothing more. The goal is that it would be used in the
> >> first dozen hours of picking up the language and nothing more.
> >>
> >> I'd be interested to hear what Richard had in mind when he has time
> >> again.
> >
> > Yes, that's right. And, of course, it doesn't even need to be something
> released with GHC.
> >
> > I hope to have the opportunity to teach intro Haskell in the
> not-too-distant future. Maybe even this coming fall. Though I have yet to
> look closely at Chris's book, which will be one of the first things I do to
> prep, I suspect I'll be writing a custom Prelude for my first few weeks of
> the class, eliminating all type-classes. No Foldable, no Monoid, no Num.
> And then, by week 3 or 4, bring all that back in.
>
> As somebody who has taught Haskell to hordes of students (literally,
> 1000s), I am not at all convinced that this is going to be helpful. This is
> for the following reasons:
>
> * Students understand the basic idea of overloading for Num, Show, etc
> easily (week 1 or 2). We actually tried both doing basic overloading very
> early or delaying it until later. The former worked much better.
>
> * You are not in a position to explain Foldable and probably not Monoid by
> week 3 or 4. So, either you need to have more than one beginner Prelude,
> delay overloading until much later (awkward), or we are really only talking
> about the impact of Show, Num, etc here.
>
> * Changing things (i.e., one Prelude to another) always confuses some
> people — i.e., there is an intellectual cost to all this.
>
> * Students will want to use Haskell on their own computers. Then, you need
> to make sure, they import the right Prelude and that they stop importing
> the beginner Prelude when you switch back to the normal one. A certain
> percentage of students will mess that up and be confused.
>
> What we found works best is the following:
>
> * Introduce the concept of overloading right away. People get that easily,
> because they use overloaded arithmetic functions in math, too. (Num and
> Show are the typical classes to explain it at.) As an example, see how we
> do that in the first chapter of our new Haskell tutorial:
> http://learn.hfm.io/first_steps.html
>
> * Be careful in designing your exercises/assignments (esp early ones), to
> make it unlikely for students to run into class related errors.
>
> Sorry for the mostly off-topic post, but since a beginner’s Prelude was
> mentioned here multiple times recently as a countermeasure to making the
> real Prelude more complicated, I just want to say, don’t put too much hope
> into a ”solution” you never actually tried.
>
> Manuel
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>


Re: New type of ($) operator in GHC 8.0 is problematic

2016-02-17 Thread Takenobu Tani
Hi Alexander,

> Prelude> :t foldr
> foldr :: Foldable t => (a -> b -> b) -> b -> t a -> b
> For example:
> foldr :: (a -> b -> b) -> b -> [a] -> b
> foldr :: (a -> b -> b) -> b -> Maybe a -> b
> foldr :: (a -> b -> b) -> b -> Identity a -> b
> foldr :: (a -> b -> b) -> b -> (c, a) -> b
> and more
>
> It is easy to see a pattern here.  The order of the instances used could
be the load order, so the ones from Prelude would come first.

That's an interesting idea: a verbose representation mode for ":t".

GHCi would still show the true type (no lies), and beginners could
intuitively understand the relation between the Foldable type class
and its instances.

Beginners would get over the FTP change more easily.
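For instance, a few of those specializations can be tried directly. A small
sketch (note that the pair instance folds over the second component only):

```haskell
-- foldr at several Foldable instances: one definition, many containers
sumList, sumJust, sumNothing, sumPair :: Int
sumList    = foldr (+) 0 [1, 2, 3]               -- the familiar list case
sumJust    = foldr (+) 0 (Just 4)                -- Maybe: folds the single element
sumNothing = foldr (+) 0 (Nothing :: Maybe Int)  -- empty structure: just the seed
sumPair    = foldr (+) 0 ('a', 5)                -- pair: only the second component
```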



> Prelude> :t ($)
> ($) :: <"unreadable blurb">
> For example:
> ($) :: (a -> b) -> a -> b
> ($) :: forall a (b :: #). (a -> b) -> a -> b
>
> At least one of those lines should be understandable.

That's one option.
But I feel that Levity (or RuntimeRep) runs deeper than a type class does;
beginners may find it difficult to understand the difference between the two patterns of ($).
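To illustrate the first pattern: for ordinary lifted types, the generalized
($) still behaves exactly like the ($) beginners already know. A small sketch:

```haskell
-- ($) at lifted types is still just low-precedence function application
r1 :: Int
r1 = negate $ 3 + 2          -- same as negate (3 + 2)

r2 :: [Int]
r2 = map ($ 3) [succ, pred]  -- sections work as before
```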

(If this gets long, it may be better to split it off into a separate thread =) )

Regards,
Takenobu


2016-02-16 16:28 GMT+09:00 Alexander Kjeldaas 
:

>
>
> On Fri, Feb 5, 2016 at 2:16 PM, Takenobu Tani 
> wrote:
>
>> Hi,
>>
>> I'm worried about the learning curve for beginners.
>> They will probably try the following session in their first week.
>>
>>   ghci> :t foldr
>>   ghci> :t ($)
>>
>> They'll get the following results.
>>
>>
>> Before ghc7.8:
>>
>>   Prelude> :t foldr
>>   foldr :: (a -> b -> b) -> b -> [a] -> b
>>
>>   Prelude> :t ($)
>>   ($) :: (a -> b) -> a -> b
>>
>>   Beginners only need to understand the following:
>>
>> * type variable (polymorphism)
>>
>>
>> After ghc8.0:
>>
>>   Prelude> :t foldr
>>   foldr :: Foldable t => (a -> b -> b) -> b -> t a -> b
>>
>>
> If the output were the following, it would be more understandable (and more
> encouraging!)
>
> """
> Prelude> :t foldr
> foldr :: Foldable t => (a -> b -> b) -> b -> t a -> b
> For example:
> foldr :: (a -> b -> b) -> b -> [a] -> b
> foldr :: (a -> b -> b) -> b -> Maybe a -> b
> foldr :: (a -> b -> b) -> b -> Identity a -> b
> foldr :: (a -> b -> b) -> b -> (c, a) -> b
> and more
> """
>
> It is easy to see a pattern here.  The order of the instances used could
> be the load order, so the ones from Prelude would come first.
>
>
>
>>   Prelude> :t ($)
>>   ($)
>> :: forall (w :: GHC.Types.Levity) a (b :: TYPE w).
>>(a -> b) -> a -> b
>>
>>
> I'm not sure how this would work here, but when Levity is *, this should
> collapse into the old syntax, so:
>
> """
> Prelude> :t ($)
> ($) :: <"unreadable blurb">
> For example:
> ($) :: (a -> b) -> a -> b
> ($) :: forall a (b :: #). (a -> b) -> a -> b
> """
>
> At least one of those lines should be understandable.
>
> Alexander
>
>


Re: Fwd: Is anything being done to remedy the soul crushing compile times of GHC?

2016-02-17 Thread Edward Z. Yang
Another large culprit for performance is the fact that ghc --make
must preprocess and parse the header of every local Haskell file:
https://ghc.haskell.org/trac/ghc/ticket/618 (as well
as https://ghc.haskell.org/trac/ghc/ticket/1290).  Neil and I
have observed that when you use something better (like Shake)
recompilation performance gets a lot better, esp. when you
have a lot of modules.

Edward

Excerpts from Ben Gamari's message of 2016-02-17 00:58:43 -0800:
> Evan Laforge  writes:
> 
> > On Wed, Feb 17, 2016 at 4:38 AM, Ben Gamari  wrote:
> >> Multiple modules aren't a problem. It is dependencies on Hackage
> >> packages that complicate matters.
> >
> > I guess the problem is when ghc breaks a bunch of hackage packages,
> > you can't build with it anymore until those packages are updated,
> > which won't happen until after the release?
> >
> This is one issue, although perhaps not the largest. Here are some of
> the issues I can think of off the top of my head,
> 
>  * The issue you point out: Hackage packages need to be updated
> 
>  * Hackage dependencies mean that the performance of the testcase is now
>dependent upon code over which we have no control. If a test's
>performance improves is this because the compiler improved or merely
>because a dependency of the testcase was optimized?
> 
>Of course, you could maintain a stable fork of the dependency, but
>at this point you might as well just take the pieces you need and
>fold them into the testcase.
> 
>  * Hackage dependencies greatly complicate packaging. You need to
>somehow download and install them. The obvious approach here is to
>use cabal-install but it is unavailable during a GHC build
> 
>  * Hackage dependencies make it much harder to determine what the
>compiler is doing. If I have a directory of modules, I can rebuild
>all of them with `ghc --make -fforce-recomp`. Things are quite a bit
>trickier when packages enter the picture.
> 
> In short, the whole packaging system really acts as nothing more than a
> confounding factor for performance analysis, in addition to making
> implementation quite a bit trickier.
> 
> That being said, developing another performance testsuite consisting of
> a set of larger, dependency-ful applications may be useful at some
> point. I think the first priority, however, should be nofib.
> 
> > From a certain point of view, this could be motivation to either break
> > fewer things, or to patch breaking dependents as soon as the breaking
> > patch goes into ghc.  Which doesn't sound so bad in theory.  Of course
> > someone would need to spend time doing boring maintenance, but it
> > seems that will be required regardless.  And ultimately someone has to
> > do it eventually.
> >
> Much of the effort necessary to bring Hackage up to speed with a new GHC
> release isn't due to breakage; it's just bumping version bounds. I'm
> afraid the GHC project really doesn't have the man-power to do this work
> consistently. We already owe hvr a significant amount of gratitude for
> handling so many of these issues leading up to the release.
> 
> > Of course, said person's boring time might be better spent directly
> > addressing known performance problems.
> >
> Indeed.
> 
> > My impression from the reddit thread is that three things are going on:
> >
> > 1 - cabal has quite a bit of startup overhead
> 
> Yes, it would be great if someone could step up to look at Cabal's
> performance. Running `cabal build` on an up-to-date tree of a
> moderately-sized (10 kLoC, 8 components, 60 modules) Haskell project I
> have laying around takes over 5 seconds from start-to-finish.
> 
> `cabal build`ing just a single executable component takes 4 seconds.
> This same executable takes 48 seconds for GHC to build from scratch with
> optimization and 12 seconds without.
> 
> > 2 - ghc takes a long time on certain inputs, e.g. long list literals.
> > There are probably already tickets for these.
> >
> Indeed, there are plenty of pathological cases. For better or worse,
> these are generally the "easier" performance problems to tackle.
> 
> > 3 - and of course, ghc can be just generally slow, in the million tiny
> > cuts sense.
> >
> And this is the tricky one. Beginning to tackle this will require that
> someone perform some very careful measurements on current and previous
> releases.
> 
> Performance issues are always on my and Austin's to-do list, but we are
> unfortunately rather limited in the amount of time we can spend on these
> due to funding considerations. As Simon mentioned, if someone would like
> to see this fixed and has money to put towards the cause, we would love
> to hear from you.
> 
> Cheers,
> 
> - Ben


Re: Installing Cabal for GHC 8.0 release candidates

2016-02-17 Thread Ben Gamari
George Colpitts  writes:

> works for me, on my Mac
>
Thanks for the confirmation, George!

Cheers,

- Ben





Re: New type of ($) operator in GHC 8.0 is problematic

2016-02-17 Thread Manuel M T Chakravarty
We have tried variations of this. This approach has two problems. 

(1) It’s generally easier to use something than to define it, and, e.g., in the 
case of lists, Haskell’s ”magic” syntax is more intuitive to use.

For example, we use lists for a while before we define them. Even recursive 
definitions with pattern matching on [] and (:) are fairly easy to explain and 
for students to write quite a while before we discuss data declarations.

The built-in syntax for lists is convenient for using collections before 
properly explaining the full structure of terms and the concept of data 
constructors etc.

(2) We repeatedly found that any for-teaching-only definitions come with quite 
a bit of hidden costs.

At some point, you need to make the transition from the for-teaching-only 
definitions to the ”real” definitions. At this point, you invariably lose the 
students who managed to just get everything up to that point — i.e., those 
students that were struggling anyway. For us, alpha renaming is a trivial 
process. However, learners (and esp. the weaker learners) are rather tied to 
syntax and concrete names. They don’t deal well with alpha renaming.

Additionally, by introducing for-teaching-only definitions, you basically cut 
learners off from all the online resources that they could otherwise access. You 
probably won’t be able to match it up with a textbook unless you write one 
yourself. (Even if there is a book, this book then becomes the only resource.) 
This is an enormous disadvantage.

Manuel

> Joachim Breitner :
> Am Mittwoch, den 17.02.2016, 10:05 +1100 schrieb Manuel M T Chakravarty:
>> * Be careful in designing your exercises/assignments (esp early
>> ones), to make it unlikely for students to run into class related 
>> errors.
> 
> have you, or someone else, considered or tried to simply have students
> start with their own list data type, i.e.
> 
>data List a = Nil | Cons a (List a)
> 
> and have them implement the combinators they need themselves? Of
> course, you’d have to tell them to use names that do not clash with
> Prelude-exported, but this would avoid Foldable et. al. and be more
> educational (at the expense of maybe a slower progress, and not having
> nice syntax).
> 
> Similarly, one could teach them about the non-magicness of $ and side-
> step the issue with $ by having them write 
> 
>($$)   :: (a -> b) -> a -> b
>f $$ x =  f x
>infixr 0  $$
> 
> (or whatever symbol they fancy).
> 
> 
> This would be akin to using a beginner’s prelude, only that the
> students would be writing it themselves, which IMHO is a big difference
> from an educational point of view.
> 
> 
> Greetings,
> Joachim
> 
> -- 
> Joachim “nomeata” Breitner
>   m...@joachim-breitner.de • https://www.joachim-breitner.de/
>   XMPP: nome...@joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F
>   Debian Developer: nome...@debian.org
> 



Re: Fwd: Is anything being done to remedy the soul crushing compile times of GHC?

2016-02-17 Thread Ben Gamari
Evan Laforge  writes:

> On Wed, Feb 17, 2016 at 4:38 AM, Ben Gamari  wrote:
>> Multiple modules aren't a problem. It is dependencies on Hackage
>> packages that complicate matters.
>
> I guess the problem is when ghc breaks a bunch of hackage packages,
> you can't build with it anymore until those packages are updated,
> which won't happen until after the release?
>
This is one issue, although perhaps not the largest. Here are some of
the issues I can think of off the top of my head,

 * The issue you point out: Hackage packages need to be updated

 * Hackage dependencies mean that the performance of the testcase is now
   dependent upon code over which we have no control. If a test's
   performance improves, is this because the compiler improved or merely
   because a dependency of the testcase was optimized?

   Of course, you could maintain a stable fork of the dependency, but
   at this point you might as well just take the pieces you need and
   fold them into the testcase.

 * Hackage dependencies greatly complicate packaging. You need to
   somehow download and install them. The obvious approach here is to
   use cabal-install but it is unavailable during a GHC build

 * Hackage dependencies make it much harder to determine what the
   compiler is doing. If I have a directory of modules, I can rebuild
   all of them with `ghc --make -fforce-recomp`. Things are quite a bit
   trickier when packages enter the picture.

In short, the whole packaging system really acts as nothing more than a
confounding factor for performance analysis, in addition to making
implementation quite a bit trickier.

That being said, developing another performance testsuite consisting of
a set of larger, dependency-ful applications may be useful at some
point. I think the first priority, however, should be nofib.

> From a certain point of view, this could be motivation to either break
> fewer things, or to patch breaking dependents as soon as the breaking
> patch goes into ghc.  Which doesn't sound so bad in theory.  Of course
> someone would need to spend time doing boring maintenance, but it
> seems that will be required regardless.  And ultimately someone has to
> do it eventually.
>
Much of the effort necessary to bring Hackage up to speed with a new GHC
release isn't due to breakage; it's just bumping version bounds. I'm
afraid the GHC project really doesn't have the man-power to do this work
consistently. We already owe hvr a significant amount of gratitude for
handling so many of these issues leading up to the release.

> Of course, said person's boring time might be better spent directly
> addressing known performance problems.
>
Indeed.

> My impression from the reddit thread is that three things are going on:
>
> 1 - cabal has quite a bit of startup overhead

Yes, it would be great if someone could step up to look at Cabal's
performance. Running `cabal build` on an up-to-date tree of a
moderately-sized (10 kLoC, 8 components, 60 modules) Haskell project I
have laying around takes over 5 seconds from start-to-finish.

`cabal build`ing just a single executable component takes 4 seconds.
This same executable takes 48 seconds for GHC to build from scratch with
optimization and 12 seconds without.

> 2 - ghc takes a long time on certain inputs, e.g. long list literals.
> There are probably already tickets for these.
>
Indeed, there are plenty of pathological cases. For better or worse,
these are generally the "easier" performance problems to tackle.

> 3 - and of course, ghc can be just generally slow, in the million tiny
> cuts sense.
>
And this is the tricky one. Beginning to tackle this will require that
someone perform some very careful measurements on current and previous
releases.

Performance issues are always on my and Austin's to-do list, but we are
unfortunately rather limited in the amount of time we can spend on these
due to funding considerations. As Simon mentioned, if someone would like
to see this fixed and has money to put towards the cause, we would love
to hear from you.

Cheers,

- Ben




Re: New type of ($) operator in GHC 8.0 is problematic

2016-02-17 Thread Joachim Breitner
Hi,

Am Mittwoch, den 17.02.2016, 10:05 +1100 schrieb Manuel M T Chakravarty:
> * Be careful in designing your exercises/assignments (esp early
> ones), to make it unlikely for students to run into class related 
> errors.

have you, or someone else, considered or tried to simply have students
start with their own list data type, i.e.

data List a = Nil | Cons a (List a)

and have them implement the combinators they need themselves? Of
course, you’d have to tell them to use names that do not clash with
Prelude-exported, but this would avoid Foldable et. al. and be more
educational (at the expense of maybe a slower progress, and not having
nice syntax).
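A sketch of what students might write under that approach (the combinator
names here are hypothetical, chosen to avoid Prelude clashes):

```haskell
-- a student-defined list type, no Foldable in sight
data List a = Nil | Cons a (List a)

len :: List a -> Int
len Nil         = 0
len (Cons _ xs) = 1 + len xs

mapL :: (a -> b) -> List a -> List b
mapL _ Nil         = Nil
mapL f (Cons x xs) = Cons (f x) (mapL f xs)

foldrL :: (a -> b -> b) -> b -> List a -> b
foldrL _ z Nil         = z
foldrL f z (Cons x xs) = f x (foldrL f z xs)
```

Because foldrL is monomorphic in List, ":t foldrL" shows no class constraint,
which is exactly the pedagogical point.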

Similarly, one could teach them about the non-magicness of $ and side-
step the issue with $ by having them write 

($$)   :: (a -> b) -> a -> b
f $$ x =  f x
    infixr 0  $$

(or whatever symbol they fancy).
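For instance, with the low fixity declared on $$ itself, the student-defined
operator reads just like the real one:

```haskell
-- student-defined application operator with the usual low precedence
($$) :: (a -> b) -> a -> b
f $$ x = f x
infixr 0 $$

ex :: Int
ex = negate $$ 3 + 2   -- the low precedence makes this negate (3 + 2)
```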


This would be akin to using a beginner’s prelude, only that the
students would be writing it themselves, which IMHO is a big difference
from an educational point of view.


Greetings,
Joachim

-- 
Joachim “nomeata” Breitner
  m...@joachim-breitner.de • https://www.joachim-breitner.de/
  XMPP: nome...@joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F
  Debian Developer: nome...@debian.org





Re: Generalising the demand analysis to sum types

2016-02-17 Thread Joachim Breitner
Hi,

interesting!

Am Dienstag, den 16.02.2016, 23:50 -0500 schrieb José Manuel Calderón Trilla:
> For those with more experience in these matter, does this seem to be
> a sensible approach?

not sure if I qualify, but I have two questions nevertheless. 

You write "fromMaybe2", but (besides the order of the arguments) it is
the same as the library "fromMaybe". Is the order of the arguments
relevant?
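For reference, a sketch of both versions (I'm guessing at "fromMaybe2" here,
since the original definition isn't quoted in this message):

```haskell
import Data.Maybe (fromMaybe)  -- library version: fromMaybe :: a -> Maybe a -> a

-- hypothetical flipped variant, as I read "fromMaybe2"
fromMaybe2 :: Maybe a -> a -> a
fromMaybe2 Nothing  d = d
fromMaybe2 (Just x) _ = x
```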

And more importantly: can you outline what you want to do with the
result of the analysis? A worker-wrapper transformation of "fromMaybe"
with a worker that takes an unboxed sum type? If so, what would the
unboxed sum type be here?

(With products it is easier, because you can just pass the components
as arguments, or use unboxed tuples. But here you would need more
complicated unboxed types.)

BTW, what is the status of
https://ghc.haskell.org/trac/ghc/wiki/UnpackedSumTypes ?

Greetings,
Joachim


-- 
Joachim “nomeata” Breitner
  m...@joachim-breitner.de • https://www.joachim-breitner.de/
  XMPP: nome...@joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F
  Debian Developer: nome...@debian.org


