Re: How does GHC's testsuite work?

2017-10-30 Thread Edward Z. Yang
Excerpts from Sébastien Hinderer's message of 2017-10-30 16:39:24 +0100:
> Dear Edward,
> 
> Many thanks for your prompt response!
> 
> Edward Z. Yang (2017/10/30 11:25 -0400):
> > Actually, it's the reverse of what you said: like OCaml, GHC essentially
> > has ~no unit tests; it's entirely Haskell programs which we compile
> > (and sometimes run; a lot of tests are for the typechecker only so
> > we don't bother running those.)  The .T file is just a way of letting
> > the Python driver know what tests exist.
> 
> Oh okay! Would you be able to point me to just a few tests to get an
> idea of a few typical situations, please?

For example:

The metadata

https://github.com/ghc/ghc/blob/master/testsuite/tests/typecheck/should_fail/all.T

The source file

https://github.com/ghc/ghc/blob/master/testsuite/tests/typecheck/should_fail/tcfail011.hs

The expected error output

https://github.com/ghc/ghc/blob/master/testsuite/tests/typecheck/should_fail/tcfail011.stderr
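
For a feel of what such a test looks like without following the links, here is
a hedged sketch (not the actual tcfail011 files): a should_fail test is just an
ordinary Haskell module that GHC is expected to reject, and the driver compares
GHC's stderr against an accompanying .stderr file.

    -- Hypothetical should_fail test program (illustrative only; the real
    -- tcfail011.hs is different).  The driver compiles it, expects the
    -- compilation to fail, and diffs the error output against the
    -- corresponding .stderr file.
    module ShouldFail where

    oops :: Int
    oops = 'a'   -- deliberate type error: Char where Int is expected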

> One other question I forgot to ask: how do you deal with conditional
> tests? For instance, if a test should be run only on some platforms? Or,
> in OCaml we have tests for Fortran bindings that should be run only if a
> Fortran compiler is available. How would you deal with such tests?

All managed inside the Python driver code.

Example:
https://github.com/ghc/ghc/blob/master/testsuite/tests/rts/all.T#L32

Edward
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: How does GHC's testsuite work?

2017-10-30 Thread Edward Z. Yang
Actually, it's the reverse of what you said: like OCaml, GHC essentially
has ~no unit tests; it's entirely Haskell programs which we compile
(and sometimes run; a lot of tests are for the typechecker only so
we don't bother running those.)  The .T file is just a way of letting
the Python driver know what tests exist.

Edward

Excerpts from Sébastien Hinderer's message of 2017-10-30 16:17:38 +0100:
> Dear all,
> 
> I am a member of OCaml's development team. More specifically, I am
> working on a test-driver for the OCaml compiler, which will be part of
> OCaml's 4.06 release.
> 
> I am currently writing an article to describe the tool and its
> principles. In this article, I would like to also talk about how other
> compilers' testsuites are handled, and looking at how things are done in GHC
> is natural.
> 
> In OCaml, our testsuite essentially consists of whole programs that
> we compile and run, checking that the compilation and execution results
> match the expected ones.
> 
> From what I could see from GHC's testsuite, it seemed to me that it uses
> Python to drive the tests. I also understood that the testsuite has
> tests that are more like unit tests, in the .T file. Am I correct
> here? Or do you guys also have whole program tests?
> If you do, how do you compile and run them?
> 
> Any comment / hint on this aspect of the test harness' design would be
> really helpful.
> 
> Many thanks in advance,
> 
> Sébastien.
> 
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: How to get a heap visualization

2017-08-30 Thread Edward Z. Yang
Why not the plain old heap profiler?
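
For concreteness, the generic workflow (a sketch, not specific to the yesod app
in question) is: build with profiling, run with a heap-profiling RTS flag, and
render the resulting .hp file.

    -- Minimal, made-up example of the plain heap-profiler workflow:
    --
    --   ghc -prof -fprof-auto Leaky.hs
    --   ./Leaky +RTS -hc -RTS      -- writes Leaky.hp
    --   hp2ps -c Leaky.hp          -- renders Leaky.ps
    --
    module Main where

    main :: IO ()
    main = print (sum xs * length xs)
      where xs = [1 .. 1000000 :: Int]  -- the retained list dominates the profile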

Edward

Excerpts from Yitzchak Gale's message of 2017-08-30 18:34:05 +0300:
> I need a simple heap visualization for debugging purposes.
> I'm using GHC 8.0.2 to compile a large and complex yesod-based
> web app. What's the quickest and easiest way?
> 
> Vacuum looks simple and nice. But it has some long-outstanding
> PRs against it to support GHC 7.10 and GHC 8.0 that were never
> applied.
> 
> https://github.com/thoughtpolice/vacuum/issues/9
> 
> Getting ghc-vis to compile looks hopeless, for a number of reasons.
> The dependencies on gtk and cairo are huge. It hasn't been updated
> on Hackage for a year and a half. It requires base < 4.9. I need to run
> the visualizer either on a headless Ubuntu 16.04 server, or locally on
> Windows. And anyway, the fancy GUI in ghc-vis is way overkill for me.
> 
> The heap scraper backend for ghc-vis, ghc-heap-view, looks usable,
> and better supported than vacuum. But is there a quick and simple
> visualizer for its output, without ghc-vis?
> 
> Is there anything else? Is the best option to fork vacuum and try
> to apply the PRs?
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: Profiling plugins

2017-06-11 Thread Edward Z. Yang
Hello M,

Unfortunately, if you want detailed profiling, you will have to rebuild
GHC with profiling.  Note that you can get basic heap profile information
without rebuilding GHC.

Edward

Excerpts from M Farkas-Dyck's message of 2017-06-06 12:34:57 -0800:
> How is this done? I am working on ConCat
> [https://github.com/conal/concat] and we need a profile of the plugin
> itself. I tried "stack test --profile" but that does a profile of the
> test program, not the plugin. Can I do this and not rebuild GHC?
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: Accessing the "original" names via GHC API

2017-01-25 Thread Edward Z. Yang
Hi Ranjit,

Unfortunately you need more information to do this, since the
set of modules which are available for import can vary depending
on whether or not packages are hidden (not even counting
whether or not a module is exposed!)

The way GHC's pretty printer gives a good name is that it keeps
track of all of the names in scope and where they came from
in a GlobalRdrEnv.  The relevant code is in 'mkPrintUnqualified'
in HscTypes, but if you pretty print using user-style with
an appropriately set up GlobalRdrEnv you should
get the things you want.
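
Very roughly, the sketch below shows the kind of thing meant here.  It assumes
a GHC 7.10/8.0-era API (mkPrintUnqualified from HscTypes, showSDocForUser from
Outputable); the exact signatures move around between GHC versions.

    import DynFlags   (DynFlags)
    import HscTypes   (mkPrintUnqualified)
    import Outputable (ppr, showSDocForUser)
    import RdrName    (GlobalRdrEnv)
    import TyCon      (TyCon)

    -- Pretty-print a TyCon the way GHCi would, given the GlobalRdrEnv of
    -- the module context (sketch only).
    tyconString :: DynFlags -> GlobalRdrEnv -> TyCon -> String
    tyconString dflags rdr_env tc =
        showSDocForUser dflags (mkPrintUnqualified dflags rdr_env) (ppr tc)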

Edward

Excerpts from Ranjit Jhala's message of 2017-01-24 19:00:05 -0800:
> Dear Joachim,
> 
> You are right -- some more context.
> 
> Given
> 
>   tc  :: TyCon
>   m   :: ModName
>   env :: HscEnv
> 
> I want to get a
> 
>   s :: String
> 
> such that _in_ the context given by `m` and `env` I can resolve `s` to get
> back the original `TyCon`, e.g. something like
> 
>   L _ rn <- hscParseIdentifier env s
>   name   <- lookupRdrName env modName rn
> 
> would then return `name :: Name` which corresponds to the original `TyCon`.
> 
> That is, the goal is _not_ pretty printing, but "serialization" into a
> String
> representation that lets me recover the original `TyCon` later.
> 
> (Consequently, `"Data.Set.Base.Set"` doesn't work as the `Data.Set.Base`
> module is hidden and hence, when I try the above, GHC complains that the
> name is not in scope.)
> 
> Does that clarify the problem?
> 
> Thanks!
> 
> - Ranjit.
> 
> 
> On Tue, Jan 24, 2017 at 6:11 PM, Joachim Breitner 
> wrote:
> 
> > Hi Ranjit,
> >
> > Am Dienstag, den 24.01.2017, 16:09 -0800 schrieb Ranjit Jhala:
> > > My goal is to write a function
> > >
> > >tyconString :: TyCon -> String
> > >
> > > (perhaps with extra parameters) such that given the
> > > `TyCon` corresponding to `Set`, I get back the "original"
> > > name `S.Set`, or even `Data.Set.Set`.
> > >
> > > Everything I've tried, which is fiddling with different variants of
> > > `PprStyle`, end up giving me `Data.Set.Base.Set`
> > >
> > > Does anyone have a suggestion for how to proceed?
> >
> > in a way, `Data.Set.Base.Set` is the “original”, proper name for Set,
> > everything else is just a local view on the name.
> >
> > So, are you maybe looking for a way to get the “most natural way” to
> > print a name in a certain module context?
> >
> > This functionality must exist somewhere, as ghci is printing out errors
> > this way. But it certainly would require an additional argument to
> > tyconString, to specify in which module to print the name.
> >
> > Greetings,
> > Joachim
> >
> >
> > --
> > Joachim “nomeata” Breitner
> >   m...@joachim-breitner.de • https://www.joachim-breitner.de/
> >   XMPP: nome...@joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F
> >   Debian Developer: nome...@debian.org
> > ___
> > Glasgow-haskell-users mailing list
> > Glasgow-haskell-users@haskell.org
> > http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users
> >
> >
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-community] [Haskell-cafe] technical thoughts on stack

2016-09-15 Thread Edward Z. Yang
Excerpts from Harendra Kumar's message of 2016-09-15 13:02:50 +0530:
> While I agree that stack.yaml is a frozen config, we do not necessarily
> need a separate config file or a separate format for that. My main point
> was that a new user will have to understand two more languages
> (YAML/cabal) in addition to Haskell. We can have the config spread in
> multiple files, but they should look like an extension of the same thing
> rather than disparate things.

For what it's worth, cabal.project files use the same parser/lexical
structure as Cabal files; the fields/stanzas are just different.  I'm
not familiar with the reasons why Stack decided to use YAML for their
configuration format.

> The stack snapshot config can be seen as a higher level concept of the
> pvp-bounds in the cabal file. While pvp-bounds specifies a whole range, the
> snapshot is a point in that space. It can also be seen as a more general
> version of the "tested-with" field in the cabal file. We can instead say -
> tested-with these snapshots (or set of versions). Instead of using
> stack-7.8.yaml, stack-8.0.yaml manually, the build tool itself can list
> which snapshot configs are available and you can choose which one you
> want to build. The config could be tool agnostic.

Well, if the "snapshot" config is put in a specific file, there's no
reason why Cabal couldn't be taught to also read configuration from that
file.  But if cabal-install wants it to be in "pkg description format"
and Stack wants it to be in YAML I'm not sure how you are going to get
the projects to agree on a shared format.  Snapshot config is put
in cabal.project.freeze, which has the virtue of having the *same*
format as cabal.project.

Edward
___
Haskell-community mailing list
Haskell-community@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-community


FINAL CALL FOR TALKS (Aug 8 deadline): Haskell Implementors Workshop 2016, Sep 24, Nara

2016-08-01 Thread Edward Z. Yang
Deadline is in a week!  Submit your talks!

Call for Contributions
   ACM SIGPLAN Haskell Implementors' Workshop

http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2016
Nara, Japan, 24 September, 2016

Co-located with ICFP 2016
   http://www.icfpconference.org/icfp2016/

Important dates
---
Proposal Deadline:  Monday, 8 August, 2016
Notification:   Monday, 22 August, 2016
Workshop:   Saturday, 24 September, 2016

The 8th Haskell Implementors' Workshop is to be held alongside ICFP
2016 this year in Nara. It is a forum for people involved in the
design and development of Haskell implementations, tools, libraries,
and supporting infrastructure, to share their work and discuss future
directions and collaborations with others.

Talks and/or demos are proposed by submitting an abstract, and
selected by a small program committee. There will be no published
proceedings; the workshop will be informal and interactive, with a
flexible timetable and plenty of room for ad-hoc discussion, demos,
and impromptu short talks.

Scope and target audience
-

It is important to distinguish the Haskell Implementors' Workshop from
the Haskell Symposium which is also co-located with ICFP 2016. The
Haskell Symposium is for the publication of Haskell-related research. In
contrast, the Haskell Implementors' Workshop will have no proceedings --
although we will aim to make talk videos, slides and presented data
available with the consent of the speakers.

In the Haskell Implementors' Workshop, we hope to study the underlying
technology. We want to bring together anyone interested in the
nitty-gritty details behind turning plain-text source code into a
deployed product. Having said that, members of the wider Haskell
community are more than welcome to attend the workshop -- we need your
feedback to keep the Haskell ecosystem thriving.

The scope covers any of the following topics. There may be some topics
that people feel we've missed, so by all means submit a proposal even if
it doesn't fit exactly into one of these buckets:

  * Compilation techniques
  * Language features and extensions
  * Type system implementation
  * Concurrency and parallelism: language design and implementation
  * Performance, optimisation and benchmarking
  * Virtual machines and run-time systems
  * Libraries and tools for development or deployment

Talks
-

At this stage we would like to invite proposals from potential speakers
for talks and demonstrations. We are aiming for 20 minute talks with 10
minutes for questions and changeovers. We want to hear from people
writing compilers, tools, or libraries, people with cool ideas for
directions in which we should take the platform, proposals for new
features to be implemented, and half-baked crazy ideas. Please submit a
talk title and abstract of no more than 300 words.

Submissions should be made via HotCRP. The website is:
  https://icfp-hiw16.hotcrp.com/

We will also have a lightning talks session which will be organised on
the day. These talks will be 5-10 minutes, depending on available time.
Suggested topics for lightning talks are to present a single idea, a
work-in-progress project, a problem to intrigue and perplex Haskell
implementors, or simply to ask for feedback and collaborators.

Organisers
--- End forwarded message ---
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Call for talks: Haskell Implementors Workshop 2016, Sep 24 (FIXED), Nara

2016-06-09 Thread Edward Z. Yang
(...and now with the right date in the subject line!)

Call for Contributions
   ACM SIGPLAN Haskell Implementors' Workshop

http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2016
Nara, Japan, 24 September, 2016

Co-located with ICFP 2016
   http://www.icfpconference.org/icfp2016/

Important dates
---
Proposal Deadline:  Monday, 8 August, 2016
Notification:   Monday, 22 August, 2016
Workshop:   Saturday, 24 September, 2016

The 8th Haskell Implementors' Workshop is to be held alongside ICFP
2016 this year in Nara. It is a forum for people involved in the
design and development of Haskell implementations, tools, libraries,
and supporting infrastructure, to share their work and discuss future
directions and collaborations with others.

Talks and/or demos are proposed by submitting an abstract, and
selected by a small program committee. There will be no published
proceedings; the workshop will be informal and interactive, with a
flexible timetable and plenty of room for ad-hoc discussion, demos,
and impromptu short talks.

Scope and target audience
-

It is important to distinguish the Haskell Implementors' Workshop from
the Haskell Symposium which is also co-located with ICFP 2016. The
Haskell Symposium is for the publication of Haskell-related research. In
contrast, the Haskell Implementors' Workshop will have no proceedings --
although we will aim to make talk videos, slides and presented data
available with the consent of the speakers.

In the Haskell Implementors' Workshop, we hope to study the underlying
technology. We want to bring together anyone interested in the
nitty-gritty details behind turning plain-text source code into a
deployed product. Having said that, members of the wider Haskell
community are more than welcome to attend the workshop -- we need your
feedback to keep the Haskell ecosystem thriving.

The scope covers any of the following topics. There may be some topics
that people feel we've missed, so by all means submit a proposal even if
it doesn't fit exactly into one of these buckets:

  * Compilation techniques
  * Language features and extensions
  * Type system implementation
  * Concurrency and parallelism: language design and implementation
  * Performance, optimisation and benchmarking
  * Virtual machines and run-time systems
  * Libraries and tools for development or deployment

Talks
-

At this stage we would like to invite proposals from potential speakers
for talks and demonstrations. We are aiming for 20 minute talks with 10
minutes for questions and changeovers. We want to hear from people
writing compilers, tools, or libraries, people with cool ideas for
directions in which we should take the platform, proposals for new
features to be implemented, and half-baked crazy ideas. Please submit a
talk title and abstract of no more than 300 words.

Submissions should be made via HotCRP. The website is:
  https://icfp-hiw16.hotcrp.com/

We will also have a lightning talks session which will be organised on
the day. These talks will be 5-10 minutes, depending on available time.
Suggested topics for lightning talks are to present a single idea, a
work-in-progress project, a problem to intrigue and perplex Haskell
implementors, or simply to ask for feedback and collaborators.

Organisers
--

  * Joachim Breitner(Karlsruhe Institut für Technologie)
  * Duncan Coutts   (Well Typed)
  * Michael Snoyman (FP Complete)
  * Luite Stegeman  (ghcjs)
  * Niki Vazou  (UCSD)
  * Stephanie Weirich   (University of Pennsylvania) 
  * Edward Z. Yang - chair  (Stanford University)
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Call for talks: Haskell Implementors Workshop 2016, Aug 24, Nara

2016-06-09 Thread Edward Z. Yang
Call for Contributions
   ACM SIGPLAN Haskell Implementors' Workshop

http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2016
Nara, Japan, 24 September, 2016

Co-located with ICFP 2016
   http://www.icfpconference.org/icfp2016/

Important dates
---
Proposal Deadline:  Monday, 8 August, 2016
Notification:   Monday, 22 August, 2016
Workshop:   Saturday, 24 September, 2016

The 8th Haskell Implementors' Workshop is to be held alongside ICFP
2016 this year in Nara. It is a forum for people involved in the
design and development of Haskell implementations, tools, libraries,
and supporting infrastructure, to share their work and discuss future
directions and collaborations with others.

Talks and/or demos are proposed by submitting an abstract, and
selected by a small program committee. There will be no published
proceedings; the workshop will be informal and interactive, with a
flexible timetable and plenty of room for ad-hoc discussion, demos,
and impromptu short talks.

Scope and target audience
-

It is important to distinguish the Haskell Implementors' Workshop from
the Haskell Symposium which is also co-located with ICFP 2016. The
Haskell Symposium is for the publication of Haskell-related research. In
contrast, the Haskell Implementors' Workshop will have no proceedings --
although we will aim to make talk videos, slides and presented data
available with the consent of the speakers.

In the Haskell Implementors' Workshop, we hope to study the underlying
technology. We want to bring together anyone interested in the
nitty-gritty details behind turning plain-text source code into a
deployed product. Having said that, members of the wider Haskell
community are more than welcome to attend the workshop -- we need your
feedback to keep the Haskell ecosystem thriving.

The scope covers any of the following topics. There may be some topics
that people feel we've missed, so by all means submit a proposal even if
it doesn't fit exactly into one of these buckets:

  * Compilation techniques
  * Language features and extensions
  * Type system implementation
  * Concurrency and parallelism: language design and implementation
  * Performance, optimisation and benchmarking
  * Virtual machines and run-time systems
  * Libraries and tools for development or deployment

Talks
-

At this stage we would like to invite proposals from potential speakers
for talks and demonstrations. We are aiming for 20 minute talks with 10
minutes for questions and changeovers. We want to hear from people
writing compilers, tools, or libraries, people with cool ideas for
directions in which we should take the platform, proposals for new
features to be implemented, and half-baked crazy ideas. Please submit a
talk title and abstract of no more than 300 words.

Submissions should be made via HotCRP. The website is:
  https://icfp-hiw16.hotcrp.com/

We will also have a lightning talks session which will be organised on
the day. These talks will be 5-10 minutes, depending on available time.
Suggested topics for lightning talks are to present a single idea, a
work-in-progress project, a problem to intrigue and perplex Haskell
implementors, or simply to ask for feedback and collaborators.

Organisers
--

  * Joachim Breitner(Karlsruhe Institut für Technologie)
  * Duncan Coutts   (Well Typed)
  * Michael Snoyman (FP Complete)
  * Luite Stegeman  (ghcjs)
  * Niki Vazou  (UCSD)
  * Stephanie Weirich   (University of Pennsylvania) 
  * Edward Z. Yang - chair  (Stanford University)
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: idea: tool to suggest adding imports

2016-03-18 Thread Edward Z. Yang
Hello John,

In my opinion, the big question is whether or not your Emacs extension
should know how to build your Haskell project.  Without this knowledge,
(1) and (3) are non-starters, since you have to pass the right set of
-package flags to GHC to get the process started.

If you do assume you have this knowledge, then I think writing a
little stub program using the GHC API (the best fit is the recent
frontend plugins feature:
https://downloads.haskell.org/~ghc/8.0.1-rc2/docs/html/users_guide/extending_ghc.html#frontend-plugins
because it will handle command line parsing for you; unfortunately, it
will also limit you to GHC 8 only) is your best bet.
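
For reference, the skeleton of such a frontend plugin is tiny; this is
essentially the do-nothing example from the users guide linked above, with a
made-up module name:

    module ImportSuggest.FrontendPlugin (frontendPlugin) where

    import GhcPlugins

    -- Invoked (once the package is registered) as:
    --   ghc --frontend ImportSuggest.FrontendPlugin <files/flags>
    frontendPlugin :: FrontendPlugin
    frontendPlugin = defaultFrontendPlugin { frontend = suggest }

    -- A real tool would load the targets and inspect what is in scope here;
    -- this stub just shows where that logic would go.
    suggest :: [String] -> [(String, Maybe Phase)] -> Ghc ()
    suggest flags args = do
        liftIO $ print flags
        liftIO $ print args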

Edward

Excerpts from John Williams's message of 2016-03-18 11:27:34 -0700:
> I have an idea for a tool I'd like to implement, and I'm looking for advice
> on the best way to do it.
> 
> Ideally, I want to write an Emacs extension where, if I'm editing Haskell
> code and I try to use a symbol that's not defined or imported, it will try
> to automatically add an appropriate import for the symbol. If instance, if
> I have "import Data.Maybe (isNothing)" in my module, and I try to call
> "isJust", the extension would automatically change the import to "import
> Data.Maybe (isJust, isNothing)".
> 
> The Emacs part is easy, but the Haskell part has me kind of lost. Basically
> I want to figure out how to heuristically resolve a name, using an existing
> set of imports as hints and constraints. The main heuristic I'd like to
> implement is that, if some symbols are imported from a module M, consider
> importing additional symbols from M. A more advanced heuristic might
> suggest that if a symbol is exported from a module M in a visible package
> P, the symbol should be imported from M. Finally, if a symbol is exported
> by a module in the Haskell platform, I'd like to suggest adding the
> relevant package as a dependency in the .cabal and/or stack.yaml file, and
> adding an import for it in the .hs file.
> 
> Here are some implementation options I'm considering:
> 
> 1. Add a ghci command to implement my heuristics directly, since ghc
> already understands modules, packages and import statements.
> 2. Load a modified version of the source file into ghci where imports like
> "import M (...)" are replaced with "import M", and parse the error messages
> about ambiguous symbols.
> 3. Write a separate tool that reads Haskell imports and duplicates ghc and
> cabal's name resolution mechanisms.
> 4. Write a tool that reads Haskell imports and suggests imports from a list
> of commonly imported symbols, ignoring which packages are actually visible.
> 
> Right now the options that look best to me are 2 and 4, because they don't
> require me to understand or duplicate big parts of ghc, but if modifying
> ghc isn't actually that hard, then maybe 1 is the way to go. Option 3 might
> be a good way to go if there are libraries I can use to do the hard work
> for me.
> 
> Any thoughts?
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: Discovery of source dependencies without --make

2015-12-13 Thread Edward Z. Yang
I missed context, but if you just want the topological graph,
depanal will give you a module graph which you can then topsort
with topSortModuleGraph (all in GhcMake).  Then you can do what you want
with the result.  You will obviously need accurate targets but
frontend plugins and guessTarget will get you most of the way there.
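
Roughly along these lines (a sketch against the GHC 7.10-era API, where
ModuleGraph is just [ModSummary]; exact types differ in later versions):

    import GHC
    import Digraph (flattenSCCs)

    -- Compute module summaries for the given source files in dependency
    -- order; circular imports come back inside larger SCCs.  Run inside a
    -- session set up with runGhc/setSessionDynFlags.
    topoSortFiles :: [FilePath] -> Ghc [ModSummary]
    topoSortFiles files = do
        targets <- mapM (\f -> guessTarget f Nothing) files
        setTargets targets
        modGraph <- depanal [] False
        return (flattenSCCs (topSortModuleGraph False modGraph Nothing))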

Edward

Excerpts from Thomas Miedema's message of 2015-12-13 16:12:39 -0800:
> On Fri, Nov 28, 2014 at 3:41 PM, Lars Hupel  wrote:
> 
> > Let's say the hypothetical feature is selected via the GHC flag
> > "--topo-sort". It would add a step before regular compilation and
> > wouldn't affect any other flag:
> >
> >   ghc -c --topo-sort fileA.hs fileB.hs ...
> >
> > This would first read in the specified source files and look at their
> > module headers and import statements. It would build a graph of module
> > dependencies _between_ the specified source files (ignoring circular
> > dependencies), perform a topological sort on that graph, and proceed
> > with compiling the source files in that order.
> >
> 
> GHC 8 will have support for Frontend plugins. Frontend plugins enable you
> to write plugins to replace
> GHC major modes.
> 
> E.g. instead of saying
> 
> ghc --make A B C
> 
> you can now say:
> 
> ghc --frontend TopoSort A B C
> 
> You still have to implement TopoSort.hs yourself, using the GHC API to
> compile A B C in topological order, but some of the plumbing is taken care
> of by the Frontend plugin infrastructure already.
> 
> Take a look at this commit, especially the user's guide section and the
> test case:
> https://github.com/ghc/ghc/commit/a3c2a26b3af034f09c960b2dad38f73be7e3a655.
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: type error formatting

2015-10-23 Thread Edward Z. Yang
I think this is quite a reasonable suggestion.

Edward

Excerpts from Evan Laforge's message of 2015-10-23 19:48:07 -0700:
> Here's a typical simple type error from GHC:
> 
> Derive/Call/India/Pakhawaj.hs:142:62:
> Couldn't match type ‘Text’ with ‘(a1, Syllable)’
> Expected type: [([(a1, Syllable)], [Sequence Bol])]
>   Actual type: [([Syllable], [Sequence Bol])]
> Relevant bindings include
>   syllables :: [(a1, Syllable)]
> (bound at Derive/Call/India/Pakhawaj.hs:141:16)
>   best_match :: [(a1, Syllable)]
> -> Maybe (Int, ([(a1, Syllable)], [(a1, Sequence Bol)]))
> (bound at Derive/Call/India/Pakhawaj.hs:141:5)
> In the second argument of ‘mapMaybe’, namely ‘all_bols’
> In the second argument of ‘($)’, namely
>   ‘mapMaybe (match_bols syllables) all_bols’
> 
> I've been having more trouble than usual reading GHC's errors, and I
> finally spent some time to think about it.  The problem is that this new
> "relevant bindings include" section gets in between the expected and actual
> types (I still don't like that wording but I've gotten used to it), which
> is the most critical part, and the location context, which is second most
> critical.  Notice the same effect in the previous sentence :)  After I see
> a type error the next thing I want to see is where it happened, so I
> have to skip over the bindings, which can be long and complicated.  Then I
> usually know what to do, and only look into the bindings if something more
> complicated is going on, like wonky inference.  So how about reordering the
> message:
> 
> Derive/Call/India/Pakhawaj.hs:142:62:
> Couldn't match type ‘Text’ with ‘(a1, Syllable)’
> Expected type: [([(a1, Syllable)], [Sequence Bol])]
>   Actual type: [([Syllable], [Sequence Bol])]
> In the second argument of ‘mapMaybe’, namely ‘all_bols’
> In the second argument of ‘($)’, namely
>   ‘mapMaybe (match_bols syllables) all_bols’
> Relevant bindings include
>   syllables :: [(a1, Syllable)]
> (bound at Derive/Call/India/Pakhawaj.hs:141:16)
>   best_match :: [(a1, Syllable)]
> -> Maybe (Int, ([(a1, Syllable)], [(a1, Sequence Bol)]))
> (bound at Derive/Call/India/Pakhawaj.hs:141:5)
> 
> After this, why not go one step further and set off the various sections
> visibly to make it easier to scan.  The context section can also be really
> long if it gets an entire do block or record:
> 
> Derive/Call/India/Pakhawaj.hs:142:62:
>   * Couldn't match type ‘Text’ with ‘(a1, Syllable)’
> Expected type: [([(a1, Syllable)], [Sequence Bol])]
>   Actual type: [([Syllable], [Sequence Bol])]
>   * In the second argument of ‘mapMaybe’, namely ‘all_bols’
> In the second argument of ‘($)’, namely
>   ‘mapMaybe (match_bols syllables) all_bols’
>   * Relevant bindings include
>   syllables :: [(a1, Syllable)]
> (bound at Derive/Call/India/Pakhawaj.hs:141:16)
>   best_match :: [(a1, Syllable)]
> -> Maybe (Int, ([(a1, Syllable)], [(a1, Sequence Bol)]))
> (bound at Derive/Call/India/Pakhawaj.hs:141:5)
> 
> Or alternately, taking up a bit more vertical space:
> 
> Derive/Call/India/Pakhawaj.hs:142:62:
> Couldn't match type ‘Text’ with ‘(a1, Syllable)’
> Expected type: [([(a1, Syllable)], [Sequence Bol])]
>   Actual type: [([Syllable], [Sequence Bol])]
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] The evil GADTs extension in ghci 7.8.4 (maybe in other versions too?)

2015-06-04 Thread Edward Z. Yang
GHC used to always generalize let-bindings, but our experience
with GADTs led us to decide that let should not be generalized
with GADTs.  So, it's not that we /wanted/ MonoLocalBinds; rather,
having it makes the GADT machinery simpler.

This blog post gives more details on the matter:
https://ghc.haskell.org/trac/ghc/blog/LetGeneralisationInGhc7
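
A tiny made-up illustration (not from the blog post) of what changes:

    -- Accepted by default: 'g' is generalised, so it can be used at both
    -- Int and Char.  Under -XMonoLocalBinds (implied by -XGADTs) 'g' is
    -- *not* generalised, because it mentions the lambda-bound 'x', and
    -- this definition is rejected.
    f :: a -> ((a, Int), (a, Char))
    f x = (g (3 :: Int), g 'v')
      where g y = (x, y)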

Edward

Excerpts from Ki Yung Ahn's message of 2015-06-04 20:37:27 -0700:
 Such order dependence could be very confusing for the users. I thought I
 turned off a certain feature but some other extension turning it on is
 strange. Wouldn't it be better to decouple GADTs and MonoLocalBinds?
 
 On 2015-06-04 20:31, Edward Z. Yang wrote:
  This is because -XGADTs implies -XMonoLocalBinds.
 
  Edward
 
  Excerpts from Ki Yung Ahn's message of 2015-06-04 20:29:50 -0700:
   \y -> let x = (\z -> y) in x x

   is a perfectly fine term whose type is a -> a.
   (1) With no options, ghci infers its type correctly.
   (2) However, with -XGADTs, type check fails and raises occurs check.
   (3) We can remedy this by supplying some additional options
   (4) However, if you put the -XGADTs option at the end, it fails again :(
 
 
  kyagrd@kyahp:~$ ghci
  GHCi, version 7.8.4: http://www.haskell.org/ghc/  :? for help
  Loading package ghc-prim ... linking ... done.
  Loading package integer-gmp ... linking ... done.
  Loading package base ... linking ... done.
   Prelude> :t \y -> let x = (\z -> y) in x x
   \y -> let x = (\z -> y) in x x :: t -> t
   Prelude> :q
  Leaving GHCi.
 
 
  kyagrd@kyahp:~$ ghci -XGADTs
  GHCi, version 7.8.4: http://www.haskell.org/ghc/  :? for help
  Loading package ghc-prim ... linking ... done.
  Loading package integer-gmp ... linking ... done.
  Loading package base ... linking ... done.
   Prelude> :t \y -> let x = (\z -> y) in x x

   <interactive>:1:30:
     Occurs check: cannot construct the infinite type: t0 ~ t0 -> t
     Relevant bindings include
       x :: t0 -> t (bound at <interactive>:1:11)
       y :: t (bound at <interactive>:1:2)
     In the first argument of ‘x’, namely ‘x’
     In the expression: x x
   Prelude> :q
  Leaving GHCi.
 
 
  ~$ ghci -XGADTs -XNoMonoLocalBinds -XNoMonomorphismRestriction
  GHCi, version 7.8.4: http://www.haskell.org/ghc/  :? for help
  Loading package ghc-prim ... linking ... done.
  Loading package integer-gmp ... linking ... done.
  Loading package base ... linking ... done.
   Prelude> :t \y -> let x = (\z -> y) in x x
   \y -> let x = (\z -> y) in x x :: t -> t
   Prelude> :q
  Leaving GHCi.
 
 
  ~$ ghci -XNoMonoLocalBinds -XNoMonomorphismRestriction -XGADTs
  GHCi, version 7.8.4: http://www.haskell.org/ghc/  :? for help
  Loading package ghc-prim ... linking ... done.
  Loading package integer-gmp ... linking ... done.
  Loading package base ... linking ... done.
   Prelude> :t \y -> let x = (\z -> y) in x x

   <interactive>:1:30:
     Occurs check: cannot construct the infinite type: t0 ~ t0 -> t
     Relevant bindings include
       x :: t0 -> t (bound at <interactive>:1:11)
       y :: t (bound at <interactive>:1:2)
     In the first argument of ‘x’, namely ‘x’
 
  ___
  Glasgow-haskell-users mailing list
  Glasgow-haskell-users@haskell.org
  http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users
 
 
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] The evil GADTs extension in ghci 7.8.4 (maybe in other versions too?)

2015-06-04 Thread Edward Z. Yang
This is because -XGADTs implies -XMonoLocalBinds.

Edward

Excerpts from Ki Yung Ahn's message of 2015-06-04 20:29:50 -0700:
 \y -> let x = (\z -> y) in x x

 is a perfectly fine term whose type is a -> a.
 (1) With no options, ghci infers its type correctly.
 (2) However, with -XGADTs, type check fails and raises occurs check.
 (3) We can remedy this by supplying some additional options
 (4) However, if you put the -XGADTs option at the end, it fails again :(
 
 
 kyagrd@kyahp:~$ ghci
 GHCi, version 7.8.4: http://www.haskell.org/ghc/  :? for help
 Loading package ghc-prim ... linking ... done.
 Loading package integer-gmp ... linking ... done.
 Loading package base ... linking ... done.
 Prelude> :t \y -> let x = (\z -> y) in x x
 \y -> let x = (\z -> y) in x x :: t -> t
 Prelude> :q
 Leaving GHCi.
 
 
 kyagrd@kyahp:~$ ghci -XGADTs
 GHCi, version 7.8.4: http://www.haskell.org/ghc/  :? for help
 Loading package ghc-prim ... linking ... done.
 Loading package integer-gmp ... linking ... done.
 Loading package base ... linking ... done.
 Prelude> :t \y -> let x = (\z -> y) in x x

 <interactive>:1:30:
  Occurs check: cannot construct the infinite type: t0 ~ t0 -> t
  Relevant bindings include
    x :: t0 -> t (bound at <interactive>:1:11)
    y :: t (bound at <interactive>:1:2)
  In the first argument of ‘x’, namely ‘x’
  In the expression: x x
 Prelude> :q
 Leaving GHCi.
 
 
 ~$ ghci -XGADTs -XNoMonoLocalBinds -XNoMonomorphismRestriction
 GHCi, version 7.8.4: http://www.haskell.org/ghc/  :? for help
 Loading package ghc-prim ... linking ... done.
 Loading package integer-gmp ... linking ... done.
 Loading package base ... linking ... done.
 Prelude> :t \y -> let x = (\z -> y) in x x
 \y -> let x = (\z -> y) in x x :: t -> t
 Prelude> :q
 Leaving GHCi.
 
 
 ~$ ghci -XNoMonoLocalBinds -XNoMonomorphismRestriction -XGADTs
 GHCi, version 7.8.4: http://www.haskell.org/ghc/  :? for help
 Loading package ghc-prim ... linking ... done.
 Loading package integer-gmp ... linking ... done.
 Loading package base ... linking ... done.
 Prelude> :t \y -> let x = (\z -> y) in x x

 <interactive>:1:30:
  Occurs check: cannot construct the infinite type: t0 ~ t0 -> t
  Relevant bindings include
    x :: t0 -> t (bound at <interactive>:1:11)
    y :: t (bound at <interactive>:1:2)
  In the first argument of ‘x’, namely ‘x’
 
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: SRC_HC_OPTS in perf build

2015-05-26 Thread Edward Z. Yang
Sounds like an oversight to me!  Submit a fix?

Excerpts from Jeremy's message of 2015-05-25 06:44:10 -0700:
 build.mk.sample contains the lines:
 
 # perf matches the default settings, repeated here for comparison:
 SRC_HC_OPTS = -O -H64m
 
 However, in config.mk.in this is:
 
 SRC_HC_OPTS += -H32m -O
 
 What actually is the default for SRC_HC_OPTS? Why does config.mk.in seem to
 set it to -H32m, then every profile in build.mk.sample sets -H64m?
 
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: runghc and GhcWithInterpreter

2015-04-06 Thread Edward Z. Yang
No, it's not supposed to work, since runghc interprets GHC code.
runghc itself is just a little shell script which calls GHC proper
with the -f flag, so I suppose the build system was just not set
up to skip creating this link in that case.

Edward

Excerpts from Jeremy's message of 2015-04-06 07:34:34 -0700:
 I've built GHC with GhcWithInterpreter = NO. runghc is built and installed,
 but errors out with "not built for interactive use".
 
 Is runghc supposed to work with such a build? If not, why is it built at
 all?
 
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: Binary bloat in 7.10

2015-04-01 Thread Edward Z. Yang
Yes, this does seem like a potential culprit, although
we did do some measurements and I didn't think it was too bad.
Maybe we were wrong!

Edward

Excerpts from Jeremy's message of 2015-04-01 07:26:55 -0700:
 Carter Schonwald wrote
  How much of this might be attributable to longer linker symbol names? GHC
  7.10 object code does have larger symbols!  Is there a way to easily
  tabulate that?
 
 That would explain why the hi files have also increased many-fold. Is there
 any way to avoid the larger symbols?
 
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: Found hole

2015-01-20 Thread Edward Z. Yang
Hello Volker,

All identifiers prefixed with an underscore are typed holes,
see:
https://downloads.haskell.org/~ghc/7.8.3/docs/html/users_guide/typed-holes.html
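
A two-line illustration, unrelated to the original program: any unbound
identifier starting with an underscore is reported as a hole together with its
inferred type, which is often used deliberately while writing code.

    sumSquares :: [Int] -> Int
    sumSquares xs = _go (map (^ 2) xs)
    -- GHC reports:  Found hole ‘_go’ with type: [Int] -> Int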

Edward

Excerpts from Volker Wysk's message of 2015-01-20 10:36:09 -0800:
 Hello!
 
 What is a hole? 
 
 This program fails to compile:
 
 main = _exit 0
 
 I get this error message:
 
 ex.hs:1:8:
 Found hole ‘_exit’ with type: t
 Where: ‘t’ is a rigid type variable bound by
the inferred type of main :: t at ex.hs:1:1
 Relevant bindings include main :: t (bound at ex.hs:1:1)
 In the expression: _exit
 In an equation for ‘main’: main = _exit
 
 When I replace _exit with foo, it produces a not in scope error, as 
 expected. What is special about _exit? It doesn't occur in the Haskell 
 Hierarchical Libraries.
 
 Bye
 Volker
 
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: GHC 7.10 regression when using foldr

2015-01-20 Thread Edward Z. Yang
I like this proposal: if you're explicit about an import that
would otherwise be implicit by Prelude, you shouldn't get a
warning for it. If it is not already the case, we also need to
make sure the implicit Prelude import never causes unused import
errors.
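
For concreteness, a sketch of the trick Edward Kmett describes in the quoted
message below (module and function names are made up):

    module Example (collect) where

    -- Redundant on GHC 7.10 on its own, since Prelude now exports foldMap...
    import Data.Foldable (foldMap)
    -- ...but because Prelude is imported explicitly *after* it, GHC's
    -- top-down redundancy check sees Data.Foldable supplying foldMap first
    -- and does not warn, with no CPP needed.
    import Prelude

    collect :: Foldable t => t a -> [a]
    collect = foldMap (\x -> [x])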

Edward

Excerpts from Edward Kmett's message of 2015-01-20 15:41:13 -0800:
 Sure.
 
 Adding it to the CHANGELOG makes a lot of sense. I first found out about it
 only a few weeks ago when Herbert mentioned it in passing.
 
 Of course, the geek in me definitely prefers technical fixes to human ones.
 Humans are messy. =)
 
 I'd be curious how much of the current suite of warnings could be fixed
 just by switching the implicit Prelude import to the end of the import list
 inside GHC.
 
 Now that Herbert has all of his crazy tooling to build stuff with 7.10 and
 with HEAD, it might be worth trying out such a change to see how much it
 reduces the warning volume and if it somehow manages to introduce any new
 warnings.
 
 I hesitate to make such a proposal this late in the release candidate game,
 but if it worked it'd be pretty damn compelling.
 
 -Edward
 
 On Tue, Jan 20, 2015 at 6:27 PM, Edward Z. Yang ezy...@mit.edu wrote:
 
  Hello Edward,
 
  Shouldn't we publicize this trick? Perhaps in the changelog?
 
  Edward
 
  Excerpts from Edward Kmett's message of 2015-01-20 15:22:57 -0800:
    Building -Wall clean across this change-over has a bit of a trick to it.
  
   The easiest way I know of when folks already had lots of
  
   import Data.Foldable
   import Data.Traversable
  
   stuff
  
   is to just add
  
   import Prelude
  
   explicitly to the bottom of your import list rather than painstakingly
   exclude the imports with CPP.
  
   This has the benefit of not needing a bunch of CPP to manage what names
   come from where.
  
   Why? GHC checks that the imports provide something 'new' that is used by
    the module in a top-down fashion, and you are almost assuredly using
   something from Prelude that didn't come from one of the modules above.
  
   On the other hand the implicit import of Prelude effectively would come
   first in the list.
  
   It is a dirty trick, but it does neatly side-step this problem for folks
  in
   your situation.
  
   -Edward
  
   On Tue, Jan 20, 2015 at 6:12 PM, Bryan O'Sullivan b...@serpentine.com
   wrote:
  
   
On Tue, Jan 20, 2015 at 3:02 PM, Herbert Valerio Riedel h...@gnu.org
wrote:
   
I'm a bit confused, several past attoparsec versions seem to build
  just
fine with GHC 7.10:
   
  https://ghc.haskell.org/~hvr/buildreports/attoparsec.html
   
were there hidden breakages not resulting in compile errors?
Or are the fixes you mention about restoring -Wall hygiene?
   
   
I build with -Wall -Werror, and also have to maintain the test and
benchmark suites.
   
 
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: GHC 7.10 regression when using foldr

2015-01-20 Thread Edward Z. Yang
I don't see why that would be the case: we haven't *excluded* any
old import lists, so -ddump-minimal-imports could still
take advantage of Prelude in a warning-free way.

Edward

Excerpts from Edward Kmett's message of 2015-01-20 16:36:53 -0800:
 It isn't without a cost. On the down-side, the results of
 -ddump-minimal-imports would be er.. less minimal.
 
 On Tue, Jan 20, 2015 at 6:47 PM, Edward Z. Yang ezy...@mit.edu wrote:
 
  I like this proposal: if you're explicit about an import that
  would otherwise be implicit by Prelude, you shouldn't get a
  warning for it. If it is not already the case, we also need to
  make sure the implicit Prelude import never causes unused import
  errors.
 
  Edward
 
  Excerpts from Edward Kmett's message of 2015-01-20 15:41:13 -0800:
   Sure.
  
   Adding it to the CHANGELOG makes a lot of sense. I first found out about
  it
   only a few weeks ago when Herbert mentioned it in passing.
  
   Of course, the geek in me definitely prefers technical fixes to human
  ones.
   Humans are messy. =)
  
   I'd be curious how much of the current suite of warnings could be fixed
   just by switching the implicit Prelude import to the end of the import
  list
   inside GHC.
  
   Now that Herbert has all of his crazy tooling to build stuff with 7.10
  and
   with HEAD, it might be worth trying out such a change to see how much it
   reduces the warning volume and if it somehow manages to introduce any new
   warnings.
  
   I hesitate to make such a proposal this late in the release candidate
  game,
   but if it worked it'd be pretty damn compelling.
  
   -Edward
  
   On Tue, Jan 20, 2015 at 6:27 PM, Edward Z. Yang ezy...@mit.edu wrote:
  
Hello Edward,
   
Shouldn't we publicize this trick? Perhaps in the changelog?
   
Edward
   
Excerpts from Edward Kmett's message of 2015-01-20 15:22:57 -0800:
  Building -Wall clean across this change-over has a bit of a trick to
  it.

 The easiest way I know of when folks already had lots of

 import Data.Foldable
 import Data.Traversable

 stuff

 is to just add

 import Prelude

 explicitly to the bottom of your import list rather than
  painstakingly
 exclude the imports with CPP.

 This has the benefit of not needing a bunch of CPP to manage what
  names
 come from where.

 Why? GHC checks that the imports provide something 'new' that is
  used by
  the module in a top-down fashion, and you are almost assuredly using
 something from Prelude that didn't come from one of the modules
  above.

 On the other hand the implicit import of Prelude effectively would
  come
 first in the list.

 It is a dirty trick, but it does neatly side-step this problem for
  folks
in
 your situation.

 -Edward

 On Tue, Jan 20, 2015 at 6:12 PM, Bryan O'Sullivan 
  b...@serpentine.com
 wrote:

 
  On Tue, Jan 20, 2015 at 3:02 PM, Herbert Valerio Riedel 
  h...@gnu.org
  wrote:
 
  I'm a bit confused, several past attoparsec versions seem to build
just
  fine with GHC 7.10:
 
https://ghc.haskell.org/~hvr/buildreports/attoparsec.html
 
  were there hidden breakages not resulting in compile errors?
  Or are the fixes you mention about restoring -Wall hygiene?
 
 
  I build with -Wall -Werror, and also have to maintain the test and
  benchmark suites.
 
   
 
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Compiling a cabal project with LLVM on GHC 7.10 RC1

2015-01-07 Thread Edward Z. Yang
...is there -dynamic in the -v output?  Don't you also want
--disable-shared?

Excerpts from Brandon Simmons's message of 2015-01-07 12:21:48 -0800:
 I've tried:
 
   $ cabal install --only-dependencies -w
 /usr/local/bin/ghc-7.10.0.20141222  --enable-tests --enable-benchmarks
 --ghc-option=-fllvm --ghc-option=-static
   $ cabal configure -w /usr/local/bin/ghc-7.10.0.20141222
 --enable-tests --enable-benchmarks --ghc-option=-fllvm
 --ghc-option=-static
   $ cabal build
   Building foo-0.3.0.0...
   Preprocessing library foo-0.3.0.0...
 
   when making flags consistent: Warning:
   Using native code generator rather than LLVM, as LLVM is
 incompatible with -fPIC and -dynamic   on this platform
 
 I don't see anything referencing PIC in the output of cabal build
 -v. I can build a hello world program, just fine with `ghc --make`:
 
   $ /usr/local/bin/ghc-7.10.0.20141222 --make -O2 -fllvm   Main.hs
   [1 of 1] Compiling Main ( Main.hs, Main.o )
   Linking Main ...
 
 Thanks,
 Brandon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: GHC 7.4.2 on Ubuntu Trusty

2015-01-04 Thread Edward Z. Yang
For transformers, I needed:

diff --git a/Control/Monad/Trans/Error.hs b/Control/Monad/Trans/Error.hs
index 0158a8a..0dea478 100644
--- a/Control/Monad/Trans/Error.hs
+++ b/Control/Monad/Trans/Error.hs
@@ -57,6 +57,10 @@ instance MonadPlus IO where
     mzero       = ioError (userError "mzero")
     m `mplus` n = m `catchIOError` \_ -> n
 
+instance Alternative IO where
+    empty = mzero
+    (<|>) = mplus
+
 #if !(MIN_VERSION_base(4,4,0))
 -- exported by System.IO.Error from base-4.4
 catchIOError :: IO a -> (IOError -> IO a) -> IO a

For hpc, I needed:

 Build-Depends:
-base       >= 4.4.1 && < 4.8,
+base       >= 4.4.1 && < 4.9,
 containers >= 0.4.1 && < 0.6,
 directory  >= 1.1   && < 1.3,
-time       >= 1.2   && < 1.5
+time       >= 1.2   && < 1.6

For hoopl, I needed:

-  Build-Depends: base >= 4.3 && < 4.8
+  Build-Depends: base >= 4.3 && < 4.9

For the latter two, I think this should be a perfectly acceptable
point release.  For transformers, we could also just ifdef the
Alternative into the GHC sources.

Edward

Excerpts from Herbert Valerio Riedel's message of 2015-01-04 00:22:28 -0800:
 Hello Edward,
 
 On 2015-01-04 at 08:54:58 +0100, Edward Z. Yang wrote:
 
 [...]
 
  There are also some changes to hoopl, transformers and hpc (mostly
  because they're bootstrap libraries.)
 
 ...what kind of changes specifically? 
 
 One thing that needs to be considered is that we'd need to upstream
 changes to transformers (it's not under GHC HQ's direct control) for a
 transformers point(?) release ... and we'd need that as we can't release
 any source-tarball that contains libraries (which get installed into the
 pkg-db) that don't match their upstream version on Hackage.
 
 Cheers,
   hvr
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: GHC 7.4.2 on Ubuntu Trusty

2015-01-03 Thread Edward Z. Yang
Hey guys,

I have a local branch of ghc-7.8 which can be compiled by 7.10.
The most annoying patch that needed to be backported was AMP
adjustment changes.  I also messed up some stuff involving LANGUAGE
pragmas which I am going to go back and clean up.

https://github.com/ezyang/ghc/tree/ghc-7.8

There are also some changes to hoopl, transformers and hpc (mostly
because they're bootstrap libraries.)

Unfortunately I can't easily Phab these changes.  Any suggestions
for how to coordinate landing these changes?

Edward

Excerpts from Yitzchak Gale's message of 2014-12-28 13:38:47 -0500:
 Resurrecting this thread:
 
 My impression was that Edward's suggestion was a simple and
 obvious solution to the problem of previous GHC versions quickly
 becoming orphaned and unbuildable. But Austin thought that this
 thread was stuck.
 
 Would Edward's suggestion be difficult to implement for any
 reason? Specifically, right now would be the time to do it, and
 it would mean:
 
 1. Create a 7.8.5 branch.
 2. Tweak the stage 1 Haskell sources to build with 7.10 and tag
 3. Create only a source tarball and upload it to the download
 site
 
 Thanks,
 Yitz
 
 On Wed, Oct 29, 2014 at 12:10 AM, Edward Z. Yang wrote:
  Excerpts from Yitzchak Gale's message of 2014-10-28 13:58:08 -0700:
  How about this: Currently, every GHC source distribution
  requires no later than its own version of GHC for bootstrapping.
  Going backwards, that chops up the sequence of GHC versions
  into tiny incompatible pieces - there is no way to start with a
  working GHC and work backwards to an older version by compiling
  successively older GHC sources.
 
  If instead each GHC could be compiled using at least one
  subsequent version, the chain would not be broken. I.e.,
  always provide a compatibility flag or some other reasonably
  simple mechanism that would enable the current GHC to
  compile the source code of at least the last previous released
  version.
 
  Here is an alternate proposal: when we make a new major version release,
  we should also make a minor version release of the previous series, which
  is prepped so that it can compile from the new major version.  If it
  is the case that one version of the compiler can compile any other
  version in the same series, this would be sufficient to go backwards.
 
  Concretely, the action plan is very simple too: take 7.6 and apply as
  many patches as is necessary to make it compile from 7.8, and cut
  a release with those patches.
 
  Edward
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - problem with latest cabal-install

2015-01-01 Thread Edward Z. Yang
If you still have your old GHC around, it will be much better to
compile the newest cabal-install using the *old GHC*, and then
use that copy to bootstrap a copy of the newest cabal-install.

Edward

Excerpts from George Colpitts's message of 2015-01-01 12:08:44 -0500:
$ cabal update
 Downloading the latest package list from hackage.haskell.org
 Note: *there is a new version of cabal-install available.*
 To upgrade, run: cabal install cabal-install
bash-3.2$ cabal install -j3 cabal-install
...

Resolving dependencies...
cabal: Could not resolve dependencies:
 trying: cabal-install-1.20.0.6 (user goal)
 trying: base-4.8.0.0/installed-779... (dependency of cabal-install-1.20.0.6)
 next goal: process (dependency of cabal-install-1.20.0.6)
 rejecting: process-1.2.1.0/installed-2db... (conflict: unix==2.7.1.0,
 process
=> unix==2.7.1.0/installed-4ae...)
trying: process-1.2.1.0
next goal: directory (dependency of cabal-install-1.20.0.6)
rejecting: directory-1.2.1.1/installed-b08... (conflict: directory =>
time==1.5.0.1/installed-c23..., cabal-install => time>=1.1 && <1.5)
rejecting: directory-1.2.1.0 (conflict: base==4.8.0.0/installed-779...,
directory => base>=4.5 && <4.8)
rejecting: directory-1.2.0.1, 1.2.0.0 (conflict:
base==4.8.0.0/installed-779..., directory => base>=4.2 && <4.7)
rejecting: directory-1.1.0.2 (conflict: base==4.8.0.0/installed-779...,
directory => base>=4.2 && <4.6)
rejecting: directory-1.1.0.1 (conflict: base==4.8.0.0/installed-779...,
directory => base>=4.2 && <4.5)
rejecting: directory-1.1.0.0 (conflict: base==4.8.0.0/installed-779...,
directory => base>=4.2 && <4.4)
rejecting: directory-1.0.1.2, 1.0.1.1, 1.0.1.0, 1.0.0.3, 1.0.0.0 (conflict:
process => directory>=1.1 && <1.3)
 Dependency tree exhaustively searched.
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - problem with latest cabal-install

2015-01-01 Thread Edward Z. Yang
Oh, because Cabal HQ hasn't cut a release yet.

Try installing out of Git.  https://github.com/haskell/cabal/

Edward

Excerpts from George Colpitts's message of 2015-01-01 14:23:50 -0500:
 I still have 7.8.3 but it doesn't seem to want to build the latest cabal:
 
  ghc --version
 The Glorious Glasgow Haskell Compilation System, version 7.8.3
 bash-3.2$ cabal install cabal-install
 Resolving dependencies...
 Configuring cabal-install-1.20.0.6...
 Building cabal-install-1.20.0.6...
 Installed cabal-install-1.20.0.6
 Updating documentation index
 /Users/gcolpitts/Library/Haskell/share/doc/index.html
 
 On Thu, Jan 1, 2015 at 2:54 PM, Edward Z. Yang ezy...@mit.edu wrote:
 
  If you still have your old GHC around, it will be much better to
  compile the newest cabal-install using the *old GHC*, and then
  use that copy to bootstrap a copy of the newest cabal-install.
 
  Edward
 
  Excerpts from George Colpitts's message of 2015-01-01 12:08:44 -0500:
    $ cabal update
   Downloading the latest package list from hackage.haskell.org
   Note: *there is a new version of cabal-install available.*
   To upgrade, run: cabal install cabal-install
    bash-3.2$ cabal install -j3 cabal-install
    ...

    Resolving dependencies...
    cabal: Could not resolve dependencies:
   trying: cabal-install-1.20.0.6 (user goal)
   trying: base-4.8.0.0/installed-779... (dependency of
  cabal-install-1.20.0.6)
   next goal: process (dependency of cabal-install-1.20.0.6)
   rejecting: process-1.2.1.0/installed-2db... (conflict: unix==2.7.1.0,
   process
    => unix==2.7.1.0/installed-4ae...)
    trying: process-1.2.1.0
    next goal: directory (dependency of cabal-install-1.20.0.6)
    rejecting: directory-1.2.1.1/installed-b08... (conflict: directory =>
    time==1.5.0.1/installed-c23..., cabal-install => time>=1.1 && <1.5)
    rejecting: directory-1.2.1.0 (conflict: base==4.8.0.0/installed-779...,
    directory => base>=4.5 && <4.8)
    rejecting: directory-1.2.0.1, 1.2.0.0 (conflict:
    base==4.8.0.0/installed-779..., directory => base>=4.2 && <4.7)
    rejecting: directory-1.1.0.2 (conflict: base==4.8.0.0/installed-779...,
    directory => base>=4.2 && <4.6)
    rejecting: directory-1.1.0.1 (conflict: base==4.8.0.0/installed-779...,
    directory => base>=4.2 && <4.5)
    rejecting: directory-1.1.0.0 (conflict: base==4.8.0.0/installed-779...,
    directory => base>=4.2 && <4.4)
    rejecting: directory-1.0.1.2, 1.0.1.1, 1.0.1.0, 1.0.0.3, 1.0.0.0
   (conflict:
    process => directory>=1.1 && <1.3)
   Dependency tree exhaustively searched.
 
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: ANNOUNCE: GHC 7.10.1 Release Candidate 1

2014-12-27 Thread Edward Z. Yang
Hello lonetiger,

I don't think any relevant logic changed in 7.10; however, this
commit may be relevant:

commit 8fb03bfd768ea0d5c666bbe07a50cb05214bbe92
Author: Ian Lynagh ig...@earth.li  Sun Mar 18 11:42:31 2012
Committer:  Ian Lynagh ig...@earth.li  Sun Mar 18 11:42:31 2012
Original File:  compiler/typecheck/TcForeign.lhs

If we say we're treating StdCall as CCall, then actually do so

But this warning should have applied even on older versions of GHC.

Are you running x86_64 Windows?  stdcall is specific to x86_32.

Edward

Excerpts from lonetiger's message of 2014-12-24 08:24:52 -0500:
 Hi,
 
 
 I’ve had some issues building this (and the git HEAD): it seems that the
 config.guess and config.sub in the libffi tarball are old and don't detect
 the platform when building with msys2. I had to unpack the tarfile and update
 the files, and after this it built correctly.
 
 
 Then I proceeded to try to make a shared library and got the following 
 warning:
 
 
 ManualCheck.hs:18:1: Warning:
 the 'stdcall' calling convention is unsupported on this platform,
 treating as ccall
 When checking declaration:
   foreign export stdcall "testFoo" testFooA :: CInt -> IO (FooPtr)
 
 
 
 Does this mean that GHC no longer supports stdcall on Windows? Or could this
 be related to the issue I had building?
 
 
 Regards,
 
 Tamar
 
 
 
 
 
 From: Austin Seipp
 Sent: ‎Tuesday‎, ‎December‎ ‎23‎, ‎2014 ‎15‎:‎36
 To: ghc-d...@haskell.org, glasgow-haskell-users@haskell.org
 
 
 
 
 
 We are pleased to announce the first release candidate for GHC 7.10.1:
 
 https://downloads.haskell.org/~ghc/7.10.1-rc1/
 
 This includes the source tarball and bindists for 64bit/32bit Linux
 and Windows. Binary builds for other platforms will be available
 shortly. (CentOS 6.5 binaries are not available at this time like they
 were for 7.8.x). These binaries and tarballs have an accompanying
 SHA256SUMS file signed by my GPG key id (0x3B58D86F).
 
 We plan to make the 7.10.1 release sometime in February of 2015. We
 expect another RC to occur during January of 2015.
 
 Please test as much as possible; bugs are much cheaper if we find them
 before the release!
 
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Thread behavior in 7.8.3

2014-10-29 Thread Edward Z. Yang
I don't think this is directly related to the problem, but if you have a
thread that isn't yielding, you can force it to yield by using
-fno-omit-yields on your code.  It won't help if the non-yielding code
is in a library, and it won't help if the problem was that you just
weren't setting timeouts finely enough (which sounds like what was
happening). FYI.
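
To make that concrete, here is a tiny made-up example (not from this
thread): with -O the loop below allocates nothing, so it never reaches a
yield point; compiling the module with -fno-omit-yields keeps the entry
checks that let the scheduler interrupt it.

    module Main (main) where

    spin :: Int -> Int
    spin 0 = 0
    spin n = spin (n - 1)   -- tight, non-allocating loop

    main :: IO ()
    main = print (spin 500000000)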

Edward

Excerpts from John Lato's message of 2014-10-29 17:19:46 -0700:
 I guess I should explain what that flag does...
 
 The GHC RTS maintains capabilities, the number of capabilities is specified
 by the `+RTS -N` option.  Each capability is a virtual machine that
 executes Haskell code, and maintains its own runqueue of threads to process.
 
 A capability will perform a context switch at the next heap block
 allocation (every 4k of allocation) after the timer expires.  The timer
 defaults to 20ms, and can be set by the -C flag.  Capabilities perform
 context switches in other circumstances as well, such as when a thread
 yields or blocks.
 
 My guess is that either the context switching logic changed in ghc-7.8, or
 possibly your code used to trigger a switch via some other mechanism (stack
 overflow or something maybe?), but is optimized differently now so instead
 it needs to wait for the timer to expire.
 
 The problem we had was that a time-sensitive thread was getting scheduled
 on the same capability as a long-running non-yielding thread, so the
 time-sensitive thread had to wait for a context switch timeout (even though
 there were free cores available!).  I expect even with -N4 you'll still see
 occasional delays (perhaps 5% of calls).
 
 We've solved our problem with judicious use of `forkOn`, but that won't
 help at N1.
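 
 As a rough sketch of that forkOn arrangement (the two thread bodies here
 are made up for illustration; build with -threaded -rtsopts and run with
 +RTS -N2):
 
     import Control.Concurrent
 
     timeSensitive :: IO ()
     timeSensitive = do
       threadDelay 1000          -- pretend we service a ~1ms deadline
       timeSensitive
 
     busyWorker :: IO ()
     busyWorker = print (go (maxBound :: Int))
       where go :: Int -> Int
             go 0 = 0
             go n = go (n - 1)   -- stands in for a long-running job that seldom yields
 
     main :: IO ()
     main = do
       _ <- forkOn 0 timeSensitive    -- pin to capability 0
       _ <- forkOn 1 busyWorker       -- pin to capability 1
       threadDelay (10 * 1000 * 1000) -- let them run for a while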
 
 We did see this behavior in 7.6, but it's definitely worse in 7.8.
 
 Incidentally, has there been any interest in a work-stealing scheduler?
 There was a discussion from about 2 years ago, in which Simon Marlow noted
 it might be tricky, but it would definitely help in situations like this.
 
 John L.
 
 On Thu, Oct 30, 2014 at 8:02 AM, Michael Jones m...@proclivis.com wrote:
 
  John,
 
  Adding -C0.005 makes it much better. Using -C0.001 makes it behave more
  like -N4.
 
  Thanks. This saves my project, as I need to deploy on a single core Atom
  and was stuck.
 
  Mike
 
  On Oct 29, 2014, at 5:12 PM, John Lato jwl...@gmail.com wrote:
 
  By any chance do the delays get shorter if you run your program with `+RTS
  -C0.005` ?  If so, I suspect you're having a problem very similar to one
  that we had with ghc-7.8 (7.6 too, but it's worse on ghc-7.8 for some
  reason), involving possible misbehavior of the thread scheduler.
 
  On Wed, Oct 29, 2014 at 2:18 PM, Michael Jones m...@proclivis.com wrote:
 
  I have a general question about thread behavior in 7.8.3 vs 7.6.X
 
  I moved from 7.6 to 7.8 and my application behaves very differently. I
  have three threads, an application thread that plots data with wxhaskell or
  sends it over a network (depends on settings), a thread doing usb bulk
  writes, and a thread doing usb bulk reads. Data is moved around with TChan,
  and TVar is used for coordination.
 
  When the application was compiled with 7.6, my stream of usb traffic was
  smooth. With 7.8, there are lots of delays where nothing seems to be
  running. These delays are up to 40ms, whereas with 7.6 delays were a 1ms or
  so.
 
  When I add -N2 or -N4, the 7.8 program runs fine. But on 7.6 it runs fine
  without with -N2/4.
 
  The program is compiled -O2 with profiling. The -N2/4 version uses more
  memory,  but in both cases with 7.8 and with 7.6 there is no space leak.
 
  I tried to compile and use -ls so I could take a look with threadscope,
  but the application hangs and writes no data to the file. The CPU fans run
  wild like it is in an infinite loop. It at least pops an unpainted
  wxhaskell window, so it got partially running.
 
  One of my libraries uses option -fsimpl-tick-factor=200 to get around the
  compiler.
 
  What do I need to know about changes to threading and event logging
  between 7.6 and 7.8? Is there some general documentation somewhere that
  might help?
 
  I am on Ubuntu 14.04 LTS. I downloaded the 7.8 tool chain tar ball and
  installed myself, after removing 7.6 with apt-get.
 
  Any hints appreciated.
 
  Mike
 
 
  ___
  Glasgow-haskell-users mailing list
  Glasgow-haskell-users@haskell.org
  http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
 
 
 
 
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Thread behavior in 7.8.3

2014-10-29 Thread Edward Z. Yang
Yes, that's right.

I brought it up because you mentioned that there might still be
occasional delays, and those might be caused by a thread not being
preemptible for a while.

Edward

Excerpts from John Lato's message of 2014-10-29 17:31:45 -0700:
 My understanding is that -fno-omit-yields is subtly different.  I think
 that's for the case when a function loops without performing any heap
 allocations, and thus would never yield even after the context switch
 timeout.  In my case the looping function does perform heap allocations and
 does eventually yield, just not until after the timeout.
 
 Is that understanding correct?
 
 (technically, doesn't it change to yielding after stack checks or something
 like that?)
 
 On Thu, Oct 30, 2014 at 8:24 AM, Edward Z. Yang ezy...@mit.edu wrote:
 
  I don't think this is directly related to the problem, but if you have a
  thread that isn't yielding, you can force it to yield by using
  -fno-omit-yields on your code.  It won't help if the non-yielding code
  is in a library, and it won't help if the problem was that you just
  weren't setting timeouts finely enough (which sounds like what was
  happening). FYI.
 
  Edward
 
  Excerpts from John Lato's message of 2014-10-29 17:19:46 -0700:
   I guess I should explain what that flag does...
  
   The GHC RTS maintains capabilities, the number of capabilities is
  specified
   by the `+RTS -N` option.  Each capability is a virtual machine that
   executes Haskell code, and maintains its own runqueue of threads to
  process.
  
   A capability will perform a context switch at the next heap block
   allocation (every 4k of allocation) after the timer expires.  The timer
   defaults to 20ms, and can be set by the -C flag.  Capabilities perform
   context switches in other circumstances as well, such as when a thread
   yields or blocks.
  
   My guess is that either the context switching logic changed in ghc-7.8,
  or
   possibly your code used to trigger a switch via some other mechanism
  (stack
   overflow or something maybe?), but is optimized differently now so
  instead
   it needs to wait for the timer to expire.
  
   The problem we had was that a time-sensitive thread was getting scheduled
   on the same capability as a long-running non-yielding thread, so the
   time-sensitive thread had to wait for a context switch timeout (even
  though
   there were free cores available!).  I expect even with -N4 you'll still
  see
   occasional delays (perhaps 5% of calls).
  
   We've solved our problem with judicious use of `forkOn`, but that won't
   help at N1.
  
   We did see this behavior in 7.6, but it's definitely worse in 7.8.
  
   Incidentally, has there been any interest in a work-stealing scheduler?
   There was a discussion from about 2 years ago, in which Simon Marlow
  noted
   it might be tricky, but it would definitely help in situations like this.
  
   John L.
  
   On Thu, Oct 30, 2014 at 8:02 AM, Michael Jones m...@proclivis.com
  wrote:
  
John,
   
Adding -C0.005 makes it much better. Using -C0.001 makes it behave more
like -N4.
   
Thanks. This saves my project, as I need to deploy on a single core
  Atom
and was stuck.
   
Mike
   
On Oct 29, 2014, at 5:12 PM, John Lato jwl...@gmail.com wrote:
   
By any chance do the delays get shorter if you run your program with
  `+RTS
-C0.005` ?  If so, I suspect you're having a problem very similar to
  one
that we had with ghc-7.8 (7.6 too, but it's worse on ghc-7.8 for some
reason), involving possible misbehavior of the thread scheduler.
   
On Wed, Oct 29, 2014 at 2:18 PM, Michael Jones m...@proclivis.com
  wrote:
   
I have a general question about thread behavior in 7.8.3 vs 7.6.X
   
I moved from 7.6 to 7.8 and my application behaves very differently. I
have three threads, an application thread that plots data with
  wxhaskell or
sends it over a network (depends on settings), a thread doing usb bulk
writes, and a thread doing usb bulk reads. Data is moved around with
  TChan,
and TVar is used for coordination.
   
When the application was compiled with 7.6, my stream of usb traffic
  was
smooth. With 7.8, there are lots of delays where nothing seems to be
running. These delays are up to 40ms, whereas with 7.6 delays were a
  1ms or
so.
   
When I add -N2 or -N4, the 7.8 program runs fine. But on 7.6 it runs
  fine
without with -N2/4.
   
The program is compiled -O2 with profiling. The -N2/4 version uses
  more
memory,  but in both cases with 7.8 and with 7.6 there is no space
  leak.
   
 I tried to compile and use -ls so I could take a look with
  threadscope,
but the application hangs and writes no data to the file. The CPU
  fans run
wild like it is in an infinite loop. It at least pops an unpainted
wxhaskell window, so it got partially running.
   
One of my libraries uses option -fsimpl-tick-factor=200 to get around

Re: GHC 7.4.2 on Ubuntu Trusty

2014-10-28 Thread Edward Z. Yang
Excerpts from Yitzchak Gale's message of 2014-10-28 13:58:08 -0700:
 How about this: Currently, every GHC source distribution
 requires no later than its own version of GHC for bootstrapping.
 Going backwards, that chops up the sequence of GHC versions
 into tiny incompatible pieces - there is no way to start with a
 working GHC and work backwards to an older version by compiling
 successively older GHC sources.
 
 If instead each GHC could be compiled using at least one
 subsequent version, the chain would not be broken. I.e.,
 always provide a compatibility flag or some other reasonably
 simple mechanism that would enable the current GHC to
 compile the source code of at least the last previous released
 version.

Here is an alternate proposal: when we make a new major version release,
we should also make a minor version release of the previous series, which
is prepped so that it can compile from the new major version.  If it
is the case that one version of the compiler can compile any other
version in the same series, this would be sufficient to go backwards.

Concretely, the action plan is very simple too: take 7.6 and apply as
many patches as is necessary to make it compile from 7.8, and cut
a release with those patches.

Edward
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: optimizing StgPtr allocate (Capability *cap, W_ n)

2014-10-16 Thread Edward Z. Yang
Hi Bulat,

This seems quite reasonable to me. Have you eyeballed the assembly
GCC produces to see that the hotpath is improved? If you can submit
a patch that would be great!

Cheers,
Edward

Excerpts from Bulat Ziganshin's message of 2014-10-14 10:08:59 -0700:
 Hello Glasgow-haskell-users,
 
 i'm looking at the
 https://github.com/ghc/ghc/blob/23bb90460d7c963ee617d250fa0a33c6ac7bbc53/rts/sm/Storage.c#L680
 
 if i correctly understand, it's a speed-critical routine?
 
 i think that it may be improved in this way:
 
 StgPtr allocate (Capability *cap, W_ n)
 {
 bdescr *bd;
 StgPtr p;
 
 TICK_ALLOC_HEAP_NOCTR(WDS(n));
 CCS_ALLOC(cap->r.rCCCS,n);
 
 /// here starts new improved code:
 
 bd = cap->r.rCurrentAlloc;
 if (bd == NULL || bd->free + n > bd->end) {
 if (n >= LARGE_OBJECT_THRESHOLD/sizeof(W_)) {
 
 }
 if (bd->free + n <= bd->start + BLOCK_SIZE_W)
 bd->end = min (bd->start + BLOCK_SIZE_W, bd->free + LARGE_OBJECT_THRESHOLD)
 goto usual_alloc;
 }
 
 }
 
 /// and here it stops
 
 usual_alloc:
 p = bd->free;
 bd->free += n;
 
 IF_DEBUG(sanity, ASSERT(*((StgWord8*)p) == 0xaa));
 return p;
 }
 
 
 i think it's obvious - we consolidate two if's on the critical path
 into a single one and avoid one ADD by keeping the highly-useful bd->end
 pointer
 
 further improvements may include removing the bd==NULL check by
 initializing bd->free=bd->end=NULL and moving the entire if body
 into a separate slow_allocate() procedure marked noinline, with
 allocate() probably marked as forceinline:
 
 StgPtr allocate (Capability *cap, W_ n)
 {
 bdescr *bd;
 StgPtr p;
 
 TICK_ALLOC_HEAP_NOCTR(WDS(n));
 CCS_ALLOC(cap->r.rCCCS,n);
 
 bd = cap->r.rCurrentAlloc;
 if (bd->free + n > bd->end)
 return slow_allocate(cap,n);
 
 p = bd->free;
 bd->free += n;
 
 IF_DEBUG(sanity, ASSERT(*((StgWord8*)p) == 0xaa));
 return p;
 }
 
 this change will greatly simplify the optimizer's work. in my experience,
 current C++ compilers are weak at optimizing large functions with complex
 execution paths, and such transformations really improve the generated code
 
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: hmatrix

2014-08-24 Thread Edward Z . Yang
Hello Adrian,

This sounds like a definite bug in Cabal, in that it should report
accordingly if it is looking for both static and dynamic versions
of the library, and only finds the static one.  Can you file a bug
report?

Thanks,
Edward

Excerpts from Adrian Victor Crisciu's message of 2014-08-23 23:45:48 +0100:
 After 3 days of frustrating trials and errors, I managed to install the new
 hmatrix package on Slackware 13.1. I post this message in case anyone else
 hits the same problem, as the process requires some alteration of the
 standard build process of ATLAS, LAPACK, hmatrix and hmatrix-gsl. The
 following steps assume that LAPACK is built against an optimized ATLAS
 library.
 
 1.) By default, ATLAS builds only static libraries. However, hmatrix needs
 shared objects, so ATLAS should be configured with the --share option and,
 after the build is complete, the commands make shared and/or make
 ptshared need to be issued in BUILDDIR/lib
 
 2.) LAPACK also builds by default only static libraries and, for the same
 reason as above, we need position independent code in ALL the objects in
 liblapack. In order to do this we need to
   2.1.) Add -fPIC to OPTS, NOOPT and LOADOPT in LAPACKROOT/make.inc
2.2.) Change the BLASLIB macro in the same file to point to the
 optimized tatlas (or satlas) library
   2.3.) Add the target liblapack.so to SRC/Makefile:
   ../liblapack.so: $(ALLOBJ)
 gfortran -shared -W1 -o $@ $(ALLOBJ)
 (This step is a corrected version of
 http://theoryno3.blogspot.ro/2010/12/compiling-lapack-as-shared-library-in.html
 )
 
 3.) Change the extra-libraries line in hmatrix.cabal to read:
   extra-libraries: tatlas lapack
 
 4.) Change the extra-library line in hmatrix-gsl to read:
extra-libraries: gslcblas gsl
 
 Again, this procedure worked for my Slackware 13.1 Linux box, but I think
 it will work on any decent Linux machine.
 
 Thanks everyone for your time and useful comments!
 Adrian Victor.
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: hmatrix-0.16.0.4 installation problem

2014-08-22 Thread Edward Z . Yang
Excerpts from Adrian Victor Crisciu's message of 2014-08-22 10:55:00 +0100:
 I tried the following command line:
 
 cabal install --enable-documentation
 --extra-include-dirs=/usr;local/include --extra-lib-dirs=/usr/local/lib
 hmatrix

Is that semicolon a typo?

Edward
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: hmatrix-0.16.0.4 installation problem

2014-08-21 Thread Edward Z . Yang
Hello Adrian,

Are the header files for blas and lapack on your system? (I'm not sure
what the configure script for other software was checking for.)

Edward

Excerpts from Adrian Victor Crisciu's message of 2014-08-21 14:22:58 +0100:
 Sorry!
 
 This is the failed cabal install command and its output: The blas
 (libcblas.so) and lapack (both liblapack.a and liblapack.so) are in
 /usr/local/lib64, so they can be easily found. And the configure script for
 other software did find them.
 
 cabal install --enable-documentation hmatrix
 
 Resolving dependencies...
 Configuring hmatrix-0.16.0.4...
 cabal: Missing dependencies on foreign libraries:
 * Missing C libraries: blas, lapack
 This problem can usually be solved by installing the system packages that
 provide these libraries (you may need the -dev versions). If the libraries
 are already installed but in a non-standard location then you can use the
 flags --extra-include-dirs= and --extra-lib-dirs= to specify where they are.
 Failed to install hmatrix-0.16.0.4
 cabal: Error: some packages failed to install:
 hmatrix-0.16.0.4 failed during the configure step. The exception was:
 ExitFailure 1
 
 Adrian-Victor
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: 'import ccall unsafe' and parallelism

2014-08-14 Thread Edward Z . Yang
I have to agree with Brandon's diagnosis: unsafePerformIO will
take out a lock, which is likely why you are seeing no parallelism.

Edward

Excerpts from Brandon Allbery's message of 2014-08-14 17:12:00 +0100:
 On Thu, Aug 14, 2014 at 11:54 AM, Christian Höner zu Siederdissen 
 choe...@tbi.univie.ac.at wrote:
 
  go xs = unsafePerformIO $ do
forM_ xs $ cfun
return $ somethingUnhealthy
 
 
 I wonder if this is your real problem. `unsafePerformIO` does some extra
 locking; the FFI specifies a function `unsafeLocalState`, which in GHC is
 `unsafeDupablePerformIO` which skips the extra locking.
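
For illustration only -- cfun below is just a placeholder IO action standing
in for the foreign call in the original message -- the suggested change is
simply to swap the wrapper:

    import Control.Monad (forM_)
    import System.IO.Unsafe (unsafeDupablePerformIO)

    cfun :: Int -> IO ()
    cfun _ = return ()        -- placeholder for the real FFI call

    go :: [Int] -> Int
    go xs = unsafeDupablePerformIO $ do
      forM_ xs cfun
      return (length xs)      -- placeholder for "somethingUnhealthy"

(with the usual caveat that unsafeDupablePerformIO may duplicate the
computation if two threads happen to evaluate it at once)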
 
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: cabal repl failing silently on missing exposed-modules

2014-08-08 Thread Edward Z . Yang
If you haven't already, go file a bug on
https://github.com/haskell/cabal/issues

Edward

Excerpts from cheater00 .'s message of 2014-08-06 15:18:04 +0100:
 Hi,
 I have just spent some time trying to figure out why all of a sudden
 cabal repl silently exits without an error message. What helped was
 to take a project that could launch the repl and compare the cabal
 files to my new project. It turns out the exposed-modules entry was
 missing. I was wondering whether this behaviour was intentional, as I
 don't recollect this happening before, but I don't have older systems
 to test this on.
 
 The reason I wanted to run a repl without editing exposed modules was
 to test some dependencies I pulled in to the sandbox with cabal
 install. The package in question didn't have any code of its own yet.
 In this case I would just expect ghci to load with the Prelude.
 
 Thanks!
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Failure compiling ghc-mtl with ghc-7.8.{2,3}

2014-07-20 Thread Edward Z . Yang
The last time I saw this error, it was because the package database
was messed up (there was an instance of MonadIO in scope, but it
was for the wrong package.)  However, I don't know what the source
of the problem is here.

Edward

Excerpts from i hamsa's message of 2014-07-20 08:26:52 +0100:
 I was trying to upgrade to ghc-7.8 the other day, and got this
 compilation failure when building ghc-mtl-1.2.1.0 (see the end of the
 message).
 
 I'm using the haskell overlay on Gentoo Linux straight out of the box,
 no local cabal installations of anything.
 
 Now I was told that other people can compile ghc-mtl with 7.8 just
 fine, so there must be something broken in my specific configuration.
 What would be an effective way to approach the situation?
 
 In the sources I see that an instance of MonadIO GHC.Ghc does exist. I
 don't understand these errors. Are there multiple different MonadIO
 classes in different modules?
 
 Thank you and happy hacking.
 
 Now the errors:
 
 Control/Monad/Ghc.hs:42:15:
 No instance for (GHC.MonadIO Ghc)
   arising from the 'deriving' clause of a data type declaration
 Possible fix:
   use a standalone 'deriving instance' declaration,
 so you can specify the instance context yourself
 When deriving the instance for (GHC.ExceptionMonad Ghc)
 
 Control/Monad/Ghc.hs:46:15:
 No instance for (MonadIO GHC.Ghc)
   arising from the 'deriving' clause of a data type declaration
 Possible fix:
   use a standalone 'deriving instance' declaration,
 so you can specify the instance context yourself
 When deriving the instance for (MonadIO Ghc)
 
 Control/Monad/Ghc.hs:49:15:
 No instance for (GHC.MonadIO Ghc)
   arising from the 'deriving' clause of a data type declaration
 Possible fix:
   use a standalone 'deriving instance' declaration,
 so you can specify the instance context yourself
 When deriving the instance for (GHC.GhcMonad Ghc)
 
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Failure compiling ghc-mtl with ghc-7.8.{2,3}

2014-07-20 Thread Edward Z . Yang
It looks like you will have to install old versions of mtl/exceptions
which work on transformers-0.3.0.0, although undoubtedly the real
problem is that GHC should update what version of transformers it
is distributing.

Edward

Excerpts from i hamsa's message of 2014-07-20 19:25:36 +0100:
 I think I found the problem.
 
 package ghc-7.8.3 requires transformers-0.3.0.0
 package mtl-2.2.1 requires transformers-0.4.1.0
 package exceptions-0.6.1 requires transformers-0.4.1.0
 
 I wonder how is this ever supposed to work :(
 
 On Sun, Jul 20, 2014 at 9:01 PM, Edward Z. Yang ezy...@mit.edu wrote:
  The last time I saw this error, it was because the package database
  was messed up (there was an instance of MonadIO in scope, but it
  was for the wrong package.)  However, I don't know what the source
  of the problem is here.
 
  Edward
 
  Excerpts from i hamsa's message of 2014-07-20 08:26:52 +0100:
  I was trying to upgrade to ghc-7.8 the other day, and got this
  compilation failure when building ghc-mtl-1.2.1.0 (see the end of the
  message).
 
  I'm using the haskell overlay on Gentoo Linux straight out of the box,
  no local cabal installations of anything.
 
  Now I was told that other people can compile ghc-mtl with 7.8 just
  fine, so there must be something broken in my specific configuration.
  What would be an effective way to approach the situation?
 
  In the sources I see that an instance of MonadIO GHC.Ghc does exist. I
  don't understand these errors. Are there multiple different MonadIO
  classes in different modules?
 
  Thank you and happy hacking.
 
  Now the errors:
 
  Control/Monad/Ghc.hs:42:15:
  No instance for (GHC.MonadIO Ghc)
arising from the 'deriving' clause of a data type declaration
  Possible fix:
use a standalone 'deriving instance' declaration,
  so you can specify the instance context yourself
  When deriving the instance for (GHC.ExceptionMonad Ghc)
 
  Control/Monad/Ghc.hs:46:15:
  No instance for (MonadIO GHC.Ghc)
arising from the 'deriving' clause of a data type declaration
  Possible fix:
use a standalone 'deriving instance' declaration,
  so you can specify the instance context yourself
  When deriving the instance for (MonadIO Ghc)
 
  Control/Monad/Ghc.hs:49:15:
  No instance for (GHC.MonadIO Ghc)
arising from the 'deriving' clause of a data type declaration
  Possible fix:
use a standalone 'deriving instance' declaration,
  so you can specify the instance context yourself
  When deriving the instance for (GHC.GhcMonad Ghc)
 
 
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Using mutable array after an unsafeFreezeArray, and GC details

2014-05-12 Thread Edward Z . Yang
Excerpts from Brandon Simmons's message of 2014-05-10 13:57:40 -0700:
 Another silly question: when card-marking happens after a write or
 CAS, does that indicate this segment maybe contains old-to-new
 generation references, so be sure to preserve (scavenge?) them from
 collection ? In my initial question I was thinking of the cards as
 indicating here be garbage (e.g. a previous overwritten array
 value), but I think I had the wrong idea about how copying GC works
 generally (shouldn't it really be called Non-Garbage Preservation?).

That's correct.

Cheers,
Edward
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Using mutable array after an unsafeFreezeArray, and GC details

2014-05-09 Thread Edward Z . Yang
Hello Brandon,

Excerpts from Brandon Simmons's message of 2014-05-08 16:18:48 -0700:
 I have an unusual application with some unusual performance problems
 and I'm trying to understand how I might use unsafeFreezeArray to help
 me, as well as understand in detail what's going on with boxed mutable
 arrays and GC. I'm using the interface from 'primitive' below.
 
 First some basic questions, then a bit more background
 
 1) What happens when I do `newArray s x >>= \a -> unsafeFreezeArray a >>
  return a` and then use `a`? What problems could that cause?

Your code as written wouldn't compile, but assuming you're talking about
the primops newArray# and unsafeFreezeArray#, what this operation does
is allocate a new array of pointers (initially recorded as mutable), and
then freezes it in-place (by changing the info-table associated with
it), but while maintaining a pointer to the original mutable array.  Nothing bad
will happen immediately, but if you use this mutable reference to mutate
the pointer array, you can cause a crash (in particular, if the array
makes it to the old generation, it will not be on the mutable list and
so if you mutate it, you may be missing roots.)

 2) And what if a do a `cloneMutableArray` on `a` and likewise use the
 resulting array?

If you do the clone before freezing, that's fine for all use-cases;
if you do the clone after, you will end up with the same result as (1).
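
For instance, the safe ordering from (2) -- clone first, then freeze the
clone -- looks like this against the 'primitive' interface (sketch only):

    import Control.Monad.Primitive (PrimMonad, PrimState)
    import Data.Primitive.Array

    -- Freeze a copy; keep writing only through the still-mutable original.
    snapshot :: PrimMonad m => MutableArray (PrimState m) a -> Int -> m (Array a)
    snapshot ma len = do
      copy <- cloneMutableArray ma 0 len   -- fresh array, not shared with ma
      unsafeFreezeArray copy               -- fine: nothing writes to copy afterwards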

 Background: I've been looking into an issue [1] in a library in which
 as more mutable arrays are allocated, GC dominates (I think I verified
 this?) and all code gets slower in proportion to the number of mutable
 arrays that are hanging around.
 
 I've been trying to understand how this is working internally. I don't
 quite understand how the remembered set works with respect to
 MutableArray. As best I understand: the remembered set in generation G
 points to certain objects in older generations, which objects hold
 references to objects in G. Then for MutableArrays specifically,
 card-marking is used to mark regions of the array with garbage in some
 way.
 
 So my hypothesis is the slowdown is associated with the size of the
 remembered set, and whatever the GC has to do on it. And in my tests,
 freezing the array seems to make that overhead (at least the overhead
 proportional to number of arrays) disappear.

You're basically correct.  In the current GC design, mutable arrays of
pointers are always placed on the mutable list.  The mutable list of
generations which are not being collected are always traversed; thus,
the number of pointer arrays corresponds to a linear overhead for minor GCs.

Here is a feature request tracking many of the infelicities that our
current GC design has:  https://ghc.haskell.org/trac/ghc/ticket/7662
The upshot is that the Haskell GC is very nicely tuned for mostly
immutable workloads, but there are some bad asymptotics when your
heap has lots of mutable objects.  This is generally a hard problem:
tuned GC implementations for mutable languages are a lot of work!
(Just ask the JVM implementors.)

 Now I'm really lost in the woods though. My hope is that I might be
 able to safely use unsafeFreezeArray to help me here [3]. Here are the
 particulars of how I use MutableArray in my algorithm, which are
 somewhat unusual:
   - keep around a small template `MutableArray Nothing`
   - use cloneMutableArray for fast allocation of new arrays
   - for each array only a *single* write (CAS actually) happens at each 
 position
 
 In fact as far as I can reason, there ought to be no garbage to
 collect at all until the entire array becomes garbage (the initial
 value is surely shared, especially since I'm keeping this template
 array around to clone from, right?). In fact I was even playing with
 the idea of rolling a new CAS that skips the card-marking stuff.

I don't understand your full workload, but if you have a workload that
involves creating an array, mutating it over a short period of time,
and then never mutating it afterwards, you should simply freeze it after
you are done writing it.  Once frozen, the array will no longer be kept
on the mutable list and you won't pay for it when doing GC.  However,
the fact that you are doing a CAS makes it seem to me that your workflow
may be more complicated than that...
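
In the simple version of that workflow the shape is just (sketch only,
again using the 'primitive' interface):

    import Control.Monad.ST (ST)
    import Data.Primitive.Array

    buildFrozen :: Int -> (Int -> a) -> a -> ST s (Array a)
    buildFrozen n f def = do
      ma <- newArray n def
      mapM_ (\i -> writeArray ma i (f i)) [0 .. n - 1]
      unsafeFreezeArray ma   -- no writes through ma after this point, so GC
                             -- no longer has to keep it on the mutable list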

Cheers,
Edward
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Using mutable array after an unsafeFreezeArray, and GC details

2014-05-09 Thread Edward Z . Yang
Excerpts from Carter Schonwald's message of 2014-05-09 16:49:07 -0700:
 Any chance you could try to use storable or unboxed vectors?

Neither of those will work if, at the end of the day, you need to
store pointers to heap objects.

Edward
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


[Haskell] [Haskell-cafe] ANN: Monad.Reader Issue 23

2014-04-24 Thread Edward Z . Yang
I am pleased to announce that Issue 23 of the Monad Reader is now available.

http://themonadreader.files.wordpress.com/2014/04/issue23.pdf
http://themonadreader.wordpress.com/2014/04/23/issue-23/

Issue 23 consists of the following five articles:

  * FizzBuzz in Haskell by Embedding a Domain-Specific Language by
Maciej Pírog
http://themonadreader.files.wordpress.com/2014/04/fizzbuzz.pdf

  * Supercompilation: Ideas and Methods (+appendix) by Ilya Klyuchnikov
and Dimitur Krustev
http://themonadreader.files.wordpress.com/2014/04/super-final.pdf

  * A Haskell sound specification DSL: Ludic support and deep immersion
in Nordic technology-supported LARP by Henrik Bäärnhielm, Daniel
Sundström and Mikael Vejdemo-Johansson
http://themonadreader.files.wordpress.com/2014/04/celestria_main.pdf

  * MFlow, a continuation-based web framework without continuations by
Alberto Gomez Corona
http://themonadreader.files.wordpress.com/2014/04/mflow.pdf

  * Practical Type System Benefits by Neil Brown
http://themonadreader.files.wordpress.com/2014/04/nccb.pdf

This time around, I have individual article files for each (and the
supercompilation article has an extra appendix not included in the full
issue PDF).

Feel free to browse the source files. You can check out the entire
repository using Git:

git clone https://github.com/ezyang/tmr-issue23.git

If you’d like to write something for Issue 24, please get in touch!

Cheers,
Edward
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] Monad.Reader #24 call for copy

2014-04-24 Thread Edward Z . Yang
Call for Copy: The Monad.Reader - Issue 24


Whether you're an established academic or have only just started
learning Haskell, if you have something to say, please consider
writing an article for The Monad.Reader! The
submission deadline for Issue 24 will be:

**Saturday, July 5, 2014**

The Monad.Reader


The Monad.Reader is an electronic magazine about all things Haskell. It
is less formal than a journal, but somehow more enduring than a wiki-
page. There have been a wide variety of articles: exciting code
fragments, intriguing puzzles, book reviews, tutorials, and even
half-baked research ideas.

Submission Details
~~

NEW this issue: for my reviewing sanity, I am setting a soft page limit
of fifteen pages.  If you would like to write something longer, get
in touch, but remember: brevity is the soul of wit.

In any case, contact me if you intend to submit something -- the sooner
you let me know what you're up to, the better.

Please submit articles for the next issue to me by e-mail.

Articles should be written according to the guidelines available from

http://themonadreader.wordpress.com/contributing/

Please submit your article in PDF, together with any source files you
used. The sources will be released together with the magazine under a
BSD license.

If you would like to submit an article, but have trouble with LaTeX
please let me know and we'll work something out.
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: memory ordering

2013-12-31 Thread Edward Z . Yang
I was thinking about my response, and realized there was one major
misleading thing in my description.  The load reordering I described
applies to load instructions in C-- proper, i.e. things that show up
in the C-- dump as:

W_ x = I64[...addr...]

Reads to IORefs and reads to vectors get compiled inline (as they
eventually translate into inline primops), so my admonitions are
applicable.

However, the story with *foreign primops* (which is how loadLoadBarrier
in atomic-primops is defined, how you might imagine defining a custom
read function as a primop) is a little different.  First, what does a
call to an foreign primop compile into? It is *not* inlined, so it will
eventually get compiled into a jump (this could be a problem if you're
really trying to squeeze out performance!)  Second, the optimizer is a
bit more conservative when it comes to primop calls (internally referred
to as unsafe foreign calls); at the moment, the optimizer assumes
these foreign calls clobber heap memory, so we *automatically* will not
push loads/stores beyond this boundary. (NB: We reserve the right to
change this in the future!)

This is probably why atomic-primops, as it is written today, seems to
work OK, even in the presence of the optimizer.  But I also have a hard
time believing it gives the speedups you want, due to the current
design. (CC'd Ryan Newton, because I would love to be wrong here, and
maybe he can correct me on this note.)

Cheers,
Edward

P.S. loadLoadBarrier compiles to a no-op on x86 architectures, but
because it's not inlined I think you will still end up with a jump (LLVM
might be able to eliminate it).
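
For reference, the pattern under discussion written against atomic-primops
would look roughly like this (sketch only; vec and posRef stand for the
vector and IORef from your description):

    import Data.Atomics (loadLoadBarrier)
    import Data.IORef (IORef, readIORef)
    import qualified Data.Vector.Mutable as MV

    readPair :: MV.IOVector a -> Int -> IORef Int -> IO (a, Int)
    readPair vec ix posRef = do
      x <- MV.read vec ix
      loadLoadBarrier               -- keep the two loads in program order
      position <- readIORef posRef
      return (x, position)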

Excerpts from John Lato's message of 2013-12-31 03:01:58 +0800:
 Hi Edward,
 
 Thanks very much for this reply, it answers a lot of questions I'd had.
  I'd hoped that ordering would be preserved through C--, but c'est la vie.
  Optimizing compilers are ever the bane of concurrent algorithms!
 
 stg/SMP.h does define a loadLoadBarrier, which is exposed in Ryan Newton's
 atomic-primops package.  From the docs, I think that's a general read
 barrier, and should do what I want.  Assuming it works properly, of course.
  If I'm lucky it might even be optimized out.
 
 Thanks,
 John
 
 On Mon, Dec 30, 2013 at 6:04 AM, Edward Z. Yang ezy...@mit.edu wrote:
 
  Hello John,
 
  Here are some prior discussions (which I will attempt to summarize
  below):
 
  http://www.haskell.org/pipermail/haskell-cafe/2011-May/091878.html
  http://www.haskell.org/pipermail/haskell-prime/2006-April/001237.html
  http://www.haskell.org/pipermail/haskell-prime/2006-March/001079.html
 
  The guarantees that Haskell and GHC give in this area are hand-wavy at
  best; at the moment, I don't think Haskell or GHC have a formal memory
  model—this seems to be an open research problem. (Unfortunately, AFAICT
  all the researchers working on relaxed memory models have their hands
  full with things like C++ :-)
 
  If you want to go ahead and build something that /just/ works for a
  /specific version/ of GHC, you will need to answer this question
  separately for every phase of the compiler.  For Core and STG, monads
  will preserve ordering, so there is no trouble.  However, for C--, we
  will almost certainly apply optimizations which reorder reads (look at
  CmmSink.hs).  To properly support your algorithm, you will have to add
  some new read barrier mach-ops, and teach the optimizer to respect them.
  (This could be fiendishly subtle; it might be better to give C-- a
  memory model first.)  These mach-ops would then translate into
  appropriate arch-specific assembly or LLVM instructions, preserving
  the guarantees further.
 
  This is not related to your original question, but the situation is a
  bit better with regards to reordering stores: we have a WriteBarrier
  MachOp, which in principle, prevents store reordering.  In practice, we
  don't seem to actually have any C-- optimizations that reorder stores.
  So, at least you can assume these will work OK!
 
  Hope this helps (and is not too inaccurate),
  Edward
 
  Excerpts from John Lato's message of 2013-12-20 09:36:11 +0800:
   Hello,
  
   I'm working on a lock-free algorithm that's meant to be used in a
   concurrent setting, and I've run into a possible issue.
  
   The crux of the matter is that a particular function needs to perform the
   following:
  
  x <- MVector.read vec ix
  position <- readIORef posRef
  
   and the algorithm is only safe if these two reads are not reordered (both
   the vector and IORef are written to by other threads).
  
   My concern is, according to standard Haskell semantics this should be
  safe,
   as IO sequencing should guarantee that the reads happen in-order.  Of
   course this also relies upon the architecture's memory model, but x86
  also
   guarantees that reads happen in order.  However doubts remain; I do not
   have confidence that the code generator will handle this properly.  In
   particular, LLVM may

Re: memory ordering

2013-12-31 Thread Edward Z . Yang
 Second, the optimizer is a bit more conservative when it comes to
 primop calls (internally referred to as unsafe foreign calls)

Sorry, I need to correct myself here.  Foreign primops, and most
out-of-line primops, compile into jumps which end basic blocks, which
constitute hard boundaries since we don't do really do inter-block
optimization.  Unsafe foreign calls are generally reserved for function
calls which use the C calling convention; primops manage the return
convention themselves.

Edward
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: memory ordering

2013-12-30 Thread Edward Z . Yang
Hello John,

Here are some prior discussions (which I will attempt to summarize
below):

http://www.haskell.org/pipermail/haskell-cafe/2011-May/091878.html
http://www.haskell.org/pipermail/haskell-prime/2006-April/001237.html
http://www.haskell.org/pipermail/haskell-prime/2006-March/001079.html

The guarantees that Haskell and GHC give in this area are hand-wavy at
best; at the moment, I don't think Haskell or GHC have a formal memory
model—this seems to be an open research problem. (Unfortunately, AFAICT
all the researchers working on relaxed memory models have their hands
full with things like C++ :-)

If you want to go ahead and build something that /just/ works for a
/specific version/ of GHC, you will need to answer this question
separately for every phase of the compiler.  For Core and STG, monads
will preserve ordering, so there is no trouble.  However, for C--, we
will almost certainly apply optimizations which reorder reads (look at
CmmSink.hs).  To properly support your algorithm, you will have to add
some new read barrier mach-ops, and teach the optimizer to respect them.
(This could be fiendishly subtle; it might be better to give C-- a
memory model first.)  These mach-ops would then translate into
appropriate arch-specific assembly or LLVM instructions, preserving
the guarantees further.

This is not related to your original question, but the situation is a
bit better with regards to reordering stores: we have a WriteBarrier
MachOp, which in principle, prevents store reordering.  In practice, we
don't seem to actually have any C-- optimizations that reorder stores.
So, at least you can assume these will work OK!

Hope this helps (and is not too inaccurate),
Edward

Excerpts from John Lato's message of 2013-12-20 09:36:11 +0800:
 Hello,
 
 I'm working on a lock-free algorithm that's meant to be used in a
 concurrent setting, and I've run into a possible issue.
 
 The crux of the matter is that a particular function needs to perform the
 following:
 
  x <- MVector.read vec ix
  position <- readIORef posRef
 
 and the algorithm is only safe if these two reads are not reordered (both
 the vector and IORef are written to by other threads).
 
 My concern is, according to standard Haskell semantics this should be safe,
 as IO sequencing should guarantee that the reads happen in-order.  Of
 course this also relies upon the architecture's memory model, but x86 also
 guarantees that reads happen in order.  However doubts remain; I do not
 have confidence that the code generator will handle this properly.  In
 particular, LLVM may freely re-order loads of NotAtomic and Unordered
 values.
 
 The one hope I have is that ghc will preserve IO semantics through the
 entire pipeline.  This seems like it would be necessary for proper handling
 of exceptions, for example.  So, can anyone tell me if my worries are
 unfounded, or if there's any way to ensure the behavior I want?  I could
 change the readIORef to an atomicModifyIORef, which should issue an mfence,
 but that seems a bit heavy-handed as just a read fence would be sufficient
 (although even that seems more than necessary).
 
 Thanks,
 John L.
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


[Haskell] Monad.Reader #23 call for copy

2013-12-18 Thread Edward Z . Yang
Call for Copy: The Monad.Reader - Issue 23


Whether you're an established academic or have only just started
learning Haskell, if you have something to say, please consider writing
an article for The Monad.Reader!  The submission deadline for Issue 23
will be:

**Friday, January 17, 2014**

The Monad.Reader


The Monad.Reader is an electronic magazine about all things Haskell. It
is less formal than a journal, but somehow more enduring than a wiki-
page. There have been a wide variety of articles: exciting code
fragments, intriguing puzzles, book reviews, tutorials, and even
half-baked research ideas.

Submission Details
~~

Get in touch with me if you intend to submit something -- the sooner
you let me know what you're up to, the better.

Please submit articles for the next issue to me by e-mail (ezy...@mit.edu).

Articles should be written according to the guidelines available from

http://themonadreader.wordpress.com/contributing/

Please submit your article in PDF, together with any source files you
used. The sources will be released together with the magazine under a
BSD license.

If you would like to submit an article, but have trouble with LaTeX
please let me know and we'll work something out.
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: blocking parallel program

2013-10-19 Thread Edward Z. Yang
Oh I see; the problem is the GHC RTS is attempting to shut down,
and in order to do this it needs to grab all of the capabilities. However,
one of them is in an uninterruptible loop, so the program hangs (e.g.
if you change the program as follows:

main :: IO ()
main = do
  forkIO $ do
    loop (y == ["yield"])
  threadDelay 1000
)

With a sufficiently recent version of GHC, if you compile with -fno-omit-yields,
that should fix the problem.

Edward

Excerpts from Facundo Domínguez's message of Sat Oct 19 16:05:15 -0700 2013:
 Thanks. I just tried that. Unfortunately, it doesn't seem to help.
 
 Facundo
 
 On Sat, Oct 19, 2013 at 8:47 PM, Edward Z. Yang ezy...@mit.edu wrote:
  Hello Facundo,
 
  The reason is that you have compiled the program to be multithreaded, but it
  is not running with multiple cores. Compile also with -rtsopts and then
  pass +RTS -N2 to the program.
 
  Excerpts from Facundo Domínguez's message of Sat Oct 19 15:19:22 -0700 2013:
  Hello,
 Below is a program that seems to block indefinitely with ghc on a
  multicore machine. This program has a loop that does not produce
  allocations, and I understand that this may grab one of the cores. The
  question is, why can't the other cores take the blocked thread?
 
  The program was compiled with:
 
  $ ghc --make -O -threaded test.hs
 
  and it is run with:
 
  $ ./test
 
  Program text follows.
 
  Thanks,
  Facundo
 
  
 
  import Control.Concurrent
  import Control.Monad
  import System.Environment
 
  main :: IO ()
  main = do
    y <- getArgs
    mv0 <- newEmptyMVar
    mv1 <- newEmptyMVar
    forkIO $ do
      takeMVar mv0
      putMVar mv1 ()
      loop (y == ["yield"])
    putMVar mv0 ()
    takeMVar mv1
 
  loop :: Bool -> IO ()
  loop cooperative = go
    where
      go = when cooperative yield >> go
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Trying to compile ghc HEAD on xubuntu 13.04-x64

2013-10-04 Thread Edward Z. Yang
As a workaround, add this to your mk/build.mk

HADDOCK_DOCS   = NO
BUILD_DOCBOOK_HTML = NO
BUILD_DOCBOOK_PS   = NO
BUILD_DOCBOOK_PDF  = NO

This is a bug.

Edward

Excerpts from Nathan Hüsken's message of Fri Oct 04 13:55:01 -0700 2013:
 Hey,
 
 because I have trouble with ghci and packages with FFI, it was suggested
 to me to compile and use ghc HEAD.
 
 I am on xubuntu 13.04 64bit and try to do a perf build. It fails with:
 
 compiler/ghc.mk:478: warning: ignoring old commands for target 
 `compiler/stage2/build/libHSghc-7.7.20131004-ghc7.7.20131004.so'
 /home/ls/src/ghc/inplace/bin/haddock 
 --odir=libraries/ghc-prim/dist-install/doc/html/ghc-prim 
 --no-tmp-comp-dir 
 --dump-interface=libraries/ghc-prim/dist-install/doc/html/ghc-prim/ghc-prim.haddock
  
 --html --hoogle --title=ghc-prim-0.3.1.0: GHC primitives 
 --prologue=libraries/ghc-prim/dist-install/haddock-prologue.txt 
 --optghc=-hisuf --optghc=dyn_hi --optghc=-osuf --optghc=dyn_o 
 --optghc=-hcsuf --optghc=dyn_hc --optghc=-fPIC --optghc=-dynamic 
 --optghc=-O --optghc=-H64m --optghc=-package-name 
 --optghc=ghc-prim-0.3.1.0 --optghc=-hide-all-packages --optghc=-i 
 --optghc=-ilibraries/ghc-prim/. 
 --optghc=-ilibraries/ghc-prim/dist-install/build 
 --optghc=-ilibraries/ghc-prim/dist-install/build/autogen 
 --optghc=-Ilibraries/ghc-prim/dist-install/build 
 --optghc=-Ilibraries/ghc-prim/dist-install/build/autogen 
 --optghc=-Ilibraries/ghc-prim/. --optghc=-optP-include 
 --optghc=-optPlibraries/ghc-prim/dist-install/build/autogen/cabal_macros.h 
 --optghc=-package --optghc=rts-1.0 --optghc=-package-name 
 --optghc=ghc-prim --optghc=-XHaskell98 --optghc=-XCPP 
 --optghc=-XMagicHash --optghc=-XForeignFunctionInterface 
 --optghc=-XUnliftedFFITypes --optghc=-XUnboxedTuples 
 --optghc=-XEmptyDataDecls --optghc=-XNoImplicitPrelude --optghc=-O2 
 --optghc=-no-user-package-db --optghc=-rtsopts --optghc=-odir 
 --optghc=libraries/ghc-prim/dist-install/build --optghc=-hidir 
 --optghc=libraries/ghc-prim/dist-install/build --optghc=-stubdir 
 --optghc=libraries/ghc-prim/dist-install/build 
 libraries/ghc-prim/./GHC/Classes.hs  libraries/ghc-prim/./GHC/CString.hs 
   libraries/ghc-prim/./GHC/Debug.hs  libraries/ghc-prim/./GHC/Magic.hs 
 libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs 
 libraries/ghc-prim/./GHC/IntWord64.hs  libraries/ghc-prim/./GHC/Tuple.hs 
   libraries/ghc-prim/./GHC/Types.hs 
 libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs +RTS 
 -tlibraries/ghc-prim/dist-install/doc/html/ghc-prim/ghc-prim.haddock.t 
 --machine-readable
 Haddock coverage:
   100% (  1 /  1) in 'GHC.IntWord64'
80% (  8 / 10) in 'GHC.Types'
17% (  1 /  6) in 'GHC.CString'
 3% (  2 / 63) in 'GHC.Tuple'
 0% (  0 /  3) in 'GHC.Debug'
 0% (  0 /366) in 'GHC.PrimopWrappers'
72% (813 /1132) in 'GHC.Prim'
   100% (  3 /  3) in 'GHC.Magic'
38% (  6 / 16) in 'GHC.Classes'
 haddock: internal error: haddock: panic! (the 'impossible' happened)
(GHC version 7.7.20131004 for x86_64-unknown-linux):
 Static flags have not been initialised!
  Please call GHC.parseStaticFlags early enough.
 
 Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug
 
 make[1]: *** 
 [libraries/ghc-prim/dist-install/doc/html/ghc-prim/ghc-prim.haddock] Error 1
 make: *** [all] Error 2
 
 Suggestions?
 Thanks!
 Nathan
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: 7.8 Release Update

2013-09-14 Thread Edward Z. Yang
Actually, the situation is pretty bad on Windows, where dynamic-too
does not work.

Edward

Excerpts from Edward Z. Yang's message of Mon Sep 09 16:29:38 -0700 2013:
 Erm, I forgot to mention that profiling would only be enabled if
 the user asked for it.
 
 Yes, we will be producing two sets of objects by default. This is what
 the -dynamic-too flag is for, no?  I suppose you could try to compile
 your static executables using -fPIC, but that would negate the performance
 considerations why we haven't just switched to dynamic for everything.
 
 Edward
 
 Excerpts from Johan Tibell's message of Mon Sep 09 16:15:45 -0700 2013:
   That sounds terribly expensive to do on every `cabal build` and it's a
   cost most users won't understand (what was broken before?).
  
  On Mon, Sep 9, 2013 at 4:06 PM, Edward Z. Yang ezy...@mit.edu wrote:
   If I am building some Haskell executable using 'cabal build', the
   result should be *statically linked* by default.
  
   However, subtly, if I am building a Haskell library, I would like to
   be able to load the compiled version into GHCi.
  
   So it seems to me cabal should produce v, dyn (libs only, not final
   executable) and p ways by default (but not dyn_p).
  
   Edward
  
   Excerpts from Kazu Yamamoto (山本和彦)'s message of Mon Sep 09 15:37:10 -0700 
   2013:
   Hi,
  
Kazu (or someone else), can you please file a ticket on the Cabal bug
tracker [1] if you think that this a Cabal bug?
  
   I'm not completely sure yet.
  
   GHCi 7.8 uses dynamic linking. This is true.
  
   So, what is a consensus for GHC 7.8 and cabal-install 1.18? Are they
   supposed to use dynamic linking? Or, static linking?
  
   If dynamic linking is used, GHC should provide dynamic libraries for
   profiling.
  
   If static linking is used, cabal-install should stop using dynamic
   libraries for profiling.
  
   And of course, I can make a ticket when I'm convinced.
  
   P.S.
  
   Since doctest uses GHCi internally, I might misunderstand GHC 7.8
   uses dynamic linking. Anyway, I don't understand what is right yet.
  
   --Kazu
  
  
   ___
   ghc-devs mailing list
   ghc-d...@haskell.org
   http://www.haskell.org/mailman/listinfo/ghc-devs
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Proposal: New syntax for Haskell

2013-09-10 Thread Edward Z. Yang
This is completely irrelevant, but the .chs extension is
already taken by the c2hs tool.

Cheers,
Edward

Excerpts from Niklas Hambüchen's message of Tue Sep 10 00:30:41 -0700 2013:
 Impressed by the productivity of my Ruby-writing friends, I have
 recently come across Cucumber: http://cukes.info
 
 
 It is a great tool for specifying tests and programs in natural
 language, and especially easy to learn for beginners.
 
 I propose that we add a Cucumber syntax for Haskell, with the extension
 .chs, next to .hs and .lhs.
 
 
 Code written in cucumber syntax is concise and easy to read: You can
 find some example code in https://gist.github.com/nh2/6505995. Quoting
 from that:
 
   Feature: The Data.List module
 
 In order to be able to use lists
 As a programmer
 I want a module that defines list functions
 
 Scenario: Defining the function foldl
   Given I want do define foldl
   Which has the type (in brackets) a to b to a (end of brackets),
  to a, to list of b, to a
   And my arguments are called f, acc, and l
   When l is empty
   Then the result better be acc
   Otherwise l is x cons xs
   Then the result should be foldl f (in brackets) f acc x
 (end of brackets) xs
 
 
 PS: People even already started a testing framework for Haskell in it:
 https://github.com/sol/cucumber-haskell#cucumber-for-haskell
 
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: 7.8 Release Update

2013-09-09 Thread Edward Z. Yang
Excerpts from Kazu Yamamoto (山本和彦)'s message of Sun Sep 08 19:36:19 -0700 2013:
 
 % make show VALUE=GhcLibWays
 make -r --no-print-directory -f ghc.mk show
 GhcLibWays=v p dyn
 

Yes, it looks like you are missing p_dyn from this list. I think
this is a bug in the build system.  When I look at ghc.mk
it only verifies that the p way is present, not p_dyn; and I don't
see any knobs which turn on p_dyn.

However, I must admit to being a little confused; didn't we abandon
dynamic by default and switch to only using dynamic for GHCi (in which
case the profiling libraries ought not to matter)?

Edward

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: 7.8 Release Update

2013-09-09 Thread Edward Z. Yang
 I think Kazu is saying that when he builds something with profiling 
 using cabal-install, it fails because cabal-install tries to build a 
 dynamic version too.  We don't want dyanmic/profiled libraries (there's 
 no point, you can't load them into GHCi).  Perhaps this is something 
 that needs fixing in cabal-install?

Agreed, sounds like a Cabal install bug.

Edward

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: 7.8 Release Update

2013-09-09 Thread Edward Z. Yang
Hello Mikhail,

It is a known issue that Template Haskell does not work with profiling (because
GHCi and profiling do not work together, and TH uses GHCi's linker). [1] 
Actually,
with the new linker patches that are landing soon we are not too far off from
having this work.

Edward

[1] http://ghc.haskell.org/trac/ghc/ticket/4837

Excerpts from Mikhail Glushenkov's message of Mon Sep 09 14:15:54 -0700 2013:
 Hi,
 
 On Mon, Sep 9, 2013 at 10:11 PM, Simon Marlow marlo...@gmail.com wrote:
 
  I think Kazu is saying that when he builds something with profiling using
  cabal-install, it fails because cabal-install tries to build a dynamic
  version too.  We don't want dyanmic/profiled libraries (there's no point,
  you can't load them into GHCi).  Perhaps this is something that needs fixing
  in cabal-install?
 
 Aren't they needed when compiling libraries that are using Template
 Haskell for profiling? The issue sounds like it could be TH-related.
 

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: 7.8 Release Update

2013-09-09 Thread Edward Z. Yang
If I am building some Haskell executable using 'cabal build', the
result should be *statically linked* by default.

However, subtly, if I am building a Haskell library, I would like to
be able to load the compiled version into GHCi.

So it seems to me cabal should produce v, dyn (libs only, not final
executable) and p ways by default (but not dyn_p).

Edward

Excerpts from Kazu Yamamoto (山本和彦)'s message of Mon Sep 09 15:37:10 -0700 2013:
 Hi,
 
  Kazu (or someone else), can you please file a ticket on the Cabal bug
  tracker [1] if you think that this a Cabal bug?
 
 I'm not completely sure yet.
 
 GHCi 7.8 uses dynamic linking. This is true.
 
 So, what is a consensus for GHC 7.8 and cabal-install 1.18? Are they
 supposed to use dynamic linking? Or, static linking?
 
 If dynamic linking is used, GHC should provide dynamic libraries for
 profiling.
 
 If static linking is used, cabal-install should stop using dynamic
 libraries for profiling.
 
 And of course, I can make a ticket when I'm convinced.
 
 P.S.
 
 Since doctest uses GHCi internally, I might misunderstand GHC 7.8
 uses dynamic linking. Anyway, I don't understand what is right yet.
 
 --Kazu
 

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: 7.8 Release Update

2013-09-09 Thread Edward Z. Yang
Erm, I forgot to mention that profiling would only be enabled if
the user asked for it.

Yes, we will be producing two sets of objects by default. This is what
the -dynamic-too flag is for, no?  I suppose you could try to compile
your static executables using -fPIC, but that would negate the performance
reasons for which we haven't just switched to dynamic for everything.

Edward

Excerpts from Johan Tibell's message of Mon Sep 09 16:15:45 -0700 2013:
 That sounds terribly expensive to do on every `cabal build` and it's a
 cost most users won't understand (what was broken before?).
 
 On Mon, Sep 9, 2013 at 4:06 PM, Edward Z. Yang ezy...@mit.edu wrote:
  If I am building some Haskell executable using 'cabal build', the
  result should be *statically linked* by default.
 
  However, subtly, if I am building a Haskell library, I would like to
  be able to load the compiled version into GHCi.
 
  So it seems to me cabal should produce v, dyn (libs only, not final
  executable) and p ways by default (but not dyn_p).
 
  Edward
 
  Excerpts from Kazu Yamamoto (山本和彦)'s message of Mon Sep 09 15:37:10 -0700 
  2013:
  Hi,
 
   Kazu (or someone else), can you please file a ticket on the Cabal bug
   tracker [1] if you think that this a Cabal bug?
 
  I'm not completely sure yet.
 
  GHCi 7.8 uses dynamic linking. This is true.
 
  So, what is a consensus for GHC 7.8 and cabal-install 1.18? Are they
  supposed to use dynamic linking? Or, static linking?
 
  If dynamic linking is used, GHC should provide dynamic libraries for
  profiling.
 
  If static linking is used, cabal-install should stop using dynamic
  libraries for profiling.
 
  And of course, I can make a ticket when I'm convinced.
 
  P.S.
 
  Since doctest uses GHCi internally, I might misunderstand GHC 7.8
  uses dynamic linking. Anyway, I don't understand what is right yet.
 
  --Kazu
 
 
  ___
  ghc-devs mailing list
  ghc-d...@haskell.org
  http://www.haskell.org/mailman/listinfo/ghc-devs

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] starting GHC development -- two questions

2013-08-08 Thread Edward Z. Yang
Hello Ömer,

First off, welcome to the wonderful world of GHC development!  I
recommend that you subscribe to the ghc-devs mailing list and
direct GHC specific questions there:

http://www.haskell.org/mailman/listinfo/ghc-devs

 While doing this, I think one feature would greatly help me finding my
 way through GHC source, which is huge: I want to see definition of
 some symbols. Normally what I would do for this is to load source into
 GHCi and run :info command. But in the case of GHC, even if it's
 possible to load GHC into GHCi, I don't think it will be faster than
 running ack --haskell someSymbol and searching through results
 manually.
 
 First idea came to my mind was to generate tags files and then
 navigate from within vim(my editor of choice). tags file can be added
 to Makefile as a goal and then tags can be regenerated after each
 build. Did anyone else try this before?

GHC has a 'make tags' command but I've never gotten it to work.  I have
always just run 'hasktags .' in the compiler/ directory, which works
pretty well for me.  (If you're in the RTS, run ctags etc. instead.)

 My second question is do we have any low-hanging fruits in trac, to
 help new people start contributing to GHC? I know several open source
 projects using that approach and it's really helpful for beginners.
 
 I just skimmed over trac and most issues look way too advanced for a starter.

We've been discussing putting together an easy bugs list.  As a proxy,
you can search on the 'Difficulty' keyword:
http://ghc.haskell.org/trac/ghc/query?status=infoneeded&status=merge&status=new&status=patch&difficulty=Easy+(less+than+1+hour)&col=id&col=summary&col=status&col=type&col=priority&col=milestone&col=component&order=priority

For example, this bug seems like a good beginner bug to get your feet
wet with the RTS: http://ghc.haskell.org/trac/ghc/ticket/750

This one will give you some experience wrangling the test suite:
http://ghc.haskell.org/trac/ghc/ticket/8079

Moving up to the moderate category, here is a nontrivial bug involving
profiling and the optimizer: http://ghc.haskell.org/trac/ghc/ticket/609

As with all open source projects, there is always lots of
infrastructural work to be done, so if that's your sort of thing, there
are plenty of bugs in that category.

Cheers,
Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN: Monad.Reader Issue 22

2013-08-07 Thread Edward Z. Yang
I am pleased to announce that Issue 22 of the Monad Reader is now available.

http://themonadreader.files.wordpress.com/2013/08/issue22.pdf

Issue 22 consists of the following two articles:

  * Generalized Algebraic Data Types in Haskell by Anton Dergunov
  * Error Reporting Parsers: a Monad Transformer Approach by Matt Fenwick and 
Jay Vyas
  * Two Monoids for Approximating NP-Complete Problems by Mike Izbicki

Feel free to browse the source files. You can check out the entire repository 
using Git:

git clone https://github.com/ezyang/tmr-issue22.git

If you’d like to write something for Issue 23, please get in touch!

Cheers,
Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] what is wrong w my IORef Word32 ?

2013-07-18 Thread Edward Z. Yang
shiftL has the wrong type:  Bits a => a -> Int -> a
so it is expecting the value in the IORef to be an Int.
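
One way to fix it (a sketch only, keeping the Word32s wrapper from your
code below) is to make the shift count an Int and shift the stored Word32
by it:

    -- shift the stored bitfield left by i bits
    sLbitfield :: Int -> Word32s -> IO ()
    sLbitfield i (Word32s bf) = modifyIORef bf (\w -> w `shiftL` i)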

Edward

Excerpts from Joerg Fritsch's message of Thu Jul 18 10:08:22 -0700 2013:
 All, what is wrong w the below code?
 
 I get an type error related to the operation shiftL
 
 import Data.Bits
 import Data.Word
 import Data.IORef
 
 data Word32s = Word32s { x :: IORef Word32 }
 
 bitfield :: Word32
 bitfield = 0
 
  mkbitfield :: Word32 -> IO Word32s
  mkbitfield i = do the_bf <- newIORef i
   return (Word32s the_bf)
 
  sLbitfield :: Integer -> Word32s -> IO ()
 sLbitfield i (Word32s bf) = do modifyIORef bf (shiftL i)
 
 main::IO()
 main = do
   oper_bf <- mkbitfield bitfield
  sLbitfield 2 oper_bf
 
 
 
 bf_003.hs:15:48:
 Couldn't match type `Int' with `Word32'
  Expected type: Word32 -> Word32
    Actual type: Int -> Word32
 In the return type of a call of `shiftL'
 In the second argument of `modifyIORef', namely `(shiftL i)'
 In a stmt of a 'do' block: modifyIORef bf (shiftL i)
 
 bf_003.hs:15:55:
 Couldn't match expected type `Word32' with actual type `Integer'
 In the first argument of `shiftL', namely `i'
 In the second argument of `modifyIORef', namely `(shiftL i)'
 In a stmt of a 'do' block: modifyIORef bf (shiftL i)
 
 
 
 Thanks,
 --Joerg
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Non-recursive let [Was: GHC bug? Let with guards loops]

2013-07-10 Thread Edward Z. Yang
In my opinion, when you are rebinding a variable with the same name,
there is usually another way to structure your code which eliminates
the variable.

If you would like to write:

let x = foo input in
let x = bar x in
let x = baz x in

instead, write

baz . bar . foo $ input

If you would like to write

let (x,s) = foo 1 [] in
let (y,s) = bar x s in
let (z,s) = baz x y s in

instead, use a state monad.

Clearly this will not work in all cases, but it goes pretty far,
in my experience.
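
For instance, the second example above could be written with
Control.Monad.State along these lines (a sketch; the foo/bar/baz bodies
here are made-up stand-ins, each taking the state as its last argument
and returning a (result, new state) pair):

    import Control.Monad.State

    foo :: Int -> [Int] -> (Int, [Int])
    foo n s = (n, n : s)

    bar :: Int -> [Int] -> (Int, [Int])
    bar x s = (x + 1, x : s)

    baz :: Int -> Int -> [Int] -> (Int, [Int])
    baz x y s = (x + y, y : s)

    example :: (Int, [Int])
    example = runState go []
      where
        go = do
          x <- state (foo 1)   -- no shadowed 's' anywhere
          y <- state (bar x)
          state (baz x y)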

Edward

Excerpts from Andreas Abel's message of Wed Jul 10 00:47:48 -0700 2013:
 Hi Oleg,
 
 just now I wrote a message to haskell-pr...@haskell.org to propose a 
 non-recursive let.  Unfortunately, the default let is recursive, so we 
 only have names like let' for it.  I also mentioned the ugly workaround 
 (- return $) that I was shocked to see the first time, but use myself 
 sometimes now.
 
 Cheers,
 Andreas
 
 On 10.07.2013 09:34, o...@okmij.org wrote:
  Andreas wrote:
  The greater evil is that Haskell does not have a non-recursive let.
  This is source of many non-termination bugs, including this one here.
  let should be non-recursive by default, and for recursion we could have
  the good old let rec.
 
  Hear, hear! In OCaml, I can (and often do) write
 
   let (x,s) = foo 1 [] in
   let (y,s) = bar x s in
   let (z,s) = baz x y s in ...
 
  In Haskell I'll have to uniquely number the s's:
 
   let (x,s1)  = foo 1 [] in
   let (y,s2)  = bar x s1 in
   let (z,s3)  = baz x y s2 in ...
 
  and re-number them if I insert a new statement. BASIC comes to mind. I
  tried to lobby Simon Peyton-Jones for the non-recursive let a couple
  of years ago. He said, write a proposal. It's still being
  written... Perhaps you might want to write it now.
 
  In the meanwhile, there is a very ugly workaround:
 
   test = runIdentity $ do
 (x,s) <- return $ foo 1 []
 (y,s) <- return $ bar x s
 (z,s) <- return $ baz x y s
return (z,s)
 
  After all, bind is non-recursive let.
 
 
 
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: executable stack flag

2013-07-09 Thread Edward Z. Yang
I took a look at the logs and none mentioned 'Hey, so it turns out
we need executable stack for this', and as recently as Sep 17, 2011
there are patches for turning off executable stack (courtesy Gentoo).  So it
is probably just a regression: someone added some code which didn't turn off
executable stacks...

Edward

Excerpts from Jens Petersen's message of Mon Jul 08 21:36:42 -0700 2013:
 Hi,
 
 We noticed [1] in Fedora that ghc (7.4 and 7.6) are linking executables
 (again [2]) with the executable stack flag set. I haven't starting looking
 at the ghc code yet but wanted to ask first if it is intentional/necessary?
  (ghc-7.0 doesn't seem to do this.) Having the flag set is considered a bit
 of a security risk so it would be better if all generated executable did
 not have it set.
 
 I did some very basic testing of various executables, clearing their
 flags [3] and they all seemed to run ok without the executable stack flag
 set but I can't claim to have tested very exhaustively. (I thought perhaps
 it might be related to TemplateHaskell for example but even those
 executables seem to work, though I am sure I have not exercised all the
 code paths.)
 
 Does someone know the current status of this?
 Will anything break if the flag is not set?
 Is it easy to patch ghc to not set the flag?
 Does it only affect the NCG backend?
 
 Thanks, Jens
 
 [1] https://bugzilla.redhat.com/show_bug.cgi?id=973512
 [2] http://ghc.haskell.org/trac/ghc/ticket/703
 [3] using execstack -c

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: executable stack flag

2013-07-09 Thread Edward Z. Yang
I've gone ahead and fixed it, and referenced the patches in the ticket.

Cheers,
Edward

Excerpts from Jens Petersen's message of Mon Jul 08 21:36:42 -0700 2013:
 Hi,
 
 We noticed [1] in Fedora that ghc (7.4 and 7.6) are linking executables
 (again [2]) with the executable stack flag set. I haven't starting looking
 at the ghc code yet but wanted to ask first if it is intentional/necessary?
  (ghc-7.0 doesn't seem to do this.) Having the flag set is considered a bit
 of a security risk so it would be better if all generated executable did
 not have it set.
 
 I did some very basic testing of various executables, clearing their
 flags [3] and they all seemed to run ok without the executable stack flag
 set but I can't claim to have tested very exhaustively. (I thought perhaps
 it might be related to TemplateHaskell for example but even those
 executables seem to work, though I am sure I have not exercised all the
 code paths.)
 
 Does someone know the current status of this?
 Will anything break if the flag is not set?
 Is it easy to patch ghc to not set the flag?
 Does it only affect the NCG backend?
 
 Thanks, Jens
 
 [1] https://bugzilla.redhat.com/show_bug.cgi?id=973512
 [2] http://ghc.haskell.org/trac/ghc/ticket/703
 [3] using execstack -c

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Lambda Calculus question on equivalence

2013-05-02 Thread Edward Z. Yang
The notion of equivalence you are talking about (normally L is referred
to as a context) is 'extensional equality'; that is, functions f
and g are equal if forall x, f x = g x.  It's pretty easy to give
a pair of functions which are not alpha equivalent but are observationally
equivalent:

if collatz_conjecture then true else bottom
true / bottom (Depending on whether or not you think the collatz conjecture 
is true...)

Cheers,
Edward

Excerpts from Ian Price's message of Thu May 02 12:47:07 -0700 2013:
 Hi,
 
 I know this isn't perhaps the best forum for this, but maybe you can
 give me some pointers.
 
 Earlier today I was thinking about De Bruijn Indices, and they have the
 property that two lambda terms that are alpha-equivalent, are expressed
 in the same way, and I got to wondering if it was possible to find a
 useful notion of function equality, such that it would be equivalent to
 structural equality (aside from just defining it this way), though
 obviously we cannot do this in general.
 
 So the question I came up with was:
 
 Can two normalised (i.e. no subterm can be beta or eta reduced) lambda
 terms be observationally equivalent, but not alpha equivalent?
 
 By observationally equivalent, I mean A and B are observationally
 equivalent if for all lambda terms L: (L A) is equivalent to (L B) and
 (A L) is equivalent to (B L). The definition is admittedly circular, but
 I hope it conveys enough to understand what I'm after.
 
 My intuition is no, but I am not sure how to prove it, and it seems to
 me this sort of question has likely been answered before.
 
 Cheers

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Lambda Calculus question on equivalence

2013-05-02 Thread Edward Z. Yang
Excerpts from Timon Gehr's message of Thu May 02 14:16:45 -0700 2013:
 Those are not lambda terms.
 Furthermore, if those terms are rewritten to operate on church numerals, 
 they have the same unique normal form, namely λλλ 3 2 (3 2 1).

The trick is to define the second one as x * 2 (and assume the fixpoint
operates on the first argument). Now they are not equal.

Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-22 Thread Edward Z. Yang
So, if I understand correctly, you're using the online/offline
criterion to resolve non-directed cycles in pipelines?  (I couldn't
tell how the Shivers paper was related.)

Cheers,
Edward

Excerpts from Ben Lippmeier's message of Sun Apr 21 19:29:29 -0700 2013:
 
 On 22/04/2013, at 12:23 , Edward Z. Yang ezy...@mit.edu wrote:
 
  I've got a solution for this problem and it will form the basis of
  Repa 4, which I'm hoping to finish a paper about for  the upcoming
  Haskell Symposium.
  
  Sounds great! You should forward me a preprint when you have something
  in presentable shape. I suppose before then, I should look at 
  repa-head/repa-stream
  to figure out what the details are?
 
 The basic approach is already described in:
 
 Automatic Transformation of Series Expressions into Loops
 Richard Waters, TOPLAS 1991
 
 The Anatomy of a Loop
 Olin Shivers, ICFP 2005
 
 
 The contribution of the HS paper is planning to be:
  1) How to extend the approach to the combinators we need for DPH
  2) How to package it nicely into a Haskell library.
 
 I'm still working on the above...
 
 Ben.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] conditional branching vs pattern matching: pwn3d by GHC

2013-04-22 Thread Edward Z. Yang
Note that, unfortunately, GHC's exhaustiveness checker is *not* good
enough to figure out that your predicates are covering. :o)  Perhaps
there is an improvement to be had here.

Edward

Excerpts from Albert Y. C. Lai's message of Mon Apr 22 00:51:46 -0700 2013:
 When I was writing
 http://www.vex.net/~trebla/haskell/crossroad.xhtml
 I wanted to write: branching on predicates and then using selectors is 
 less efficient than pattern matching, since selectors repeat the tests 
 already done by predicates.
 
 It is only ethical to verify this claim before writing it. So here it 
 goes, eval uses pattern matching, fval uses predicates and selectors:
 
 module E where
 
 data E = Val{fromVal::Integer} | Neg{fromNeg::E}
| Add{fromAdd0, fromAdd1 :: E}
 isVal Val{} = True
 isVal _ = False
 isNeg Neg{} = True
 isNeg _ = False
 isAdd Add{} = True
 isAdd _ = False
 
 eval (Val n) = n
 eval (Neg e0) = - eval e0
 eval (Add e0 e1) = eval e0 + eval e1
 
 fval e | isVal e = fromVal e
 | isNeg e = - fval (fromNeg e)
 | isAdd e = fval (fromAdd0 e) + fval (fromAdd1 e)
 
 Simple and clear. What could possibly go wrong!
 
 $ ghc -O -c -ddump-simpl -dsuppress-all -dsuppress-uniques E.hs
 
 ...
 
 Rec {
 fval
 fval =
 \ e ->
   case e of _ {
 Val ds -> ds;
 Neg ds -> negateInteger (fval ds);
 Add ipv ipv1 -> plusInteger (fval ipv) (fval ipv1)
  }
 end Rec }
 
 Rec {
 eval
 eval =
 \ ds ->
   case ds of _ {
 Val n -> n;
 Neg e0 -> negateInteger (eval e0);
 Add e0 e1 -> plusInteger (eval e0) (eval e1)
  }
 end Rec }
 
 Which of the following best describes my feeling?
 [ ] wait, what?
 [ ] lol
 [ ] speechless
 [ ] oh man
 [ ] I am so pwn3d
 [ ] I can't believe it
 [ ] what can GHC not do?!
 [ ] but what am I going to say in my article?!
 [ ] why is GHC making my life hard?!
 [X] all of the above
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-21 Thread Edward Z. Yang
Hello all, (cc'd stream fusion paper authors)

I noticed that the current implementation of stream fusion does
not support multiple-return stream combinators, e.g.
break :: (a -> Bool) -> [a] -> ([a], [a]).  I thought a little
bit about how one might go about implementing this, but the problem
seems nontrivial. (One possibility is to extend the definition
of Step to support multiple return, but the details are a mess!)
Nor, as far as I can tell, does the paper give any treatment of
the subject.  Has anyone thought about this subject in some detail?
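
For reference, the single-return shape I mean is essentially the
representation from the paper, sketched below: each step yields at most
one element into a single output stream, which is why something like
break, with its two results, does not fit directly.

    {-# LANGUAGE ExistentialQuantification #-}

    -- Essentially the stream fusion representation from the paper:
    -- a step either finishes, skips, or yields one element.
    data Step s a = Done
                  | Skip    s
                  | Yield a s

    data Stream a = forall s. Stream (s -> Step s a) s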

Thanks,
Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-21 Thread Edward Z. Yang
 I've got a solution for this problem and it will form the basis of
 Repa 4, which I'm hoping to finish a paper about for  the upcoming
 Haskell Symposium.

Sounds great! You should forward me a preprint when you have something
in presentable shape. I suppose before then, I should look at 
repa-head/repa-stream
to figure out what the details are?

Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: Why is GHC so much worse than JHC when computing the Ackermann function?

2013-04-20 Thread Edward Z. Yang
I don't seem to get the leak on latest GHC head.  Running the program
in GC debug mode in 7.6.2 is quite telling; the program is allocating
*a lot* of megablocks.  We probably fixed it though?

Edward

Excerpts from Mikhail Glushenkov's message of Sat Apr 20 01:55:10 -0700 2013:
 Hi all,
 
 This came up on StackOverflow [1]. When compiled with GHC (7.4.2 &
 7.6.2), this simple program:
 
 main = print $ ack 4 1
   where ack :: Int -> Int -> Int
 ack 0 n = n+1
 ack m 0 = ack (m-1) 1
 ack m n = ack (m-1) (ack m (n-1))
 
 consumes all available memory on my machine and slows down to a crawl.
 However, when compiled with JHC it runs in constant space and is about
 as fast as the straightforward Ocaml version (see the SO question for
 benchmark numbers).
 
 I was able to fix the space leak by using CPS-conversion, but the
 CPS-converted version is still about 10 times slower than the naive
 version compiled with JHC.
 
 I looked both at the Core and Cmm, but couldn't find anything
 obviously wrong with the generated code - 'ack' is compiled to a
 simple loop of type 'Int# -> Int# -> Int#'. What's more frustrating is
 that running the program with +RTS -hc makes the space leak
 mysteriously vanish.
 
 Can someone please explain where the space leak comes from and if it's
 possible to further improve the runtime of this program with GHC?
 Apparently it's somehow connected to the stack management strategy,
 since running the program with a larger stack chunk size (+RTS -kc1M)
 makes the space leak go away. Interestingly, choosing smaller stack
 chunk sizes (256K, 512K) causes it to die with an OOM exception:
 
 $ time ./Test +RTS -kc256K
 Test: out of memory (requested 2097152 bytes)
 
 
 [1] 
 http://stackoverflow.com/questions/16115815/ackermann-very-inefficient-with-haskell-ghc/16116074#16116074
 

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: mask, catch, myThreadId, throwTo

2013-04-16 Thread Edward Z. Yang
OK, I've updated the docus.

Excerpts from Felipe Almeida Lessa's message of Mon Apr 15 13:34:50 -0700 2013:
 Thanks a lot, you're correct!  The trouble is, I was misguided by the
 Interruptible operations note [1] which states that
 
 The following operations are guaranteed not to be interruptible:
 ... * everything from Control.Exception ...
 
 Well, it seems that not everything from Control.Exception fits the bill.
 
 Thanks, =)
 
 [1] 
 http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Exception.html#g:14
 
 On Mon, Apr 15, 2013 at 5:25 PM, Bertram Felgenhauer
 bertram.felgenha...@googlemail.com wrote:
  Felipe Almeida Lessa wrote:
  I have some code that is not behaving the way I thought it should.
 
  The gist of it is
 
sleeper =
  mask_ $
  forkIOWithUnmask $ \restore ->
forever $
  restore sleep `catch` throwBack
 
throwBack (Ping tid) = myThreadId >>= throwTo tid . Pong
throwBack (Pong tid) = myThreadId >>= throwTo tid . Ping
 
  Since (a) throwBack is executed on a masked state, (b) myThreadId is
  uninterruptible, and (c) throwTo is uninterruptible, my understanding
  is that the sleeper thread should catch all PingPong exceptions and
  never let any one of them through.
 
  (c) is wrong, throwTo may block, and blocking operations are interruptible.
 

  http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Exception.html#v:throwTo
 
  explains this in some more detail.
 
  The simplest way that throwTo can actually block in your program, as
  far as I can see, and one that will only affect the threaded RTS, is
  if the sleeper thread and whichever thread is running the other
  throwBack are executing on different capabilities; this will always
  cause throwTo to block. (You could try looking at a ghc event log to
  find out more.)
 
  I last ran into trouble like that with System.Timeout.timeout; for
  that function I finally convinced myself that uninterruptibleMask
  is the only way to avoid such problems; then throwTo will not be
  interrupted by exceptions even when it blocks. Maybe this is the
  solution for your problem, too.
 
  Hope that helps,
 
  Bertram
 
 
  ___
  Glasgow-haskell-users mailing list
  Glasgow-haskell-users@haskell.org
  http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
 

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: mask, catch, myThreadId, throwTo

2013-04-15 Thread Edward Z. Yang
Sounds like those docs need to be fixed, in that case.

Edward

Excerpts from Felipe Almeida Lessa's message of Mon Apr 15 13:34:50 -0700 2013:
 Thanks a lot, you're correct!  The trouble is, I was misguided by the
 Interruptible operations note [1] which states that
 
 The following operations are guaranteed not to be interruptible:
 ... * everything from Control.Exception ...
 
 Well, it seems that not everything from Control.Exception fits the bill.
 
 Thanks, =)
 
 [1] 
 http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Exception.html#g:14
 
 On Mon, Apr 15, 2013 at 5:25 PM, Bertram Felgenhauer
 bertram.felgenha...@googlemail.com wrote:
  Felipe Almeida Lessa wrote:
  I have some code that is not behaving the way I thought it should.
 
  The gist of it is
 
sleeper =
  mask_ $
  forkIOWithUnmask $ \restore ->
forever $
  restore sleep `catch` throwBack
 
throwBack (Ping tid) = myThreadId >>= throwTo tid . Pong
throwBack (Pong tid) = myThreadId >>= throwTo tid . Ping
 
  Since (a) throwBack is executed on a masked state, (b) myThreadId is
  uninterruptible, and (c) throwTo is uninterruptible, my understanding
  is that the sleeper thread should catch all PingPong exceptions and
  never let any one of them through.
 
  (c) is wrong, throwTo may block, and blocking operations are interruptible.
 

  http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Exception.html#v:throwTo
 
  explains this in some more detail.
 
  The simplest way that throwTo can actually block in your program, as
  far as I can see, and one that will only affect the threaded RTS, is
  if the sleeper thread and whichever thread is running the other
  throwBack are executing on different capabilities; this will always
  cause throwTo to block. (You could try looking at a ghc event log to
  find out more.)
 
  I last ran into trouble like that with System.Timeout.timeout; for
  that function I finally convinced myself that uninterruptibleMask
  is the only way to avoid such problems; then throwTo will not be
  interrupted by exceptions even when it blocks. Maybe this is the
  solution for your problem, too.
 
  Hope that helps,
 
  Bertram
 
 
  ___
  Glasgow-haskell-users mailing list
  Glasgow-haskell-users@haskell.org
  http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
 

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Resource Limits for Haskell

2013-04-01 Thread Edward Z. Yang
I now have a paper draft describing the system in more detail.  It also
comes with a brief explanation of how GHC's profiling works, which should
also be helpful for people who haven't read the original profiling
paper.

http://ezyang.com/papers/ezyang13-rlimits.pdf

Edward

Excerpts from Edward Z. Yang's message of Fri Mar 15 14:17:39 -0700 2013:
 Hey folks,
 
 Have you ever wanted to implement this function in Haskell?
 
 -- | Forks a thread, but kills it if it has more than 'limit'
 -- bytes resident on the heap.
 forkIOWithSpaceLimit :: IO () -> {- limit -} Int -> IO ThreadId
 
 Well, now you can! I have a proposal and set of patches here:
 
 http://hackage.haskell.org/trac/ghc/wiki/Commentary/ResourceLimits
 http://hackage.haskell.org/trac/ghc/ticket/7763
 
 There is a lot of subtlety in this space, largely derived from the
 complexity of interpreting GHC's current profiling information.  Your
 questions, comments and suggestions are greatly appreciated!
 
 Cheers,
 Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell] [Haskell-cafe] Monad.Reader #22 call for copy

2013-03-29 Thread Edward Z. Yang
Call for Copy: The Monad.Reader - Issue 22


Another ICFP submission deadline has come and gone: why not celebrate by
submitting something to The Monad.Reader?  Whether you're an established
academic or have only just started learning Haskell, if you have
something to say, please consider writing an article for The
Monad.Reader!  The submission deadline for Issue 22 will be:

**Saturday, June 1**

The Monad.Reader


The Monad.Reader is a electronic magazine about all things Haskell. It
is less formal than journal, but somehow more enduring than a wiki-
page. There have been a wide variety of articles: exciting code
fragments, intriguing puzzles, book reviews, tutorials, and even
half-baked research ideas.

Submission Details
~~

Get in touch with me if you intend to submit something -- the sooner
you let me know what you're up to, the better.

Please submit articles for the next issue to me by e-mail (ezy...@mit.edu).

Articles should be written according to the guidelines available from

http://themonadreader.wordpress.com/contributing/

Please submit your article in PDF, together with any source files you
used. The sources will be released together with the magazine under a
BSD license.

If you would like to submit an article, but have trouble with LaTeX
please let me know and we'll work something out.

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell-cafe] Monad.Reader #22 call for copy

2013-03-29 Thread Edward Z. Yang
Call for Copy: The Monad.Reader - Issue 22


Another ICFP submission deadline has come and gone: why not celebrate by
submitting something to The Monad.Reader?  Whether you're an established
academic or have only just started learning Haskell, if you have
something to say, please consider writing an article for The
Monad.Reader!  The submission deadline for Issue 22 will be:

**Saturday, June 1**

The Monad.Reader


The Monad.Reader is a electronic magazine about all things Haskell. It
is less formal than journal, but somehow more enduring than a wiki-
page. There have been a wide variety of articles: exciting code
fragments, intriguing puzzles, book reviews, tutorials, and even
half-baked research ideas.

Submission Details
~~

Get in touch with me if you intend to submit something -- the sooner
you let me know what you're up to, the better.

Please submit articles for the next issue to me by e-mail (ezy...@mit.edu).

Articles should be written according to the guidelines available from

http://themonadreader.wordpress.com/contributing/

Please submit your article in PDF, together with any source files you
used. The sources will be released together with the magazine under a
BSD license.

If you would like to submit an article, but have trouble with LaTeX
please let me know and we'll work something out.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Future of MonadCatchIO

2013-03-26 Thread Edward Z. Yang
While block and unblock have been removed from base, they are still 
implementable
in modern GHC.  So another possible future is to deprecate MonadCatchIO
(which should have been done a while ago, honestly!), but manually redefine
the functions so that old code keeps working.
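
Concretely, something along these lines should suffice (a sketch only;
unsafeUnmask lives in GHC.IO, so this is GHC-specific):

    import Control.Exception (mask_)
    import GHC.IO (unsafeUnmask)

    -- Legacy-compatible definitions: 'block' masks asynchronous
    -- exceptions around the action, 'unblock' restores delivery inside it.
    block :: IO a -> IO a
    block = mask_

    unblock :: IO a -> IO a
    unblock = unsafeUnmask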

Edward

Excerpts from Arie Peterson's message of Sun Mar 03 07:40:06 -0800 2013:
 Hi all,
 
 
 The functions 'block' and 'unblock' (from Control.Exception) have been 
 deprecated for some time, and are apparantly now being removed (in favour of 
 'mask').
 
 Generalisations of these functions are (part of) the interface of 
 MonadCatchIO-transformers (the 'MonadCatchIO' class has methods 'block' and 
 'unblock'). So, the interface would have to change to keep up with base.
 
 I'm inclined to deprecate MonadCatchIO-transformers itself, in favour of 
 monad-control.
 
 I suspect that most clients do not use 'block' or 'unblock' directly, but use 
 only derived functions, like 'bracket'. (I have partly confirmed this, by 
 inspecting some reverse dependencies on hackage.) This allows an easy 
 transition to monad-control: in many cases, only imports will need to be 
 changed. In the minority of cases where 'block' and 'unblock' are used and/or 
 instances of MonadCatchIO are defined, code will need to be updated.
 
 There is a difference in functionality between MonadCatchIO and 
 monad-control. 
 In the former, 'bracket' will not perform the final action if the main action 
 is an ErrorT that throws an error (in contrast with exceptions in the 
 underlying IO monad). In monad-control, 'bracket' will perform the final 
 action 
 in this case. (See this discussion for background:
 http://www.haskell.org/pipermail/haskell-cafe/2010-October/084890.html.)
 
 Probably, in most use cases the behaviour of monad-control is preferred. This 
 seems to be the case also for snap, which uses MonadCatchIO-transformers, but 
 defines its own variant of 'bracket' to get the right behaviour.
 
 
 Would anyone have a problem with a deprecation of MonadCatchIO-transformers, 
 and a failure to update it to work with a base without 'block' and 'unblock'?
 
 
 Regards,
 
 Arie
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] MVar which can not be null ?

2013-03-18 Thread Edward Z. Yang
If you are doing IO operations, then the operation is hardly atomic, is it?

Just take from the MVar, compute, and when you're done, put a value
back on the MVar.  So long as you can guarantee all users of the MVar
take before putting, you will have the desired semantics.

Something worth considering: what are the desired semantics if an
asynchronous exception is thrown on the thread servicing the MVar?
If the answer is to just quit, what if it has already performed
externally visible IO actions?  If the answer is to ignore it, what
if the thread gets wedged?
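
Concretely, the take/compute/put pattern, with the usual exception
hygiene, looks roughly like this (a sketch; it is essentially what
modifyMVar from Control.Concurrent.MVar already does for you):

    import Control.Concurrent.MVar
    import Control.Exception (mask, onException)

    -- Take the value, run an IO computation on it, and put the (possibly
    -- updated) value back; concurrent callers block on the empty MVar.
    withVar :: MVar a -> (a -> IO (a, b)) -> IO b
    withVar var f = mask $ \restore -> do
        x       <- takeMVar var
        (x', r) <- restore (f x) `onException` putMVar var x
        putMVar var x'
        return r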

Edward

Excerpts from s9gf4ult's message of Mon Mar 18 01:07:42 -0700 2013:
 18.03.2013 13:26, Alexander V Vershilov writes:
 
 I can not use atomicModifyIORef because it works with pure computation
 
  atomicModifyIORef :: IORef a -> (a -> (a, b)) -> IO b
 
  nor STM, because IO is not acceptable inside an STM transaction.
 
 I just need some thread-safe blocking variable like MVar
 
  modifyMVar :: MVar a -> (a -> IO (a, b)) -> IO b

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell] ANN: Monad.Reader Issue 21

2013-03-16 Thread Edward Z. Yang
I am pleased to announce that Issue 21 of the Monad Reader is now available.

http://themonadreader.files.wordpress.com/2013/03/issue21.pdf

Issue 21 consists of the following two articles:

* A Functional Approach to Neural Networks by Amy de Buitléir, Michael 
Russell, Mark Daly
* Haskell ab initio: the Hartree-Fock Method in Haskell by Felipe Zapata, 
Angel J. Alvarez

Feel free to browse the source files. You can check out the entire repository 
using Git:

git clone https://github.com/ezyang/tmr-issue21.git

If you’d like to write something for Issue 22, please get in touch!

Cheers,
Edward

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell-cafe] ANN: Monad.Reader Issue 21

2013-03-16 Thread Edward Z. Yang
I am pleased to announce that Issue 21 of the Monad Reader is now available.

http://themonadreader.files.wordpress.com/2013/03/issue21.pdf

Issue 21 consists of the following two articles:

* A Functional Approach to Neural Networks by Amy de Buitléir, Michael 
Russell, Mark Daly
* Haskell ab initio: the Hartree-Fock Method in Haskell by Felipe Zapata, 
Angel J. Alvarez

Feel free to browse the source files. You can check out the entire repository 
using Git:

git clone https://github.com/ezyang/tmr-issue21.git

If you’d like to write something for Issue 22, please get in touch!

Cheers,
Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Resource Limits for Haskell

2013-03-15 Thread Edward Z. Yang
Hey folks,

Have you ever wanted to implement this function in Haskell?

-- | Forks a thread, but kills it if it has more than 'limit'
-- bytes resident on the heap.
forkIOWithSpaceLimit :: IO () -> {- limit -} Int -> IO ThreadId

Well, now you can! I have a proposal and set of patches here:

http://hackage.haskell.org/trac/ghc/wiki/Commentary/ResourceLimits
http://hackage.haskell.org/trac/ghc/ticket/7763
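
By way of illustration, usage would look something like this (a
hypothetical example against the proposed API; the exact interface and
units may still change):

    import Control.Concurrent.MVar

    main :: IO ()
    main = do
        done <- newEmptyMVar
        -- run the worker with a ~16MB residency cap; per the proposal it
        -- is killed if it ever has more than that resident on the heap
        _ <- forkIOWithSpaceLimit (worker >> putMVar done ()) (16 * 1024 * 1024)
        takeMVar done
      where
        worker = print (length [1 .. 1000000 :: Integer])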

There is a lot of subtlety in this space, largely derived from the
complexity of interpreting GHC's current profiling information.  Your
questions, comments and suggestions are greatly appreciated!

Cheers,
Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Resource Limits for Haskell

2013-03-15 Thread Edward Z. Yang
The particular problem you're referring to is fixed if you compile all
your libraries with -falways-yield; see 
http://hackage.haskell.org/trac/ghc/ticket/367

I believe that it is possible to give a guarantee that the kill
signal will hit the thread in a timely fashion.  The obvious gap in
our coverage at the moment is that there may be some primops that infinite
loop, and there are probably other bugs, but I do not believe they are
insurmountable.

Edward

Excerpts from Gwern Branwen's message of Fri Mar 15 14:39:50 -0700 2013:
 On Fri, Mar 15, 2013 at 5:17 PM, Edward Z. Yang ezy...@mit.edu wrote:
  There is a lot of subtlety in this space, largely derived from the
  complexity of interpreting GHC's current profiling information.  Your
  questions, comments and suggestions are greatly appreciated!
 
 How secure is this? One of the reasons for forking a process and then
 killing it after a timeout in lambdabot/mueval is because a thread can
 apparently block the GC from running with a tight enough loop and the
 normal in-GHC method of killing threads doesn't work. Can one
 simultaneously in a thread allocate ever more memory and suppress kill
 signals?
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Foreign.StablePtr: nullPtr double-free questions.

2013-03-13 Thread Edward Z. Yang
Excerpts from Remi Turk's message of Wed Mar 13 13:09:18 -0700 2013:
 Thanks for your quick reply. Could you elaborate on what a bit of
 overhead means?
 As a bit of context, I'm working on a small library for working with
 (im)mutable extendable
 tuples/records based on Storable and ForeignPtr, and I'm using
 StablePtr's as back-references
 to Haskell-land. Would you expect StablePtr's to have serious
 performance implications
 in such a scenario compared to, say, an IORef?

Yes, they will. Every stable pointer that is active has to be stuffed
into a giant array, and the entire array must be traversed during every
GC.  See also: http://hackage.haskell.org/trac/ghc/ticket/7670

Edward

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Open-source projects for beginning Haskell students?

2013-03-12 Thread Edward Z. Yang
I also support this suggestion.  Although, do we have the build infrastructure
for this?!

Edward

Excerpts from Michael Orlitzky's message of Mon Mar 11 19:52:12 -0700 2013:
 On 03/11/2013 11:48 AM, Brent Yorgey wrote:
  
  So I'd like to do it again this time around, and am looking for
  particular projects I can suggest to them.  Do you have an open-source
  project with a few well-specified tasks that a relative beginner (see
  below) could reasonably make a contribution towards in the space of
  about four weeks? I'm aware that most tasks don't fit that profile,
  but even complex projects usually have a few simple-ish tasks that
  haven't yet been done just because no one has gotten around to it
  yet.
 
 It's not exciting, but adding doctest suites with examples to existing
 packages would be a great help.
 
   * Good return on investment.
 
   * Not too hard.
 
   * The project is complete when you stop typing.
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: runghc -fdefer-type-errors

2013-03-11 Thread Edward Z. Yang
Excerpts from Simon Peyton-Jones's message of Mon Mar 11 16:04:31 -0700 2013:
 Aha.  It is indeed true that
 
 ghc -fdefer-type-errors -w
 
 does not suppress the warnings that arise from the type errors; indeed there 
 is no current way to do so.  How to do that?
 
 To be kosher there should really be a flag to switch off those warnings 
 alone, perhaps
 -fno-warn-type-errors
 
 So then -fwarn-type-errors is on by default, but is only relevant when 
 -fdefer-type-errors is on.  Once -fdefer-type-errors is on, 
 -fno-warn-type-errors and -fwarn-type-errors suppress or enable the warnings. 
  -w would then include -fno-warn-type-errors.
 
 Is that a design everyone would like?  If so, would someone like to open a 
 ticket, implement it, update the documentation, and send a patch?

SGTM.

Edward

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] To seq or not to seq, that is the question

2013-03-09 Thread Edward Z. Yang
Excerpts from Tom Ellis's message of Sat Mar 09 00:34:41 -0800 2013:
 I've never looked at evaluate before but I've just found its haddock and
 given it some thought.
 
 
 http://hackage.haskell.org/packages/archive/base/latest/doc/html/Control-Exception-Base.html#v:evaluate
 
 Since it is asserted that
 
 evaluate x = (return $! x) >>= return
 
 is it right to say (on an informal level at least) that evaluating an IO
 action to WHNF means evaluating it to the outermost >>= or return?

Sure.

Prelude> let x = undefined :: IO a
Prelude> x `seq` ()
*** Exception: Prelude.undefined
Prelude> (x >>= undefined) `seq` ()
()

  For non-IO monads, since everything is imprecise anyway, it doesn't
  matter.
 
 Could you explain what you mean by imprecise?

Imprecise as in imprecise exceptions, 
http://research.microsoft.com/en-us/um/people/simonpj/papers/imprecise-exn.htm

Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Foreign.StablePtr: nullPtr double-free questions.

2013-03-08 Thread Edward Z. Yang
Excerpts from Remi Turk's message of Fri Mar 08 18:28:56 -0800 2013:
 Good night everyone,
 
 I have two questions with regards to some details of the
 Foreign.StablePtr module. [1]
 
 1) The documentation suggests, but does not explicitly state, that
   castStablePtrToPtr `liftM` newStablePtr x
 will never yield a nullPtr. Is this guaranteed to be the case or not?
 It would conveniently allow me to store a Maybe for free, using
 nullPtr for Nothing, but I am hesitant about relying on something that
 isn't actually guaranteed by the documentation.

No, you cannot assume that.  In fact, stable pointer zero is
base_GHCziTopHandler_runIO_info:

ezyang@javelin:~/Dev/haskell$ cat sptr.hs
import Foreign.StablePtr
import Foreign.Ptr

main = do
let x = castPtrToStablePtr nullPtr
freeStablePtr x
ezyang@javelin:~/Dev/haskell$ ~/Dev/ghc-build-tick/inplace/bin/ghc-stage2 
--make sptr.hs -debug 
[1 of 1] Compiling Main ( sptr.hs, sptr.o )
Linking sptr ...
ezyang@javelin:~/Dev/haskell$ gdb ./sptr
GNU gdb (GDB) 7.5-ubuntu
Copyright (C) 2012 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type show copying
and show warranty for details.
This GDB was configured as x86_64-linux-gnu.
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/...
Reading symbols from /srv/code/haskell/sptr...done.
(gdb) b freeStablePtrUnsafe
Breakpoint 1 at 0x73f8a7: file rts/Stable.c, line 263.
(gdb) r
Starting program: /srv/code/haskell/sptr 
[Thread debugging using libthread_db enabled]
Using host libthread_db library /lib/x86_64-linux-gnu/libthread_db.so.1.

Breakpoint 1, freeStablePtrUnsafe (sp=0x0) at rts/Stable.c:263
263 ASSERT((StgWord)sp < SPT_size);
(gdb) list 
258 }
259 
260 void
261 freeStablePtrUnsafe(StgStablePtr sp)
262 {
263 ASSERT((StgWord)sp < SPT_size);
264 freeSpEntry(stable_ptr_table[(StgWord)sp]);
265 }
266 
267 void
(gdb) p stable_ptr_table[(StgWord)sp]
$1 = {addr = 0x9d38e0}
(gdb) p *(StgClosure*)stable_ptr_table[(StgWord)sp]
$2 = {header = {info = 0x4e89c8 base_GHCziTopHandler_runIO_info}, payload 
= 0x9d38e8}

Regardless, you don't want to do that anyway, because stable pointers
have a bit of overhead.

 2) If I read the documentation correctly, when using StablePtr it is
 actually quite difficult to avoid undefined behaviour, at least in
 GHC(i). In particular, a double-free on a StablePtr yields undefined
 behaviour. However, when called twice on the same value, newStablePtr
 yields the same StablePtr in GHC(i).
 E.g.:
 
 module Main where
 
 import Foreign
 
 foo x y = do
  p1 <- newStablePtr x
  p2 <- newStablePtr y
 print $ castStablePtrToPtr p1 == castStablePtrToPtr p2
 freeStablePtr p1
 freeStablePtr p2 -- potential double free!
 
 main = let x = Hello, world! in foo x x -- undefined behaviour!
 
 prints True under GHC(i), False from Hugs. Considering that foo
 and main might be in different packages written by different authors,
 this makes correct use rather complicated. Is this behaviour (and the
 consequential undefinedness) intentional?

I think this bug was inadvertently fixed in the latest version of GHC;
see:

commit 7e7a4e4d7e9e84b2c57d3d55e372e738b5f8dbf5
Author: Simon Marlow marlo...@gmail.com
Date:   Thu Feb 14 08:46:55 2013 +

Separate StablePtr and StableName tables (#7674)

To improve performance of StablePtr.

Cheers,
Edward

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


[Haskell-cafe] To seq or not to seq, that is the question

2013-03-08 Thread Edward Z. Yang
Are these equivalent? If not, under what circumstances are they not
equivalent? When should you use each?

evaluate a >> return b
a `seq` return b
return (a `seq` b)

Furthermore, consider:

- Does the answer change when a = b? In such a case, is 'return $! b' 
permissible?
- What about when b = () (e.g. unit)?
- What about when 'return b' is some arbitrary monadic value?
- Does the underlying monad (e.g. if it is IO) make a difference?
- What if you use pseq instead of seq?

In http://hackage.haskell.org/trac/ghc/ticket/5129 we ran into a bug in
'evaluate' deriving precisely from this confusion.  Unfortunately, the
insights from this conversation were never distilled into a widely
publicized set of guidelines... largely because we never really figured
out was going on! The purpose of this thread is to figure out what is
really going on here, and develop a concrete set of guidelines which we
can disseminate widely.  Here is one strawman answer (which is too
complicated to use in practice):

- Use 'evaluate' when you mean to say, Evaluate this thunk to HNF
  before doing any other IO actions, please.  Use it as much as
  possible in IO.

- Use 'return (a `seq` b)' for strictness concerns that have no
  relation to the monad.  It avoids unnecessary strictness when the
  value ends up never being used and is good hygiene if the space
  leak only occurs when 'b' is evaluated but not 'a'.

- Use 'return $! a' when you mean to say, Eventually evaluate this
  thunk to HNF, but if you have other thunks which you need to
  evaluate to HNF, it's OK to do those first.  In particular,

(return $! a) >> (return $! b) === a `seq` (return $! b)
   === a `seq` b `seq` return b
   === b `seq` a `seq` return b [1]

  This situation is similar for 'a `seq` return ()' and 'a `seq` m'.
  Avoid using this form in IO; empirically, you're far more likely
  to run into stupid interactions with the optimizer, and when later
  monadic values maybe bottoms, the optimizer will be justified in
  its choice.  Prefer using this form when you don't care about
  ordering, or if you don't mind thunks not getting evaluated when
  bottoms show up. For non-IO monads, since everything is imprecise
  anyway, it doesn't matter.

- Use 'pseq' only when 'par' is involved.
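
As a concrete (if contrived) illustration of how the second and third
forms differ, here is a small sketch:

    -- demo1 forces the bottom as soon as the action itself is demanded
    -- (i.e. when it is run); demo2 runs fine and only blows up when its
    -- unit result is later forced.
    demo1, demo2 :: IO ()
    demo1 = undefined `seq` return ()
    demo2 = return (undefined `seq` ())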

Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: Cloud Haskell and network latency issues with -threaded

2013-02-07 Thread Edward Z. Yang
Hey folks,

The latency changes sound relevant to some work on the scheduler I'm doing;
is there a place I can see the changes?

Thanks,
Edward

Excerpts from Simon Peyton-Jones's message of Wed Feb 06 10:10:10 -0800 2013:
 I (with help from Kazu and helpful comments from Bryan and Johan) have nearly 
 completed an overhaul to the IO manager based on my observations and we are 
 in the final stages of getting it into GHC
 
 This is really helpful. Thank you very much Andreas, Kazu, Bryan, Johan.
 
 Simon
 
 From: parallel-hask...@googlegroups.com 
 [mailto:parallel-hask...@googlegroups.com] On Behalf Of Andreas Voellmy
 Sent: 06 February 2013 14:28
 To: watson.timo...@gmail.com
 Cc: kosti...@gmail.com; parallel-haskell; glasgow-haskell-users@haskell.org
 Subject: Re: Cloud Haskell and network latency issues with -threaded
 
 Hi all,
 
 I haven't followed the conversations around CloudHaskell closely, but I 
 noticed the discussion around latency using the threaded runtime system, and 
 I thought I'd jump in here.
 
 I've been developing a server in Haskell that serves hundreds to thousands of 
 clients over very long-lived TCP sockets. I also had latency problems with 
 GHC. For example, with 100 clients I had a 10 ms (millisecond) latency and 
 with 500 clients I had a 29ms latency. I looked into the problem and found 
 that some bottlenecks in the threaded IO manager were the cause. I made some 
 hacks there and got the latency for 100 and 500 clients down to under 0.2 ms. 
 I (with help from Kazu and helpful comments from Bryan and Johan) have nearly 
 completed an overhaul to the IO manager based on my observations and we are 
 in the final stages of getting it into GHC. Hopefully our work will also fix 
 the latency issues in CloudHaskell programs :)
 
 It would be very helpful if someone has some benchmark CloudHaskell 
 applications and workloads to test with. Does anyone have these handy?
 
 Cheers,
 Andi
 
 On Wed, Feb 6, 2013 at 9:09 AM, Tim Watson 
 watson.timo...@gmail.com wrote:
 Hi Kostirya,
 
 I'm putting the parallel-haskell and ghc-users lists on cc, just in case 
 other (better informed) folks want to chip in here.
 
 
 
 First of all, I'm assuming you're talking about network latency when 
 compiling with -threaded - if not I apologise for misunderstanding!
 
 There is apparently an outstanding network latency issue when compiling with 
 -threaded, but according to a conversation I had with the other developers on 
 #haskell-distributed, this is not something that's specific to Cloud Haskell. 
 It is something to do with the threaded runtime system, so would need to be 
 solved for GHC (or is it just the Network package!?) in general. Writing up a 
 simple C program and equivalent socket use in Haskell and comparing the 
 latency using -threaded will show this up.
 
 See the latency section in 
 http://haskell-distributed.github.com/wiki/networktransport.html for some 
 more details. According to that, there *are* some things we might be able to 
 do, but the 20% latency isn't going to change significantly on the face of 
 things.
 
 We have an open ticket to look into this 
 (https://cloud-haskell.atlassian.net/browse/NTTCP-4) and at some point we'll 
 try and put together the sample programs in a github repository (if that's 
 not already done - I might've missed previous spikes done by Edsko or others) 
 and investigate further.
 
 One of the other (more experienced!) devs might be able to chip in and 
 proffer a better explanation.
 
 Cheers,
 Tim
 
 On 6 Feb 2013, at 13:27, kosti...@gmail.com wrote:
 
  Have you ever needed to run Haskell in non-threaded mode during
  intense network data exchange?
  I am seeing a twofold performance penalty in threaded mode. But I must
  use threaded mode because epoll and kevent are available in threaded
  mode only.
 
 
 [snip]
 
 
 
  On Wednesday, 6 February 2013 at 12:33:36 UTC+2, Tim Watson wrote:
  Hello all,
 
  It's been a busy week for Cloud Haskell and I wanted to share a few of
  our news items with you all.
 
  Firstly, we have a new home page at http://haskell-distributed.github.com,
  into which most of the documentation and wiki pages have been merged. Making
  sassy looking websites is not really my bag, so I'm very grateful to the
  various author's whose Creative Commons licensed designs and layouts made
  it easy to put together. We've already had some pull requests to fix minor
  problems on the site, so thanks very much to those who've contributed 
  already!
 
  As well as the new site, you will find a few of us hanging out on the
  #haskell-distributed channel on freenode. Please do come along and join in
  the conversation.
 
  We also recently split up the distributed-process project into separate
  git repositories, one for each component that makes up Cloud Haskell. This
  was done partly for administrative purposes and partly 

Re: Cloud Haskell and network latency issues with -threaded

2013-02-07 Thread Edward Z. Yang
OK. I think it is high priority for us to get some latency benchmarks
into nofib so that GHC devs (including me) can start measuring changes
off them.  I know Edsko has some benchmarks here:
http://www.edsko.net/2013/02/06/performance-problems-with-threaded/
but they depend on network which makes it a little difficult to move into nofib.
I'm working on other scheduler changes that may help you guys out; we
should keep each other updated.

I noticed your patch also incorporates the make yield actually work patch;
do you think the improvement in 7.4.1 was due to that specific change?
(Have you instrumented the run queues and checked how your patch changes
the distribution of jobs over your runtime?)

Somewhat unrelatedly, if you have some good latency tests already,
it may be worth trying to compile your copy of GHC with -fno-omit-yields, so that
forced context switches get serviced more predictably.

Cheers,
Edward

Excerpts from Andreas Voellmy's message of Thu Feb 07 21:20:25 -0800 2013:
 Hi Edward,
 
 I did two things to improve latency for my application: (1) rework the IO
 manager and (2) stabilize the work pushing. (1) seems like a big win and we
 are almost done with the work on that part. It is less clear whether (2)
 will generally help much. It helped me when I developed it against 7.4.1,
 but it doesn't seem to have much impact on HEAD on the few measurements I
 did. The idea of (2) was to keep running averages of the run queue length
 of each capability, then push work when these running averages get too
 out-of-balance. The desired effect (which seems to work on my particular
 application) is to avoid cases in which threads are pushed back and forth
 among cores, which may make cache usage worse. You can see my patch here:
 https://github.com/AndreasVoellmy/ghc-arv/commits/push-work-exchange-squashed
 .
 
 -Andi
 
 On Fri, Feb 8, 2013 at 12:10 AM, Edward Z. Yang ezy...@mit.edu wrote:
 
  Hey folks,
 
  The latency changes sound relevant to some work on the scheduler I'm doing;
  is there a place I can see the changes?
 
  Thanks,
  Edward
 
  Excerpts from Simon Peyton-Jones's message of Wed Feb 06 10:10:10 -0800
  2013:
   I (with help from Kazu and helpful comments from Bryan and Johan) have
  nearly completed an overhaul to the IO manager based on my observations and
  we are in the final stages of getting it into GHC
  
   This is really helpful. Thank you very much Andreas, Kazu, Bryan, Johan.
  
   Simon
  
   From: parallel-hask...@googlegroups.com [mailto:
  parallel-hask...@googlegroups.com] On Behalf Of Andreas Voellmy
   Sent: 06 February 2013 14:28
   To: watson.timo...@gmail.com
   Cc: kosti...@gmail.com; parallel-haskell;
  glasgow-haskell-users@haskell.org
   Subject: Re: Cloud Haskell and network latency issues with -threaded
  
   Hi all,
  
   I haven't followed the conversations around CloudHaskell closely, but I
  noticed the discussion around latency using the threaded runtime system,
  and I thought I'd jump in here.
  
   I've been developing a server in Haskell that serves hundreds to
  thousands of clients over very long-lived TCP sockets. I also had latency
  problems with GHC. For example, with 100 clients I had a 10 ms
  (millisecond) latency and with 500 clients I had a 29ms latency. I looked
  into the problem and found that some bottlenecks in the threaded IO manager
  were the cause. I made some hacks there and got the latency for 100 and 500
  clients down to under 0.2 ms. I (with help from Kazu and helpful comments
  from Bryan and Johan) have nearly completed an overhaul to the IO manager
  based on my observations and we are in the final stages of getting it into
  GHC. Hopefully our work will also fix the latency issues in CloudHaskell
  programs :)
  
   It would be very helpful if someone has some benchmark CloudHaskell
  applications and workloads to test with. Does anyone have these handy?
  
   Cheers,
   Andi
  
   On Wed, Feb 6, 2013 at 9:09 AM, Tim Watson watson.timo...@gmail.com wrote:
   Hi Kostirya,
  
   I'm putting the parallel-haskell and ghc-users lists on cc, just in case
  other (better informed) folks want to chip in here.
  
   
  
   First of all, I'm assuming you're talking about network latency when
  compiling with -threaded - if not I apologise for misunderstanding!
  
   There is apparently an outstanding network latency issue when compiling
  with -threaded, but according to a conversation I had with the other
  developers on #haskell-distributed, this is not something that's specific
  to Cloud Haskell. It is something to do with the threaded runtime system,
  so would need to be solved for GHC (or is it just the Network package!?) in
  general. Writing up a simple C program and equivalent socket use in Haskell
  and comparing the latency using -threaded will show this up.
  
   See the latency section in
  http://haskell-distributed.github.com/wiki/networktransport.html for some
  more details

[Haskell-cafe] Ticking time bomb

2013-01-30 Thread Edward Z. Yang
https://status.heroku.com/incidents/489

Unsigned Hackage packages are a ticking time bomb.

Cheers,
Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ticking time bomb

2013-01-30 Thread Edward Z. Yang
 As long as we upload packages via plain HTTP, signing won't help though.

I don't think that's true?  If the package is tampered with, then the
signature will be invalid; if the signature is also forged, then the
private key is compromised and we can blacklist it.  We care only
about integrity, not secrecy.
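
For concreteness, a minimal sketch of that integrity argument (illustration
only; it assumes cryptonite's Crypto.PubKey.Ed25519 module rather than
whatever scheme Hackage would actually adopt): the verifier needs only the
author's public key and the bytes it received, so it does not matter how
insecure the transport was.

    import qualified Crypto.PubKey.Ed25519 as Ed
    import qualified Data.ByteString.Char8 as BS

    main :: IO ()
    main = do
      sk <- Ed.generateSecretKey
      let pk       = Ed.toPublic sk
          package  = BS.pack "contents of foo-1.0.tar.gz"
          sig      = Ed.sign sk pk package
          tampered = BS.pack "contents of evil-foo-1.0.tar.gz"
      print (Ed.verify pk package sig)   -- True: the untouched bytes verify
      print (Ed.verify pk tampered sig)  -- False: any modification is detected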

Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ticking time bomb

2013-01-30 Thread Edward Z. Yang
Excerpts from Joachim Breitner's message of Wed Jan 30 12:59:48 -0800 2013:
 another reason why Cabal is no package manager¹.

Based on the linked post, it seems that you are arguing that cabal-install is
not a package manager, and thus it is not necessary for it to duplicate
the work that real package managers, e.g. Debian or Ubuntu, put into
vetting, signing, and releasing software.  (Though I am not sure, so please
correct me if I am wrong.)

This argument seems specious.  Whether or not cabal-install is or not
intended to be a package manager, users expect it to act like one (as
users expect rubygems to be a package manager), and, at the end of the
day, that is what matters.

Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ticking time bomb

2013-01-30 Thread Edward Z. Yang
Excerpts from Ramana Kumar's message of Wed Jan 30 14:46:26 -0800 2013:
  This argument seems specious.  Whether or not cabal-install is or not
  intended to be a package manager, users expect it to act like one (as
  users expect rubygems to be a package manager), and, at the end of the
  day, that is what matters.
 
 
 But playing along with their delusion might make it harder to change their
 minds.

Looking at the library ecosystems of the most popular programming languages,
I think this ship has already sailed.

Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ticking time bomb

2013-01-30 Thread Edward Z. Yang
Excerpts from Joachim Breitner's message of Wed Jan 30 14:57:28 -0800 2013:
 I’m not against cryptographically signed packages on hackage. In fact, I
 would whole-heartedly appreciate it, as it would make my work as a
 package maintainer easier.
 
 I was taking the opportunity to point out an advantage of established
 package management systems, to shamelessly advertise my work there, as
 not everyone sees distro-packaged libraries as a useful thing.

Yes. In fact, I am a sysadmin for a large shared hosting environment, and
the fact that programming language libraries tend not to be distro-packaged
is an endless headache for us.  We would like it if everything were just
packaged properly!

On the other hand, working in these circumstances has made me realize
that there is a huge tension between the goals of package library
authors and distribution managers (a package library author is desires
ease of installation of their packages, keeping everyone up-to-date as
possible and tends to be selfish when it comes to the rest of the
ecosystem, whereas the distribution manager values stability, security,
and global consistency of the ecosystem.)  So there is a lot of work to
be done here.  Nevertheless, I believe we are in violent agreement that
cryptographically signed Hackage packages should happen as soon as
possible!

Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: What is the scheduler type of GHC?

2013-01-16 Thread Edward Z. Yang
Excerpts from Magicloud Magiclouds's message of Wed Jan 16 00:32:00 -0800 2013:
 Hi,
   I just read a post about schedulers in Erlang and Go, which said that
 Erlang's is preemptive and Go's is cooperative.
   So which does GHC use? From the GHC wiki pages about the RTS, if we are
 asking only about Haskell threads, it seems to be cooperative.

Additionally, the current scheduler is round-robin with some heuristics for
when threads get to cut the line, so we do not have priorities for threads.
I'm currently working on a patch which allows for more flexible scheduling.
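
For background on the cooperative/preemptive part of the question: GHC only
deschedules a Haskell thread at safe points (in practice, at heap checks),
so a non-allocating loop is the usual way to observe the cooperative corner.
A sketch, for illustration only: built with -O and the threaded RTS, and run
with +RTS -N1, the main thread below may never get to print unless the
module is also compiled with -fno-omit-yields.

    import Control.Concurrent

    -- with -O this compiles to a tight loop that performs no allocation
    spin :: Int -> Int
    spin 0 = 0
    spin n = spin (n - 1)

    main :: IO ()
    main = do
      _ <- forkIO (print (spin maxBound))
      threadDelay 100000            -- let the spinning thread take the capability
      putStrLn "still responsive"   -- may be starved on -N1 without -fno-omit-yields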

Edward

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Example programs with ample use of deepseq?

2013-01-07 Thread Edward Z. Yang
There are two senses in which deepseq can be overkill:

1. The structure was already strict, and deepseq just forces another
no-op traversal of the entire structure.  This hypothetically affects
seq too, although seq is quite cheap so it's not a problem.

2. deepseq evaluates too much, when it was actually sufficient only to
force parts of the structure, e.g. the spine of a list.  This is less
of an issue for the common use-cases of deepseq; e.g. if I want to force
pending exceptions I am usually interested in all exceptions in a (finite)
data structure; a space leak may be due to an errant closure---if I don't
know which it is, deepseq will force all of them; ditto with work in
parallel programs.  Certainly there will be cases where you will want to
snip evaluation at some point, but that is somewhat difficult to encode
as a typeclass, since the criterion varies from structure to structure.
(Though, perhaps, this structure would be useful:

data Indirection a = Indirection a

-- with the deepseq package this would be an NFData instance that
-- deliberately does not look inside:
instance NFData (Indirection a) where
    rnf _ = ()
)
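
For instance, here is a sketch (illustration only) of the "force only the
spine" case from point 2: seqSpine walks the cons cells without touching the
elements, so it flushes out a leaky spine while leaving the element thunks
unevaluated (deepseq would force them, and here forcing one would throw).

    import Control.DeepSeq (deepseq)

    -- force only the cons cells, not the elements
    seqSpine :: [a] -> ()
    seqSpine []     = ()
    seqSpine (_:xs) = seqSpine xs

    xs :: [Int]
    xs = map (\n -> if n == 3 then error "boom" else n) [1..5]

    main :: IO ()
    main = do
      seqSpine xs `seq` putStrLn "spine forced, no exception raised"
      xs `deepseq` putStrLn "not printed"  -- deepseq forces the elements and hits the error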

Cheers,
Edward

Excerpts from Joachim Breitner's message of Mon Jan 07 04:06:35 -0800 2013:
 Dear Haskellers,
 
 I’m wondering if the use of deepseq to avoid unwanted lazyness might be
 a too large hammer in some use cases. Therefore, I’m looking for real
 world programs with ample use of deepseq, and ideally easy ways to test
 performance (so preferably no GUI applications).
 
 I’ll try to find out, by runtime observerations, which of the calls ot
 deepseq could be replaced by id, seq, or „shallow seqs“ that, for
 example, calls seq on the elements of a tuple.
 
 Thanks,
 Joachim
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell] Second Call for Copy: Monad.Reader #21

2012-12-13 Thread Edward Z. Yang
Second Call for Copy: The Monad.Reader - Issue 21
-------------------------------------------------

Whether you're an established academic or have only just started
learning Haskell, if you have something to say, please consider
writing an article for The Monad.Reader!  The submission deadline
for Issue 21 will be:

**Tuesday, January 1**

Less than half a month away, but that's what Christmas break is for,
right? :-)

The Monad.Reader


The Monad.Reader is an electronic magazine about all things Haskell. It
is less formal than a journal, but somehow more enduring than a wiki
page. There has been a wide variety of articles: exciting code
fragments, intriguing puzzles, book reviews, tutorials, and even
half-baked research ideas.

Submission Details
~~~~~~~~~~~~~~~~~~

Get in touch with me if you intend to submit something -- the sooner
you let me know what you're up to, the better.

Please submit articles for the next issue to me by e-mail (ezy...@mit.edu).

Articles should be written according to the guidelines available from

http://themonadreader.wordpress.com/contributing/

Please submit your article in PDF, together with any source files you
used. The sources will be released together with the magazine under a
BSD license.

If you would like to submit an article, but have trouble with LaTeX
please let me know and we'll work something out.

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell-cafe] Second Call for Copy: Monad.Reader #21

2012-12-13 Thread Edward Z. Yang
Second Call for Copy: The Monad.Reader - Issue 21
-------------------------------------------------

Whether you're an established academic or have only just started
learning Haskell, if you have something to say, please consider
writing an article for The Monad.Reader!  The submission deadline
for Issue 21 will be:

**Tuesday, January 1**

Less than half a month away, but that's what Christmas break is for,
right? :-)

The Monad.Reader


The Monad.Reader is an electronic magazine about all things Haskell. It
is less formal than a journal, but somehow more enduring than a wiki
page. There has been a wide variety of articles: exciting code
fragments, intriguing puzzles, book reviews, tutorials, and even
half-baked research ideas.

Submission Details
~~~~~~~~~~~~~~~~~~

Get in touch with me if you intend to submit something -- the sooner
you let me know what you're up to, the better.

Please submit articles for the next issue to me by e-mail (ezy...@mit.edu).

Articles should be written according to the guidelines available from

http://themonadreader.wordpress.com/contributing/

Please submit your article in PDF, together with any source files you
used. The sources will be released together with the magazine under a
BSD license.

If you would like to submit an article, but have trouble with LaTeX
please let me know and we'll work something out.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Hoopl vs LLVM?

2012-12-10 Thread Edward Z. Yang
Hello Greg,

Hoopl passes live in compiler/cmm; searching for DataflowLattice will
turn up lattice definitions which are the core of the analyses and rewrites.
Unfortunately, the number of true Hoopl optimizations was somewhat reduced
when Simon Marlow did aggressive performance optimizations to get the
new code generator shipped with GHC by default, but I think we hope to
add some more interesting passes for -O3, etc.

Hoopl and LLVM's approaches to optimization are quite different.  LLVM
uses SSA representation, whereas Hoopl uses the Lerner-Grove-Chambers
algorithm to do analyses without requiring single-assignment.  The other
barrier you're likely to run into is the fact that GHC-generated C-- code
looks very different from conventional compiler output.
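
To make the first point concrete, here is a minimal sketch of the sort of
definition that grepping for DataflowLattice turns up; it is written against
the standalone hoopl package's Compiler.Hoopl interface (an assumption on my
part; GHC's in-tree copy differs in details), using a simple "set of live
variables, joined by union" fact:

    import Compiler.Hoopl
    import qualified Data.Set as Set

    type Live = Set.Set String   -- hypothetical variable names

    liveLattice :: DataflowLattice Live
    liveLattice = DataflowLattice
      { fact_name = "live variables (sketch)"
      , fact_bot  = Set.empty
      , fact_join = unionJoin
      }
      where
        -- report SomeChange only when the join actually grew the fact
        unionJoin _lbl (OldFact old) (NewFact new) =
            (changeIf (Set.size joined > Set.size old), joined)
          where joined = old `Set.union` new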

Hope that helps,
Edward

Excerpts from Greg Fitzgerald's message of Mon Dec 10 14:24:02 -0800 2012:
 I don't know my way around the GHC source tree.  How can I get the list of
 optimizations implemented with Hoopl?  Is there overlap with LLVM's
 optimization passes?  If so, has anyone compared the implementations at
 all?  Should one group be stealing ideas from the other?  Or apples and
 oranges?
 
 Thanks,
 Greg

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] mtl: Why there is Monoid w constraint in the definition of class MonadWriter?

2012-12-08 Thread Edward Z. Yang
The monoid instance is necessary to ensure adherence to the monad laws.

Cheers,
Edward

Excerpts from Petr P's message of Sat Dec 08 10:59:25 -0800 2012:
 The class is defined as
 
  class (Monoid w, Monad m) => MonadWriter w m | m -> w where
...
 
 What is the reason for the Monoid constraint? It seems superfluous to me. I
 recompiled the whole package without it, with no problems.
 
 
 Of course, the Monoid constraint is necessary for most _instances_, like in
 
  instance (Monoid w, Monad m) => MonadWriter w (Lazy.WriterT w m) where
  ...
 
 but this is a different thing - it depends on how the particular instance
 is implemented.
 
 I encountered the problem when I needed to define an instance where the
 monoidal structure is fixed (Last) and I didn't want to expose it to the
 user. I wanted to spare the user from having to write Last/getLast
 everywhere. (I have an instance of MonadWriter independent of WriterT; its
 'tell' saves values to an MVar. Functions 'listen' and 'pass' create a new
 temporary MVar. I can post the details, if anybody is interested.)
 
 Would anything break by removing the constraint? I think the type class
 would get a bit more general this way.
 
   Thanks for help,
   Petr Pudlak

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] mtl: Why there is Monoid w constraint in the definition of class MonadWriter?

2012-12-08 Thread Edward Z. Yang
Excerpts from Roman Cheplyaka's message of Sat Dec 08 14:00:52 -0800 2012:
 * Edward Z. Yang ezy...@mit.edu [2012-12-08 11:19:01-0800]
  The monoid instance is necessary to ensure adherence to the monad laws.
 
 This doesn't make any sense to me. Are you sure you're talking about the
 MonadWriter class and not about the Writer monad?

Well, I assume the rules for Writer generalize for MonadWriter, no?

Here's an example.  Haskell monads have the associativity law:

(f >>= g) >>= h === f >>= (g >>= h)

From this, we can see that

(m1 >> m2) >> m3 === m1 >> (m2 >> m3)

Now, consider tell. We'd expect it to obey a law like this:

tell w1 >> tell w2 === tell (w1 <> w2)

Combine this with the monad associativity law:

(tell w1 >> tell w2) >> tell w3 === tell w1 >> (tell w2 >> tell w3)

And it's easy to see that '<>' must be associative in order for this law
to be upheld.  Additionally, the existence of identities in monads means
that there must be a corresponding identity for the monoid.

So anything that is writer-like and also satisfies the monad laws...
is going to be a monoid.
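
As a quick sanity check of that tell law for the ordinary Writer monad (a
sketch using mtl; [String] is the monoid here):

    import Control.Monad.Writer
    import Data.Monoid ((<>))

    lhs, rhs :: Writer [String] ()
    lhs = tell ["w1"] >> tell ["w2"]
    rhs = tell (["w1"] <> ["w2"])

    main :: IO ()
    main = print (execWriter lhs == execWriter rhs)  -- True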

Now, it's possible that what GP is actually asking about is more a question of
encapsulation.  Well, one answer is: just give the user specialized
functions which do the appropriate wrapping/unwrapping; another answer is,
if you let the user run a writer action and extract the resulting written
value, then he can always reverse engineer the monoid instance out of it.
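
A sketch of the first suggestion, for the Last example upthread (the helper
names are made up; the point is just that the newtype never escapes):

    import Control.Monad.Writer
    import Data.Monoid (Last(..))

    -- users call tellLast/runLastWriter and never see Last/getLast
    tellLast :: MonadWriter (Last a) m => a -> m ()
    tellLast = tell . Last . Just

    runLastWriter :: Writer (Last a) b -> (b, Maybe a)
    runLastWriter w = let (b, l) = runWriter w in (b, getLast l)

    demo :: (Int, Maybe String)
    demo = runLastWriter (tellLast "a" >> tellLast "b" >> return 0)

    main :: IO ()
    main = print demo   -- (0,Just "b")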

Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] mtl: Why there is Monoid w constraint in the definition of class MonadWriter?

2012-12-08 Thread Edward Z. Yang
 First of all, I don't see why two tells should be equivalent to one
 tell. Imagine a MonadWriter that additionally records the number of
 times 'tell' has been called. (You might argue that your last equation
 should be a MonadWriter class law, but that's a different story — we're
 talking about the Monad laws here.)

Yes, I think I would argue that my equation should be a MonadWriter class
law, and if you don't grant me that, I don't have a leg to stand on.

 Second, even *if* the above holds (two tells are equivalent to one
 tell), then there is *some* function f such that
 
tell w1 >> tell w2 == tell (f w1 w2)
 
 It isn't necessary that f coincides with mappend, or even that the type
 w is declared as a Monoid at all. The only thing we can tell from the
 Monad laws is that that function f should be associative.

Well, the function is associative: that's half of the way there to
a monoid; all you need is the identity!  But we have that too:
whatever the value of execWriter (return ()) is...

Cheers,
Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

