Re: GHC documentation outdated

2020-04-16 Thread Sven Panne
On Thu, 16 Apr 2020 at 16:38, Wolfgang Jeltsch <wolfgang...@jeltsch.info> wrote:

> the URL
>
>
> https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/index.html
>
> seems to still point to the GHC 8.8.3 documentation.
>

Even worse: http://hackage.haskell.org/package/base has documentation only
for the base that shipped with GHC 8.6.x (from Sep 2018, two major releases
behind), which makes a devastating first impression on newcomers and/or
people I'm trying to convince to use Haskell. :-(
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Gitlab workflow

2019-07-11 Thread Sven Panne
On Thu, 11 Jul 2019 at 14:32, Bryan Richter wrote:

> [...] When references to commits (in emails etc.) get invalidated,
> it adds confusion and extra work. Seeing this happen is what led me to
> wonder why people even prefer this strategy.
>

I think there is a misunderstanding here: You never ever force-push rebased
commits to a public repo; that would indeed change commit hashes and annoy
all your collaborators like hell. In a rebase-only workflow you rebase
locally and then push the resulting fast-forward-only merge to the public
repo. You can even disable force-pushes on the receiving side, and that's
what is normally done (well, not on GitHub...).


> [...] One final thing I like about merges is conflict resolution. Resolving
> conflicts via rebase is something I get wrong 40% of the time. It's
> hard. Even resolving a conflict during a merge is hard, but it's
> easier.


Hmmm, I don't see a difference in conflict resolution between the two
cases; the work involved is equivalent.


> Plus, the eventual merge commit keeps a record of the
> resolution! (I only learned this recently, since `git log` doesn't
> show it by default.) Keeping a public record of how a conflict was
> resolved seems like a huge benefit. [...]
>

To me it is quite the opposite: In a collaborative environment, I don't
care even the tiniest bit about how somebody resolved the conflicts on
their branch: that is a technical artifact of when the branch was created
and when it was merged.


Re: Gitlab workflow

2019-07-07 Thread Sven Panne
On Sun, 7 Jul 2019 at 17:06, Bryan Richter wrote:

> How does the scaling argument reconcile with the massive scope of the
> Linux kernel, the project for which git was created? I can find some middle
> ground with the more specific points you made in your email, but I have yet
> to understand how the scaling argument holds water when Linux trucks along
> with "4000 developers, 450 different companies, and 200 new developers each
> release"[1]. What makes Linux special in this regard? Is there some second
> inflection point?
>

Well, somehow I saw that example coming... :-D I think the main reason
things work for Linux is the number of highly specialized, high-quality
maintainers, i.e. the people who pick patches into the (parts of the)
releases they maintain, and who do it as their main (sole?) job. In
addition they have a brutal review system plus an army of people
continuously testing, *and* they have Linus.

Now look at your usual company: You have average people there (at best),
silly deadlines for silly features, no real maintainers with the power to
reject/revert stuff (regardless of any deadline), testing that is far from
where it should be, etc. So you do everything to keep things as simple as
possible, and a repository with no merge commits *is* much easier to handle
than one with merges. If you are happy with merge commits, by all means
continue to use them. The "right" way of doing things depends on so many
factors (project size/complexity, number/qualification of
people/maintainers, release strategy/frequency, ...) that there is probably
no silver bullet. The good thing is: Git doesn't prescribe a particular
workflow; it is more of a toolbox for building your own.

I would very much like to turn the question around: I never fully
understood why some people like merge-based workflows so much. OK, you can
see that e.g. commits A, B, and C together implement feature X, but to be
honest: After feature X has landed, probably nobody really cares about the
feature's history anymore; you normally care much more about: Which commit
broke feature Y? Which commit slowed things down? Which commit introduced a
space leak/race condition?


Re: Gitlab workflow

2019-07-06 Thread Sven Panne
On Sat, 6 Jul 2019 at 19:06, Bryan Richter wrote:

> [...] Rather than argue against GHC's current practices, however, I would
> like
> to understand them better. What issues led to a rebase-only workflow?
> Which expert opinions were considered? What happy stories can people
> relate? We recently switched away from a rebase-only workflow at
> $workplace, and it's already made life so much nicer for us -- so I'm
> curious what unforeseen pain we might be in for. :)


I've worked for several companies of various sizes, and in my experience
the rule is: The bigger the company, the stronger the tendency to use a
rebase-only workflow, with big/huge projects relying exclusively on
rebases and explicitly forbidding (non-fast-forward) merges. There are
several good reasons for this IMHO:

   * Clarity: Even with a single release branch, merges tend to create an
incomprehensible mess in the history. Things get totally unmanageable when
you have to support several releases in various branches. IMHO this reason
alone is enough to rule out non-fast-forward merges in bigger projects.

   * Bisecting: With merges you will have a very, very hard time bisecting
your history to find a bug (or a fix). With a linear (single release) or
tree-shaped (for several supported releases) history, this gets trivial and
can easily be automated.

   * Hash instability: Simply relying on a hash to find out if a
fix/feature is in some branch is an illusion: Sooner or later you get a
merge conflict and need to modify your commit.

   * Tool integration via IDs: For the reason stated above, you will have
some kind of bug/feature/issue/...-ID e.g. in your commit message, anyway.
This ID is then used in your issue tracker/release management tool/..., not
the hash of a commit in some branch.

Of course your mileage may vary, depending on your team and project size,
the additional tools you use, how good your CI/testing/release framework
is, etc.  GitLab's machinery may still be in its infancy, but some kind of
bot picking/testing/committing (even reverting, if necessary) your changes
is a very common and scalable way of doing things. Or, the other way round:
If you don't do this and your project gets bigger, you have an almost 100%
guarantee that the code in your repository is broken in some way. :-}


Re: Why align all pinned array payloads on 16 bytes?

2018-10-17 Thread Sven Panne
On Tue, 16 Oct 2018 at 23:18, Simon Marlow wrote:

> I vaguely recall that this was because 16 byte alignment is the minimum
> you need for certain foreign types, and it's what malloc() does.  Perhaps
> check the FFI spec and the guarantees that mallocForeignPtrBytes and
> friends provide?
>

mallocForeignPtrBytes is defined in terms of malloc (
https://www.haskell.org/onlinereport/haskell2010/haskellch29.html#x37-28400029.1.3),
which in turn has the following guarantee (
https://www.haskell.org/onlinereport/haskell2010/haskellch31.html#x39-28700031.1
):

   "... All storage allocated by functions that allocate based on a size in
bytes must be sufficiently aligned for any of the basic foreign types that
fits into the newly allocated storage. ..."

The largest basic foreign types are Word64/Double and probably
Ptr/FunPtr/StablePtr (
https://www.haskell.org/onlinereport/haskell2010/haskellch8.html#x15-178.7),
so per spec you need at least an 8-byte alignment. But in an SSE world I
would be *very* reluctant to use an alignment less strict than 16 bytes,
otherwise people will probably hate you... :-]
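As a quick sanity check, one can ask the Storable machinery directly what alignment it reports for these basic foreign types. This is only a sketch assuming a typical 64-bit GHC platform, where each of these prints 8:

```haskell
import Data.Word (Word64)
import Foreign.Ptr (Ptr, FunPtr)
import Foreign.Storable (alignment)

-- Print the alignments of the largest basic foreign types; on a
-- typical 64-bit platform each reports 8, which is why the spec only
-- forces 8-byte alignment even though 16 bytes is what malloc() and
-- SSE code usually expect.
main :: IO ()
main = mapM_ print
  [ alignment (undefined :: Word64)
  , alignment (undefined :: Double)
  , alignment (undefined :: Ptr ())
  , alignment (undefined :: FunPtr ())
  ]
```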

Cheers,
   S.


Re: Parser.y rewrite with parser combinators

2018-10-09 Thread Sven Panne
On Tue, 9 Oct 2018 at 15:45, Richard Eisenberg <r...@cs.brynmawr.edu> wrote:

> [...] What I'm trying to say here is that tracking the backtracking level
> in types doesn't seem like it will fly (tempting though it may be).
>

... and even if it did fly, parser combinators with backtracking have a
strong tendency to introduce space leaks: To backtrack, you have to keep
previous input around somehow, at least up to some point. So to keep the
memory requirements sane, you have to explicitly commit to one parse or
another at some point. Different combinator libraries have different ways
to do that, but you have to do it by hand somehow, and that's where the
beauty and maintainability of the combinator approach really suffers.
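To illustrate, here is a toy list-of-successes parser (names made up for this sketch; real libraries are more elaborate): the unrestricted choice hands the *same* input to both alternatives, so the input must be retained until the caller picks one result, and an explicit "commit" is what releases it:

```haskell
-- A toy list-of-successes parser, only to illustrate why
-- backtracking retains input.
newtype Parser a = Parser { runParser :: String -> [(a, String)] }

item :: Parser Char
item = Parser $ \s -> case s of
  []       -> []
  (c : cs) -> [(c, cs)]

-- Full backtracking choice: both alternatives close over the entire
-- remaining input 's', so 's' stays alive until one result is chosen.
(<||>) :: Parser a -> Parser a -> Parser a
Parser p <||> Parser q = Parser $ \s -> p s ++ q s

-- Explicitly committing to the first successful parse drops the
-- other alternatives (and with them the retained input) early.
commit :: Parser a -> Parser a
commit (Parser p) = Parser $ \s -> take 1 (p s)
```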

Note that I'm not against parser combinators, far from it, but I don't
think they are necessarily the right tool for the problem at hand. The
basic problem is: Haskell's syntax, especially with all those extensions,
is quite tricky, and this will be reflected in any parser for it. IMHO a
parser generator is the lesser evil here; at least it points you to the
ugly places of your language (on a syntactic level). If Haskell had a few
more syntactic hints, reading code would be easier, not only for a
compiler, but (more importantly) for humans, too. Richard's code snippet is
a good example where some hint would be very useful for the casual reader;
in some sense humans have to "backtrack", too, when reading such code.


Re: Parser.y rewrite with parser combinators

2018-10-09 Thread Sven Panne
On Tue, 9 Oct 2018 at 09:18, Vladislav Zavialov <vlad.z.4...@gmail.com> wrote:

> [...] With parser combinators
>
> 1. Parse into an expression (linear in the amount of tokens)
> 2. If it turns out we needed a pattern, backtrack and parse into a
> pattern (linear in the amount of tokens) [...]
>

In a larger grammar implemented with parser combinators, it is quite hard
to guarantee that you don't backtrack while backtracking, which easily
results in exponential runtime. And given the size of the language GHC
recognizes, I can almost guarantee that this will happen unless you use
formal methods. :-)
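A contrived but self-contained illustration, using ReadP from base: the two alternatives below are deliberately identical, standing in for two rules that happen to overlap, so every 'a' doubles the number of pending backtrack points, and a final mismatch forces all of them to be explored:

```haskell
import Text.ParserCombinators.ReadP (ReadP, char, eof, readP_to_S, (+++))

-- Two overlapping alternatives per character: each 'a' doubles the
-- number of pending parses, so an input of n 'a's followed by a 'b'
-- makes the parser do work roughly exponential in n before failing.
s :: ReadP ()
s = (char 'a' >> s) +++ (char 'a' >> s) +++ eof

main :: IO ()
main = print (readP_to_S s (replicate 22 'a' ++ "b"))
-- prints [] -- but only after exploring on the order of 2^22 branches
```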

Cheers,
   S.


Re: Parser.y rewrite with parser combinators

2018-10-09 Thread Sven Panne
On Tue, 9 Oct 2018 at 00:25, Vladislav Zavialov <vlad.z.4...@gmail.com> wrote:

> [...] That's true regardless of implementation technique, parsers are
> rather
> delicate.


I think it's not the parsers themselves which are delicate, it is the
language that they should recognize.


> A LALR-based parser generator does provide more information
> when it detects shift/reduce and reduce/reduce conflicts, but I never
> found this information useful. It was always quite the opposite of
> being helpful - an indication that a LALR parser could not handle my
> change and I had to look for workarounds. [...]
>

Not that this would help at this point, but: The conflicts reported by
parser generators like Happy are *extremely* valuable; they hint at
tricky/ambiguous points in the grammar, which in turn is a strong hint that
the language you're trying to parse has dark corners. IMHO every language
designer and e.g. everybody proposing a syntactic extension to GHC should
try to fit it into a grammar for Happy *before* proposing that extension.
If you get conflicts, it is a very strong hint that the language is hard to
parse for *humans*, too, which is the most important thing to consider.
Haskell already has tons of syntactic warts which can only be parsed with
unbounded lookahead, which is only a minor technical problem, but a major
usability problem. "Programs are meant to be read by humans and only
incidentally for computers to execute." (D.E.K.)  ;-)
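As a concrete example (a hypothetical grammar fragment, not taken from GHC's Parser.y): the classic dangling-else ambiguity shows up as a shift/reduce conflict the moment you write it down for Happy, flagging exactly the spot where a human reader also needs extra context:

```text
-- Hypothetical Happy fragment; running `happy --info Grammar.y` on it
-- reports a shift/reduce conflict in the state after "then Stmt",
-- because an "else" there can attach to this 'if' or a surrounding one.
Stmt : 'if' Expr 'then' Stmt              { SIf $2 $4 Nothing   }
     | 'if' Expr 'then' Stmt 'else' Stmt  { SIf $2 $4 (Just $6) }
     | 'skip'                             { SSkip               }
```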

The situation is a bit strange: We all love strong guarantees offered by
type checking, but somehow most people shy away from "syntactic type
checking" offered by parser generators. Parser combinators are the Python
of parsing: Easy to use initially, but a maintenance hell in the long run
for larger projects...

Cheers,
   S.


Re: GHC 8.6.1 release status

2018-08-21 Thread Sven Panne
On Mon, 20 Aug 2018 at 23:40, Matthew Pickering <matthewtpicker...@gmail.com> wrote:

> I think the release should include this fix to haddock. Will that happen?
>
> https://github.com/haskell/haddock/pull/905


Will the release contain the fix

https://github.com/haskell/haddock/pull/893

for the issue

   https://github.com/haskell/haddock/issues/462

too? Without that fix, Haddock is basically unusable for lots of packages
under the memory restrictions of Travis CI.

Cheers,
   S.


Re: [Haskell-cafe] Access violation when stack haddock haskell-src-exts since LTS 12.0

2018-07-23 Thread Sven Panne
On Mon, 23 Jul 2018 at 05:49, Yuji Yamamoto <whosekitenever...@gmail.com> wrote:

> Thank you very much!
>
> I confirmed the replaced haddock executable can successfully generate the
> docs!
>

Yesterday I had quite some trouble because of the Haddock problem, too, and
I guess I'm not alone: haskell-src-exts has 165 direct reverse
dependencies, so probably hundreds of Hackage packages are affected by
this. The workaround is simple (don't use --haddock with stack), but far
from satisfying and not very obvious.

Given the fact that this affects a very central piece of the Haskell
infrastructure in its latest stable incarnation (GHC 8.4.3): Can we have an
8.4.4 with a fixed Haddock?


Re: Re: potential for GHC benchmarks w.r.t. optimisations being incorrect

2018-05-07 Thread Sven Panne
2018-05-06 16:41 GMT+02:00 Andreas Klebinger :

> [...] If we only consider 16byte (DSB Buffer) and 32 Byte (Cache Lines)
> relevant this reduces the possibilities by a lot after all. [...]
>

Nitpick: Cache lines on basically all Intel/AMD processors contain 64
bytes, see e.g. http://www.agner.org/optimize/microarchitecture.pdf


Re: Basic Block Layout in the NCG

2018-05-06 Thread Sven Panne
2018-05-05 21:23 GMT+02:00 Andreas Klebinger :

> [...] I came across cases where inverting conditions lead to big
> performance losses since suddenly block layout
> got all messed up. (~4% slowdown for the worst offenders). [...]
>

4% is far from being "big"; look e.g. at
https://dendibakh.github.io/blog/2018/01/18/Code_alignment_issues where
changing just the alignment of the code led to a 10% difference. :-/ The
code itself or its layout wasn't changed at all. The "Producing Wrong Data
Without Doing Anything Obviously Wrong!" paper gives more funny examples.

I'm not saying that code layout has no impact, quite the opposite. The main
point is: Do we really have a benchmarking machinery in place which can
tell you if you've improved the real run time or made it worse? I doubt
that, at least at the scale of a few percent. To reach just that simple
yes/no conclusion, you would need quite a heavy machinery involving
randomized linking order, varying environments (in the sense of "number and
contents of environment variables"), various CPU models etc. If you do not
do that, modern HW will leave you with a lot of "WTF?!" moments and wrong
conclusions.


Re: [ANNOUNCE] GHC 8.4.1 released

2018-03-09 Thread Sven Panne
2018-03-08 17:57 GMT+01:00 Ben Gamari :

> The GHC developers are very happy to announce the 8.4.1 release of
> Glasgow Haskell Compiler. [...]


Just a few tiny remarks regarding "base":

   *
https://downloads.haskell.org/~ghc/8.4.1/docs/html/users_guide/8.4.1-notes.html#included-libraries
says that the shipped "base" has version 2.1, I guess that should be
4.11.0.0.

   * https://wiki.haskell.org/Base_package needs an update.

   * Hackage has no 4.11.0.0 yet, that would be very helpful for the docs.
Yes, there is
https://downloads.haskell.org/~ghc/8.4.1/docs/html/libraries/index.html,
but Hackage is somehow the more canonical place to look up the package docs.


Re: Can't push to haddock

2017-12-19 Thread Sven Panne
2017-12-19 12:47 GMT+01:00 Phyx :

> Cool, then let's turn to media reports then such as
> https://techcrunch.com/2017/07/31/github-goes-down-and-takes-developer-productivity-with-it/
> do you have one for git.haskell.org going down?


Of course this question is a classic example of the "absence of evidence is
not evidence of absence" fallacy, but anyway:

*
https://www.reddit.com/r/haskell/comments/4gppm8/ann_hackagehaskellorg_is_down/
* http://blog.haskell.org/post/4/outages_and_improvements.../
* Search ghc-devs@ for posts regarding Phabricator updates, server moves,
problems with arc... (not exactly downtimes, but the effects of the
incidents are the same)

I am not saying that the haskell.org infrastructure is bad, far from it,
but it would be an illusion to think that it has a much higher effective
uptime than GitHub. Furthermore: I don't think that the argument should
revolve around uptime. We have a distributed version control system where
people can happily work for an extended time span without *any* network at
all, and the GHC source repository is not a financial application which
would cause the loss of millions of dollars per minute if it's temporarily
unavailable. The arguments should be about simplicity, ease of use, etc.

Anyway, for my part the discussion is over, there *is* more or less open
hostility towards GitHub/more standardized environments here. Is it an
instance of the common "not invented here" syndrome or general mistrust in
any kind of organization? I don't know... :-/


Re: Can't push to haddock

2017-12-19 Thread Sven Panne
2017-12-19 11:07 GMT+01:00 Phyx :

> These are just a few of the times github has been down in 2017
> http://currentlydown.com/github.com compared to haskell.org
> http://currentlydown.com/haskell.org [...]
>

I can't see any data for haskell.org on that page, apart from the fact that
it is up right now. Furthermore, I very much question the data on
currentlydown.com: According to it, Google, Facebook, YouTube, Yahoo! and
Amazon were all down on March 25th for roughly an hour. A much more
probable explanation: currentlydown.com had problems, not five of the
biggest sites in the world. This undermines the trust in the rest of the
outage reports a bit...


Re: Can't push to haddock

2017-12-19 Thread Sven Panne
2017-12-19 9:50 GMT+01:00 Herbert Valerio Riedel :

> We'd need mirroring anyway, as we want to keep control over our
> infrastructure and not have to trust a 3rd party infrastructure to
> safely handle our family jewels: GHC's source tree.
>

I think this is a question of perspective: Having the master repository on
GitHub doesn't mean you are in immediate danger of losing your "family
jewels". IMHO it's quite the contrary: I'm e.g. sure that in case
something goes wrong with GitHub, there is far more manpower behind it to
fix that than for any self-hosted repository. And you can of course have
some mirror of your GitHub repo in case of e.g. an earthquake/meteor/... in
the San Francisco area... ;-)

It seems to me that there is some hostility towards GitHub in GHC HQ, but I
don't really understand why. GitHub serves other similar projects quite
well, e.g. Rust, and I can't see why we should be special.


> Also, catching bad commits "a bit later" is just asking for trouble --
> by the time they're caught the git repos have already lost their
> invariant and its a big mess to recover;


This is no different from saying: "I want to run 'validate' in the
commit hook, otherwise it's a big mess." We don't do this for obvious
reasons, and what is the "big mess" if there is some incorrect submodule
reference for a short time span? How is that different from somebody
introducing e.g. a subtle compiler bug in a commit?


> the invariant I devised and
> whose validation I implemented 4 years ago has served us pretty well,
> and has ensured that we never glitched into incorrectness; I'm also not
> sure why it's being suggested to switch to a less principled and more
> fragile scheme now. [...]


Because the whole repository structure is overly complicated and simply
hosting everything on GitHub would simplify things. Again: I'm well aware
that there are tradeoffs involved, but I would really appreciate
simplifications. I have the impression that the entry barrier to GHC
development has become larger and larger over the years, partly because of
very non-standard tooling, partly because of the increasingly arcane
repository organization. There are reasons that other projects like Rust
attract far more developers... :-/



Re: Can't push to haddock

2017-12-18 Thread Sven Panne
2017-12-18 17:01 GMT+01:00 Simon Peyton Jones via ghc-devs <
ghc-devs@haskell.org>:

> |  It's for technical reasons, and the strongest one being: GitHub doesn't
> |  allow us to establish strong invariants regarding submodule gitlink
> |  referential integrity for submodules (which I implemented a couple
> years ago
> |  for git.haskell.org).
>
> Interesting.  It'd be good to document what the technical reasons are.
> For example I don’t know what the strong invariants are. [...]
>

Me neither. :-] Looking at the repositories wiki page, it seems to be
related to the fact that GitHub doesn't offer git hooks, which are used to
check the invariants. This leads to another question: Is it *really*
necessary to implement the invariant checks as a git hook? If you use any
kind of continuous integration, which GHC obviously does, you can move the
checks to e.g. CircleCI (or whatever CI is used). This is a tradeoff: Done
that way, you catch incorrect commits a little bit later, but it makes the
overall arcane repository magic quite a bit simpler, probably removing the
need for mirroring. This seems to be a good tradeoff, but of course I might
be missing some details here.


Re: We need to add role annotations for 7.8

2017-11-23 Thread Sven Panne
2014-03-24 15:14 GMT+01:00 Mark Lentczner :
> Speaking from the vantage point of the platform: This pair of comments
> (emphasis mine) has my alarm index on high:
>
> On Fri, Mar 14, 2014 at 2:36 AM, Johan Tibell 
> wrote: [...]
>> So, the best thing we came up with is this: Libraries that wish to export
>> abstract data types must do two things:
>> 1. Manage export lists carefully.
>> 2. Use role annotations.
>
> This is huge order, and one that will produce significant strain on the
> ecosystem. For one, this will set back Haskell Platform months: We have 250k
> lines of Haskell by 30+ authors that will need to be reviewed and updated. 
> [...]

Hmmm, I didn't follow role annotations at all so far, because I
assumed they were of no concern to me as a library author *unless* I use
some shiny new machinery in my library. Did I misunderstand that? How
can I find out if I have to do something? What happens if I don't do
anything? What about Haskell systems which are not GHC? Do I have to
#ifdef around those annotations? :-P
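For context, the kind of annotation under discussion looks like this. A sketch with a made-up module and names, only to illustrate when it matters: an abstract type with an invariant that `coerce` could violate, where the fix is one line:

```haskell
{-# LANGUAGE RoleAnnotations #-}
-- Hypothetical module illustrating the pattern being discussed.
module AscList (AscList, empty, insert) where

-- Invariant: the list is strictly ascending w.r.t. the Ord instance.
newtype AscList a = AscList [a]

-- Without this annotation GHC infers a representational role, so a
-- client could 'coerce' an AscList into one keyed by a newtype with
-- a different Ord instance, silently breaking the invariant.
type role AscList nominal

empty :: AscList a
empty = AscList []

insert :: Ord a => a -> AscList a -> AscList a
insert x (AscList xs) =
  AscList (takeWhile (< x) xs ++ [x] ++ dropWhile (<= x) xs)
```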

I'm a bit clueless, and I guess I'm not alone... :-/ If this new
feature really implies that massive an amount of library code review, we
should first discuss whether it's really worth the trouble. The GHC release
and the HP are already late...

Cheers,
   S.


Re: A modest proposal (re the Platform)

2017-11-23 Thread Sven Panne
Just a quick +1 for including GHC 7.8 in the next HP release.
Regarding compiler features, shipping GHC 7.6.3 again would mean that
the HP is still roughly at September 2012 (the first release of GHC
7.6.x). Furthermore, I don't fully buy into the argument that we
should wait for 7.8 to stabilize: Power users will use something near
HEAD, anyway, almost all other users will probably use the HP.


Re: [Haskell-cafe] [ANNOUNCE] GHC 8.2.2 release candidate 2

2017-11-06 Thread Sven Panne
2017-11-06 17:54 GMT+01:00 Ben Gamari :

> Next time something like this arises please do open a ticket.
>

Yep, will do...


> Yes, I have opened a differential adding such a flag. See D4164 [1].
> Please bikeshed to taste.
>

Thanks for the quick fix!


> In general I would really prefer that we *not* consider GHCi's REPL to be
> a stable programmatic interface.


I fully understand that, and that's definitely the way to go. Nevertheless,
parsing tool/compiler output is still one of the most used hacks^H^H^H
techniques for lots of Emacs modes (and probably other IDEs). Not every
project is as open to suggestions and changes as GHC is, so this is often
the only way out.


> That being said, we cannot always preemptively add complexity to the
> project out of fear that a given change might break a hypothetical
> mechanical consumer.


That's of course not what was proposed. :-)


> GHCi is first-and-foremost a REPL for users.
> When evaluating a change, if we feel it is likely that we will break a
> mechanical user then we will likely guard the change with a flag.
> However, if not, we won't.
>

I think the main problem here was communication. I can't speak for the
haskell-mode maintainers, but for my part I didn't notice the problems
because I mainly use LTS Stackage, and that is still(!) at 8.0.2 (why? this
is definitely part of the whole problem). I tried the 8.2 series only
sparingly and only via the command line; perhaps others did the same, so
the interaction bug went unnoticed for such a long time.

Cheers,
   Sven


Re: [Haskell-cafe] [ANNOUNCE] GHC 8.2.2 release candidate 2

2017-11-05 Thread Sven Panne
2017-11-05 15:37 GMT+01:00 :

> A better approach might be to develop a "machine-readable" output format
> which then is kept stable, and can be enabled with a flag. Git has a
> similar solution.
>

Without doubt, this is definitely the better approach, but this is hardly
what can be achieved for 8.2.2. Adding some flag to get the old behavior
back when wanted *is* achievable.


> It would be a shame to avoid changes which make the user experience better
> simply because other projects cannot sync their development cycle,
>

Don't get me wrong: I'm all for improving user experience, but making ad
hoc changes without enough thought, or without even a chance to get the old
behavior back, is probably not the right way to proceed. All software lives
in some kind of ecosystem, so it should behave well within it. And for
Emacs users, the user experience has become much worse.

> especially if those projects are not universally used or required.
>

This is highly a matter of personal taste: No project is "universally
used", so this is a tautological statement. The question is: Is a minor
cosmetic change really worth breaking things in one of the major IDEs?

Cheers,
   S.


Re: [ANNOUNCE] GHC 8.2.2 release candidate 2

2017-11-05 Thread Sven Panne
This is not an issue about 8.2.2 per se, but 8.2 changes in general: Recent
discussions on Haskell Cafe showed serious problems with Emacs'
haskell-mode due to some ad hoc changes like
https://phabricator.haskell.org/D3651. Related GitHub issues:

   https://github.com/haskell/haskell-mode/issues/1553
   https://github.com/haskell/haskell-mode/issues/1496

It should be noted that the output of GHC(i) is effectively part of GHC's
interface, so in this light there have been some breaking changes, probably
unintended, but breaking nevertheless. So my question is: Is there a chance
to revert some of these ad hoc changes and/or add some flags to get the old
behavior back? I guess that Emacs + haskell-mode is one of the most
important IDEs, so it would be a pity to worsen the situation there.

I'm quite aware that it is very late in the release cycle, but it would be
extremely nice if there was something which can be done. In the future it
might make sense to co-operate with the haskell-mode team a bit more,
perhaps adding some tests regarding the required output format etc. to
GHC's test suite.

Cheers,
   S.


Re: perf.haskell.org update: Now using cachegrind

2017-09-30 Thread Sven Panne
2017-09-30 17:56 GMT+02:00 Joachim Breitner :

> [...] I also wonder whether, when using cachegrind, the results from
> different machines are actually comparable. [...]
>

In general they are not really comparable: cachegrind doesn't collect
*actual* cache statistics; it emulates a simplified version of the caching
machinery, trying to auto-detect its parameters, see e.g.:

   http://valgrind.org/docs/manual/cg-manual.html#cg-manual.overview

This doesn't mean that the numbers are useless, but they are only (good)
general hints, and in rare extreme cases they can be totally off from the
real numbers. For the "real stuff" one has to use "perf", but then you can
only compare numbers from the same CPU model.


Re: RTS changes affect runtime when they shouldn’t

2017-09-27 Thread Sven Panne
2017-09-26 18:35 GMT+02:00 Ben Gamari :

> While it's not a bad idea, I think it's easy to drown in information. Of
> course, it's also fairly easy to hide information that we don't care
> about, so perhaps this is worth doing regardless.
>

The point is: You don't know in advance which of the many performance
characteristics "perf" spits out is relevant. If e.g. you see a regression
in runtime although you really didn't expect one (tiny RTS change etc.), a
quick look at the diffs of all perf values can often give a hint (e.g.
branch prediction was screwed up by different code layout etc.).

So I think it's best to collect all data, but make the user-relevant data
(runtime, code size) more prominent than the technical/internal data (cache
hit ratio, branch prediction hit ratio, etc.), which is for analysis only.
Although the latter is a cause for the former, from a compiler user's
perspective it's irrelevant. So there is no actual risk in drowning in
data, because you primarily care only for a small subset of it.


Re: RTS changes affect runtime when they shouldn’t

2017-09-24 Thread Sven Panne
2017-09-23 21:06 GMT+02:00 Joachim Breitner :

> what I want to do is to reliably catch regressions.


The main question is: Which kind of regressions do you want to catch? Do
you care about runtime as experienced by the user? Measure the runtime. Do
you care abou code size? Measure the code size. etc. etc. Measuring things
like the number of fetched instructions as an indicator for the experienced
runtime is basically a useless exercise, unless you do this on ancient RISC
processors, where each instruction takes a fixed number of cycles.


> What are the odds that a change to the Haskell compiler (in particular to
> Core2Core
> transformations) will cause a significant increase in runtime without a
>  significant increase in instruction count?
> (Honest question, not rhetoric).
>

The odds are actually quite high, especially when you define "significant"
as "changing a few percent" (which we do!). Just a few examples from
current CPUs:

   * If branch prediction does not have enough information to do better, it
assumes that backward branches are taken (think: loops) and forward
branches are not taken (so you should put "exceptional" code out of the
common, straight-line code). If by some innocent looking change the code
layout changes, you can easily get a very measurable difference in runtime
even if the number of executed instructions stays exactly the same.

   * Even if the number of instructions changes only a tiny bit, it could
be the case that it is just enough to make caching much worse and/or make
the loop stream detector fail to detect a loop.

There are lots of other scenarios, so in a nutshell: Measure what you
really care about, not something you think might be related to that.

As already mentioned in another reply, "perf" can give you very detailed
hints about how good your program uses the pipeline, caches, branch
prediction etc. Perhaps the performance dashboard should really collect
these, too, this would remove a lot of guesswork.


Re: RTS changes affect runtime when they shouldn’t

2017-09-23 Thread Sven Panne
2017-09-21 0:34 GMT+02:00 Sebastian Graf :

> [...] The only real drawback I see is that instruction count might skew
> results, because AFAIK it doesn't properly take the architecture (pipeline,
> latencies, etc.) into account. It might be just OK for the average program,
> though.
>

It really depends on what you're trying to measure: The raw instruction
count is basically useless if you want to have a number which has any
connection to the real time taken by the program. The average number of
cycles per CPU instruction varies by 2 orders of magnitude on modern
architectures, see e.g. the Skylake section in
http://www.agner.org/optimize/instruction_tables.pdf (IMHO a must-read for
anyone doing serious optimizations/measurements on the assembly level). And
these numbers don't even include the effects of the caches, pipeline
stalls, branch prediction, execution units/ports, etc. etc. which can
easily add another 1 or 2 orders of magnitude.

So what can one do? It basically boils down to a choice:

   * Use a stable number like the instruction count (the "Instructions
Read" (Ir) events), which has no real connection to the speed of a program.

   * Use a relatively volatile number like real time and/or cycles used,
which is what your users will care about. If you put a non-trivial amount
of work into your compiler, you can make these numbers a bit more stable
(e.g. by making the code layout/alignment more stable), but you will still
get quite different numbers if you switch to another CPU
generation/manufacturer.

A bit tragic, but that's life in 2017... :-}


Re: Which stable GHC release is expected to have support for linear types?

2017-07-11 Thread Sven Panne
2017-07-11 8:39 GMT+02:00 Joachim Breitner <m...@joachim-breitner.de>:

> Am Montag, den 10.07.2017, 16:31 +0200 schrieb Sven Panne:
> > You can happily move around any cloned repository, Git has absolutely
> > no problem with that. What often breaks is some imperfect tooling on
> > top of Git itself, which might be the case here.
>
> I found that this is not true for git modules, which can be very
> annoying; see https://stackoverflow.com/a/11298947/946226 [...]


This is only true for repos created with ancient Git versions. With newer
versions (>= 1.7.10) there is no problem, see e.g.
https://stackoverflow.com/questions/17568543/git-add-doesnt-work/17747571#17747571
.


Re: Which stable GHC release is expected to have support for linear types?

2017-07-10 Thread Sven Panne
2017-07-10 13:41 GMT+02:00 Wolfgang Jeltsch :

> I renamed the local directory after cloning. If Git really cannot deal
> with this, then this is yet another reason for preferring darcs over Git.
> [...]
>

You can happily move around any cloned repository, Git has absolutely no
problem with that. What often breaks is some imperfect tooling on top of
Git itself, which might be the case here.

Just my 2c,
   S.


Re: [ANNOUNCE] GHC tarballs for Windows 10 Creators Update

2017-04-15 Thread Sven Panne
2017-04-15 4:58 GMT+02:00 Ben Gamari :

> [...] The new tarballs
> are distinguished from the original releases with a `-win10` suffix. For
> instance, the 64-bit 8.0.2 binary distribution can be found at,
>
> https://downloads.haskell.org/~ghc/8.0.2/ghc-8.0.2-x86_64-
> unknown-mingw32-win10.tar.xz
> [...]
>

I heavily rely on "stack" to install GHC versions, so what is the intended
way of telling "stack setup" about the suffix? If I understand things
correctly, "stack setup" will install non-working versions on Windows with
the Creators Update.


Re: DeriveFoldable treatment of tuples is surprising

2017-03-22 Thread Sven Panne
2017-03-21 22:29 GMT+01:00 Edward Kmett :

> [...] In general I think the current behavior is the least surprising as it
> "walks all the a's it can" and is the only definition compatible with
> further extension with Traversable. [...]
>

OTOH, the current behavior contradicts my intuition that wrapping a type
into data/newtype plus using the deriving machinery is basically a no-op
(modulo bottoms etc.). When I e.g. wrap a type t, I would be very surprised
if the Eq/Ord instances of the wrapped type would behave differently than
the one on t. I know that this is very handwavy argument, but I think the
current behavior is *very* surprising.

Somehow the current behavior seems to be incompatible with the FTP, where
pairs are given a special treatment (if that's the right/intuitive choice
is a completely different topic, though).

Given the fact that "deriving Foldable" is quite old and therefore hard to
change, I would at least suggest a big, fat warning in the documentation,
including various examples where intuition and implementation do not
necessarily meet.
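
For illustration, a minimal sketch of the surprise (the Wrap type is
hypothetical; assumes DeriveFoldable and the FTP instance for pairs):

```haskell
{-# LANGUAGE DeriveFoldable #-}

import Data.Foldable (toList)

-- Hypothetical wrapper: the derived instance walks *both* components,
-- while the FTP instance for pairs only folds over the second one.
newtype Wrap a = Wrap (a, a) deriving Foldable

main :: IO ()
main = do
  print (toList (1, 2 :: Int))        -- [2]: pairs fold only the snd slot
  print (toList (Wrap (1, 2 :: Int))) -- [1,2]: the wrapper folds both
```

So the "no-op wrapping" intuition breaks exactly at the pair case.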


Re: How, precisely, can we improve?

2016-09-27 Thread Sven Panne
Just a remark from my side: The documentation/tooling landscape is a bit
more fragmented than it needs to be IMHO. More concretely:

   * We currently have *3* wikis:

https://wiki.haskell.org/Haskell
https://ghc.haskell.org/trac/ghc
https://phabricator.haskell.org/w/

 It's clear to me that they have different emphases and different
origins, but in the end this results in valuable information being
scattered around. Wikis in general are already quite hard to navigate (due
to their inherent chaotic "structure"), so having 3 of them makes things
even worse. It would be great to have *the* single Haskell Wiki directly on
haskell.org in an easily reachable place.

   * To be an active Haskell community member, you need quite a few
different logins: Some for the Wikis mentioned above, one for Hackage,
another one for Phabricator, perhaps an SSH key here and there...
Phabricator is a notable exception: It accepts your GitHub/Google+/...
logins. It would be great if the other parts of the Haskell ecosystem
accepted those kinds of logins, too.

   * https://haskell-lang.org/ has great stuff on it, but its relationship
to haskell.org is unclear to me. Their "documentation" sub-pages look
extremely similar, but haskell-lang.org has various (great!) tutorials and
a nice overview of common libraries on it. From an external POV it seems to
me that haskell-lang.org should be seamlessly integrated into haskell.org,
i.e. merged into it. Having an endless sea of links on haskell.org is not
the same as having content nicely integrated into it, sorted by topic, etc.

All those points are not show-stoppers for people trying to be more active
in the Haskell community, but nevertheless they make things harder than
they need to be, so I fear we lose people quite early. To draw an analogy:
As probably everybody who actively monitors their web shop/customer site
knows, even seemingly small things move customers totally away from your
site. One unclear payment form? The vast majority of your potential
customers aborts the purchase immediately and forever. One confusing
interstitial web page? Say goodbye to lots of people. One hard-to-find
button/link? A forced login/new account? => Commercial disaster, etc. etc.

Furthermore, I'm quite aware of the technical/social difficulties of my
proposals, but that shouldn't let us stop trying to improve...

Cheers,
   S.


Re: ghc command line arguments parsing

2016-08-19 Thread Sven Panne
2016-08-19 10:58 GMT+02:00 Harendra Kumar :

> Funnily consistency was my point too and not convenience. I totally agree
> that consistency is more important. The end objective is less referring to
> manuals and rely more on intuition and consistency. Its all about making it
> easier to remember and not about typing less or more. [...]
>

OK, then I probably misunderstood you and we're actually in the same
boat... :-)


> [...] As you said I would be happier with an inconvenient to type but
> consistent alternative where we use '--ghc-arg=' for everything and get rid
> of the other ways. [...]
>

Hmmm, do we need '--ghc-arg=' at all when we have '--' as the border
between the 2 "argument worlds"? Without thinking too much about it, :-} I
guess we don't. This would be consistent with how e.g. "stack exec" or
"gdb" work. An explicit option --pass-this-to-some-progXY is only needed
when there are more than 2 "argument worlds" involved, otherwise '--' is
enough, and also easier to type/remember.
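
For illustration, a minimal sketch of how a wrapper tool can split its
arguments at '--' (hypothetical code, not how runghc is actually
implemented):

```haskell
import System.Environment (getArgs)

-- Split the command line at the first "--": everything before it is
-- interpreted by the tool itself, everything after is forwarded verbatim
-- to the wrapped program.
splitAtDashDash :: [String] -> ([String], [String])
splitAtDashDash args =
  case break (== "--") args of
    (ours, [])         -> (ours, [])
    (ours, _ : theirs) -> (ours, theirs)

main :: IO ()
main = do
  args <- getArgs
  let (ours, theirs) = splitAtDashDash args
  putStrLn ("tool arguments:      " ++ show ours)
  putStrLn ("forwarded arguments: " ++ show theirs)
```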


Re: ghc command line arguments parsing

2016-08-19 Thread Sven Panne
2016-08-18 19:59 GMT+02:00 Harendra Kumar :

> [...]  It only parses a flag which takes an argument. [...]
>

o_O AFAICT, this is even more non-standard and quite surprising...


> As I explained above, I would prefer to keep this bug :-) and document it
> especially for runghc as a better alternative to --ghc-arg=foo . [...]
>

While I partly understand your motivation, convenience is often the worst
reason for doing something. Consistency is IMHO infinitely more valuable
than having various inconsistent ad-hoc "convenient" shortcuts here and
there. If you look around the tool landscape, commandline argument parsing
is already less consistent than it should be (try e.g.
ps/tar/unzip/df/java/git/...), and I don't think we should make this
situation worse by adding yet another variation.

As long as we have a way to pass arguments down the pipeline (as we
obviously have), thing are fine IMHO. This situation is not much different
from the common -Wl,foo option for GCC and friends to pass arguments to the
linker. And to be honest: How often do you really *type* long commandlines
compared to putting them into some kind of script?

Cheers,
   S.


Re: [ANNOUNCE] GHC 8.0.1 release candidate 2

2016-02-16 Thread Sven Panne
2016-02-16 10:49 GMT+01:00 Ben Gamari :

> [...] To this end I recommend the following,
>
>  * Someone propose a consistent vocabulary for warning flag names
>
>  * We keep -fwarn- flags as they are currently
>
>  * We keep the inconsistently named -W flags corresponding to these
>-fwarn- flags
>
>  * We add consistently named -W flags alongside these
>
>  * We set a timeline for deprecating the inconsistent flags
>

This plan looks perfect.


> Sven, perhaps you would like to pick up this task?
>

Alas, I don't have many spare development cycles at the moment, especially
given the relatively tight timeline for 8.0.1. I have just enough time to
grumble about some GHC details on this list. ;-) More seriously, after
Herbert's option survey my proposal is quite short:

   * Use "sigs" or "signatures" consistently (doesn't really matter which
one)

   * Use "pattern-synonyms", not "pat-syn"


Re: [ANNOUNCE] GHC 8.0.1 release candidate 2

2016-02-16 Thread Sven Panne
2016-02-16 10:56 GMT+01:00 Herbert Valerio Riedel :

> [...] but `sig(nature)s` has a precedent, so using `-sigs` wouldn't
> introduce anything new.
>

I'm fine with "sigs", my point was only the fact that non-abbreviated words
seem to be much more common in the flags names (and are easier to
remember). IMHO it doesn't really matter if the flag names are long: One
probably doesn't type them on the command line often, they typically live
in .cabal files, .travis.yml and pragmas where you type them once.

> Well... the -Wnoncanonical-*-instances flag family was the best I could
> come up with which is reasonably self-descriptive... do you have any
> better suggestions?
>

No, and I actually like the long names, see above. :-)


> Fwiw, `ghc --show-options | grep binding` comes up empty
>

Then the docs are out-of-sync:
http://downloads.haskell.org/~ghc/master/users-guide/using-warnings.html#ghc-flag--Wlazy-unlifted-bindings


Re: [ANNOUNCE] GHC 8.0.1 release candidate 2

2016-02-15 Thread Sven Panne
2016-02-16 0:35 GMT+01:00 Matthew Pickering :

> I have renamed it to -Wmissing-pat-syn-signatures.
>

Hmmm, things are still wildly inconsistent:

   * "pat" is spelled "pattern" in other flags.

   * We still have both "sigs" and "signatures" as parts of the names.

   * Why is "synonyms" too long, but OTOH we have monsters like
"-Wnoncanonical-monadfail-instances"?

   * We have both "binds" and "bindings" as parts of the names.

My proposal would be: The -Wfoo option syntax is new, anyway, so let's fix
all those inconsistencies in one big sweep before 8.0.1 is out, it only
gets harder later. At the moment you need #ifdef magic in the code and "if
impl(foo)" in .cabal, anyway, but doing these changes later will only keep
this sorry state for longer than necessary. I don't really care if we use
abbreviations like "sigs" or not, but whatever we use, we should use it
consistently (personally I would prefer the whole words, not the
abbreviations).

Cheers,
   S.


Re: [ANNOUNCE] GHC 8.0.1 release candidate 2

2016-02-15 Thread Sven Panne
2016-02-15 20:16 GMT+01:00 Ben Gamari <b...@smart-cactus.org>:

> Sven Panne <svenpa...@gmail.com> writes:
> The reason for this is that the things missing signatures are pattern
> synonyms, which have their warnings controlled by -Wmissing-pat-syn-sigs
> [1], which is enabled in -Wall by default.
>

OK, I missed that in the release notes. Two points here:

   * The naming of the options is horrible, sometimes it's "sigs",
sometimes it's "signatures". I would prefer if we named them consistently
(probably "signatures", it's easier to search for).

   * Given the myriad of warning-related options, it is *extremely* hard to
figure out which one caused the actual warning in question. The solution to
this is very easy and done this way in clang/gcc (don't remember which one,
I'm switching quite often): Just suffix all warnings consistently with the
option causing it, e.g.

    Top-level binding with no type signature [-Wmissing-pat-syn-sigs]


Cheers,
   S.


Re: [ANNOUNCE] GHC 8.0.1 release candidate 2

2016-02-15 Thread Sven Panne
I'm a little bit late to the 8.0.1 show, but nevertheless: Motivated by the
current discussion about -Wcompat and friends I decided to take a detailed
look at the warnings in my projects and hit a regression(?): Somehow I'm
unable to suppress the "Top-level binding with no type signature" warnings
from 8.0.1 onwards.

The gory details: In my .cabal file I set -Wall (
https://github.com/haskell-opengl/OpenGLRaw/blob/master/OpenGLRaw.cabal#L618),
and in my .travis.yml I set -Werror (
https://github.com/haskell-opengl/OpenGLRaw/blob/master/.travis.yml#L76).
But the -Wno-missing-signatures pragma (
https://github.com/haskell-opengl/OpenGLRaw/blob/master/src/Graphics/GL/Tokens.hs#L2)
doesn't work, see
https://travis-ci.org/haskell-opengl/OpenGLRaw/jobs/109400373. Using
-fno-warn-missing-signatures didn't work, either, see
https://travis-ci.org/haskell-opengl/OpenGLRaw/jobs/109396738.

Am I doing something wrong here or is this really a regression? I'm quite
sure that suppressions via pragmas worked in the past... :-(

Cheers,
   S.


Re: Reconsidering -Wall and -Wcompat

2016-02-14 Thread Sven Panne
2016-02-14 17:12 GMT+01:00 Ben Gamari :

> [...] This proposal is motivated by concern expressed by some that -Wcompat
> would see little usage unless it is placed in one of the warning sets
> typically used during development. One such set is -Wall, which enables
> a generous fraction of GHC's warning collection and is intended [2]
> for use during development.
>

IMHO, the distinction between "during development" and "outside of it" is
purely hypothetical.  A typical workflow is: Develop your code locally
against one GHC/set of libraries, commit to GitHub and let Travis CI do the
real work of testing against a matrix of configurations. If things work
well and the changes are worth it, tag your current state and release it.
Where exactly in this scenario is the code leaving the "during development"
state? I definitely want to enable -Wall for the Travis CI builds, because
that's the place where they are most valuable. As stated on the Wiki, stuff
in -Wcompat will often be non-actionable, so the only option I see if
-Wcompat is included in -Wall will be -Wno-compat for all my projects.
-Wcompat would be restricted to a few manual local builds to see where
things are heading.


> Unfortunately, despite the (albeit only recently stated) intent of
> flag, -Wall is widely used outside of development [3], often with the
> expectation that the result be warning-clean across multiple GHC
> versions. While we hope that -Wall will see less use in this context in
> the future, [...]


Seeing -Wall this way is quite unusual, especially for people coming from
C/C++ (and the numbers quoted from Hackage seem to be a hint that others
think so, too). Normally, -Wall -Wextra -pedantic etc. are life-savers and
should be kept enabled all the time, unless you like endless debugging
hours, of course.

In a nutshell: I would consider -Wall implying -Wcompat an annoyance, but
as long as it can be disabled by 2 lines in .travis.yml, I don't really
care. ;-)

Cheers,
   S.


What is causing the "Unrecognized field abi" warning?

2015-11-29 Thread Sven Panne
This shows up in more recent builds of my packages on Travis CI, e.g.
https://travis-ci.org/haskell-opengl/OpenGL/jobs/93814917#L296, but I don't
have a clue what's causing it (GHC? cabal? Something else?), if I should
ignore it or if I should somehow act on that. :-/ My gut feeling is that
it's GHC, but I might be wrong...

Cheers,
   S.


Re: What is causing the "Unrecognized field abi" warning?

2015-11-29 Thread Sven Panne
2015-11-29 21:42 GMT+01:00 Edward Z. Yang :

> [...] If the messages are bothersome, we could setup Cabal to not print out
> fields if it knows that GHC doesn't support them.
>

I would very much appreciate that, especially given the fact that "old
versions of GHC" include GHC HEAD from Herbert's ppa. ;-)

Cheers,
   S.


Re: Pattern Synonym Signature Confusion

2015-10-01 Thread Sven Panne
2015-10-01 13:23 GMT+02:00 Matthew Pickering :

> I think that the current state of pattern synonym signatures is quite
> confusing, especially regarding the constraints. [...]


Thanks to an off-list email from Matthew (thanks for that!) I found out that

   pattern FOO = 1234 :: Int

behaves differently from

   pattern FOO :: Int
   pattern FOO = 1234

In the former case one has to use ScopedTypeVariables, in the latter case
it works without it. This is not really intuitive, although I'll have to
admit that I've had only a cursory look at the "Typing of pattern synonyms"
section in the GHC manual. But even after re-reading it, it's not really
clear where the difference in the above example comes from.
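
For illustration, a minimal sketch of the second spelling, the standalone
pattern signature, which works without ScopedTypeVariables (assuming a GHC
with PatternSynonyms and standalone pattern signatures, i.e. 8.0 or later):

```haskell
{-# LANGUAGE PatternSynonyms #-}

-- A standalone signature for the synonym, separate from its definition.
pattern FOO :: Int
pattern FOO = 1234

describe :: Int -> String
describe FOO = "the magic number"
describe _   = "something else"

main :: IO ()
main = putStrLn (describe 1234)
```

The first spelling, `pattern FOO = 1234 :: Int`, instead puts the
annotation on the right-hand side expression, which is why it falls under
the ScopedTypeVariables rules.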

So in a nutshell: +1 for the "quite confusing" argument.

Cheers,
   S.


Re: HEADS UP: Need 7.10.3?

2015-09-17 Thread Sven Panne
Building Haddock documentation on Windows for larger packages (e.g.
OpenGLRaw) is broken in 7.10.2, similar to linking: The reason is once
again the silly Windows command line length limitation, so we need response
files here, too. Haddock 2.16.1 already has support for this, but this
seems to be broken (probably
https://github.com/haskell/haddock/commit/9affe8f6b3a9b07367c8c14162aecea8b15856a6
is missing), see the corresponding check in cabal (
https://github.com/haskell/cabal/blob/master/Cabal/Distribution/Simple/Haddock.hs#L470
).

So in a nutshell: We would need a new Haddock release (bundled with GHC
7.10.3) and a new cabal release with support for Haddock response files (in
cabal's HEAD, but not yet released). Would this be possible?


Re: Making compilation results deterministic (#4012)

2015-09-15 Thread Sven Panne
2015-09-14 19:04 GMT+02:00 Bartosz Nitka :

> [...] Uniques are non-deterministic [...]
>

Just out of curiosity: Why is this the case? Naively, I would assume that
you can't say much about the value returned by getKey, but at least I would
think that in repeated program runs, the same sequence of values would be
produced. Well, unless somehow the values depend on pointer values, which
will screw this up because of ASLR.


Re: Strange Changelog.md formatting on Hackage

2015-08-06 Thread Sven Panne
2015-08-06 9:48 GMT+02:00 Joachim Breitner <m...@joachim-breitner.de>:

> Dear Sven,
>
> Am Mittwoch, den 05.08.2015, 16:57 +0200 schrieb Sven Panne:
> > The formatting of
> > https://hackage.haskell.org/package/StateVar-1.1.0.1/changelog is
> > garbled, while the corresponding GitHub page
> > https://github.com/haskell-opengl/StateVar/blob/master/CHANGELOG.md
> > looks OK. Can somebody give me a hint why this happens?
> > https://hackage.haskell.org/package/lens-4.12.3/changelog e.g. looks
> > nice, but the markdown seems to be similar.
>
> one difference is that StateVar's changelog has CRLF-terminated lines,
> while lenses' does not. This is likely a bug in hackage-server, you
> might want to open an issue there.


That's a likely explanation, because I did the 'cabal sdist' on Windows.
It's a bit funny that nobody noticed this so far; Windows seems to be
highly under-represented among Haskell developers compared to Linux/Mac (I
regularly switch). :-/ I've opened
https://github.com/haskell/hackage-server/issues/402, I wasn't even aware
of that GitHub project.


> Also, it is technically off-topic on ghc-dev; haskell-cafe might have
> been more suited for this question.


Granted, but to me the distinction between
haskell/haskell-cafe/ghc-users/ghc-dev is always a bit blurry and IMHO
there are too many fragmented lists, especially given that more and more
tools are involved in the Haskell ecosystem, so this will get worse. But
that's just my personal view :-)


Strange Changelog.md formatting on Hackage

2015-08-05 Thread Sven Panne
The formatting of
https://hackage.haskell.org/package/StateVar-1.1.0.1/changelog is garbled,
while the corresponding GitHub page
https://github.com/haskell-opengl/StateVar/blob/master/CHANGELOG.md looks
OK. Can somebody give me a hint why this happens?
https://hackage.haskell.org/package/lens-4.12.3/changelog e.g. looks nice,
but the markdown seems to be similar.

Cheers,
   S.


Re: ANNOUNCE: GHC 7.10.2 Release Candidate 2

2015-07-07 Thread Sven Panne
2015-07-07 7:26 GMT+02:00 Mark Lentczner <mark.lentcz...@gmail.com>:

> And now Windows RC2 for Haksell Platform is also here:
>
> http://www.ozonehouse.com/mark/platform/
>
> [...]


I noticed 2 problems so far:

* The package cache is still always out of date (I thought there was a fix
for that):

--
Sven@SCOTTY /d/Repositories/ObjectName (master)
$ ghc-pkg list
WARNING: cache is out of date: c:/Program Files/Haskell
Platform/7.10.2\lib\package.conf.d\package.cache
ghc will see an old view of this package db. Use 'ghc-pkg recache' to fix.
c:/Program Files/Haskell Platform/7.10.2\lib\package.conf.d:
Cabal-1.22.4.0
GLURaw-1.5.0.1
GLUT-2.7.0.1
[...]
--

* Something is missing/misconfigured for Haddock (note the funny non-local
path in the error message):

--
Sven@SCOTTY /d/Repositories/ObjectName (master)
$ cabal sandbox init
Writing a default package environment file to
d:\Repositories\ObjectName\cabal.sandbox.config
Creating a new sandbox at D:\Repositories\ObjectName\.cabal-sandbox

Sven@SCOTTY /d/Repositories/ObjectName (master)
$ cabal configure
Resolving dependencies...
Configuring ObjectName-1.1.0.0...

Sven@SCOTTY /d/Repositories/ObjectName (master)
$ cabal haddock
Running Haddock for ObjectName-1.1.0.0...
Preprocessing library ObjectName-1.1.0.0...
Haddock coverage:
 100% (  3 /  3) in 'Data.ObjectName'
Haddock's resource directory
(G:\GitHub\haddock\.cabal-sandbox\x86_64-windows-ghc-7.10.1.20150630\haddock-api-2.16.1)
does not exist!
--


Re: ANNOUNCE: GHC 7.10.2 Release Candidate 2

2015-07-07 Thread Sven Panne
2015-07-07 13:30 GMT+02:00 Thomas Miedema <thomasmied...@gmail.com>:

> On Tue, Jul 7, 2015 at 10:54 AM, Sven Panne <svenpa...@gmail.com> wrote:
>
> > * The package cache is still always out of date (I thought there was a
> > fix for that):
>
> Please reopen https://ghc.haskell.org/trac/ghc/ticket/10205 with the
> output of `which ghc-pkg` and `ghc-pkg list -v`.


Done.


Re: Abstract FilePath Proposal

2015-07-04 Thread Sven Panne
2015-07-04 22:48 GMT+02:00 <amin...@gmail.com>:

> I'd argue that Haskell and GHC's history clearly shows we've answered that
> question and that overall we value frequent small breaking changes over
> giant change roadblocks like Perl's or Python's. [...]


I'm not sure that "value" is the right word. My impression is more that
this somehow happened accidentally and was not the result of careful
planning or broad consensus. And even if in the past this might have been
the right thing, I consider today's state of affairs as something totally
different: In the past it was only GHC, small parts of the language or a
handful of packages (or even just a few modules, in the pre-package times).
Today every change resonates through thousands of packages on Hackage and
elsewhere. IMHO some approach similar to e.g. C++03 → C++11 → C++14 makes
more sense in a world like this than a constantly fluctuating base, but
others might see this differently. My fear is that this will inevitably
lead to the necessity of having an autoconf-like feature detection
machinery to compile a package, and looking at a few packages, we are
already halfway there. :-/


Re: Abstract FilePath Proposal

2015-07-04 Thread Sven Panne
2015-07-04 4:28 GMT+02:00 Carter Schonwald <carter.schonw...@gmail.com>:

> [...] What fraction of currently buildable hackage breaks with such an
> API change, and how complex will fixing those breaks be. [...]


I think it is highly irrelevant how complex fixing the breakage is, it will
probably almost always be trivial, but that's not the point: Think e.g.
about a package which didn't really need any update for a few years, whose
maintainer is inactive (nothing to do recently, so that's OK), and which is
a transitive dependency of a number of other packages. This will effectively
mean lots of broken packages for weeks or even longer. Fixing breakage from
the AMP or FTP proposals was trivial, too, but nevertheless a bit painful.

> This should be evaluated. And to what extent can the appropriate
> migrations be mechanically assisted.
> Would some of this breakage be mitigated by changing ++ to be monoid or
> semigroup merge?


To me the fundamental question which should be answered before any detail
question is: Should we go on and continuously break minor things (i.e.
basically give up any stability guarantees) or should we collect a bunch of
changes first (leaving vital things untouched for that time) and release
all those changes together, in longer intervals? That's IMHO a tough
question which we somehow avoided to answer up to now. I would like to see
a broader discussion like this first, both approaches have their pros and
cons, and whatever we do, there should be some kind of consensus behind it.

Cheers,
   S.

P.S.: Just for the record: I'm leaning towards the
lots-of-changes-after-a-longer-time approach, otherwise I see a flood of
#ifdefs and tons of failing builds coming our way... :-P


Re: Abstract FilePath Proposal

2015-06-28 Thread Sven Panne
2015-06-28 16:47 GMT+02:00 Boespflug, Mathieu <m...@tweag.io>:

> Notice that the kind of normalization I'm talking about, specified in
> the link I provided, does not include this kind of normalization.
> Because it requires the IO monad to perform correctly, and only on
> real paths.
>
> Here is the link again:
>
> https://hackage.haskell.org/package/filepath-1.1.0.2/docs/System-FilePath-Posix.html#v:normalise
> [...]


OK, then I misunderstood what you meant by normalizing. But a question
remains: What is the use case for having equality modulo normalise? It
identifies a few more paths which plain equality on strings would
consider different, but it is still not semantic equality in the sense of
"these 2 paths refer to the same dir/file". So unless there is a
compelling use case (which I don't see), I see no point in always doing
normalise. Or am I missing something?
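
For illustration, here is roughly what equality modulo normalise does and
does not identify (a small sketch using System.FilePath.Posix):

```haskell
import System.FilePath.Posix (equalFilePath, normalise)

main :: IO ()
main = do
  -- normalise removes "." components and redundant separators,
  -- so equalFilePath identifies these two spellings...
  print (normalise "/foo/./bar")                -- "/foo/bar"
  print (equalFilePath "/foo/./bar" "/foo/bar") -- True
  -- ...but it does not resolve "..", so two paths that may well
  -- refer to the same file still compare as different:
  print (normalise "/foo/../bar")               -- "/foo/../bar"
  print (equalFilePath "/foo/../bar" "/bar")    -- False
```

So normalise-based equality sits somewhere between plain string equality
and "these 2 paths refer to the same dir/file", which is the point above.
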


Re: [Haskell-cafe] RFC: Native -XCPP Proposal

2015-05-06 Thread Sven Panne
2015-05-06 16:21 GMT+02:00 Bardur Arantsson s...@scientician.net:
 +1, I'll wager that the vast majority of usages are just for version
 range checks.

The OpenGL-related packages used macros to generate some binding magic
(a foreign import plus some helper functions for each API entry),
not just range checks. I had serious trouble when Apple switched to
clang, so as a quick fix, the macro-expanded (via GCC's CPP) sources
had been checked in. :-P Nowadays the binding is generated from the
OpenGL XML registry file, so this is not an issue anymore.

 If there are packages that require more, they could just keep using the
 system-cpp or, I guess, cpphs if it gets baked into GHC. Like you, I'd
 want to see real evidence that that's actually worth the
 effort/complication.

Simply relying on the system CPP doesn't work due to the various
differences between GCC's CPP and the one from clang, see e.g.
https://github.com/haskell-opengl/OpenGLRaw/issues/18#issuecomment-31428380.
Ignoring the problem doesn't make it go away... ;-)

Note that we still need CPP to handle the various calling conventions
on the different platforms when the FFI is used, so it's not only
range checks, see e.g.
https://github.com/haskell-opengl/OpenGLRaw/blob/master/OpenGLRaw.cabal#L588.
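
The calling-convention dispatch meant here looks roughly like this — a
sketch, not the actual OpenGLRaw header (the real macro lives in
include/HsOpenGLRaw.h), with `sin` standing in for a GL entry point (so
on a real 32-bit Windows build the stdcall branch would apply to the GL
functions, not to sin):

```haskell
{-# LANGUAGE CPP, ForeignFunctionInterface #-}

-- On 32-bit Windows, system libraries like opengl32.dll use stdcall;
-- everywhere else it is plain ccall.  CPP selects the convention at
-- compile time, which is exactly the kind of use that goes beyond
-- version range checks.
#if defined(mingw32_HOST_OS) && !defined(x86_64_HOST_ARCH)
#define CALLCONV stdcall
#else
#define CALLCONV ccall
#endif

foreign import CALLCONV unsafe "math.h sin" c_sin :: Double -> Double

main :: IO ()
main = print (c_sin 0)   -- 0.0
```
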


Re: Branchless implementation for literal case – is it worth it?

2015-04-20 Thread Sven Panne
2015-04-19 9:44 GMT+02:00 Joachim Breitner m...@joachim-breitner.de:
 [...] So my question to the general audience: Is such branchless code really
 better than the current, branching code? Can someone provide us with an
 example that shows that it is better? Do I need to produce different
 branchless assembly? [...]

Just a few war stories regarding this from the trenches (= Chrome's
JavaScript JIT):

   * Branchless code in itself is a non-goal. What you care about is
performance and/or code size, and neither has a direct
relationship to using branches or not.

   * Hacker's Delight is a cool book, but don't use the bit fiddling
tricks in there blindly. We actually reverted to straightforward code
with branches from some seemingly better branchless code, because
even with branches the performance was better.

   * Even within a processor family like x64, performance
characteristics vary vastly. What can be a performance improvement for
the modern beefy Xeon machine you're benchmarking on, can make things
worse for a low-end Atom. The same holds in the other direction.

   * The same holds for different architectures, i.e. an
optimization which makes things fast on most Intel cores could make
things worse on e.g. ARM cores (and vice versa).

   * On more powerful cores with heavy out-of-order execution, it's
hard to beat a well-predicted branch.

   * On Linux, the perf tool is your best friend. Without it you don't
have a clue what's making your code slower than expected: it could be
bad branch prediction, stalled units within the CPU, bad luck with
caches, etc.

   * Micro-benchmarks can be highly misleading, e.g. due to totally
different branching patterns, cache usage, etc.

In a nutshell: If you don't know the details of the architecture
you're compiling for, you simply don't know if the optimization you
have in mind actually makes things better or worse. Therefore these
kind of decision have to be pushed very far towards the end of the
compiler pipeline. Having some kind of feedback about previous runs of
the same code is very helpful, too, but this is a bit complicated in a
batch compiler (but doable).
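
For concreteness, here is the kind of Hacker's Delight bit fiddling
meant above — a branch-free minimum for fixed-width integers. This is a
sketch (`branchlessMin` is a made-up name) and it assumes the usual
two's complement Int:

```haskell
import Data.Bits (xor, (.&.))

-- min(x,y) = y XOR ((x XOR y) AND -(x < y)): when x < y the mask is
-- all ones and the result is x; otherwise the mask is zero and the
-- result is y.  Note that the comparison itself may still compile to
-- a branch unless the backend emits a set/select instruction — which
-- is exactly why "branchless" source is no guarantee of anything.
branchlessMin :: Int -> Int -> Int
branchlessMin x y = y `xor` ((x `xor` y) .&. negate (fromEnum (x < y)))

main :: IO ()
main = print (map (uncurry branchlessMin) [(3, 7), (7, 3), (-2, 5)])
```
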


Re: Branchless implementation for literal case – is it worth it?

2015-04-20 Thread Sven Panne
2015-04-20 16:41 GMT+02:00 Joachim Breitner m...@joachim-breitner.de:
 The conclusion I draw from your mail, at last for our case, is:
 Don’t bother (and keep the compiler code simple). Is that a correct
 reading?

Yes, I think that's the right approach. Do simple things, e.g.
distinguish between a few sparse cases (=> if cascade), lots of sparse
cases (=> some kind of decision tree) and a non-trivial number of
dense cases (=> jump table). For the jump table, consider computed jumps
(e.g. to a table of relative jumps, which is a common implementation
for PIC) and indirect jumps (the usual address table) in the backend.
The former can be faster than the latter (for some ARM cores IIRC),
which is not exactly intuitive. Your mileage may vary. :-)
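
The distinction above could look something like this — a hypothetical
heuristic (names and thresholds are made up, this is not GHC's actual
code):

```haskell
data SwitchPlan = IfCascade | DecisionTree | JumpTable
  deriving (Eq, Show)

-- Pick a lowering strategy from the literal values of the case
-- alternatives: few alternatives => compare-and-branch cascade,
-- dense => jump table, many-but-sparse => binary decision tree.
chooseSwitch :: [Int] -> SwitchPlan
chooseSwitch alts
  | length alts <= 4 = IfCascade
  | density >= 0.5   = JumpTable
  | otherwise        = DecisionTree
  where
    range   = maximum alts - minimum alts + 1
    density = fromIntegral (length alts) / fromIntegral range :: Double

main :: IO ()
main = mapM_ (print . chooseSwitch) [[1, 2, 3], [1 .. 20], [0, 100 .. 900]]
```
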

As I said: This is our experience in a JIT, so I'd be interested in
other people's experience, too, especially for real-world programs,
not micro-benchmarks.


Re: GHC 7.10.1 html docs all flat?

2015-04-01 Thread Sven Panne
2015-03-31 20:20 GMT+02:00 Randy Polen randyhask...@outlook.com:
 [...] Just want to make sure this is what is expected, and then change the HP
 build code accordingly. [...]

Hmmm, unless there is a strong reason to change this into a flat
layout, I would propose to keep the docs hierarchical. I could
envision clashes and tooling problems with the flat layout, but I fail
to see why it should be better. To me, it just looks like a bug. If
it's not a bug, could somebody point me to the discussion regarding
the rationale behind the flat layout?

Another related point: All "Source" links on haskell.org are currently
broken, so the docs need to be regenerated and re-uploaded, anyway.


Re: HP 2015.2.0.0 and GHC 7.10

2015-03-25 Thread Sven Panne
2015-03-25 7:31 GMT+01:00 Mark Lentczner mark.lentcz...@gmail.com:
 [...] look over the source file at GitHub that defines the release
 see if the version of your stuff looks right

The OpenGLRaw and GLUTRaw versions are OK, for OpenGL and GLUT we
should use newer versions I just released (OpenGL-2.12.0.0 and
GLUT-2.7.0.0). These contain much requested API
additions/generalizations.

Furthermore, due to other popular requests like generalizing things
and making OpenAL stuff independent from OpenGL, I split off improved
StateVar (thanks to Edward Kmett!) and ObjectName packages again,
which are now additional dependencies. This move was made in spite of
all those bikeshedding discussions around them, because several actual
package users requested them. (Listen to your customers!) I simply
don't want to be held back by eternal theoretical discussions anymore,
instead let's ship what people actually want.

 [...] look near the end where there is a bunch of stuff I kinda just added to get it all to compile [...]

As mentioned above, we need StateVar-1.1.0.0 and ObjectName-1.1.0.0 now, too.

A +1 for including exceptions, but why version 0.6.1, which is quite
old? I would propose including the latest and greatest 0.8.0.2
version.

<rant>Regarding the random packages: Shipping 2 packages for basically
the same thing is silly IMHO and a bad sign for something which claims
to be a coherent set of APIs. Either these packages should be merged
or the dependencies adapted. Offering choice for the same task has no
place in the platform IMHO, we should ship what is considered the
best/most widely used for each area. For the more arcane needs,
there's always Hackage...</rant>


Re: FYI: Cabal-1.22.1.0 has been released

2015-02-22 Thread Sven Panne
2015-02-22 13:35 GMT+01:00 Johan Tibell johan.tib...@gmail.com:
 We will probably want to ship that with GHC 7.10.

My usual request: A Ubuntu package for this in Herbert's Ubuntu PPA.
:-) This way we can thoroughly test things in various combinations on
Travis CI.


Re: Put Error: before error output

2015-01-25 Thread Sven Panne
2015-01-23 12:55 GMT+01:00 Konstantine Rybnikov k...@k-bx.com:
 If warnings will be treated as errors it's fine to have Error: shown for
 them, I think.

+1 for this, it is how e.g. gcc behaves with -Werror, too. So unless
there is a compelling reason to do things differently (which I don't
see here), I would just follow conventional behavior instead of being
creative. :-)


Re: ANNOUNCE: GHC 7.8.4 Release Candidate 1

2014-11-26 Thread Sven Panne
2014-11-25 20:46 GMT+01:00 Austin Seipp aus...@well-typed.com:
 We are pleased to announce the first release candidate for GHC 7.8.4:

 https://downloads.haskell.org/~ghc/7.8.4-rc1/ [...]

Would it be possible to get the RC on
https://launchpad.net/~hvr/+archive/ubuntu/ghc? This way one could
easily test things on Travis CI.


Re: More flexible literate Haskell extensions (Trac #9789), summary on wiki

2014-11-20 Thread Sven Panne
2014-11-20 9:36 GMT+01:00 Joachim Breitner m...@joachim-breitner.de:
 [...] With your extensions it will have to read the directory contents. In
 most situations, that should be fine, but it might cause minor
 inconveniences with very large directories, many search paths (-i flags)
 and/or very weird file systems (compiling from a FUSE-mounted
 HTTP-Server that does not support directory listing? Would work now...)

Hmmm, IMHO reading directory contents is not a good idea for a
compiler, for just the reasons you mentioned.

 A fixed set of extensions (e.g. just md and tex) would avoid this
 problem, but goes against the spirit of the proposal.

I think we can get the best of both worlds by adding a compiler flag,
e.g. --literate-extensions=md,tex. This way the compiler still has to
probe only a finite set of filenames *and* we are still flexible.

Cheers,
   S.


Re: Can't install packages with my inplace compiler

2014-11-05 Thread Sven Panne
Hmmm, is this cabal mess the reason for the problems with GHC head and
Cabal head on https://travis-ci.org/haskell-opengl/StateVar/jobs/39533455#L102?
I've brought up the problem in another thread, but there was no
conclusion. As it is, there seems to be no way to test things with GHC
head on Travis CI, which is really bad. :-/ What can be done here?


Re: cabal sdist trouble with GHC from head

2014-10-30 Thread Sven Panne
2014-10-29 23:55 GMT+01:00 Herbert Valerio Riedel hvrie...@gmail.com:
 Fyi, there's a `cabal-install-head` package now[1] (may take a few
 minutes till its properly published in the PPA though); please test it
 and lemme know if it works as expected...

  [1]: 
 https://launchpad.net/~hvr/+archive/ubuntu/ghc/+sourcepub/4539223/+listing-archive-extra

Thanks for uploading this. Nevertheless, GHC from head and cabal from
head still don't like each other:

  https://travis-ci.org/haskell-opengl/StateVar/jobs/39470537#L103

I get cabal: Prelude.chr: bad argument: 3031027... o_O


Re: cabal sdist trouble with GHC from head

2014-10-30 Thread Sven Panne
2014-10-30 17:20 GMT+01:00 Austin Seipp aus...@well-typed.com:
 [...] So this just means that Cabal isn't necessarily *future compatible*
 with future GHCs - they may change the package format, etc. But it is
 backwards compatible with existing ones.

OK, that's good to know. To be sure, I've just tested Cabal head + GHC
7.8.3, and it works. But as I've mentioned already, there seems to be
*no* Cabal version which works with GHC head:
https://travis-ci.org/haskell-opengl/StateVar/builds/39533448 Is this
known to the Cabal people?


Re: cabal sdist trouble with GHC from head

2014-10-23 Thread Sven Panne
2014-10-22 15:16 GMT+02:00 Sven Panne svenpa...@gmail.com:
 Does anybody have a clue what's going wrong at the sdist step here?

https://travis-ci.org/haskell-opengl/OpenGLRaw/jobs/38707011#L104

 This only happens with a GHC from head, a build with GHC 7.8.3 is fine:

https://travis-ci.org/haskell-opengl/OpenGLRaw/jobs/38707010

 Any help highly appreciated...

I would really need some help here, even adding a few more diagnostic
things to the Travis CI configuration didn't give me a clue what's
going wrong:

   https://travis-ci.org/haskell-opengl/OpenGLRaw/jobs/38813449#L110

I totally fail to understand why Cabal's sdist step works with every
released compiler, but not with a GHC from head. I don't even know if
this is a Cabal issue or a GHC issue. The relevant part from the
Travis CI log is:

   ...
   cabal-1.18 sdist --verbose=3
   creating dist/src
   creating dist/src/sdist.-3586/OpenGLRaw-1.5.0.0
   Using internal setup method with build-type Simple and args:
   
[sdist,--verbose=3,--builddir=dist,--output-directory=dist/src/sdist.-3586/OpenGLRaw-1.5.0.0]
   cabal-1.18: dist/setup-config: invalid argument
   The command cabal-1.18 sdist --verbose=3 exited with 1.
   ...

As can be seen from the log, dist/setup-config is there and can be accessed.

Confused,
   S.


Re: cabal sdist trouble with GHC from head

2014-10-23 Thread Sven Panne
2014-10-23 15:01 GMT+02:00 Alan  Kim Zimmerman alan.z...@gmail.com:
 cabal has changed for HEAD, you need to install 1.21.1.0

Hmmm, so we *force* people to update? o_O Perhaps I've missed an
announcement, and I really have a hard time deducing this from the
output on Travis CI. Is 1.21.1.0 backwards-compatible with previous
GHCs? Or do I have to set up something more or less complicated
depending on the GHC version (which would be unfortunate)?


Re: Making GHCi awesomer?

2014-10-22 Thread Sven Panne
2014-10-22 3:20 GMT+02:00 Carter Schonwald carter.schonw...@gmail.com:
 i'm pretty sure they're usable in ghci... i think theres just certain flags
 that need to be invoked for one reason or another, but I could be wrong (and
 i've not tried in a while)

I just gave a few OpenGL/GLUT examples a try with the 2014.2.0.0
platform on Ubuntu 14.04.1 (x64), and things work nicely using plain
ghci without any flags. I remember there were some threading issues
(OpenGL uses TLS), but obviously that's not the case anymore. Hmmm,
has something changed?

Anyway, I would be interested in any concrete problems, too.


cabal sdist trouble with GHC from head

2014-10-22 Thread Sven Panne
Does anybody have a clue what's going wrong at the sdist step here?

   https://travis-ci.org/haskell-opengl/OpenGLRaw/jobs/38707011#L104

This only happens with a GHC from head, a build with GHC 7.8.3 is fine:

   https://travis-ci.org/haskell-opengl/OpenGLRaw/jobs/38707010

Any help highly appreciated...

Cheers,
   S.


Re: [Haskell] ANNOUNCE: GHC version 7.8.3

2014-09-01 Thread Sven Panne
2014-09-01 9:26 GMT+02:00 Simon Marlow marlo...@gmail.com:
 Hi Sven - you would need to compile the module with -dynamic or -dynamic-too
 to have it be picked up by the new dynamically-linked GHCi in 7.8.

Ah, OK... Adding -dynamic makes this work, but with -dynamic-too, ghci
still loads the interpreted version. I didn't follow the linking story
in detail, so this is all a bit confusing. I think that at least
http://www.haskell.org/ghc/docs/7.8.3/html/users_guide/ghci-compiled.html
needs some update, because as it is, it doesn't reflect reality.


Re: Release building for Windows

2014-08-04 Thread Sven Panne
2014-08-04 14:59 GMT+02:00 Mikhail Glushenkov the.dead.shall.r...@gmail.com:
 https://ghc.haskell.org/trac/ghc/wiki/GHC-7.8-FAQ

Hmmm, this isn't very specific, it just says that there are probably
bugs, but that's true for almost all code. :-) Are there any concrete
issues with --enable-split-objs?

 One of the problems is that split-objs is extremely slow, especially
 on Windows. I had to disable split-objs for OpenGL-related libraries
 when building the HP installer in the past because of this.

I think it's perfectly fine if the compilation of the library
itself takes ages if it pays off later: You compile the library once,
but link against it multiple times. Or do the link times against e.g.
OpenGL stuff suffer? My point is: Do we make the right trade-off here?
A quick search brought up e.g.
https://github.com/gentoo-haskell/gentoo-haskell/issues/169 which
seems to be a request to split everything.

 Randy also said that libraries built with split-objs don't work well
 in ghci on Windows x64.

Is there an issue for this?


Re: At long last: 2014.2.0.0 Release Candidate 1

2014-07-25 Thread Sven Panne
2014-07-24 16:24 GMT+02:00 Mark Lentczner mark.lentcz...@gmail.com:
 On Thu, Jul 24, 2014 at 1:25 AM, Sven Panne svenpa...@gmail.com wrote:
 The source tarball is missing a few files for hptool:
 I'll try to catch them the next round... or pull requests on github welcome!

The structure on GitHub is a bit confusing: It took me some time to
figure out that "master" is probably irrelevant by now, and
"new-build" contains the stuff for the upcoming HP. Is that correct?
If yes, one should probably merge things back to master and base the
HP releases directly on that. As it is, things are quite puzzling for
new people...

 On Thu, Jul 24, 2014 at 4:22 AM, Sven Panne svenpa...@gmail.com wrote:
 ...But I'm a bit clueless how to proceed from that point: What I actually
 want is a complete installation of the HP under a given prefix.
 Your build is using the genericOS set up in src/OS/Internal.hs. In the past,
 linux/unix distribution packagers have worked from the src tarball, and when
 I queried in the past, none said they used the build scripts HP had, but
 used their own. And in turn, no one put in any effort over the last six
 months to add a build description using the new build system other than OS X
 and Windows.

Ah, OK, that was unclear to me, and it is a rather serious regression
compared to the previous platform release: With 2013.2.0.0 one could
easily specify a --prefix in the configure step, and everything worked
as expected. I really think that this use case should be resurrected
before the new HP is released: There are various reasons why pre-built
Linux packages are not an option for many people (having to wait until
a package is ready/fixed/etc. for one's distro, no root access, etc.),
so the --prefix option was and still is important here.

What about packages being unregistered? This seems to be a bug in
hptool, and it effectively makes the stuff below build/target
unusable.

 SO - what you, and we, need is a src/OS/GenericLinux.hs file so that it does
 you/we think is the generic layout for where things should live. For your
 own, peronsal install, you could start by just hacking on the genericOS
 structure in src/OS/Internal.hs. [...]

Guess what I did... ;-)


 [...] Furthermore, the executables are scattered around:

 build/target/usr/local/haskell-platform/2014.2.0.0/lib/alex-3.1.3/bin/alex
 build/target/usr/local/haskell-platform/2014.2.0.0/lib/cabal-install-1.18.0.5/bin/cabal
 build/target/usr/local/haskell-platform/2014.2.0.0/lib/happy-1.19.4/bin/happy
 build/target/usr/local/haskell-platform/2014.2.0.0/lib/hscolour-1.20.3/bin/HsColour

 My rationale is that if you install things under /usr/bin, /usr/lib,
 /usr/share/doc - then it becomes essentially impossible to remove easily:
 There are just far too many files in those trees intermingled with files
 from other things. Same is true if you do /usr/local/...

If you either use --prefix or assemble a package for your distro from
the tree below build/target, that's a non-issue. As mentioned
above, we need --prefix anyway, so we should have a single
{bin,lib,share,...} structure directly below it. Even keeping the
distinction between the GHC parts and the rest of the platform is not
useful from a user perspective and is to large parts totally artificial:
Why e.g. is haddock below the GHC part, but alex below the HP part? This
is by pure technical accident, and it's unimportant when one installs
the HP as a whole.

 [...] Does this make sense for "simple untar, run a script" kind of
 distributions?

Partly, yes. It doesn't matter in detail how things happen, but we
need the ability of the simple and easy "./configure --prefix=foo &&
make && make install" from the 2013.2.0.0 HP back in the new HP source
release.

I hope I don't sound too negative; you're doing great (and
unpleasant) work. I just want to make sure that we don't regress on
Linux platforms...


Re: At long last: 2014.2.0.0 Release Candidate 1

2014-07-24 Thread Sven Panne
The source tarball is missing a few files for hptool:

   hptool/src/HaddockMaster.hs
   hptool/src/OS/Win.hs
   hptool/src/Releases.hs
   hptool/src/Releases2012.hs
   hptool/src/Releases2013.hs
   hptool/src/Templates.hs
   hptool/src/Website.hs

I guess these are missing from
https://github.com/haskell/haskell-platform/blob/2014.2.0.0-RC1/hptool/hptool.cabal.
I've just copied those missing files from GitHub, let's see how the
installation from source continues on an x64 Linux...


Re: Folding constants for floats

2014-01-14 Thread Sven Panne
2014/1/14 Carter Schonwald carter.schonw...@gmail.com:
 maybe so, but having a semantics by default is huge, and honestly i'm not
 super interested in optimizations that merely change one infinity for
 another. What would the alternative semantics be?

I'm not sure that I understood your reply: My example regarding -0 was
only demonstrating the status quo of GHCi and is IEEE-754-conformant.
The 1/foo is only used to distinguish between 0 and -0, it is not
about infinities per se.

My point was: As much as I propose to keep these current semantics,
there might be users who care more about performance than
IEEE-754 conformance. For those, relatively simple semantics could be:
"Regarding optimizations, numbers are considered mathematical
numbers, ignoring any rounding and precision issues, and everything
involving -0, NaN, and infinities is undefined." This would open up
optimizations like easy constant folding, transforming 0 + x to x, x -
x to 0, x `op` y to y `op` x for mathematically commutative operators,
associativity, etc.
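
The status quo referred to above is easy to observe, and it also shows
why even 0 + x ==> x is not valid under strict IEEE-754 semantics
(x = -0 is the culprit):

```haskell
main :: IO ()
main = do
  print (0 == (-0 :: Double))            -- True: 0 and -0 compare equal...
  print (1 / 0, 1 / (-0) :: Double)      -- (Infinity,-Infinity): ...but
                                         -- 1/x tells them apart
  -- IEEE-754 says (+0) + (-0) is +0, so rewriting 0 + x to x would
  -- change 1 / (0 + (-0)) from Infinity to -Infinity:
  print (1 / (0 + (-0)) :: Double)       -- Infinity
```
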

I'm not 100% sure how useful this would really be, but I think we
agree that this shouldn't be the default.

Cheers,
   S.


Re: Building GHC head with clang on Mavericks

2014-01-03 Thread Sven Panne
2014/1/2 Carter Schonwald carter.schonw...@gmail.com:
 it looks like their work around is using ## rather than /**/

Well, actually lens is bypassing the problem by using cpphs, not the C
preprocessor. :-P OpenGLRaw is part of the Haskell Platform, and cpphs
is not, so I can't simply depend on it. (Licensing issues IIRC?)
Don't do that is not an option, either, at least not until the
binding is auto-generated. If I see this correctly, I really have to
do some preprocessor magic (slightly simplified output):

-
svenpanne@svenpanne:~$ cat preprocess.hs
#define FOO(x) bla/**/x x
#define BAR(x) bla##x #x
FOO(baz)
BAR(boo)
svenpanne@svenpanne:~$ gcc -traditional -E -x c preprocess.hs
blabaz baz
bla##boo #boo
svenpanne@svenpanne:~$ gcc -E -x c preprocess.hs
bla baz x
blaboo boo
svenpanne@svenpanne:~$ clang -traditional -E -x c preprocess.hs
bla baz x
bla##boo #boo
svenpanne@svenpanne:~$ clang -E -x c preprocess.hs
bla baz x
blaboo boo
-

If -traditional is not used, things are simple and consistent, and we
can simply use ## and #. Alas, -traditional *is* used, so we can't use
## and # with gcc, and we are out of luck with clang. This really sucks,
and I consider the clang -traditional behavior a bug: How can you do
concatenation/stringification with clang -traditional? One can detect
clang via defined(__clang__) and the absence of -traditional via
defined(__STDC__), but this doesn't really help here.

Any suggestions? I am testing with a local clang 3.4 version (trunk
193323), but I am not sure if this matters.


Re: Building GHC head with clang on Mavericks

2014-01-02 Thread Sven Panne
Although it is not really GHC-related, this thread is sufficiently
close to the problem described in
https://github.com/haskell-opengl/OpenGLRaw/issues/18: AFAICT, Mac OS
X 10.9's clang doesn't really honor -traditional, so what can I do to
make things work with recent Macs without breaking all other
platforms? I guess the right #if in
https://github.com/haskell-opengl/OpenGLRaw/blob/master/include/HsOpenGLRaw.h
will do the trick, but I don't have access to a Mac. Hints are highly
appreciated, the whole current Mac situation is a bit of a mystery to
me...

Cheers,
   S.


Re: Building GHC head with clang on Mavericks

2014-01-02 Thread Sven Panne
2014/1/2 Carter Schonwald carter.schonw...@gmail.com:
 sven,http://www.haskell.org/platform/mac.html  has a wrapper script that
 makes clang play nice with CPP, though a simpler alternative one can be
 found on manuel's page [...]

I've seen the wrappers before, but do they really solve the problem
for OpenGLRaw (concatenation via /**/ and replacement in strings)? As
I said, I don't have access to a Mac, but the mangled options don't
look like if they have anything to do with that. Can somebody confirm
that?