Re: Broken Data.Data instances

2014-07-27 Thread Edward Kmett
I'm mostly looking at the Data.Data stuff as "nice to have" at this point.
Well, it really is need-to-have for some users, but they can typically get
by with writing a few hundred lines of boilerplate when it's not there.

If you need to break something internally and it costs us a Data instance
for something? Have at it.

If we can still hack around the changes with Data then great.

Otherwise the Data machinery has always been for expert users who already
deal with a great deal of breakage anyways, so thrashing on that API seems
fine to me. Not desirable, but not unexpected.

-Edward
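
Concretely, the "virtual smart constructor" trick that comes up later in
this thread looks like this; a sketch following the shape of the Data
instance containers already ships for Map (reproduced here only to show
the shape, not its verbatim source):

    import Data.Data
    import Data.Map (Map, fromList, toList)

    -- Generic consumers see a single virtual constructor, "fromList", so
    -- SYB rewrites rebuild through the smart constructor and can never
    -- produce an unbalanced tree or violate the Ord invariant.
    instance (Data k, Data a, Ord k) => Data (Map k a) where
      gfoldl f z m  = z fromList `f` toList m
      toConstr _    = fromListConstr
      gunfold k z c = case constrIndex c of
                        1 -> k (z fromList)
                        _ -> error "gunfold: Map"
      dataTypeOf _  = mapDataType

    fromListConstr :: Constr
    fromListConstr = mkConstr mapDataType "fromList" [] Prefix

    mapDataType :: DataType
    mapDataType = mkDataType "Data.Map.Map" [fromListConstr]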


On Sun, Jul 27, 2014 at 9:49 PM, Richard Eisenberg 
wrote:

> What if there is a good reason for a missing/broken Data.Data instance?
> I'm specifically thinking of GADTs. There are few currently, but I, for
> one, have toyed with the idea of adding more. My recollection is that
> Data.Data doesn't work with GADTs. As a concrete, existent example, see
> CoAxiom.BranchList, which allows for type-level reification of singleton
> lists as distinct from other, not-necessarily-singleton lists.
>
> I would very much like to support API usage that would benefit from
> working Data.Data instances, but I also want to be sure we're not
> eliminating other possible futures without due discussion.
>
> Richard
>
> On Jul 27, 2014, at 2:04 PM, "Alan & Kim Zimmerman" 
> wrote:
>
> Philip
>
> How would you like to take this forward? From my side I would appreciate
> all guidance/help to get it resolved, it is a huge hindrance for HaRe.
>
> Alan
>
>
> On Sun, Jul 27, 2014 at 7:27 PM, Edward Kmett  wrote:
>
>> Philip, Alan,
>>
>> If you need a hand, I'm happy to pitch in guidance.
>>
>> I've had to mangle a bunch of hand-written Data instances and push out
>> patches to a dozen packages that used to be built this way before I
>> convinced the authors to switch to safer versions of Data. Using virtual
>> smart constructors like we do now in containers and Text where needed can
>> be used to preserve internal invariants, etc.
>>
>> This works far better for users of the API than just randomly throwing
>> them a live hand grenade. As I recall, these little grenades in generic
>> programming over the GHC API have been a constant source of pain for
>> libraries like haddock.
>>
>> Simon,
>>
>> It seems to me that regarding circular data structures, nothing prevents
>> you from walking a circular data structure with Data.Data. You can generate
>> a new one productively that looks just like the old with the contents
>> swapped out, it is indistinguishable to an observer if the fixed point is
>> lost, and a clever observer can use observable sharing to get it back,
>> supposing that they are allowed to try.
>>
>> Alternately, we could use the 'virtual constructor' trick there to break
>> the cycle and reintroduce it, but I'm less enthusiastic about that idea,
>> even if it is simpler in many ways.
>>
>> -Edward
>>
>>
>> On Sun, Jul 27, 2014 at 10:17 AM,  wrote:
>>
>>>  Alan,
>>>
>>> In that case, let's have a short feedback-loop between the two of us. It
>>> seems many of these files (Name.lhs, for example) are really stable through
>>> the repo-history. It would be nice to have one bigger refactoring all in
>>> one go (some of the code could use a polish, a lot of code seems removable).
>>>
>>> Regards,
>>> Philip
>>>
>>>  --
>>> *Van:* Alan & Kim Zimmerman [alan.z...@gmail.com]
>>> *Verzonden:* vrijdag 25 juli 2014 13:44
>>> *Aan:* Simon Peyton Jones
>>> *CC:* Holzenspies, P.K.F. (EWI); ghc-devs@haskell.org
>>> *Onderwerp:* Re: Broken Data.Data instances
>>>
>>>   By the way, I would be happy to attempt this task, if the concept is
>>> viable.
>>>
>>>
>>> On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman <
>>> alan.z...@gmail.com> wrote:
>>>
>>>> While we are talking about fixing traversals, how about getting rid
>>>> of the phase specific panic initialisers for placeHolderType,
>>>> placeHolderKind and friends?
>>>>
>>>>  In order to safely traverse with SYB, the following needs to be
>>>> inserted into all the SYB schemes (see
>>>>
>>>> https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs
>>>> )
>>>>
>>>> -- Check the Typeable items
>>>> checkItemStage1 :: (Typeable
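
The guard referenced here has roughly the following shape (reconstructed
from the linked GhcUtils module, so the details, such as the Stage
ordering and the exact placeholder types from the GHC API, are a sketch):

    import Data.Generics (GenericQ, Typeable, extQ, gmapQ)
    -- NameSet, PostTcType and GHC.Fixity come from the GHC API; Stage is
    -- the SYB-utils enumeration Parser | Renamer | TypeChecker.

    -- True when 'x' is a placeholder field that would panic if forced
    -- at this stage of compilation.
    checkItemStage1 :: Typeable a => Stage -> a -> Bool
    checkItemStage1 stage x =
        (const False `extQ` postTcType `extQ` fixity `extQ` nameSet) x
      where
        nameSet    = const (stage `elem` [Parser, TypeChecker]) :: NameSet -> Bool
        postTcType = const (stage < TypeChecker) :: PostTcType -> Bool
        fixity     = const (stage < Renamer) :: GHC.Fixity -> Bool

    -- Each SYB scheme then short-circuits on the check, e.g.:
    everythingStaged :: Stage -> (r -> r -> r) -> r -> GenericQ r -> GenericQ r
    everythingStaged stage k z f x
      | checkItemStage1 stage x = z
      | otherwise = foldl k (f x) (gmapQ (everythingStaged stage k z f) x)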

Re: Overlapping and incoherent instances

2014-07-31 Thread Edward Kmett
Now if only we could somehow find a way to do the same thing for
AllowAmbiguousTypes. :)

I have a 2500 line file that I'm forced to turn on AllowAmbiguousTypes in
for 3 definitions, and checking that I didn't accidentally make something
else ambiguous to GHC's eyes is a rather brutal affair. (I can't break up
the file without inducing orphans)

This is just a passing comment, while I'm thinking about it, not a serious
attempt to derail the topic!

-Edward
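
(For readers who haven't hit it: a minimal, made-up example of the kind of
definition that needs the flag, and of why the module-wide switch is a
blunt instrument.)

    {-# LANGUAGE MultiParamTypeClasses #-}
    {-# LANGUAGE AllowAmbiguousTypes   #-}
    module Ambig where

    class Convert a b where
      convert :: a -> b

    -- 'b' occurs only in the constraint, so no call site can ever
    -- determine it; GHC's ambiguity check rejects this signature unless
    -- AllowAmbiguousTypes is on.
    intended :: Convert a b => a -> a
    intended = id

    -- The cost of the module-wide flag: an *accidentally* ambiguous
    -- signature anywhere else in the file is now accepted just as
    -- silently, and only its call sites explode.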


On Thu, Jul 31, 2014 at 4:13 AM, Simon Peyton Jones 
wrote:

> Andreas, remember that GHC 7.8 already implements (essentially) the same
> algorithm.  The difference is that 7.8 offers only the brutal
> -XOverlappingInstances to control it.  In your example of the decision you
> make when writing
>instance Bla a => Bla [a]
> vs
>instance {-# OVERLAPPABLE #-} Bla a => Bla [a]
> you are, with GHC 7.8, making precisely the same decision when you decide
> whether or not to add {-# LANGUAGE OverlappingInstances #-} to that module.
>  Perhaps that wasn't clear in what I wrote; apologies.
>
> So your proposal seems to be this
>
> don't remove -XOverlappingInstances, because that will prevent
> programmers from "flipping on/off pragmas until their program
> goes through".
>
> It's hard to argue AGAINST providing the opportunity for more careful
> programmers to express their intentions more precisely, which is what the
> OVERLAP/OVERLAPPABLE pragmas do.
>
> Concerning deprecating OverlappingInstances, my gut feel is that it is
> positively a good thing to guide programmers towards a more robust
> programming style.  But my reason for starting this thread was to see
> whether or not others' gut feel is similar.
>
> Simon
>
> | -Original Message-
> | From: Libraries [mailto:libraries-boun...@haskell.org] On Behalf Of
> | Andreas Abel
> | Sent: 31 July 2014 08:59
> | To: Simon Peyton Jones; ghc-devs; GHC users; Haskell Libraries
> | (librar...@haskell.org)
> | Subject: Re: Overlapping and incoherent instances
> |
> | On 31.07.2014 09:20, Simon Peyton Jones wrote:
> | > Friends, in sending my message below, I should also have sent a link
> | > to
> | >
> | > https://ghc.haskell.org/trac/ghc/ticket/9242#comment:25
> |
> | Indeed.
> |
> | Quoting from the spec:
> |
> |   * Eliminate any candidate IX for which both of the following hold:
> | * There is another candidate IY that is strictly more specific;
> |   that is, IY is a substitution instance of IX but not vice versa.
> |
> | * Either IX is overlappable or IY is overlapping.
> |
> | Mathematically, this makes a lot of sense.  But put on the hat of
> | library writers, and users, and users that don't rtfm.  Looking out
> | from under this hat, one may always wonder whether one should make
> | one's generic instances OVERLAPPABLE or not.
> |
> | If I create a library with type class Bla and
> |
> |instance Bla a => Bla [a]
> |
> | I could be a nice library writer and spare my users from declaring
> | their Bla String instances as OVERLAPPING, so I'd write
> |
> |instance {-# OVERLAPPABLE #-} Bla a => Bla [a]
> |
> | Or maybe that would be malicious?
> |
> | I think the current proposal is too sophisticated.  There are no
> | convincing examples given in the discussion so far that demonstrate
> | where this sophistication pays off in practice.
> |
> | Keep in mind that 99% of the Haskell users will never study the
> | instance resolution algorithm or its specification, but just flip
> | on/off pragmas until their code goes through.  [At least that was my
> | approach: whenever GHC asks for one more LANGUAGE pragma, just throw it
> | in.]
> |
> | Cheers,
> | Andreas
> |
> |
> | > Comment 25 describes the semantics of OVERLAPPING/OVERLAPPABLE etc,
> | > which I signally failed to do in my message below, leading to
> | > confusion in the follow up messages.  My apologies for that.
> | >
> | > Some key points:
> | >
> | > * There is a useful distinction between /overlapping/ and
> | > /overlappable/, but if you don't want to be bothered with it you can
> | > just say OVERLAPS (which means both).
> | >
> | > * Overlap between two candidate instances is allowed if /either/ has
> | > the relevant property.  This is a bit sloppy, but reduces the
> | > annotation burden.  Actually, with this per-instance stuff I think
> | > it'd be perfectly defensible to require both to be annotated, but
> | > that's a different discussion.
> | >
> | > I hope that helps clarify.
> | >
> | > I'm really pretty certain that the basic proposal here is good: it
> | > implements the current semantics in a more fine-grained manner.  My
> | > main motivation was to signal the proposed deprecation of the global
> | > per-module flag -XoverlappingInstances.  Happily people generally
> | seem
> | > fine with this.   It is, after all, precisely what deprecations are
> | for
> | > ("the old thing still works for now, but it won't do so for ever, and
> | > you should change as soon as is conv

Re: Forcing apps to collect GC stats?

2014-07-31 Thread Edward Kmett
Interesting.

I suppose ekg could also (ab)use this.

Johan?

-Edward
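
For the archive: the mechanism Simon links below boils down to poking the
RTS flags from C before any Haskell code asks for stats; roughly
(following the shape of ghc/hschooks.c, not quoting it):

    {-# LANGUAGE ForeignFunctionInterface #-}

    -- C side, linked into the binary:
    --
    --   #include "Rts.h"
    --   void initGCStatistics(void) {
    --     if (RtsFlags.GcFlags.giveStats == NO_GC_STATS) {
    --       RtsFlags.GcFlags.giveStats = COLLECT_GC_STATS;
    --     }
    --   }
    --
    -- Haskell side: call this before starting any measurements.
    foreign import ccall safe "initGCStatistics"
      initGCStatistics :: IO ()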


On Thu, Jul 31, 2014 at 5:51 AM, Simon Marlow  wrote:

> Hey Bryan,
>
> Sorry for the delay.
>
>
> On 15/07/14 01:57, Bryan O'Sullivan wrote:
>
>> I spent a bit of time over the weekend trying to figure out how to force
>> the RTS to collect GC statistics, but was unable to do so.
>>
>> I'm currently working on enriching criterion's ability to gather data,
>> among which I'd like to see GC statistics. If I try to obtain GC stats
>> using criterion when I'm not running the benchmark app with +RTS -T, I
>> get an exception.
>>
>> Is there a way to allow criterion to forcibly enable stats collection?
>> My efforts to do so have gotten me nowhere. It would be unfortunate if I
>> had to tell users of criterion that they should always run with +RTS -T
>> or add a -rtsopts clause, as they'll simply forget.
>>
>> And while I'm asking, why does GHC not simply collect GC stats by
>> default? Collecting them seems to have zero cost, from what I can see?
>>
>
> So you can do this in the same way as GHC. See
>
> https://phabricator.haskell.org/diffusion/GHC/browse/
> master/ghc/hschooks.c;6fa6caad0cb4ba99b2c0b444b0583190e743dd63$18-28
>
> Which is imported into Haskell like this:
>
> https://phabricator.haskell.org/diffusion/GHC/browse/master/ghc/Main.hs;
> 6fa6caad0cb4ba99b2c0b444b0583190e743dd63$847-848
>
> I'm not sure why it's marked "safe", but it doesn't hurt.
>
> This API is kind-of public, in the sense that we deliberately expose it
> via the Rts.h header, and I'll try not to break it gratuitously.
>
> Cheers,
> Simon
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: [core libraries] RE: Core libraries bug tracker

2014-08-19 Thread Edward Kmett
Hi Simon,

If you don't mind the extra traffic in the ghc trac, I'm open to the plan
to work there.

I was talking to Eric Mertens a few days ago about this and he agreed to
take lead on getting us set up to actually build tickets for items that go
into the libraries@ proposal process, so we have something helping to force
us to come to a definitive conclusion rather than letting things trail off.

-Edward


On Tue, Aug 19, 2014 at 10:31 AM, Simon Peyton Jones 
wrote:

>  Edward, and core library colleagues,
>
> Any views on this?  It would be good to make progress.
>
> Thanks
>
> Simon
>
>
>
> *From:* ghc-devs [mailto:ghc-devs-boun...@haskell.org] *On Behalf Of *Simon
> Peyton Jones
> *Sent:* 04 August 2014 16:01
> *To:* core-libraries-commit...@haskell.org
> *Cc:* ghc-devs@haskell.org
> *Subject:* Core libraries bug tracker
>
>
>
> Edward, and core library colleagues,
>
> This came up in our weekly GHC discussion
>
> · Does the Core Libraries Committee have a Trac?  Surely, surely
> you should, else you’ll lose track of issues.
>
> · Would you like to use GHC’s Trac for the purpose?   Advantages:
>
> o   People often report core library issues on GHC’s Trac anyway, so
> telling them to move it somewhere else just creates busy-work --- and maybe
> they won’t bother, which leaves it in our pile.
>
> o   Several of these libraries are closely coupled to GHC, and you might
> want to milestone some library tickets with an upcoming GHC release
>
> · If so we’d need a canonical way to identify tickets as CLC
> issues.  Perhaps by making “core-libraries” the owner?  Or perhaps the
> “Component” field?
>
> · Some core libraries (e.g. random) have a maintainer that isn’t
> the committee.  So that maintainer should be the owner of the ticket. Or
> the CLC might like a particular member to own a ticket.  Either way, that
> suggest using the “Component” field to identify CLC tickets
>
> · Or maybe you want a Trac of your own?
>
> The underlying issue from our end is that we’d like a way to
>
> · filter out tickets that you are dealing with
>
> · and be sure you are dealing with them
>
> · without losing track of milestones… i.e. when building a
> release we want to be sure that important tickets are indeed fixed before
> releasing
>
> Simon
>
> --
> You received this message because you are subscribed to the Google Groups
> "haskell-core-libraries" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to haskell-core-libraries+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Random maintainership -- Was: [core libraries] RE: Core libraries bug tracker

2014-08-22 Thread Edward Kmett
I'm pretty sure we'd be up for taking ownership of it as it is a rather
fundamental piece of infrastructure in the community, and easily falls
within our purview.

That said, if you're concerned that you haven't been able to really push
the random library forward the way it deserves to be pushed, realize that
handing it to the committee is going to trade having you as a passionate
but very distracted maintainer for several folks who will mostly act to
keep things alive, that aren't likely to go make big sweeping changes to it.

-Edward


On Fri, Aug 22, 2014 at 5:58 PM, Ryan Newton  wrote:

> Dear core library folks & others,
>
> > On Tue, Aug 19, 2014 at 10:31 AM, Simon Peyton Jones <
> simo...@microsoft.com> wrote:
> > Some core libraries (e.g. random) have a maintainer that isn’t the
> committee.
>
> Ah, since it came up, maybe this is a good time to discuss that particular
> maintainership.  I'm afraid that since it isn't close to my current work
> (and I'm pre-tenure!) I haven't been able to really push the random library
> forward the way it deserves to be pushed these last three years.  Shall we
> move maintainership of it to the core libraries committee?
>
> Also/alternatively "Thomas Miedema " has stepped
> forward as a volunteer for taking over maintainership.
>
> The library was in limbo in part because it was clear that some API
> changes needed to be made, but there wasn't a major consensus-building
> design effort around that topic.  One thing that was already agreed upon
> via the libraries list decision process was to separate out SplittableGen.
>  Duncan Coutts was in favor of this and also (I think) had some other ideas
> about API changes that should be made.
>
> On the implementation front, my hope was that "tf-random" could replace
> random as the default/standard library. Koen and Michal support this, but I
> think they didn't want to become the maintainers themselves yet.  (I think
> that was to maintain some separation, and get buy-in from someone other
> than them, the implementors, before/during the transition).
>
> Best,
>  -Ryan
>
>
>
> On Tue, Aug 19, 2014 at 5:55 PM, Simon Peyton Jones 
> wrote:
> >
> > If you don't mind the extra traffic in the ghc trac, I'm open to the
> plan to work there.
> >
> >
> >
> > OK great.
> >
> >
> >
> > Let’s agree that:
> >
> > · The “owner” of a Core Libraries ticket is the person
> responsible for progressing it – or “Core Libraries Committee” as one
> possibility.
> >
> > · The “component” should identify the ticket as belonging to the
> core libraries committee, not GHC.  We have a bunch of components like
> “libraries/base”, “libraries/directory”, etc, but I’m sure that doesn’t
> cover all the core libraries, and even if it did, it’s probably too fine
> grain.  I suggest having just “Core Libraries”.
> >
> >
> >
> > Actions:
> >
> > · Edward: update the Core Libraries home page (where is that?) to
> point people to the Trac, tell them how to correctly submit a ticket, etc?
> >
> > · Edward: send email to tell everyone about the new plan.
> >
> > · Austin: add the same guidance to the GHC bug tracker.
> >
> > ·    Austin: add “core libraries committee” as something that can be
> an owner.
> >
> > · Austin: change the “components” list to replace all the
> “libraires/*” stuff with “Core Libraries”.
> >
> >
> >
> > Thanks
> >
> >
> >
> > Simon
> >
> >
> >
> >
> >
> > From: haskell-core-librar...@googlegroups.com [mailto:
> haskell-core-librar...@googlegroups.com] On Behalf Of Edward Kmett
> > Sent: 19 August 2014 16:23
> > To: Simon Peyton Jones
> > Cc: core-libraries-commit...@haskell.org; ghc-devs@haskell.org
> > Subject: Re: [core libraries] RE: Core libraries bug tracker
> >
> >
> >
> > Hi Simon,
> >
> >
> >
> > If you don't mind the extra traffic in the ghc trac, I'm open to the
> plan to work there.
> >
> >
> >
> > I was talking to Eric Mertens a few days ago about this and he agreed to
> take lead on getting us set up to actually build tickets for items that go
> into the libraries@ proposal process, so we have something helping to
> force us to come to a definitive conclusion rather than letting things
> trail off.
> >
> >
> >
> > -Edward
> >
> >
> >
> > On Tue, Aug 19, 2014 at 10:31 AM, Simon Peyton Jones <
> simo...@microsoft

Re: Preliminary proposal: Monoidal categories in base and proc notation support

2014-09-15 Thread Edward Kmett
I find PFunctor/QFunctor finer grained than is tasteful. Alas, they are 
necessary for inference for first/second in practice unless you require the 
Bifunctor to determine the category of both of its arguments. The hask package 
is exploring other ways to maintain inference.

If you try to actually use the current categories package without them, well, 
nothing infers.

Sent from my iPad
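
For context, the classes in question look roughly like this in categories
(quoted from memory, so treat as a sketch); the functional dependencies
are what keep first/second inferable:

    {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}
    import Control.Category

    -- Each "half" of a bifunctor determines its target category from its
    -- source category, and vice versa:
    class (Category r, Category t) => PFunctor p r t | p r -> t, p t -> r where
      first  :: r a b -> t (p a c) (p b c)

    class (Category s, Category t) => QFunctor q s t | q s -> t, q t -> s where
      second :: s a b -> t (q c a) (q c b)

    class (PFunctor p r t, QFunctor p s t) => Bifunctor p r s t
        | p r -> s t, p s -> r t, p t -> r s where
      bimap  :: r a b -> s c d -> t (p a c) (p b d)

    -- Fold PFunctor/QFunctor into Bifunctor and 'first' only hands you an
    -- 'r' arrow: 's' is mentioned nowhere, nothing determines it, and
    -- inference dies.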

> On Sep 15, 2014, at 11:32 AM, Sophie Taylor  wrote:
> 
> >Hi Sophie,
> 
> >In your proposal draft, I am missing the rationale part.
> Yeah, I'm still writing it - I definitely need to expand that a bit more.
> >Do we need *all* of these classes in base in order to desugar proc? Can
> you demonstrate why they are needed? Or will something simpler suffice?
> 
> I think I might remove the binoidal class, and remove the PFunctor/QFunctor 
> classes - I included them because I usually find finer grained class 
> hierarchies to be more tasteful; but it probably would make it more 
> frustrating to implement an arrow, for example.
> With SMC classes, proc notation can be desugared to remove a LOT of calls to 
> arr, which allows more fine-grained RULES optimisations to take place, and 
> additional work such as the ModalTypes extension in Adam Megacz Joseph's 
> thesis to be much more straightforward.
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Preliminary proposal: Monoidal categories in base and proc notation support

2014-09-15 Thread Edward Kmett
I'm currently working towards a new release of 'categories', which gets you 
closer to your goal here, if not yet integrated into base. Said release is 
currently blocked on GHC #9200, and in many ways the current work on hask is as 
well. I'd frankly consider that base isn't really in a good place to consider 
adopting either treatment at this point. Doing them right requires poly kinds, 
constraint kinds, etc. This steps pretty far outside of the scope of what we've 
been willing to bake into base to date, and it isn't clear how to do such a 
design tastefully.

One option might be to define a number of combinators in an internal module in 
base with the SMC vocabulary and modify the arrow sugar to desugar in terms of 
those combinators rather than general use of arr; with RebindableSyntax turned 
on, local definitions of the combinators would get selected, and then rules 
would have something to fire against. This would permit experimentation along 
these lines, without baking in assumptions about which of the points in the 
design space is best.

Sent from my iPad
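
Concretely, such a vocabulary might look like this (module name and class
mine, purely illustrative):

    -- Hypothetical internal module the desugarer could target:
    module GHC.Desugar.SMC where

    import Control.Category
    import Prelude ()

    -- Enough symmetric monoidal structure over (,) to eliminate most of
    -- the administrative 'arr' calls a proc block desugars to:
    class Category k => SMC k where
      (***)  :: k a b -> k c d -> k (a, c) (b, d)
      swap   :: k (a, b) (b, a)
      assocR :: k ((a, b), c) (a, (b, c))
      assocL :: k (a, (b, c)) ((a, b), c)

    -- RULES can match on these combinators directly, and under
    -- RebindableSyntax a local definition of the same names would be
    -- selected instead.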

> On Sep 15, 2014, at 11:32 AM, Sophie Taylor  wrote:
> 
> >Hi Sophie,
> 
> >In your proposal draft, I am missing the rationale part.
> Yeah, I'm still writing it - I definitely need to expand that a bit more.
> >Do we need *all* of these classes in base in order to desugar proc? Can
> you demonstrate why they are needed? Or will something simpler suffice?
> 
> I think I might remove the binoidal class, and remove the PFunctor/QFunctor 
> classes - I included them because I usually find finer grained class 
> hierarchies to be more tasteful; but it probably would make it more 
> frustrating to implement an arrow, for example.
> With SMC classes, proc notation can be desugared to remove a LOT of calls to 
> arr, which allows more fine-grained RULES optimisations to take place, and 
> additional work such as the ModalTypes extension in Adam Megacz Joseph's 
> thesis to be much more straightforward.
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Preliminary proposal: Monoidal categories in base and proc notation support

2014-09-15 Thread Edward Kmett
The current categories package tries to let you reuse bifunctors of the form
c * d -> e across base categories, so that e.g. Kleisli m can use (,) for its
product, etc. (in general it isn't a real product for Kleisli m, but let's
pretend). This means we have to adopt a 3-for-2 policy wrt the categories
involved: c e -> d, c d -> e, d e -> c. But when inferring from first or
second you usually know only one, and only need one other, yet can't know the
third from context, since no arrow is supplied. It is underdetermined.

Hask avoids this by presupposing a single set of categories for every 
Bifunctor, but it is a conceit of the package that this is enough in practice, 
not something I know, and we pay a measurable price for this. A lot of stuff is 
more complex there! The current hask design, as opposed to the one I gave a 
talk on and the design of categories loses a lot of what powers lens and the 
like in exchange for this simpler story. I don't make that change lightly and 
I'd hesitate to make that change across the entire ecosystem. It hasn't proved 
it is worth the burden yet.

In exchange for this complexity it is able to find simplicity elsewhere, e.g. 
Bifunctor in hask is a derived concept from a Functor to a Functor category, 
which models the curried nature of the arguments to a Bifunctor.

My main point is there are a lot of points in the design space, and I don't 
think we're equipped to find a clear winner right now.

Sent from my iPad

> On Sep 15, 2014, at 3:53 PM, Sophie Taylor  wrote:
> 
> Oh darn, really? That is so disappointing. Why can't it maintain inference? :\
> 
>> On 15 September 2014 23:44, Edward Kmett  wrote:
>> I find PFunctor/QFunctor finer grained than is tasteful. Alas, they are 
>> necessary for inference for first/second in practice unless you require the 
>> Bifunctor to determine the category of both of its arguments. The hask 
>> package is exploring other ways to maintain inference.
>> 
>> If you try to actually use the current categories package without them, 
>> well, nothing infers.
>> 
>> Sent from my iPad
>> 
>>> On Sep 15, 2014, at 11:32 AM, Sophie Taylor  wrote:
>>> 
>>> >Hi Sophie,
>>> 
>>> >In your proposal draft, I am missing the rationale part.
>>> Yeah, I'm still writing it - I definitely need to expand that a bit more.
>>> >Do we need *all* of these classes in base in order to desugar proc? Can
>>> you demonstrate why they are needed? Or will something simpler suffice?
>>> 
>>> I think I might remove the binoidal class, and remove the PFunctor/QFunctor 
>>> classes - I included them because I usually find finer grained class 
>>> hierarchies to be more tasteful; but it probably would make it more 
>>> frustrating to implement an arrow, for example.
>>> With SMC classes, proc notation can be desugared to remove a LOT of calls 
>>> to arr, which allows more fine-grained RULES optimisations to take place, 
>>> and additional work such as the ModalTypes extension in Adam Megacz 
>>> Joseph's thesis to be much more straightforward.
>>> ___
>>> ghc-devs mailing list
>>> ghc-devs@haskell.org
>>> http://www.haskell.org/mailman/listinfo/ghc-devs
> 
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Preliminary proposal: Monoidal categories in base and proc notation support

2014-09-15 Thread Edward Kmett
The categories package itself has a number of current users who still use it on 
older versions of GHC, so it doesn't really unblock me there for the 
foreseeable future, but this will definitely help hask along.

The hask package is already tied pretty close to the bleeding edge of GHC 
development, as compiling it with versions of GHC prior to 7.8.3 yields illegal 
.hi files.

-Edward

Sent from my iPad

> On Sep 15, 2014, at 12:06 PM, Simon Peyton Jones  
> wrote:
> 
> #9200 is fixed in HEAD, which may unblock you?
> 
> Simon
>  
> From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of Edward Kmett
> Sent: 15 September 2014 14:54
> To: Sophie Taylor
> Cc: ghc-devs@haskell.org
> Subject: Re: Preliminary proposal: Monoidal categories in base and proc 
> notation support
>  
> I'm currently working towards a new release of 'categories', which gets you 
> closer to your goal here, if not yet integrated into base. Said release is 
> currently blocked on GHC #9200, and in many ways the current work on hask is 
> as well. I'd frankly consider that base isn't really in a good place to 
> consider adopting either treatment at this point. Doing them right requires 
> poly kinds, constraint kinds, etc. This steps pretty far outside of the scope 
> of what we've been willing to bake into base to date, and it isn't clear how 
> to do such a design tastefully.
>  
> One option might be to define a number of combinators in an internal module 
> in base with the SMC vocabulary and modify the arrow sugar to desugar in 
> terms of those combinators rather than general use of arr, with 
> RebindableSyntax turned on local definitions of the combinators would get 
> selected, and then rules would have something to fire against. This would 
> permit experimentation along these lines, without baking in assumptions about 
> which of the points in the design space is best.
> 
> Sent from my iPad
> 
> On Sep 15, 2014, at 11:32 AM, Sophie Taylor  wrote:
> 
> >Hi Sophie,
>  
> >In your proposal draft, I am missing the rationale part.
> Yeah, I'm still writing it - I definitely need to expand that a bit more.
> >Do we need *all* of these classes in base in order to desugar proc? Can
> you demonstrate why they are needed? Or will something simpler suffice?
>  
> I think I might remove the binoidal class, and remove the PFunctor/QFunctor 
> classes - I included them because I usually find finer grained class 
> hierarchies to be more tasteful; but it probably would make it more 
> frustrating to implement an arrow, for example.
> With SMC classes, proc notation can be desugared to remove a LOT of calls to 
> arr, which allows more fine-grained RULES optimisations to take place, and 
> additional work such as the ModalTypes extension in Adam Megacz Joseph's 
> thesis to be much more straightforward.
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Permitting trailing commas for record syntax ADT declarations

2014-09-23 Thread Edward Kmett
I'm personally of the "it should be a language extension like everything
else" mindset. Sure, it is a pain, but then so is working with EmptyCase,
TupleSections, DoRec, EmptyDataDecls, ImplicitParams, KindSignatures,
MultiWayIf, TypeOperators, UnicodeSyntax, etc. All of which just make a few
more programs compile in the same sense, and fit into the gaps in the
existing grammar.

The conflict with TupleSections came up the last time this was proposed.

If we limit it to record-like notions, and import/export lists, then we
don't have to deal with conflicts with TupleSections and while it is
inconsistent to have tuples behave differently, than other comma-separated
lists, I'd really rather retain tuple sections, which I use somewhat
heavily, than lose them to mindless uniformity over how we handle
comma-separated lists.

I use TupleSections a fair bit, whereas I've adopted prefixed comma-lists
in Haskell to avoid the need for an extension like this one, so it'd be
quite a shift in my programming style to get to where this helps me. The
one place I'd really want something like this proposal is for the first
line of my export list, where adopting the prefixed ',' convention +
haddock sections makes for an annoying visual asymmetry.

-Edward
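
To make the conflict concrete (examples mine):

    {-# LANGUAGE TupleSections #-}

    -- With TupleSections on, a "trailing comma" in a tuple is already
    -- meaningful syntax rather than noise to be forgiven:
    f :: b -> (Int, b)
    f = (1,)              -- \y -> (1, y); see Simon's (a,,b,) note below

    -- Record declarations have no such clash, hence the proposal; today
    -- the prefixed-comma style sidesteps the problem:
    data T = T
      { tInt  :: Int
      , tChar :: Char
      }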

On Tue, Sep 23, 2014 at 4:55 AM, Simon Peyton Jones 
wrote:

> PS I have to admit that GHC already fails to adhere to H-2010 by accepting
> trailing commas in module import lists, *without* a language extension.
> That is naughty, but I suppose it is a foot in the door.   What do others
> think?
>
> Incidentally, trailing commas in tuples have quite a different meaning.
> With TupleSections,  (a,,b,) means \x y -> (a,x,b,y).
>
> Simon
>
> | -Original Message-
> | From: Simon Peyton Jones
> | Sent: 23 September 2014 08:32
> | To: 'Alexander Berntsen'; Johan Tibell
> | Cc: ghc-devs@haskell.org
> | Subject: RE: Permitting trailing commas for record syntax ADT
> | declarations
> |
> | | > have a language extension TrailingCommas (or something) to enable
> | | > the extension
> | | For clarification: are you overruling the "do we sneak it in HEAD or
> | | use pragma(s)"-vote and telling me to do the latter?
> |
> | Well, it *is* a language extension, exactly like lots of other language
> | extensions, isn't it? (E.g. UnicodeSyntax.)  What alternative action,
> | exactly, are you proposing?   Why do you propose to treat it differently
> | to other language extensions?  I would be reluctant to simply add it
> | without any indication; then GHC would accept non-Haskell 2010 programs.
> | And we have not done that with other extensions.
> |
> | Simon
> |
> | |
> | | If we can sneak it into HEAD (this is @ you Johan, too), I suggest
> | | that someone applies my patches to make import and export lists
> | | support leading commas (presently they only support trailing commas,
> | | per the report) -- and following this I can just send a bunch of
> | | "Permit leading/trailing ',' in Foo" patches to Phabricator, and you
> | | guys can bikeshed over there about which ones you actually want to
> | | commit. ;-)
> | |
> | | If I am to go down the pragma route, I guess I can make a
> | | RedundantCommas pragma or something like that, that implements
> | | trailing commas for imports/exports, and leading/trailing commas for
> | | the suggestions in this thread.
> | |
> | | I'm +1 on the GHC HEAD route, but I'm not exactly violently opposed to
> | | the pragma route either.
> | | - --
> | | Alexander
> | | alexan...@plaimi.net
> | | https://secure.plaimi.net/~alexander
> | | -BEGIN PGP SIGNATURE-
> | | Version: GnuPG v2
> | | Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
> | |
> | | iF4EAREIAAYFAlQhHRoACgkQRtClrXBQc7U0WAD+Ixdah2pHMoeLiTGQJf0JLwDR
> | | I2dxYS7yaKyOHuHcUuEBAKh6RQmmpztz82yt/KCw0n2md3pf4n8yc5tt9s9k3FW3
> | | =FfHX
> | | -END PGP SIGNATURE-
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: [commit: ghc] master: Set default-impl of `mapM`/`sequence` methods to `traverse`/`sequenceA` (f636faa)

2014-09-29 Thread Edward Kmett
Hrmm. I wonder if that is in part caused by the inefficient use of sequence
in

https://github.com/ghc/nofib/blob/master/spectral/fibheaps/Main.lhs#L234

If that sequence . map is swapped for a traverse, what happens?

-Edward
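
That is, the experiment suggested above (types illustrative; the fibheaps
code itself differs):

    -- 'sequence (map f xs)' allocates an intermediate list of actions as
    -- it goes; 'traverse f xs' runs the effects in a single pass.
    viaSequence :: Monad m => (a -> m b) -> [a] -> m [b]
    viaSequence f xs = sequence (map f xs)

    viaTraverse :: Applicative m => (a -> m b) -> [a] -> m [b]
    viaTraverse = traverse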

On Mon, Sep 29, 2014 at 3:44 AM, Joachim Breitner 
wrote:

> Hi,
>
>
> Am Samstag, den 27.09.2014, 21:07 + schrieb g...@git.haskell.org:
> > commit f636faa7b2b7fc1d0663f994ad08f365d39a746d
> > Author: Herbert Valerio Riedel 
> > Date:   Sat Sep 27 22:55:19 2014 +0200
> >
> > Set default-impl of `mapM`/`sequence` methods to
> `traverse`/`sequenceA`
> >
> > This is made possible by the AMP, as we don't need the `WrappedMonad`
> > helper for that anymore.
>
> according to
>
> http://ghcspeed-nomeata.rhcloud.com/changes/?rev=f636faa7b2b7fc1d0663f994ad08f365d39a746d&exe=2&env=nomeata%27s%20buildbot
> this has caused a 11% regression in fibheaps allocation:
>
> http://ghcspeed-nomeata.rhcloud.com/timeline/?ben=nofib/allocs/fibheaps&env=1#/?exe=2&base=2+68&ben=nofib/allocs/fibheaps&env=1&revs=50&equid=on
>
> Greetings,
> Joachim
>
> --
> Joachim “nomeata” Breitner
>   m...@joachim-breitner.de • http://www.joachim-breitner.de/
>   Jabber: nome...@joachim-breitner.de  • GPG-Key: 0xF0FBF51F
>   Debian Developer: nome...@debian.org
>
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: [core libraries] RE: Is USE_REPORT_PRELUDE still useful?

2014-10-29 Thread Edward Kmett
I could definitely see moving the code to comments.

Sent from my iPad
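
For readers without the source at hand, the pattern under discussion has
this shape (illustrative; the real GHC.List definitions also carry fusion
RULES):

    and :: [Bool] -> Bool
    #ifdef USE_REPORT_PRELUDE
    and = foldr (&&) True
    #else
    and []     = True
    and (x:xs) = x && and xs
    #endif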

On Oct 29, 2014, at 4:45 AM, Simon Peyton Jones  wrote:

> Adding  core-libraries, whose bailiwick this is.
> 
> Simon
>  
> From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of David Feuer
> Sent: 29 October 2014 00:24
> To: ghc-devs
> Subject: Is USE_REPORT_PRELUDE still useful?
>  
> A lot of code in GHC.List and perhaps elsewhere compiles differently 
> depending on whether USE_REPORT_PRELUDE is defined. Not all code differing 
> from the Prelude implementation. Furthermore, I don't know to what extent, if 
> any, such code actually works these days. Some of it certainly was not usable 
> for *years* because GHC.List did not import GHC.Num. Should we
> 
> 1. Convert all those code blocks to comments?
> 
> 2. Go through everything, check it to make sure it's written as in the 
> Prelude or has an alternative block, and then actually set up all the 
> infrastructure so that works?
> 
> 3. Leave it alone?
> 
> My general inclination is to go to 1.
> 
>  
> 
> I don't *really* like option 3 for four reasons:
> 
> a. It leaves untouched code to rot
> 
> b. It forces us to run CPP on files that otherwise have no need for it.
> 
> c. It interrupts the flow of the code with stuff that *looks* like real code 
> (and is highlighted as such) but is actually inactive.
> 
> d. It's not hard to accidentally move code into or out of the #ifdef blocks.
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "haskell-core-libraries" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to haskell-core-libraries+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Is USE_REPORT_PRELUDE still useful?

2014-10-29 Thread Edward Kmett
Ack! That -is- a somewhat scary invisible backdoor dependency. :/

We ripped out a lot of unused and untestable ifdefs for other compilers from 
base a couple of years back; I'd be curious whether this was already affected.

Any idea where the code for the report generation lies?

-Edward

On Oct 29, 2014, at 5:18 AM, Malcolm Wallace  wrote:

> 
> On 29 Oct 2014, at 00:24, David Feuer wrote:
> 
>> A lot of code in GHC.List and perhaps elsewhere compiles differently 
>> depending on whether USE_REPORT_PRELUDE is defined. Not all code differing 
>> from the Prelude implementation. Furthermore, I don't know to what extent, 
>> if any, such code actually works these days. Some of it certainly was not 
>> usable for *years* because GHC.List did not import GHC.Num.
> 
> I'm not completely certain, but I have a vague feeling that the Haskell 
> Report appendices that define the standard libraries might be auto-generated 
> (LaTeX/HTML/etc) from the base library sources, and might use these #ifdefs 
> to get the right version of the code.
> 
> Regards,
>Malcolm
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: [core libraries] RE: Core libraries bug tracker

2014-11-02 Thread Edward Kmett
Alternately, I can make a blanket decree that anyone can feel free to steal
any ticket from me at any time as I mostly work through hvr, dfeuer, and
thoughtpolice anyways to chip away at these.

I lack a strong preference either way.

-Edward

On Sun, Nov 2, 2014 at 5:19 AM, Thomas Miedema 
wrote:

> Herbert/Austin,
>
> in the GHC Trac, when I set a ticket to component 'Core Libraries', ekmett
> is automatically set as owner. This might prevent others from working on
> that ticket, and I doubt Edward himself is working on all >100 of them.
> Please change the default to not set the owner.
>
> Thanks,
> Thomas
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: [core libraries] RE: Core libraries bug tracker

2014-11-02 Thread Edward Kmett
Sounds good to me.

On Sun, Nov 2, 2014 at 2:05 PM, Simon Peyton Jones 
wrote:

>  Better to make them owner-less initially I think.  After all, you might
> *really* take ownership of some tickets!
>
>
>
> S
>
>
>
> *From:* ghc-devs [mailto:ghc-devs-boun...@haskell.org] *On Behalf Of *Edward
> Kmett
> *Sent:* 02 November 2014 14:49
> *To:* Thomas Miedema
> *Cc:* core-libraries-commit...@haskell.org; ghc-devs@haskell.org
> *Subject:* Re: [core libraries] RE: Core libraries bug tracker
>
>
>
> Alternately, I can make a blanket decree that anyone can feel free to
> steal any ticket from me at any time as I mostly work through hvr, dfeuer,
> and thoughtpolice anyways to chip away at these.
>
>
>
> I lack a strong preference either way.
>
>
>
> -Edward
>
>
>
> On Sun, Nov 2, 2014 at 5:19 AM, Thomas Miedema 
> wrote:
>
>   Herbert/Austin,
>
>
>
> in the GHC Trac, when I set a ticket to component 'Core Libraries', ekmett
> is automatically set as owner. This might prevent others from working on
> that ticket, and I doubt Edward himself is working on all >100 of them.
> Please change the default to not set the owner.
>
>
>
> Thanks,
>
> Thomas
>
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Concrete syntax for pattern synonym type signatures

2014-11-04 Thread Edward Kmett
One note on the syntax front: 'pattern type' was mentioned as annoyingly
trying to shoehorn the word 'type' in to lean on an existing keyword, even
though it's about a term-level construction rather than a type-level one.

We do have some perfectly serviceable keywords available to us that
indicate a more 'term/pattern' orientation, e.g. 'case' and 'of' come to
mind as things that are viable candidates for similar abuse here.

I'm just digging through the lexical lego bin for parts. I don't quite know
how to put them together to make a nice syntax though.

-Edward


On Tue, Nov 4, 2014 at 5:32 AM, Simon Peyton Jones 
wrote:

> Here is one principle: for GADTs, the pattern type signature should look
> like the GADT data constructor.  So if we have
>
> data S a where
> S1 :: p -> q -> S (p,q)
> S2 :: ...blah...
>
>   pattern P x y = S1 x y
>
> then surely the signature for P should be
> P :: p -> q -> S (p,q)
>
> The same goes for constraints in the constructor's type. Thus, using your
> example:
>
> |   data T a b where
> | MkT :: (Eq a, Ord b, Show c) => a -> (b, b) -> c -> T a b
>
> If I say
> pattern P x y z = MkT x y z
> then the signature for P should be identical to that of MkT.
>
>
> ---
>
> It gets a bit more interesting when you have nested patterns that
> instantiate the type.  For example, with the same type T, consider
>
> pattern P x y z = MkT (x,y) (False,True) [z]
>
> the "right" signature for P must presumably be
> P :: (Eq (p,q), Show [r]) => p -> q -> r -> T (p,q) Bool
>
> We don't need to distinguish 'r' as existential, any more than we do in
> the original signature for MkT.
>
> Note that we must retain the instantiated but un-simplified constraints
> (Eq (p,q), Show [r]), because when pattern-matching against P, those are
> the constraints brought into scope.
>
> -
>
> The general story, for both data constructors and pattern synonyms, is
> that if the type is
> D :: forall abc. (C1, C2...) => blah
> then the constraints (C1, C2...) are
>  - *required* when using D in an expression,
>  - *provided* (i.e. brought into scope) when pattern matching against D.
>
> The tricky case comes when the pattern synonym involves some constraints
> that are *required* during *pattern-matching*.  A simple example is
>
> pattern P1 x = (8, x)
>
> Here we *require* a (Num a) dictionary *both* when using P1 in an
> expression (to build the value 8), *and* when using P in pattern matching
> (to build a value 8 to compare with the value being matched).  I'll call
> the constraints that are *required* when matching the "match-required
> constraints".
>
> The same happens for view patterns:
>
>   gr :: Ord a => a -> a -> Maybe a
>   gr v x | x > v     = Just v
>          | otherwise = Nothing
>
> pattern P2 x = (gr 8 -> Just x)
>
> Here, (Ord a, Num a) are match-required.  (P2 is uni-directional, so we
> can't use P2 in expressions.)
>
> We can't give a signature to P1 like this
> P1 :: forall a. Num a => b -> (a,b)
> because that looks as if (Num a) would be *provided* when pattern matching
> (see "general story" above), whereas actually it is required.  This is the
> nub of the problem Gergo is presenting us with.
>
> Notice that P1 is bidirectional, and can be used in expressions, where
> again we *require* (Num a), so P1's "term type" really is something like
> (Num a) => b -> (a,b).
>
> The more I think about this, the more I think we'll just have to bite the
> bullet and adapt the syntax for constraints in pattern types, to
> distinguish the match-required and match-provided parts. Suppose we let
> pattern signatures look like this:
>
>   pattern P :: forall tvs. (match-provided ; match-required) => tau
>
> The "; match-required" part is optional, and the "match-provided" part
> might be empty.  So P1 and P2 would look like this:
>
>   pattern P1 :: forall a. (; Num a) => b -> (a,b)
>   pattern P2 :: forall a. (; Num a, Ord a) => a -> a
>
> Because the "match-required" part is optional (and relatively rare) the
> common case looks just like an ordinary data constructor.
>
>
> None of this addresses the bidirectional/unidirectional question, but
> that's a pretty separate matter.
>
> Simon
>
> |  -Original Message-
> |  From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of Dr.
> |  ERDI Gergo
> |  Sent: 03 November 2014 10:13
> |  To: GHC Devs
> |  Subject: RFC: Concrete syntax for pattern synonym type signatures
> |
> |  Background
> |  --
> |
> |  As explained on
> |  https://ghc.haskell.org/trac/ghc/wiki/PatternSynonyms#Staticsemantics
> |  the type of a pattern synonym can be fully described with the
> |  following pieces of information:
> |
> |  * If the pattern synonym is bidirectional
> |  * Universally-bound type variables, and required constraints on them
> |  * The type of the pattern itself, closed over the universally-bound
> |  type variables

Re: Concrete syntax for pattern synonym type signatures

2014-11-09 Thread Edward Kmett
On Sun, Nov 9, 2014 at 2:11 PM, Simon Peyton Jones 
wrote:

> On this thread:
>
> * I'm strongly of the opinion that pattern signatures should start
> pattern P :: ...blah...
>
>   Just like value signatures, the pattern name comes first,
>   then a double colon.
>

This has the benefit that it lets users logically continue exporting
'pattern Foo', which is a very nice syntax.

The only downside is that the error message when users forget to turn on
pattern synonyms is somewhat more baroque than it can be with the extra
keyword shoehorned in, but the keyword is pretty awful looking.


> * One other possibility would be two => thus
> pattern P :: (Eq b) => (Num a, Eq a) => ...blah...
>
>   If you wanted something which had a "match-required" part but no
>   "match-provided" part, you'd end up with
> pattern P :: () => (Num a, Eq a) => ...blah...
>   which is a little odd, but perhaps no odder than
> pattern P :: ( | Num a, Eq a ) => ...blah...
>

The nested (=>) version has a certain appeal to it.

It already parses. The trick is just in properly interpreting it, but users
already interpret (Foo a, Bar b) => .. differently in different contexts,
e.g. in class and instance declarations. They can adapt.

-Edward
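
For instance (example mine, in the syntax argued for here):

    {-# LANGUAGE PatternSynonyms #-}

    module Shapes (pattern Point) where   -- the 'pattern Foo' export form

    data Shape = Circle Double Double Double

    -- Name first, then a double colon, exactly like a value signature:
    pattern Point :: Double -> Double -> Shape
    pattern Point x y = Circle x y 0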

>
> | -Original Message-
> | From: Dr. ERDI Gergo [mailto:ge...@erdi.hu]
> | Sent: 09 November 2014 07:56
> | To: Simon Peyton Jones
> | Cc: GHC Devs
> | Subject: RE: Concrete syntax for pattern synonym type signatures
> |
> | On Tue, 4 Nov 2014, Simon Peyton Jones wrote:
> |
> | >  pattern P :: forall tvs. (match-provided ; match-required) => tau
> | >
> | > The "; match-required" part is optional, and the "match-provided" part
> | might be empty.  So P1 and P2 would look like this:
> | >
> | >  pattern P1 :: forall a. (; Num a) => b -> (a,b)
> | >  pattern P2 :: forall a. (; Num a, Ord a) => a -> a
> |
> | Doesn't the ';' look a bit like something that could be incidentially
> | introduced by some layout-aware syntax rule? Wouldn't, e.g., '|' be more
> | explicit as a separator?
> |
> | example:
> |
> | pattern P :: forall tvs. (Eq b | Num a, Eq a) => b -> T a
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Concrete syntax for pattern synonym type signatures

2014-11-10 Thread Edward Kmett
Note though, it doesn't mean the same thing to say (Foo a, Bar a b) => ...
as it does to say

Foo a => Bar a b => ...

The latter can use Foo a when working on Bar a b, but not Bar a b to
discharge Foo a, which makes a difference when you have functional
dependencies.

So in some sense the 'pattern requires/supplies' split is just that.



That said, Richard's other option

pattern Foo a => P :: Bar a => a

has the benefit that it looks a bit like the old datatype contexts (but
here applied to the constructor/pattern).

If we expect the left hand side or the right hand side to be most often
trivial then that may be worth considering.

You'd occasionally have things like

pattern (Num a, Eq a) => Foo :: a

for

pattern Foo = 8

but most of the time they'd wind up just looking like a GADT constructor.

-Edward

On Sun, Nov 9, 2014 at 10:02 PM, Richard Eisenberg 
wrote:

>
> On Nov 9, 2014, at 2:11 PM, Simon Peyton Jones 
> wrote:
> >
> > * One other possibility would be two => thus
> >   pattern P :: (Eq b) => (Num a, Eq a) => ...blah...
> >
>
> I should note that I can say this in 7.8.3:
>
> foo :: Show a => Eq a => a -> String
> foo x = show x ++ show (x == x)
>
> Note that I've separated the two constraints with a =>, not a comma. This
> syntax does what you might expect. (I actually believe that this is an
> improvement over the conventional syntax, but that's a story for another
> day.) For better or worse, this trick does not work for GADT constructors
> (which is a weird incongruence with function type signatures), so adding
> the extra arrow does not really steal syntax from GADT pattern synonyms.
>
> Richard
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Concrete syntax for pattern synonym type signatures

2014-11-11 Thread Edward Kmett
Lamely, I can't seem to reconstruct the problem.

GHC seems to be more careful about gathering the constraints up into a
tuple even when I give an explicit type signature involving nested contexts
nowadays.

-Edward

On Mon, Nov 10, 2014 at 4:07 PM, Simon Peyton Jones 
wrote:

>   Note though, it doesn't mean the same thing to say (Foo a, Bar a b) =>
> ... as it does to say
>
> Foo a => Bar a b => ...
>
> The latter can use Foo a when working on Bar a b, but not Bar a b to
> discharge Foo a, which makes a difference when you have functional
> dependencies.
>
>
>
> I disagree.  Can you offer a concrete example, and show that one
> typechecks when the other does not?
>
>
>
> Simon
>
>
>
> *From:* Edward Kmett [mailto:ekm...@gmail.com]
> *Sent:* 10 November 2014 15:46
> *To:* Richard Eisenberg
> *Cc:* Simon Peyton Jones; GHC Devs
> *Subject:* Re: Concrete syntax for pattern synonym type signatures
>
>
>
> Note though, it doesn't mean the same thing to say (Foo a, Bar a b) => ...
> as it does to say
>
>
>
> Foo a => Bar a b => ...
>
>
>
> The latter can use Foo a when working on Bar a b, but not Bar a b to
> discharge Foo a, which makes a difference when you have functional
> dependencies.
>
>
>
> So in some sense the 'pattern requires/supplies' split is just that.
>
>
>
>
>
>
>
> That said, Richard's other option
>
>
>
> pattern Foo a => P :: Bar a => a
>
>
>
> has the benefit that it looks a bit like the old datatype contexts (but
> here applied to the constructor/pattern).
>
>
>
> If we expect the left hand side or the right hand side to be most often
> trivial then that may be worth considering.
>
>
>
> You'd occasionally have things like
>
>
>
> pattern (Num a, Eq a) => Foo :: a
>
>
>
> for
>
>
>
> pattern Foo = 8
>
>
>
> but most of the time they'd wind up just looking like a GADT constructor.
>
>
>
> -Edward
>
>
>
> On Sun, Nov 9, 2014 at 10:02 PM, Richard Eisenberg 
> wrote:
>
>
> On Nov 9, 2014, at 2:11 PM, Simon Peyton Jones 
> wrote:
> >
> > * One other possibility would be two => thus
> >   pattern P :: (Eq b) => (Num a, Eq a) => ...blah...
> >
>
> I should note that I can say this in 7.8.3:
>
> foo :: Show a => Eq a => a -> String
> foo x = show x ++ show (x == x)
>
> Note that I've separated the two constraints with a =>, not a comma. This
> syntax does what you might expect. (I actually believe that this is an
> improvement over the conventional syntax, but that's a story for another
> day.) For better or worse, this trick does not work for GADT constructors
> (which is a weird incongruence with function type signatures), so adding
> the extra arrow does not really steal syntax from GADT pattern synonyms.
>
> Richard
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
>
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Windows breakage

2014-11-14 Thread Edward Kmett
David recently added a new -Wall flag for Trustworthiness.

The reasoning is if it is in -Wall, it can actually get seen and will be
used by folks to improve more things from 'Trustworthy' to 'Safe'. After
all, everything that is merely `Trustworthy` needs to live in the trusted
computing base.

The status quo is that you pretty much have no way to know what you _don't_
need Trustworthy on, and efforts in the past to take large libraries and
convert them to SafeHaskell have been fraught with long rebuild cycles,
because the only way to see was to build, haddock, and look.

What you are running into when you go to _fix_ the error appears to be an
actual bug in the safety inference code, though, and definitely needs to be
looked at.

It probably does belong in -Wall in the long term, and -Werror is
notoriously fickle, but we should look at what is causing it to go wrong
here and do a bit more due diligence.

-Edward
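
To illustrate the intended workflow (module mine; the warning text is the
one quoted below):

    {-# LANGUAGE Trustworthy #-}
    module Plain (double) where

    double :: Int -> Int
    double = (* 2)

    -- Nothing here actually needs trusting, so under the new -Wall
    -- warning GHC reports:
    --
    --   Plain.hs:1:14: Warning:
    --       ‘Plain’ is marked as Trustworthy but has been inferred as safe!
    --
    -- and the fix is to demote the pragma to {-# LANGUAGE Safe #-},
    -- shrinking the trusted computing base.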


On Fri, Nov 14, 2014 at 12:40 PM, Simon Peyton Jones 
wrote:

>  Hmm.  When I got this
>
> libraries\hpc\Trace\Hpc\Mix.hs:3:14: Warning:
>
> ‘Trace.Hpc.Mix’ is marked as Trustworthy but has been inferred as safe!
>
> I changed “Trustworthy” to “Safe”.  But then I got
>
> libraries\hpc\Trace\Hpc\Mix.hs:24:1:
>
> Data.Time: Can't be safely imported! The module itself isn't safe.
>
> This seems unhelpful. After all it’s been “inferred as safe”.  What should
> I do?
>
> Thanks.
>
> Simon
>
> *From:* Simon Peyton Jones
> *Sent:* 14 November 2014 16:51
> *To:* ghc-devs@haskell.org
> *Subject:* Windows breakage
>
>
>
> This breakage didn’t use to happen.   Might someone fix it?  Thanks.  For
> now I’m going through changing a dozen “Trustworthy” to “Safe”.  Is that
> right?
>
> Simon
>
> libraries\Win32\System\Win32\Console.hsc:2:14: Warning:
>
> ‘System.Win32.Console’ is marked as Trustworthy but has been inferred
> as safe!
>
>
>
> :
>
> Failing due to -Werror.
>
> libraries/Win32/ghc.mk:4: recipe for target
> 'libraries/Win32/dist-install/build/System/Win32/Console.o' failed
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: how to write a ghc primop that acts on an unevaluated argument?

2014-11-23 Thread Edward Kmett
Maybe test for laziness in the argument by just putting something in that
goes boom when forced, e.g. 'undefined'?
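
I.e., something like this (a sketch that only typechecks against the D350
branch, the only place prefetchValue1# exists):

    {-# LANGUAGE MagicHash, UnboxedTuples #-}
    import GHC.Exts (prefetchValue1#)
    import GHC.IO   (IO (..))

    -- A primop that is genuinely lazy in its argument must tolerate a
    -- thunk that explodes when forced:
    test :: IO ()
    test = IO (\s -> (# prefetchValue1# (undefined :: [Int]) s, () #))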


On Sun, Nov 23, 2014 at 2:04 PM, Carter Schonwald <
carter.schonw...@gmail.com> wrote:

> Hey All,
> as part of trying to get some fixups for how prefetch works into 7.10,
> i'm adding a "prefetchValue" primop that prefetchs the memory location of
> a lifted heap value
>
> namely
>
> several operations of the following form
>
> primop PrefetchValueOp1 "prefetchValue1#" GenPrimOp
>a -> State# s -> State# s
>with strictness  = { \ _arity -> mkClosedStrictSig [botDmd, topDmd]
> topRes }
>
> I'd like some feedback on the strictness information design by someone
> who's familiar with how that piece of GHC. the idea being that
> prefetchValue is lazy in its polymorphic argument (it doesn't force it, it
> just does a prefetch on the heap location, which may or may not be
> evaluated).
>
> https://phabricator.haskell.org/D350
>
> is the code in question. And i *believe* i'm testing for being lazy in
> that argument correctly.
>
> thoughts?
>
> many thanks!
> -Carter
>
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: help wrt semantics / primops for pure prefetches

2014-11-27 Thread Edward Kmett
My general experience with prefetching is that it is almost never a win
when done just on trees, as in the usual mark-sweep or copy-collection
garbage collector walk. Why? Because the time from when you prefetch to
when you use the data is too variable. Stack disciplines and prefetch
don't mix nicely.

If you want to see a win out of it you have to free up some of the ordering
of your walk, and tweak your whole application to support it. e.g. if you
want to use prefetching in garbage collection, the way to do it is to
switch from a strict stack discipline to using a small fixed-sized queue on
the output of the stack, then feed prefetch on the way into the queue
rather than as you walk the stack. That paid out for me as a 10-15% speedup
last time I used it after factoring in the overhead of the extra queue. Not
too bad for a weekend project. =)

Without that sort of known lead-in time, it works out that prefetching is
usually a net loss or vanishes into the noise.
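
To make that concrete, here is a toy, pure-Haskell rendering of the queue
trick (the prefetch below is a stand-in no-op so the sketch runs anywhere;
in real code it would be one of the prefetch primops under discussion, and
a GC would of course do this in C):

import qualified Data.Sequence as Seq
import Data.Sequence (ViewL(..), (|>), viewl)

data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Stand-in for a real prefetch (e.g. a prefetchValue# primop).
prefetch :: Tree a -> IO ()
prefetch _ = pure ()

-- Walk with a small fixed-size queue between the stack and the visitor:
-- nodes are prefetched as they *enter* the queue and only inspected once
-- they fall out the other end, giving each prefetch a predictable
-- lead-in time. Assumes qSize >= 1.
walk :: Int -> (a -> IO ()) -> Tree a -> IO ()
walk qSize visit root = go [root] Seq.empty
  where
    -- while there is stack work and queue room, keep feeding (and
    -- prefetching) the queue
    go (t:stack) q
      | Seq.length q < qSize = prefetch t >> go stack (q |> t)
    -- otherwise pop the oldest queued node, visit it, and push its
    -- children back onto the stack
    go stack q = case viewl q of
      EmptyL -> pure ()
      t :< q' -> case t of
        Leaf -> go stack q'
        Node l x r -> visit x >> go (l : r : stack) q'

With qSize = 1 this degenerates to the naive "prefetch what you are about
to visit" scheme; the win described above comes from a queue deep enough
to cover the prefetch latency.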

As for the array ops, davean has a couple of cases w/ those for which the
prefetching operations are a 20-25% speedup, which is what motivated Carter
to start playing around with these again. I don't know off hand how easily
those can be turned into public test cases though.

-Edward

On Thu, Nov 27, 2014 at 4:36 AM, Simon Marlow  wrote:

> I haven't been watching this, but I have one question: does prefetching
> actually *work*?  Do you have benchmarks (or better still, actual
> library/application code) that show some improvement?  I admit to being
> slightly sceptical - when I've tried using prefetching in the GC it has
> always been a struggle to get something that shows an improvement, and even
> when I get things tuned on one machine it typically makes things slower on
> a different processor.  And that's in the GC, doing it at the Haskell level
> should be even harder.
>
> Cheers,
> Simon
>
>
> On 22/11/2014 05:43, Carter Schonwald wrote:
>
>> Hey Everyone,
>> in
>> https://ghc.haskell.org/trac/ghc/ticket/9353
>> and
>> https://phabricator.haskell.org/D350
>>
>> is some preliminary work to fix up how the pure versions of the prefetch
>> primops work is laid out and prototyped.
>>
>> However, while it nominally fixes up some of the problems with how the
>> current pure prefetch APIs are fundamentally broken, the simple design
>> in D350 isn't quite ideal, and I sketch out some other ideas in the
>> associated ticket #9353
>>
>> I'd like to make sure pure prefetch in 7.10 is slightly less broken
>> than in 7.8, but either way, it's pretty clear that working out the
>> right fixed-up design won't happen till 7.12. I.e., whatever makes 7.10,
>> there WILL have to be breaking changes to fix those primops for 7.12
>>
>> thanks and any feedback / thoughts appreciated
>> -Carter
>>
>>


Re: help wrt semantics / primops for pure prefetches

2014-11-28 Thread Edward Kmett
The main takeaway I had from my work with prefetching was that if you can
shove things into a fixed-sized queue and prefetch on the way into the
queue instead of doing it just to sort of kickstart the next element during
a tree traversal that is going to be demanded too fast to take full
advantage of the latency, then you can smooth out a lot of the cross system
variance.

It is just incredibly invasive. =(

Re: doing prefetching in the mark phase, I just skimmed and found
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.93.9090&rep=rep1&type=pdf
which appears to take a similar approach.

-Edward

On Fri, Nov 28, 2014 at 3:42 AM, Simon Marlow  wrote:

> Thanks for this.  In the copying GC I was using prefetching during the
> scan phase, where you do have a pretty good tunable knob for how far ahead
> you want to prefetch.  The only variable is the size of the objects being
> copied, but most tend to be in the 2-4 words range.  I did manage to get
> 10-15% speedups with optimal tuning, but it was a slowdown on a different
> machine or with wrong tuning, which is why GHC doesn't have any of this
> right now.
>
> Glad to hear this can actually be used to get real speedups in Haskell, I
> will be less sceptical from now on :)
>
> Cheers,
> Simon
>
> On 27/11/2014 10:20, Edward Kmett wrote:
>
>> My general experience with prefetching is that it is almost never a win
>> when done just on trees, as in the usual mark-sweep or copy-collection
>> garbage collector walk. Why? Because the time from the time you prefetch
>> to the time you use the data is too variable. Stack disciplines and
>> prefetch don't mix nicely.
>>
>> If you want to see a win out of it you have to free up some of the
>> ordering of your walk, and tweak your whole application to support it.
>> e.g. if you want to use prefetching in garbage collection, the way to do
>> it is to switch from a strict stack discipline to using a small
>> fixed-sized queue on the output of the stack, then feed prefetch on the
>> way into the queue rather than as you walk the stack. That paid out for
>> me as a 10-15% speedup last time I used it after factoring in the
>> overhead of the extra queue. Not too bad for a weekend project. =)
>>
>> Without that sort of known lead-in time, it works out that prefetching
>> is usually a net loss or vanishes into the noise.
>>
>> As for the array ops, davean has a couple of cases w/ those for which
>> the prefetching operations are a 20-25% speedup, which is what motivated
>> Carter to start playing around with these again. I don't know off hand
>> how easily those can be turned into public test cases though.
>>
>> -Edward
>>
>> On Thu, Nov 27, 2014 at 4:36 AM, Simon Marlow  wrote:
>>
>> I haven't been watching this, but I have one question: does
>> prefetching actually *work*?  Do you have benchmarks (or better
>> still, actual library/application code) that show some improvement?
>> I admit to being slightly sceptical - when I've tried using
>> prefetching in the GC it has always been a struggle to get something
>> that shows an improvement, and even when I get things tuned on one
>> machine it typically makes things slower on a different processor.
>> And that's in the GC, doing it at the Haskell level should be even
>> harder.
>>
>> Cheers,
>> Simon
>>
>>
>> On 22/11/2014 05:43, Carter Schonwald wrote:
>>
>> Hey Everyone,
>> in
>> https://ghc.haskell.org/trac/ghc/ticket/9353
>> and
>> https://phabricator.haskell.org/D350
>>
>> is some preliminary work to fix up how the pure versions of the
>> prefetch
>> primops work is laid out and prototyped.
>>
>> However, while it nominally fixes up some of the problems with how the
>> current pure prefetch APIs are fundamentally broken, the simple design
>> in D350 isn't quite ideal, and I sketch out some other ideas in the
>> associated ticket #9353
>>
>> I'd like to make sure pure prefetch in 7.10 is slightly less broken
>> than in 7.8, but either way, it's pretty clear that working out the
>> right fixed-up design won't happen till 7.12. I.e., whatever makes 7.10,
>> there WILL have to be breaking changes to fix those primops for 7.12

Re: Arrow notation and GHC 7.10 freeze

2014-12-02 Thread Edward Kmett
We typically have a year between releases, so in the absence of anything
contradictory I'd expect the freeze for that to be about this time next
year.

-Edward

On Wed, Dec 3, 2014 at 9:17 AM, Wolfgang Jeltsch  wrote:

> Am Dienstag, den 02.12.2014, 13:27 + schrieb Simon Peyton Jones:
> > It's frozen now.
>
> Hmm, given the current weather, this is actually not surprising. ;-)
>
> > But do work on it for 7.12! It'll come round sooner than you think.
>
> When is 7.12 expected to be frozen, and when to be released?
>
> All the best,
> Wolfgang
>
> > There are many arrow-related tickets longing for love.
> > e.g. #7828, #5267, #5777, #5333, #344
> >
> > Simon
> >
> > |  -Original Message-
> > |  From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of
> > |  Wolfgang Jeltsch
> > |  Sent: 02 December 2014 12:49
> > |  To: ghc-devs@haskell.org
> > |  Subject: Arrow notation and GHC 7.10 freeze
> > |
> > |  Hi,
> > |
> > |  I would like to work on
> > |  .
> > |  It would be great if the results of this work could go into GHC 7.10.
> > |  Is this still possible? When will GHC 7.10 be frozen?
> > |
> > |  All the best,
> > |  Wolfgang
> > |


Re: are patterns synonyms definable in GHCI?

2014-12-18 Thread Edward Kmett
GHCi also accepts a number of other things: you can define data types,
type synonyms, classes, and instances, so pattern synonyms would seem to
fall within that scope.

On Thu, Dec 18, 2014 at 5:51 AM, Dr. ERDI Gergo  wrote:
>
> On Thu, 18 Dec 2014, Carter Schonwald wrote:
>
>> Hey all, I was trying to define some pattern synonyms in ghci recently,
>> and that doesn't seem to work. Is that something slated to be fixed in
>> 7.10 or something?
>>
>
> I thought GHCi accepts things that would be valid in a 'do' section? So
> e.g.
>
> x = ()
>
> doesn't work in GHCi, but
>
> let x = ()
>
> does.
>
> Pattern synonyms don't work for the exact same reason: they are not valid
> inside a 'do' block.
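
A quick illustration of that distinction on a GHC-7.8-era GHCi (behavior
as described above; the pattern synonym line shows the hypothetical that
prompted the question):

    ghci> x = ()                 -- rejected: a bare binding is not a do-statement
    ghci> let x = ()             -- accepted: 'let' is
    ghci> pattern Single a = [a] -- rejected: not valid inside a 'do' block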


Re: GHC support for the new "record" package

2015-01-20 Thread Edward Kmett
I'm generally positive on the goal of figuring out better record support in
GHC.

That said, it isn't clear that Nikita's work here directly gives rise to
how the syntax of such a thing would work in GHC proper. Simon's original
proposal overloaded (.) in yet more ways that collide with the uses in lens
and really drastically contribute to confusion in the language we have.
This is why over the summer of 2013 Adam Gundry's proposal evolved away
from that design. Nikita on the other hand gets away with using foo.bar
syntax in a more lens-like fashion precisely because he has a quasi-quoter
isolating it from the rest of the language.

If you strip away that layer, it isn't clear what syntactic mechanism can
be used to convey the distinction that isn't already taken or just as
obtrusive as the quasi-quoter.

But, "it isn't clear" is just code for "hey this makes me nervous", so
let's spitball a couple ideas:

Nikita's proposal has two things that need addressing:

1.) The syntax for record types themselves

2.) The syntax for how to get a lens for a field

Re #1

The main term- and type-level bits of syntax that could be co-opted that
aren't already in use are @ (and ~ at the term level) and things like
banana brackets (| ... |) -- or, since those already have some other,
unrelated, connotations for folks, something related like {| ... |}. We
use such bananas for our row types in Ermine to good effect.

The latter {| ... |} might serve as a solid syntax suggestion for the
anonymous row type syntax.

Re #2

That leaves the means for how to talk about a lens for a given field open.
Under Adam's proposal, that had evolved into making a really complicated
instance from which we could extract a lens. This had the benefit over the
current state of the `record` package that we could support full type
changing lenses. Losing type-changing assignment would be a big step back
from the previous proposal / the current state of development for folks who
just use makeClassy or custom lens production rules with lens to get
something similar, though.

But the thing we never found was a nice short syntax for talking about the
lens you get from a given field (or possibly chain of fields); Gundry's
solution was 90% library and almost no syntax. On the other hand Adam was
shackled by having to let the accessor be used as a normal function as well
as a lens. Nikita's records don't have that problem.

Having no syntax at all for extracting the lens from a field accessor, but
rather having it just be the lens, could directly address that concern.
This raises some questions about scope, where do these names live? What
happens when you have a module A that defines a record with a field, and a
module B that does the same for a different record, and a module C that
imports both, but, really, we had those before with Adam's proposal, so
there is nothing new there.

And for what it is worth, I've seen users in the wild using makeLenses on
records with several hundred fields (!!), so we'd need to figure out
something that doesn't cap a record at 24 fields. This feedback came in
because we made the lenses lazier and it caused some folks a great deal of
pain in terms of time spent in code gen!

It is a long trek from "this is plausible" to "hey, let's bet the future of
records and a bunch of syntax in the language on this".

It would also necessarily entail moving a middling-sized chunk of lens into
base so that you can actually do something with these accessors. I've been
trying to draw lines around a "lens-core" for multiple years now. Everyone
has a different belief about what it should be, and trust me, I've heard,
and had to shoot down, basically all of the pitches.

We're going to be stuck with the warts of whatever solution we build for a
very long time.

So with those caveats in mind, I'd encourage us to take our time rather
than rush into trying to get this 7.12 ready.

Personally I would be happy if by the time we ship 7.12 we had a plan for
how we could proceed, that we could then judge on its merits.

-Edward


On Tue, Jan 20, 2015 at 4:44 PM, Simon Marlow  wrote:

> For those who haven't seen this, Nikita Volkov proposed a new approach to
> anonymous records, which can be found in the "record" package on Hackage:
> http://hackage.haskell.org/package/record
>
> It had a *lot* of attention on Reddit: http://nikita-volkov.github.
> io/record/
>
> Now, the solution is very nice and lightweight, but because it is
> implemented outside GHC it relies on quasi-quotation (amazing that it can
> be done at all!).  It has some limitations because it needs to parse
> Haskell syntax, and Haskell is big.  So we could make this a lot smoother,
> both for the implementation and the user, by directly supporting anonymous
> record syntax in GHC.  Obviously we'd have to move the library code into
> base too.
>
> This message is by way of kicking off the discussion, since nobody else
> seems to have done so yet.  Can we agree that this is the right thing and
> should be 

Re: GHC support for the new "record" package

2015-01-21 Thread Edward Kmett
On Wed, Jan 21, 2015 at 4:36 AM, Simon Marlow  wrote:

> On 20/01/2015 23:07, Edward Kmett wrote:
>
>  It is a long trek from "this is plausible" to "hey, let's bet the
>> future of records and a bunch of syntax in the language on this".
>>
>
> Absolutely.  On the other hand, this is the first proposal I've seen
> that really hits (for me) a point in the design space that has an
> acceptable power to weight ratio.  Yes there are some corners cut, and
> it remains to be seen whether, after we've decided which corners we want
> to uncut, the design retains the same P2W ratio.
>
> A couple of answers to specific points:
>
>  Re #1
>>
>> The main term and type level bits of syntax that could be coopted
>> that aren't already in use are @ and (~ at the term level) and things
>> like banana brackets (| ... |), while that already has some other,
>> unrelated, connotations for folks, something related like {| ... |}.
>> We use such bananas for our row types in Ermine to good effect.
>>
>> The latter {| ... |} might serve as a solid syntax suggestion for the
>>  anonymous row type syntax.
>>
>
> Why not just use { ... } ?


Mostly because it would conflict with the existing record syntax when used
as a member of a data type.

Using { ... } would break all existing code, while {| ... |} could
peacefully co-exist.

data Foo = Foo { bar :: Bar }

vs.

data Foo = Foo {| bar :: Bar |}

You could, I suppose, manually distinguish them using ()'s:

data Foo = Foo ({bar :: Bar })

might be something folks could grow to accept.

Another reason that comes to mind is that it causes a further divergence
between the way terms and types behave/look, complicating stuff like
Richard Eisenberg's work on giving us something closer to real dependent
types.

 Re #2
>>
>> That leaves the means for how to talk about a lens for a given field
>>  open. Under Adam's proposal that had evolved into making a really
>> complicated instance that we could extract a lens from. This had the
>>  benefit over the current state of the `record` package that we could
>>  support full type changing lenses. Losing type-changing assignment
>> would be a big step back from the previous proposal / the current
>> state of development for folks who just use makeClassy or custom lens
>> production rules with lens to get something similar, though.
>>
>> But the thing we never found was a nice short syntax for talking
>> about the lens you get from a given field (or possibly chain of
>> fields); Gundry's solution was 90% library and almost no syntax. On
>> the other hand Adam was shackled by having to let the accessor be
>> used as a normal function as well as a lens. Nikita's records don't
>> have that problem.
>>
>> Having no syntax at all for extracting the lens from a field
>> accessor, but rather to having it just be the lens, could directly
>> address that concern. This raises some questions about scope, where
>> do these names live? What happens when you have a module A that
>> defines a record with a field, and a module B that does the same for
>> a different record, and a module C that imports both, but, really, we
>> had those before with Adam's proposal, so there is nothing new
>> there.
>>
>
> Right.  So either
> (a) A field name is a bare identifier that is bound to the lens, or
> (b) There is special syntax for the lens of a field name
>
> If (a) there needs to be a declaration of the name in order that we can
> talk about scoping.  That makes (b) a lot more attractive; and if you
> really find the syntax awkward then you can always bind a local variable
> to the lens, or export the names from your library.


Alternately (c) we could play games with ensuring the "name" is shared
despite coming from different fields.

As a half-baked idea, if we pretended all field accessors were names from
some magic internal GHC.Record.Fields module, then using

data Foo = Foo {| bar :: Bar, baz :: Baz |}

would add an `import GHC.Record.Fields (bar, baz)` to the module. These
would all expand to the same Symbol-based representation, behind the
scenes, so that if two record types were used that used the same names,
they'd just work together, with no scoping issues.

This has the benefit that users could write such import statements by hand
to use fields themselves, no sigils get used up, and the resulting code is
the cleanest it can be.
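
To make the half-baked idea slightly more concrete, the shared
Symbol-based representation could look something like the following
sketch (every name here is invented for illustration; no GHC.Record.Fields
module actually exists):

{-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
             FunctionalDependencies, FlexibleContexts #-}
import Data.Proxy (Proxy(..))
import GHC.TypeLits (Symbol)

-- hypothetical contents of "GHC.Record.Fields"
class HasLens (name :: Symbol) s a | name s -> a where
  lensOf :: Functor f => Proxy name -> (a -> f a) -> s -> f s

-- every magically imported field name is just this shape:
bar :: (HasLens "bar" s a, Functor f) => (a -> f a) -> s -> f s
bar = lensOf (Proxy :: Proxy "bar")

baz :: (HasLens "baz" s a, Functor f) => (a -> f a) -> s -> f s
baz = lensOf (Proxy :: Proxy "baz")

Two record types declared in different modules that both mention bar would
then just produce their own HasLens "bar" instances, and the one shared
name composes across both with plain (.).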

-Edward


Re: GHC support for the new "record" package

2015-01-21 Thread Edward Kmett
Personally, I think the two proposals, ORF and Nikita's record approach
address largely differing needs.

The ORF proposal has the benefit that it doesn't require GHC itself to know
anything about lenses in order to work and is mostly compatible with the
existing field accessor combinators.

Nikita's proposal on the other hand builds a form of Trex-like records
where it has its own little universe to play in, and doesn't have to
contort itself to make the field accessors backwards compatible. As its own
little world, the fact that the ORF can't deal with certain types of fields
just becomes a limitation on this little universe, and all existing code
would continue to work.

I, too, have a lot of skin in the game with the existing ORF proposal, but
ultimately we're going to be stuck with whatever solution we build for a
long time, and it is, we both have to confess, admittedly quite
complicated, so it seems exploring the consequences of a related design
which has different constraints on its design does little harm.

I'm mostly paying the work the courtesy it deserves by considering to its
logical conclusion what such a design would look like fleshed out in a way
that maximized how nice the result could be to use. I'm curious, as mostly
a thought experiment, how nice a design we could get in the end under these
slightly different assumptions.

If, in the end, having an anonymous record syntax that is distinct from the
existing one is too ugly, it is okay for us to recoil from it and go back
to committing to the existing proposal, but I for one would prefer to "
steelman "
Nikita's trick first.

Thus far, all of this is but words in a handful of emails. I happen to
think the existing ORF implementation is about as good as we can get
operating under the assumptions it does. That said, operating under
different assumptions may get us a nicer user experience. I'm not sure,
though, hence the thought experiment.

-Edward

On Wed, Jan 21, 2015 at 5:05 AM, Adam Gundry  wrote:

> As someone with quite a lot of skin in this game, I thought it might be
> useful to give my perspective on how this relates to ORF. Apologies that
> this drags on a bit...
>
> Both approaches use essentially the same mechanism for resolving
> overloaded field names (typeclasses indexed by type-level strings,
> called Has/Upd or FieldOwner). ORF allows fields to be both selectors
> and various types of lenses, whereas the record library always makes
> them van Laarhoven lenses, but this isn't really a fundamental difference.
>
> The crucial difference is that ORF adds no new syntax, and solves
> Has/Upd constraints for existing datatypes, whereas the record library
> adds a new syntax for anonymous records and their fields that is
> completely separate from existing datatypes, and solves FieldOwner
> constraints only for these anonymous records (well, their desugaring).
>
> On the one hand, anonymous records are a very desirable feature, and in
> some ways making them separate is a nice simplification. However, they
> are not as expressive as the existing Haskell record datatypes (sums,
> strict/unpacked fields, higher-rank fields), and having two records
> mechanisms is a little unsatisfying. Do we really want to distinguish
>
> data Foo = MkFoo { bar :: Int, baz :: Bool }
> data Foo = MkFoo {| bar :: Int, baz :: Bool |}
>
> (where the first is the traditional approach, and the second is a
> single-argument constructor taking an anonymous record in Edward's
> proposed syntax)?
>
> It might be nice to have a syntactic distinction between record fields
> and normal functions (the [l|...|] in the record library), because it
> makes name resolution much simpler. ORF was going down the route of
> adding no syntax, so name resolution becomes more complex, but we could
> revisit that decision and perhaps make ORF simpler. But I don't know
> what the syntax should be.
>
> I would note that if we go ahead with ORF, the record library could
> potentially take advantage of it (working with ORF's Has/Upd classes
> instead of defining its own FieldOwner class). Then we could
> subsequently add anonymous records to GHC if there is enough interest
> and implementation effort. However, I don't want to limit the
> discussion: if there's consensus that ORF is not the right approach,
> then I'm happy to let it go the way of all the earth. ;-)
>
> (Regarding the status of ORF, Simon PJ and I had a useful meeting last
> week where we identified a plan for getting it back on track, including
> some key simplifications to the sticking points in the implementation.
> So there might be some hope for getting it in after all.)
>
> Adam
>
>
> On 20/01/15 21:44, Simon Marlow wrote:
> > For those who haven't seen this, Nikita Volkov proposed a new approach
> > to anonymous records, which can be found in the "record" package on
> > Hackage: http://hackage.haskell.org/package/record
> >
> > It had a *lot* of attention on Reddit: http://nikita-volkov.github.io/record/

Re: GHC support for the new "record" package

2015-01-21 Thread Edward Kmett
On Wed, Jan 21, 2015 at 1:06 PM, Adam Gundry  wrote:

> Also, I'd add fields with higher-rank types to the list of missing
> features. From a user's perspective, it might seem a bit odd if
>
> data T = MkT { foo :: forall a . a }
>
> was fine but
>
> data T = MkT {| foo :: forall a . a |}
>
> would not be a valid declaration. (Of course, ORF can't overload "foo"
> either, and maybe this is an inevitable wart.)


I'm slowly coming around to thinking that this is inevitable without a
bunch of changes in the way we work with classes. You otherwise need to
allow impredicative types in some contexts, which raises all sorts of
questions.

In the latter case we can at least be clear in the error message about why
it doesn't work; in the ORF case it has to just not generate a lens. =(

>
> >> 5) I don't know if I want to commit the *language* to a particular lens
> >> type.
>
> Agreed, but I don't think this need be an issue for either proposal. We
> know from ORF that we can make fields sufficiently polymorphic to be
> treated as selector functions or arbitrary types of lenses (though
> treating them as van Laarhoven lenses requires either some clever
> typeclass trickery in the base library, or using a combinator to make a
> field into a lens at use sites).


Admittedly that has also been the source of 90% of the complexity of the
ORF proposal. There we _had_ to do this in order to support the use as a
regular function.

There is a large design space here, and the main thing Nikita's proposal
brings to the table is slightly different assumptions about what such
records should mean. This _could_ let us shed some of the rougher edges of
ORF, at the price of having to lock in a notion of lenses.

I'm on the fence about whether it would be a good idea to burden Nikita's
proposal in the same fashion, but I think it'd be wise to explore it in
both fashions. My gut feeling though is that if we tie it up with the same
restrictions as ORF you just mostly get a less useful ORF with anonymous
record sugar thrown in.

-Edward


Re: GHC support for the new "record" package

2015-01-21 Thread Edward Kmett
On Wed, Jan 21, 2015 at 4:34 PM, Adam Gundry  wrote:

> I'm surprised and interested that you view this as a major source of
> complexity. From my point of view, I quite liked how the ability to
> overload fields as both selector functions and arbitrary lenses turned
> out. Compared to some of the hairy GHC internal details relating to name
> resolution, it feels really quite straightforward. Also, I've recently
> worked out how to simplify and generalise it somewhat (see [1] and [2]
> if you're curious).


I'm actually reasonably happy with the design we wound up with, but the
need to mangle every use of the accessor with a combinator in order to
extract a lens from it is a perpetual tax -- one that, by giving in and
picking a lens type and not having to _also_ provide a normal field
accessor, we could avoid.

> There is a large design space here, and the main thing Nikita's proposal
> > brings to the table is slightly different assumptions about what such
> > records should mean. This _could_ let us shed some of the rougher edges
> > of ORF, at the price of having to lock in a notion of lenses.
>
> Yes. It's good to explore the options. For what it's worth, I'm
> sceptical about blessing a particular notion of lenses unless it's
> really necessary, but I'm prepared to be convinced otherwise.


For users this means the difference between set (foo.bar) 12 and
set (le foo . le bar) 12 -- for some combinator le (hard to pick a good
name for) that turns an accessor into a lens. It means they always have to
be cognizant of that distinction. The inability to shed that tax in the
other design is the major pain point it has always had for me.

The user experience for it is / was going to be bad enough that I have
remained concerned about how well adoption would go compared to existing
approaches, which have more setup but offer cleaner usage.

> I'm on the fence about whether it would be a good idea to burden
> > Nikita's proposal in the same fashion, but I think it'd be wise to
> > explore it in both fashions. My gut feeling though is that if we tie it
> > up with the same restrictions as ORF you just mostly get a less useful
> > ORF with anonymous record sugar thrown in.
>
> I think there's a sensible story to tell about an incremental plan that
> starts with something like ORF and ends up with something like Nikita's
> anonymous records. I'll try to tell this story when I can rub a few more
> braincells together...
>

I definitely think there is a coherent story there, but I'm not sure I see
a way that such a story could end that avoids the concerns above.

I very much agree that it makes sense to keep both options on the table
though so that we can work through the attendant issues and trade-offs.

-Edward


Re: GHC support for the new "record" package

2015-01-22 Thread Edward Kmett
On Thu, Jan 22, 2015 at 4:31 AM, Adam Gundry  wrote:

> Actually, the simplifications I recently came up with could allow us to
> make uses of the field work as van Laarhoven lenses, other lenses *and*
> selector functions. In practice, however, I suspect this might lead to
> somewhat confusing error messages, so it might not be desirable.


Interesting. Have you actually tried this with a composition of your
simplified form? Because I don't see how that can work.

When we tried this before, we showed that there was a fundamental
limitation in the way the functional dependencies had to flow information
down the chain. Also, "foo.bar.baz" has very different interpretations
between the lens and normal accessors, and both produce functions, so it's
hard to see how this doesn't yield overlapping-instance hell.

-Edward


Re: GHC support for the new "record" package

2015-01-23 Thread Edward Kmett
On Fri, Jan 23, 2015 at 5:06 PM, Adam Gundry  wrote:

> Thanks for the feedback, Iavor!
>
> On 23/01/15 19:30, Iavor Diatchki wrote:
> > 2. I would propose that we simplify things further, and provide just one
> > class for overloading:
> >
> > class Field (name :: Symbol) rec rec' field field'
> >     | name rec  -> field
> >     , name rec' -> field'
> >     , name rec  field' -> rec'
> >     , name rec' field  -> rec
> >   where
> >     field :: Functor f => Proxy name -> (field -> f field') ->
> >              (rec -> f rec')
> >
> > I don't think we need to go into "lenses" at all, the `field` method
> > simply provides a functorial
> > update function similar to `mapM`.   Of course, one could use the `lens`
> > library to
> > get more functionality but this is entirely up to the programmer.
> >
> > When the ORF extension is enabled, GHC should simply generate an
> > instance of the class,
> > in a similar way to what the lens library does
>

> 3. I like the idea of `#x` desugaring into `field (Proxy :: Proxy "x")`,
> > but I don't like the concrete symbol choice:
> >   - # is a valid operator and a bunch of libraries use it, so it won't
> > be compatible with existing code.
>
> Ah. I didn't realise that, but assumed it was safe behind -XMagicHash.
> Yes, that's no good.
>
> >   - @x might be a better choice; then you could write things like:
> > view @x  rec
> >   set  @x 3rec
> >   over @x (+2) rec
>
> This could work, though it has the downside that we've been informally
> using @ for explicit type application for a long time! Does anyone know
> what the status of the proposed ExplicitTypeApplication extension is?


I'll confess I've been keen on stealing @foo for the purpose of (Proxy ::
Proxy foo) or (Proxy :: Proxy "foo") from the type application stuff for a
long time -- primarily because I remain rather dubious about how well the
type application stuff can work: once you take a type and it goes through a
usage/generalization cycle, the order of the types you can "apply" gets all
jumbled up, making type application very difficult to actually use. Proxies
on the other hand remain stable. I realize that I'm probably on the losing
side of that debate, however. But I think it is fair to say that that
little bit of dangling syntax will be a bone that is heavily fought over. ;)

>   - another nice idea (due to Eric Mertens, aka glguy), which allows us
> > to avoid additional special syntax is as follows:
> > - instead of using special syntax, reuse the module system
> > - designate a "magic" module name (e.g., GHC.Records)
> > - when the renamer sees a name imported from that module, it
> > "resolves" the name by desugaring it into whatever we want
> > - For example, if `GHC.Records.x` desugars into `field (Proxy ::
> > Proxy "x")`, we could write things like this:
> >
> > import GHC.Records as R
> >
> > view R.x  rec
> > set  R.x 3rec
> > over R.x (+2) rec
>
> Interesting; I think Edward suggested something similar earlier in this
> thread. Avoiding a special syntax is a definite advantage, but the need
> for a qualified name makes composing the resulting lenses a bit tiresome
> (R.x.R.y.R.z or R.x . R.y . R.z). I suppose one could do
>
> import GHC.Records (x, y, z)
> import MyModule hiding (x, y, z)
>
> but having to manually hide the selector functions and bring into scope
> the lenses is also annoying.


In the suggestion I made as a (c) option for how to proceed around field
names a few posts back in this thread I was hinting towards having an
explicit use of {| foo :: x |} somewhere in the module provide an implicit
import of

import Field (foo)

then users can always reference Field.foo explicitly if they don't have it
in local scope, and all the names share a common source.

Of course this was in the context a Nikita style {| ... |} rather than the
ORF { .. }.

If the Nikita records didn't make an accessor, because there's no way for
them to really do so, then there'd be nothing to conflict with.

Being able to use import and use them with ORF-style records would just be
gravy then. Users would be able to get those out of the box.

-Edward


Re: GHC support for the new "record" package

2015-01-23 Thread Edward Kmett
If the level of complaints I received when I stole (#) for use in lens is
any indication, er.. it is in very wide use. It was by far the most
contentious operator I grabbed. ;)

It seems to me that I'd not be in a hurry to both break existing code and
pay a long-term syntactic cost when we have options on the table that don't
require either; the "magic Field module" approach that both Eric and I
appear to have arrived at independently side-steps this issue nicely and
appears to result in a better user experience.

Keep in mind, one source of objections to operator-based sigils is that if
you put a sigil at the start of a lens the tax isn't one character but
two: there is a space you now need in order to avoid (.#) when chaining
these things. "foo.bar" vs. "#foo . #bar" -- the latter will always be uglier.

The `import Field (...)` approach results in users never having to pay more
syntactically than with the options they have available to them now, and,
being class-based, it is even beneficial to folks who don't use Nikita's
records.

-Edward

On Fri, Jan 23, 2015 at 5:47 PM, Greg Weber  wrote:

> If we only add syntax when the language extension is used then we are not
> clobbering everyone. # is not that common of an operator. I would much
> rather upset a few people by taking that operator back when they opt-in to
> turning the extension on than having a worse records implementation.
>
> On Fri, Jan 23, 2015 at 2:23 PM, Edward Kmett  wrote:
>
>>
>> On Fri, Jan 23, 2015 at 5:06 PM, Adam Gundry  wrote:
>>
>>> Thanks for the feedback, Iavor!
>>>
>>> On 23/01/15 19:30, Iavor Diatchki wrote:
>>> > 2. I would propose that we simplify things further, and provide just
>>> one
>>> > class for overloading:
>>> >
>>> > class Field (name :: Symbol) rec rec' field field'
>>> >     | name rec  -> field
>>> >     , name rec' -> field'
>>> >     , name rec  field' -> rec'
>>> >     , name rec' field  -> rec
>>> >   where
>>> >     field :: Functor f => Proxy name -> (field -> f field') ->
>>> >              (rec -> f rec')
>>> >
>>> > I don't think we need to go into "lenses" at all, the `field` method
>>> > simply provides a functorial
>>> > update function similar to `mapM`.   Of course, one could use the
>>> `lens`
>>> > library to
>>> > get more functionality but this is entirely up to the programmer.
>>> >
>>> > When the ORF extension is enabled, GHC should simply generate an
>>> > instance of the class,
>>> > in a similar way to what the lens library does
>>>
>>
>> > 3. I like the idea of `#x` desugaring into `field (Proxy :: Proxy "x")`,
>>> > but I don't like the concrete symbol choice:
>>> >   - # is a valid operator and a bunch of libraries use it, so it won't
>>> > be compatible with existing code.
>>>
>>> Ah. I didn't realise that, but assumed it was safe behind -XMagicHash.
>>> Yes, that's no good.
>>>
>>> >   - @x might be a better choice; then you could write things like:
>>> > view @x  rec
>>> >   set  @x 3rec
>>> >   over @x (+2) rec
>>>
>>> This could work, though it has the downside that we've been informally
>>> using @ for explicit type application for a long time! Does anyone know
>>> what the status of the proposed ExplicitTypeApplication extension is?
>>
>>
>> I'll confess I've been keen on stealing @foo for the purpose of (Proxy ::
>> Proxy foo) or (Proxy :: Proxy "foo") from the type application stuff for a
>> long time -- primarily because I remain rather dubious about how well the
>> type application stuff can work: once you take a type and it goes through a
>> usage/generalization cycle, the order of the types you can "apply" gets all
>> jumbled up, making type application very difficult to actually use. Proxies
>> on the other hand remain stable. I realize that I'm probably on the losing
>> side of that debate, however. But I think it is fair to say that that
>> little bit of dangling syntax will be a bone that is heavily fought over. ;)
>>
>> >   - another nice idea (due to Eric Mertens, aka glguy), which allows us
>>> > to avoid additional special syntax is as follows:
>>> > -

Re: GHC support for the new "record" package

2015-01-26 Thread Edward Kmett
Personally, I don't like the sigil mangled version at all.

If it is then further encumbered by a combinator it is now several symbols
longer at every single use site than other alternatives put forth in this
thread. =(

xx #bar . xx #baz

or

xx @bar . xx @baz

compares badly enough against

bar.baz

for some as yet unnamed combinator xx and is a big enough tax for all users
to unavoidably pay that I fear it would greatly hinder adoption.

The former also has the disadvantage of stealing an operator that is
already in wide use.

Even assuming the fixity issues can be worked out for some other set of
operators to glue these together, we're still looking at

x^!? #bar!? #baz

vs.

x^.bar.baz

with another set of arcane rules to switch back and forth out of this to
deal with the lenses/traversals/prisms/etc that many folks have in their
code today.

It is something like 3 extra sets of symbols to memorize plus a tax of 3
characters per lens use site.

I know that I for one would hesitate to throw over my template haskell
generated lenses for something that was noisier at every use site. For all
that lenses are complex internally, they are a lot less arbitrary than that.

The import Field trick is magic, yes, but it has the benefit of being the
first approach I've seen where the resulting syntax can be as light as what
the user can generate by hand today.

-Edward

On Mon, Jan 26, 2015 at 8:50 AM, Simon Peyton Jones 
wrote:

> |  "wired" into record selectors, which can't be undone later. I think we
> |  can fix some of that by desugaring record definitions to:
> |
> |  data T = MkT {x :: Int}
> |
> |  instance FieldSelector "x" T Int where
> |   fieldSelector (MkT x) = x
> |
> |  Then someone can, in a library, define:
> |
> |  instance FieldSelector x r a => IV x (r -> a) where
> |   iv = fieldSelector
> |
> |  Now that records don't mention IV, we are free to provide lots of
> |  different instances, each capturing some properties of each field,
> |  without committing to any one style of lens at this point. Therefore,
> |  we could have record desugaring also produce:
> |
> |  instance FieldSetter "x" T Int where
> |  fieldSet v (T _) = T v
> |
> |  And also:
> |
> |  instance FieldSTAB "x" T Int where
> |  fieldSTAB = ... the stab lens ...
>
> OK, I buy this.
>
> We generate FieldSelector instances where possible, and FieldSetter
> instances where possible (fewer cases).
>
> Fine.
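
For concreteness, the desugaring quoted above renders as compilable code
roughly like this -- a sketch only, not the proposal's final form, with
the Symbol index taken to be the field name; the FieldSTAB instance is
elided here just as in the original:

{-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
             FunctionalDependencies, FlexibleInstances #-}
import GHC.TypeLits (Symbol)

data T = MkT Int   -- {x :: Int}, with the selector suppressed

class FieldSelector (x :: Symbol) r a | x r -> a where
  fieldSelector :: r -> a

instance FieldSelector "x" T Int where
  fieldSelector (MkT v) = v

class FieldSetter (x :: Symbol) r a | x r -> a where
  fieldSet :: a -> r -> r

instance FieldSetter "x" T Int where
  fieldSet v (MkT _) = MkT v

With the IV instance from the wiki proposal layered on top, #x (or however
the sugar ends up being spelled) would then resolve to fieldSelector at
type T -> Int.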
>
>
>
> Cutting to the chase, if we are beginning to converge, could someone
> (Adam, Neil?) modify the Redesign page
> https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesign
> to focus on plan B only; and add this FieldGetter/Setter stuff?
>
> It's confusing when we have too many things in play.  I'm sick at the
> moment, so I'm going home to bed -- hence handing off in a hopeful way to
> you two.
>
> I have added Edward's "import Field(x)" suggestion under syntax, although I
> don't really like it.
>
> One last thing: Edward, could you live with lenses coming from #x being a
> newtype (Lens a b), or a stab variant, rather than actually being a higher
> rank function etc?  Of course lens composition would no longer be function
> composition, but that might not be so terrible; ".." perhaps.  It would
> make error messages vastly more perspicuous. And, much as I love lenses, I
> think it's a mistake not to abstraction; it dramatically limits your future
> wiggle room.
>
>
>
> I really think we are finally converging.
>
> Simon


Re: GHC support for the new "record" package

2015-01-26 Thread Edward Kmett
On Mon, Jan 26, 2015 at 4:18 PM, Nikita Volkov 
wrote:
>
> I.e., it would be `#bar . #baz`.
>

Note: once you start using a data type, then due to the shape of Category,
(.) necessarily fails to allow you to ever do type-changing assignment --
or you have to use yet another operator -- so that snippet cannot work
without giving up something we can do today. OTOH, using the lens-style
story, no types are needed that aren't already available in base and, done
right, no existing operators need be stolen from the user, and
type-changing assignment is trivial.

I confess that I, like many in this thread, am less than comfortable with the
notion of bringing chunks of lens into base. Frankly, I'd casually
dismissed such concerns as a job for Haskell 2025. ;) However, I've been
trying to consider it with an open mind here, because the alternatives
proposed thus far lock in uglier code than the status quo with more
limitations while simultaneously being harder to explain.

-Edward

Introducing such a change would, of course, exclude the possibility of
> making "lens"-compatible libraries without depending on "lens", like I did
> with "record". However the most basic functionality of "lens" could be
> extracted into a separate library with a minimum of transitive
> dependencies, then the reasons for not depending on it would simply
> dissolve.
>
>
> 2015-01-26 23:22 GMT+03:00 Edward Kmett :
>
>> Personally, I don't like the sigil mangled version at all.
>>
>> If it is then further encumbered by a combinator it is now several
>> symbols longer at every single use site than other alternatives put forth
>> in this thread. =(
>>
>> xx #bar . xx #baz
>>
>> or
>>
>> xx @bar . xx @baz
>>
>> compares badly enough against
>>
>> bar.baz
>>
>> for some as yet unnamed combinator xx and is a big enough tax for all
>> users to unavoidably pay that I fear it would greatly hinder adoption.
>>
>> The former also has the disadvantage of stealing an operator that is
>> already in wide use.
>>
>> Even assuming the fixity issues can be worked out for some other set of
>> operators to glue these together, we're still looking at
>>
>> x^!? #bar!? #baz
>>
>> vs.
>>
>> x^.bar.baz
>>
>> with another set of arcane rules to switch back and forth out of this to
>> deal with the lenses/traversals/prisms/etc that many folks have in their
>> code today.
>>
>> It is something like 3 extra sets of symbols to memorize plus a tax of 3
>> characters per lens use site.
>>
>> I know that I for one would hesitate to throw over my template haskell
>> generated lenses for something that was noisier at every use site. For all
>> that lenses are complex internally, they are a lot less arbitrary than that.
>>
>> The import Field trick is magic, yes, but it has the benefit of being the
>> first approach I've seen where the resulting syntax can be as light as what
>> the user can generate by hand today.
>>
>> -Edward
>>
>> On Mon, Jan 26, 2015 at 8:50 AM, Simon Peyton Jones <
>> simo...@microsoft.com> wrote:
>>
>>> |  "wired" into record selectors, which can't be undone later. I think we
>>> |  can fix some of that by desugaring record definitions to:
>>> |
>>> |  data T = MkT {x :: Int}
>>> |
>>> |  instance FieldSelector "x" T Int where
>>> |   fieldSelector (MkT x) = x
>>> |
>>> |  Then someone can, in a library, define:
>>> |
>>> |  instance FieldSelector x r a => IV x (r -> a) where
>>> |   iv = fieldSelector
>>> |
>>> |  Now that records don't mention IV, we are free to provide lots of
>>> |  different instances, each capturing some properties of each field,
>>> |  without committing to any one style of lens at this point. Therefore,
>>> |  we could have record desugaring also produce:
>>> |
>>> |  instance FieldSetter "x" T Int where
>>> |  fieldSet v (T _) = T v
>>> |
>>> |  And also:
>>> |
>>> |  instance FieldSTAB "x" T Int where
>>> |  fieldSTAB = ... the stab lens ...
>>>
>>> OK, I buy this.
>>>
>>> We generate FieldSelector instances where possible, and FieldSetter
>>> instances where possible (fewer cases).
>>>
>>> Fine.
>>>
>>>
>>>
>>> Cutting to the chase, if we are beginning to converge, could someone
>>> (Adam, Neil?) modify the Redesign page
>>> https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesign
>>> to focus on plan B only; and add this FieldGetter/Setter stuff?

Re: GHC support for the new "record" package

2015-01-26 Thread Edward Kmett
On Mon, Jan 26, 2015 at 4:41 PM, Simon Peyton Jones 
wrote:

>Personally, I don't like the sigil mangled version at all.
>
> You don’t comment on the relationship with implicit parameters.  Are they
> ok with sigils?
>

I don't have too many opinions about implicit parameters, but they don't
really see a lot of use, which makes me somewhat leery of copying the
pattern. ;)

  If it is then further encumbered by a combinator it is now several
> symbols longer at every single use site than other alternatives put forth
> in this thread. =(
>
> No, as Nikita says, under the “Redesign” proposal it would be #bar . #baz
>
The problem is that if you make #bar an instance of Category so that it can
use (.) then it will fail to allow type changing re-assignment.


>  The import Field trick is magic, yes, but it has the benefit of being
> the first approach I've seen where the resulting syntax can be as light as
> what the user can generate by hand today.
>
> That’s why I added it to the “Redesign” page. It seems viable to me; a
> clever idea, thank you.  Still, personally I prefer #x because of the link
> with implicit parameters.  It would be interesting to know what others
> think.
>
Admittedly @bar . @baz has the benefit that it introduces no namespacing
conflicts at all.

If we really had to go with some kind of sigil based solution, I _could_
rally behind that one, but I do like it a lot less than the import trick,
if only because the import trick gets rid of that last '@' and space we
have on every accessor and you have to admit that the existing

foo^.bar.baz.quux

idiom reads a lot more cleanly than

foo ^. @bar . @baz . @quux

ever can.

(I used @foo above because it avoids any potential conflict with existing
user code as @ isn't a legal operator)

 I confess that I, like many in this thread, am less than comfortable with
> the notion of bringing chunks of lens into base. Frankly, I'd casually
> dismissed such concerns as a job for Haskell 2025. ;) However, I've been
> trying to consider it with an open mind here, because the alternatives
> proposed thus far lock in uglier code than the status quo with more
> limitations while simultaneously being harder to explain.
>
> I don’t think anyone is suggesting adding any of lens are they?  Which
> bits did you think were being suggested for addition?
>
I was mostly referring to the use of the (a -> f b) -> s -> f t form.

>  Note: once you start using a data-type then (.) necessarily fails to
> allow you to ever do type changing assignment, due to the shape of
> Category, or you have to use yet another operator, so that snippet cannot
> work without giving up something we can do today. OTOH: Using the
> lens-style story, no types are needed here that aren't already available in
> base and, done right, no existing operators need be stolen from the user,
> and type changing assignment is trivial.
>
> I’m afraid I couldn’t understand this paragraph at all.  Perhaps some
> examples would help, to illustrate
>
what you mean?
>
I was writing that paragraph in response to your query about whether it'd
make sense to have @foo return some data type: it comes at a rather high
cost.

Lens gets away with using (.) to compose because it's really using
functions with a funny mapM-like shape: (a -> f b) -> (s -> f t) is still a
function on the outside; it just happens to have a domain that also
looks like a function (a -> f b).

If we make the outside type constructor a data type with its own Category
instance, and just go `Accessor s a` then it either loses its ability to
change out types in s  -- e.g. change out the type of the second member in
a pair, or it loses its ability to compose.
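
To spell out why (.) works there: these are the standard van Laarhoven
definitions as found in lens today (nothing below is new or GHC-specific),
and type-changing update falls out of plain function composition:

{-# LANGUAGE RankNTypes #-}
import Data.Functor.Identity (Identity(..))

type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t

_1 :: Lens (a, c) (b, c) a b
_1 f (a, c) = fmap (\b -> (b, c)) (f a)

set :: Lens s t a b -> b -> s -> t
set l b = runIdentity . l (const (Identity b))

-- ordinary (.) composes lenses, and the composite changes types:
example :: ((Int, Char), Bool) -> ((String, Char), Bool)
example = set (_1 . _1) "hello"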

We gave up the latter to make Gundry's proposal work as we were forced into
that shape by trying to return a combinators that could be overloaded to
act as an existing accessor function.

To keep categorical composition for the accessor, you might at first think
we can use a product kind or something to get Accessor '(s,t) '(a,b) with
both indices but that gets stuck when you go to define `id`, so necessarily
such a version of things winds up needing its own set of combinators.

-Edward


Re: GHC support for the new "record" package

2015-01-26 Thread Edward Kmett
I'm also rather worried, looking over the IV proposal, that it just doesn't
actually work.

We actually tried the code under "Haskell 98 records" back when Gundry
first started his proposal and it fell apart when you went to compose them.

A fundep/class-associated type in the class is a stronger constraint than a
type equality defined on an individual instance.

I don't see how

@foo . @bar . @baz

(or #foo . #bar . #baz as would be written under the concrete proposal on
the wiki)

is ever supposed to figure out the intermediate types when working
polymorphically in the data type involved.

What happens when the type of that chain of accessors is left to inference?
You get stuck wallowing in AllowAmbiguousTypes territory:

(#foo . #bar . #baz) :: (IV "foo" (c -> d), IV "bar" (b -> c), IV "baz" (a
-> b)) => a -> d

has variables 'b' and 'c' that don't occur on the right-hand side, and
which are only determinable by knowing that the instances you expect to see
look something like:

instance (a ~ Bool) => IV "x" (S -> a) where
  iv (MkS x) = x

but that is too weak to figure out that "S" determines "a" unless S is
already known, even if we just limit ourselves to field accessors as
functions.

-Edward


On Mon, Jan 26, 2015 at 7:43 PM, Edward Kmett  wrote:

> On Mon, Jan 26, 2015 at 4:41 PM, Simon Peyton Jones  wrote:
>
>>Personally, I don't like the sigil mangled version at all.
>>
>> You don’t comment on the relationship with implicit parameters.  Are they
>> ok with sigils?
>>
>
> I don't have too many opinions about implicit parameters, but they don't
> really see a lot of use, which makes me somewhat leery of copying the
> pattern. ;)
>
>   If it is then further encumbered by a combinator it is now several
>> symbols longer at every single use site than other alternatives put forth
>> in this thread. =(
>>
>> No, as Nikita says, under the “Redesign” proposal it would be #bar . #baz
>>
> The problem is that if you make #bar an instance of Category so that it
> can use (.) then it will fail to allow type changing re-assignment.
>
>
>>  The import Field trick is magic, yes, but it has the benefit of being
>> the first approach I've seen where the resulting syntax can be as light as
>> what the user can generate by hand today.
>>
>> That’s why I added it to the “Redesign” page. It seems viable to me; a
>> clever idea, thank you.  Still, personally I prefer #x because of the link
>> with implicit parameters.  It would be interesting to know what others
>> think.
>>
> Admittedly @bar . @baz has the benefit that it introduces no namespacing
> conflicts at all.
>
> If we really had to go with some kind of sigil based solution, I _could_
> rally behind that one, but I do like it a lot less than the import trick,
> if only because the import trick gets rid of that last '@' and space we
> have on every accessor and you have to admit that the existing
>
> foo^.bar.baz.quux
>
> idiom reads a lot more cleanly than
>
> foo ^. @bar . @baz . @quux
>
> ever can.
>
> (I used @foo above because it avoids any potential conflict with existing
> user code as @ isn't a legal operator)
>
>>  I confess that I, like many in this thread, am less than comfortable with
>> the notion of bringing chunks of lens into base. Frankly, I'd casually
>> dismissed such concerns as a job for Haskell 2025. ;) However, I've been
>> trying to consider it with an open mind here, because the alternatives
>> proposed thus far lock in uglier code than the status quo with more
>> limitations while simultaneously being harder to explain.
>>
>> I don’t think anyone is suggesting adding any of lens are they?  Which
>> bits did you think were being suggested for addition?
>>
> I was mostly referring to the use of the (a -> f b) -> s -> f t form.
>
>>  Note: once you start using a data-type then (.) necessarily fails to
>> allow you to ever do type changing assignment, due to the shape of
>> Category, or you have to use yet another operator, so that snippet cannot
>> work without giving up something we can do today. OTOH: Using the
>> lens-style story, no types are needed here that isn't already available in
>> base and, done right, no existing operators need be stolen from the user,
>> and type changing assignment is trivial.
>>
>> I’m afraid I couldn’t understand this paragraph at all.  Perhaps some
>> examples would help, to illustrate
>>
> what you mean?
>>
> I was writing that paragraph in response to your query about whether it'd
> make sense to have @foo return some data type.

Re: GHC support for the new "record" package

2015-01-27 Thread Edward Kmett
On Tue, Jan 27, 2015 at 4:07 AM, Adam Gundry  wrote:
>
> AFAICS it's still an open question as to whether that instance
> should provide
>
> (a) selector functions r -> a
> (b) lenses (a -> f b) -> s -> f t
> (c) both
> (d) neither
>
> but I'm starting to think (b) is the sanest option.
>

Glad I'm not the only voice in the wilderness ;)

On the syntax question, Edward, could you say more about how you would
> expect the magic imports to work? If a module both declares (or imports)
> a record field `x` and magically imports `x`, what does a use of `x`
> mean? (In the original ORF, we didn't have the magic module, but just
> said that record fields were automatically polymorphic... that works but
> is a bit fiddly in the renamer, and isn't a conservative extension.)
>

The straw man I was offering when this was just about {| foo :: .., ... |}
-style records would be to have those bring into scope the Field.foo lenses
by default as a courtesy, since there is nothing involved in that that
necessarily ever defines a normal field accessor.

I'm very much not convinced one way or the other if such a courtesy import
would be better than requiring the user to do it by hand.

It is when we start mixing this with ORF that things get confusing, which
is of course why we're having this nice big discussion.

Having definitions we bring from that module able to be used with normal
records via something like the ORF makes sense. It invites some headaches
though, as higher-rank fields seem to be a somewhat insurmountable obstacle
to the latter, whereas they can be unceremoniously ignored in anonymous
records, since they didn't exist before.

As Neil noted, you _can_ write `foo = @foo` to make such an accessor have
the lighter weight syntax. Of course, once folks start using template
haskell to do so, we get right back to where we are today. It also invites
the question of where such exports should be made.

I'm less sanguine about the proposed IV class, as it doesn't actually work
in its current incarnation in the proposal as mentioned above.

Assuming it has been modified to actually compose and infer, the benefit of
the `import Field (...)` or naked @foo approach is that if two modules
bring in the same field, they are both compatible when imported into a third
module.

One half-way serious option might be to have that Field or Lens or whatever
module just export `foo = @foo` definitions from a canonical place so they
can be shared, and to decide if folks have to import it explicitly to use
it.

Then @foo could be the lens to get at the contents of the field, can do
type changing assignment, and users can import the fields to avoid the
noise.

I confess, the solution there feels quite heavy, though.

-Edward

Adam
>
>
> On 27/01/15 00:59, Edward Kmett wrote:
> > I'm also rather worried, looking over the IV proposal, that it just
> > doesn't actually work.
> >
> > We actually tried the code under "Haskell 98 records" back when Gundry
> > first started his proposal and it fell apart when you went to compose
> them.
> >
> > A fundep/class associated type in the class is a stronger constraint
> > than a type equality defined on an individual instance.
> >
> > I don't see how
> >
> > @foo . @bar . @baz
> >
> > (or #foo . #bar . #baz as would be written under the concrete proposal
> > on the wiki)
> >
> > is ever supposed to figure out the intermediate types when working
> > polymorphically in the data type involved.
> >
> > What happens when the type of that chain of accessors is left to
> > inference? You get stuck wallowing in AllowAmbiguousTypes territory:
> >
> > (#foo . #bar . #baz) :: (IV "foo" (c -> d), IV "bar" (b -> c), IV "baz"
> > (a -> b)) => a -> d
> >
> > has variables 'b' and 'c' that don't occur on the right hand side, and
> > which are only determinable by knowing that the instances you expect to
> > see look something like:
> >
> > instance (a ~ Bool) => IV "x" (S -> a) where
> >   iv (MkS x) = x
> >
> > but that is too weak to figure out that "S" determines "a" unless S is
> > already known, even if we just limit ourselves to field accessors as
> > functions.
> >
> > -Edward
>
>
> --
> Adam Gundry, Haskell Consultant
> Well-Typed LLP, http://www.well-typed.com/
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: GHC support for the new "record" package

2015-01-27 Thread Edward Kmett
On Tue, Jan 27, 2015 at 6:12 AM, Simon Peyton Jones 
wrote:

> |  1. What are the IV instances provided in base? These could give
> |  selector functions, lenses, both or neither.
>
> My instinct: just selector functions.  Leave lenses for a lens package.
>

How do these selectors actually typecheck when composed?

Ignoring lenses altogether for the moment, I don't see how IV works.
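
To make the trouble concrete, here is a minimal sketch of the shape I
understand the proposal to have -- my reconstruction, not the wiki's exact
code, and note it already needs AllowAmbiguousTypes just to be accepted:

{-# LANGUAGE AllowAmbiguousTypes, DataKinds, FlexibleInstances,
             KindSignatures, MultiParamTypeClasses, TypeFamilies #-}
import GHC.TypeLits (Symbol)

-- One constraint per field name; an instance provides the selector at a
-- concrete type.
class IV (name :: Symbol) a where
  iv :: a

data S = MkS Bool

-- The equality-constraint trick from my earlier message:
instance a ~ Bool => IV "x" (S -> a) where
  iv (MkS x) = x

-- Composing three such selectors at an unknown record type yields
--
--   (IV "foo" (c -> d), IV "bar" (b -> c), IV "baz" (a -> b)) => a -> d
--
-- where b and c occur only in the constraints, so nothing determines them
-- and inference gets stuck.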



> I still have not understood the argument for lenses being a function
> rather that a newtype wrapping that function; apart from the (valuable)
> ability to re-use ordinary (.), which is cute.  Edward has explained this
> several time, but I have failed to understand.


You can make a data type

data Lens s a = Lens (s -> a) (a -> s -> s)

or

newtype Lens s a = Lens (s -> (a, a -> s))

The latter is basically the approach I used to take in my old data-lens
library.

This works great for lenses that don't let you change types.

You can write a Category instance for this notion of lens.

You can make it compose the way functions normally compose (or you can flip
the arguments and make it compose the way lenses in the lens library do;
here you have an option.)
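
For concreteness, a minimal sketch of that instance for the newtype version
above, composing in the function-like order:

import Prelude hiding (id, (.))
import Control.Category (Category (..))

newtype Lens s a = Lens (s -> (a, a -> s))

instance Category Lens where
  id = Lens (\s -> (s, \a -> a))
  Lens g . Lens f = Lens $ \s ->
    let (a, putA) = f s   -- view the outer field, remembering how to put it back
        (b, putB) = g a   -- view the inner field within it
    in  (b, putA . putB)  -- writing the inner field threads back outwards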

Now, expand it to let you do type changing assignment.

newtype Lens s t a b = Lens (s -> a) (s -> b -> t)

Now we have 4 arguments, but Category wants 2.

I've punted a way-too-messy aside about why 4 arguments are used to the
end. [*]

You can come up with a horrible way in which you can encode a GADT

data Lens :: (*,*) -> (*,*) -> * where
  Lens :: (s -> a) -> (s -> b -> t) -> Lens '(s,t) '(a,b)

but when you go to define

instance Category Lens where
  id = ...

you'd get stuck, because we can't prove that all inhabitants of (*,*) look
like '(a,b) for some types a and b.

On the other hand, you can make the data type too big

data Lens :: * -> * -> * where
  Lens :: (s -> a) -> (s -> b -> t) -> Lens (s,t) (a,b)
  Id :: Lens a a

but now you can distinguish too much information, GHC is doing case
analysis everywhere, etc.

Performance drops like a stone and it doesn't fit the abstraction.

In short, using a dedicated data type costs you access to (.) for
composition or costs you the ability to let the types change.

-Edward

[*] Why 4 arguments?

We can make up our own combinators for putting these things together, but
we can't use (.) from the Prelude or even from Control.Category.

There are lots of ways to motivate the 4 argument version:

Logically there are two type families involved: the 'inner' family and the
'outer' one, and the lens type looks like

outer i is isomorphic to the pair of some 'complement' that doesn't depend
on the index i, and some inner i.

outer i <-> (complement, inner i)

We can't talk about such families in Haskell, though; we need them to
compose by pullback/unification, so we fake it by using two instantiations
of the schema

outer i -> (inner i, inner j -> outer j)

which is enough for 99% of the things a user wants to say with a lens or
field accessor.
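
To see in one self-contained sketch the two things the function encoding
buys -- ordinary Prelude (.) for composition, and type-changing update:

{-# LANGUAGE RankNTypes #-}

type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t

_1 :: Lens (a, c) (b, c) a b
_1 f (a, c) = fmap (\b -> (b, c)) (f a)

_2 :: Lens (c, a) (c, b) a b
_2 f (c, a) = fmap (\b -> (c, b)) (f a)

-- Plain function composition, and the composite still changes types:
inner :: Lens ((c, a), d) ((c, b), d) a b
inner = _1 . _2
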
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: GHC support for the new "record" package

2015-01-28 Thread Edward Kmett
There is a problem with the old TRex syntax.

In a world with kind signatures and rank-2 types, it would appear that

type Point2D = Rec (x :: Coord, y :: Coord)

is ambiguous.

Is Coord a kind signature being applied to x and y, which are type variables
brought into scope implicitly as

   type Point2D = forall (x :: Coord, y :: Coord) => Rec (x, y)

would make more explicit?

e.g.

type Lens s t a b = Functor f => (a -> f b) -> s -> f t

works today in GHC, even though f isn't explicitly scoped, and elaborates to:

type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t

-Edward

On Wed, Jan 28, 2015 at 4:48 PM, Nikita Volkov 
wrote:

> Chris, this is great! Looks like we can even get rid of the Rec prefix!
>
> - A phrase in round braces and with :: is itself unambiguous in the type
>   context.
> - A phrase in round braces with = symbols is unambiguous in the
>   expression context.
>
> Concerning the pattern context, a solution needs to be found, though. But
> the two points above are enough for me to fall in love with this direction!
> The {| braces had too icky a touch to them, and the plain { required
> the user to choose whether to use the standard record syntax or anonymous
> one on the module scale, but not both.
>
>
> 2015-01-29 0:26 GMT+03:00 Christopher Done :
>
>> There’s too much to absorb in this discussion at the moment and I’m
>> late to the party anyway, but I would like to make a small note on
>> syntax. Given that this is very similar to TRex both in behaviour and
>> syntactic means of construction, why not just take TRex’s actual
>> syntax? http://en.wikipedia.org/wiki/Hugs#Extensible_records
>>
>> type Point2D = Rec (x::Coord, y::Coord)
>> point2D = (x=1, y=1) :: Point2D
>> (#x point)
>>
>> It seems like it wouldn’t create any syntactical ambiguities (which is
>> probably why the Hugs developers chose it).
>>
>> Ciao
>>
>> On 20 January 2015 at 22:44, Simon Marlow  wrote:
>> > For those who haven't seen this, Nikita Volkov proposed a new approach
>> to
>> > anonymous records, which can be found in the "record" package on
>> Hackage:
>> > http://hackage.haskell.org/package/record
>> >
>> > It had a *lot* of attention on Reddit:
>> > http://nikita-volkov.github.io/record/
>> >
>> > Now, the solution is very nice and lightweight, but because it is
>> > implemented outside GHC it relies on quasi-quotation (amazing that it
>> can be
>> > done at all!).  It has some limitations because it needs to parse
>> Haskell
>> > syntax, and Haskell is big.  So we could make this a lot smoother, both
>> for
>> > the implementation and the user, by directly supporting anonymous record
>> > syntax in GHC.  Obviously we'd have to move the library code into base
>> too.
>> >
>> > This message is by way of kicking off the discussion, since nobody else
>> > seems to have done so yet.  Can we agree that this is the right thing
>> and
>> > should be directly supported by GHC?  At this point we'd be aiming for
>> 7.12.
>> >
>> > Who is interested in working on this?  Nikita?
>> >
>> > There are various design decisions to think about.  For example, when
>> the
>> > quasi-quote brackets are removed, the syntax will conflict with the
>> existing
>> > record syntax.  The syntax ends up being similar to Simon's 2003
>> proposal
>> >
>> http://research.microsoft.com/en-us/um/people/simonpj/Haskell/records.html
>> > (there are major differences though, notably the use of lenses for
>> selection
>> > and update).
>> >
>> > I created a template wiki page:
>> > https://ghc.haskell.org/trac/ghc/wiki/Records/Volkov
>> >
>> > Cheers,
>> > Simon
>> > ___
>> > ghc-devs mailing list
>> > ghc-devs@haskell.org
>> > http://www.haskell.org/mailman/listinfo/ghc-devs
>>
>
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: GHC support for the new "record" package

2015-01-28 Thread Edward Kmett
Alas, the 'f' isn't the only thing in the lens library signatures that gets
overloaded in practice.

Isomorphisms and prisms overload the shape to look more like p a (f b) -> p
s (f t), rather than (a -> f b) -> s -> f t

Indexing, which matters for folding and traversing over containers with
keys, overloads with a shape like p a (f b) -> s -> f t

By the time you reach that level of generality, and add type-changing, the
newtype is sort of just dangling there actively getting in the way and
providing no actual encapsulation.

Now, you could make up a bunch of individual ad hoc data types for all the
different lens types we happen to know about today.

However, it is deeply insightful that it is the form that lenses take that
lets us _find_ all the different lens-likes that we use today.

Half of them we had no idea were out there until we spent time exploring
the impact of the design we have.

Switching to a representation where these things arise from O(n^2) ad-hoc
rules rather than the existing relationships between mostly "common sense"
classes seems like a poor trade.

In Scala, Julien Truffaut has a library called Monocle, which aspires to be
a port of the ideas of lens to Scala. Due to the vagaries of the language
the only option they have open to them is to implement things the way you
are looking at exploring here. It doesn't work out well. Vastly more effort
yields a library full of boilerplate that handles a much smaller scope and
yields no insight into why these things are related.
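
As a small, library-free illustration of that insight (a sketch using only
the bare van Laarhoven types):

{-# LANGUAGE RankNTypes #-}

type Lens s t a b      = forall f. Functor f     => (a -> f b) -> s -> f t
type Traversal s t a b = forall f. Applicative f => (a -> f b) -> s -> f t

_1 :: Lens (a, c) (b, c) a b
_1 f (a, c) = fmap (\b -> (b, c)) (f a)

-- 'traverse' is already a Traversal, and plain (.) composes the two; the
-- composite is inferred as a Traversal because Applicative, the stronger
-- constraint, wins. No O(n^2) table of composition rules is needed.
firstEach :: Traversable t => Traversal (t a, c) (t b, c) a b
firstEach = _1 . traverse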

-Edward


On Wed, Jan 28, 2015 at 5:32 AM, Simon Peyton Jones 
wrote:

>   As soon as you have a distinct Lens type, and use something
> Category-like for composition, you are limiting yourself to composing two
> lenses to get back a lens (barring a terrible mptc 'solution'). And that is
> weak. The only reason I (personally) think lens pulls its weight, and is
> worth using (unlike every prior lens library, which I never bothered with),
> is the ability for lenses, prisms, ismorphisms, traversals, folds, etc. to
> properly degrade to one another and compose automatically.​
>
> Aha.  I keep asking whether it’s just the cute ability to re-use (.) that
> justifies the lack of abstraction in the Lens type.  But Dan’s comment has
> made me remember something from my own talk on the subject.  Here are the
> types of lenses and traversals (2-parameter versions):
>
>
>
> type Lens'      s a = forall f. Functor f     => (a -> f a) -> (s -> f s)
>
> type Traversal' s a = forall f. Applicative f => (a -> f a) -> (s -> f s)
>
>
>
> Suppose we have
>
> ln1 :: Lens'  s1 s2
>
> tr1 :: Traversal' s1 s2
>
> ln2 :: Lens'  s2 a
>
> tr2 :: Traversal' s2 a
>
>
>
> Now these compositions are all well typed
>
> ln1 . ln2 :: Lens' s1 a
>
> tr1 . tr2 :: Traversal' s1 a
>
> tr1 . ln2 :: Traversal' s1 a
>
> ln1 . tr2 :: Traversal' s1 a
>
>
>
> which is quite remarkable.  If Lens’ and Traversal’ were newtypes, you’d
> need four different operators.  (I think that what Dan means by “a terrible
> mptc solution” is trying to overload those four operators into one.)
>
>
>
> I don’t know if this exhausts the reasons that lenses are not abstract.  I
> would love to know more, explained in a smilar style.
>
>
>
> Incidentally has anyone explored this?
>
>
>
> newtype PolyLens c s a = PL (forall f. c f => (a -> f a) -> s -> f s)
>
>
>
> I’ve just abstracted over the Functor/Applicative part, so that Lens’ and
> Traversal’ are both PolyLenses.  Now perhaps we can do (.), with a type like
>
>
>
> (.) :: PolyLens c1 s1 s2 -> PolyLens c2 s2 a -> PolyLens (And c1 c2) s1 a
>
>
>
> where And is a type function
>
>
>
> type instance And Functor Applicative = Applicative
>
> etc
>
>
>
> I have no idea whether this could be made to work out, but it seems like
> an obvious avenue so I wonder if anyone has explored it.
>
>
>
> Simon
>
>
>
> *From:* Dan Doel [mailto:dan.d...@gmail.com]
> *Sent:* 28 January 2015 00:27
> *To:* Edward Kmett
> *Cc:* Simon Peyton Jones; ghc-devs@haskell.org
> *Subject:* Re: GHC support for the new "record" package
>
>
>
> On Tue, Jan 27, 2015 at 6:47 PM, Edward Kmett  wrote:
>
>
>
> This works great for lenses that don't let you change types.
>
>
>
> ​This is not the only restriction required for this to be an acceptable
> solution.
>
> As soon as you have a distinct Lens type, and use something Category-like
> for composition, you are limiting yourself to composing two lens

Re: Delaying 7.10?

2015-01-29 Thread Edward Kmett
I personally would rather see this issue given the time to be resolved
correctly than rush to release 7.10 now because of a self-imposed deadline.

An unsafeCoerce bug, especially one which affects SafeHaskell, pretty much
trumps all in my eyes.

-Edward

On Thu, Jan 29, 2015 at 2:54 PM, Austin Seipp  wrote:

> After thinking about it a little, I'm fine with pushing the release out to
> March. I think #9858 is the more serious of our concerns vs a raging
> debate, too.
>
> My only concern really is dealing with the merging of such a patch. For
> example, if the patch to fix this is actually as wide-ranging as we
> believe, I can definitely foresee a merge conflict with, say,
> the recent -fwarn-redundant-constraints, which I've managed to leave out of
> 7.10 so far.
>
> In any case, with some more time, we can work those details out.
>
> On Thursday, January 29, 2015, Simon Peyton Jones 
> wrote:
>
>>  Friends
>>
>> In a call with a bunch of type hackers, we were discussing
>>
>>https://ghc.haskell.org/trac/ghc/ticket/9858
>>
>> This is a pretty serious bug.  It allows a malicious person to construct
>> his own unsafeCoerce, and so completely subverts Safe Haskell.
>>
>> Actually there are two bugs (see comment:19).  The first is easily
>> fixed.  But the second is not.
>>
>> We explored various quick fixes, but the real solution is not far out of
>> reach.  It amounts to this:
>>
>> ·Every data type is automatically in Typeable.  No need to say
>> “deriving(Typeable)” or “AutoDeriveTypeable” (which would become deprecated)
>>
>> ·In implementation terms, the constraint solver treats Typeable
>> specially, much as it already treats Coercible specially.
>>
>> It’s not a huge job.  It’d probably take a couple of days of
>> implementation work, and some time for shaking out bugs and consequential
>> changes.  The biggest thing might be simply working out implementation
>> design choices.  (For example, there is a modest code-size cost to making
>> everything Typeable, esp because that includes the data constructors of the
>> type (which can be used in types, with DataKinds).  Does that matter?
>> Should we provide a way to suppress it?  If so, we’d also need a way to
>> express whether or not the Typeable instance exists in the interface file.)
>>
>> But it is a substantial change that will touch a lot of lines of code.
>> Moreover, someone has to do it, and Iavor (who heroically volunteered)
>> happens to be travelling next week.
>>
>> So it’s really not the kind of thing we would usually do after RC2.
>>
>> But (a) it’s serious and, as it happens, (b) there is also the BBP
>> Prelude debate going on.
>>
>> Hence the question: should we simply delay 7.10  by, say, a month?  After
>> all, the timetable is up to us.  Doing so might give a bit more breathing
>> space to the BBP debate, which might allow time for reflection and/or
>> implementation of modest features to help the transition.  (I know that
>> several are under discussion.)  Plus, anyone waiting for 7.10 can simply
>> use RC2, which is pretty good.
>>
>> Would that be a relief to the BBP debate?  Or any other opinions.
>>
>> Simon
>>
>> PS: I know, I know: there is endless pressure to delay releases to get
>> stuff in.  If we give in to that pressure, we never make a release.  But we
>> should know when to break our own rules.  Perhaps this is such an occasion.
>>
>
>
> --
> Regards,
>
> Austin Seipp, Haskell Consultant
> Well-Typed LLP, http://www.well-typed.com/
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Seeking an active maintainer for 'directory'

2015-02-17 Thread Edward Kmett
The 'directory' package could use an active maintainer.

Currently, the package falls to the Core Libraries Committee for
maintenance, but we've had a number of issues accrete for the directory
package over the last six months or so, which need some attention to detail
and a good understanding of cross-platform issues.

Is anybody interested in nominating themselves for this role?

-Edward
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Seeking an active maintainer for 'directory'

2015-02-17 Thread Edward Kmett
And we have a winner.

Thanks, Phil.

If you need any help from the core libraries committee, just ask; we'll
support your efforts however we can.

-Edward

On Tue, Feb 17, 2015 at 2:25 PM, Phil Ruffwind  wrote:

> > Is anybody interested in nominating themselves for this role?
>
> I would be interested in this.  I'm generally quite meticulous :) and
> I'm familiar with the APIs of both POSIX and Win32, albeit more so
> with POSIX.
>
> --
> Phil
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Seeking an active maintainer for 'directory'

2015-02-17 Thread Edward Kmett
I have no particularly strong opinion on the matter. I'm happy to leave
that up to Phil.

-Edward

On Tue, Feb 17, 2015 at 5:12 PM, Elliot Robinson <
elliot.robin...@argiopetech.com> wrote:

> My, that was quick...
>
> I'd be happy to throw my hat into the ring as a co-maintainer with Phil
> (if the involved parties aren't opposed). I'm also somewhat more familiar
> with the POSIX side of things, though it wouldn't hurt me to brush up on my
> Win32.
>
> --
> Elliot Robinson
> GPG Key: 9FEDE59A
>
> On 02/17/15, Edward Kmett wrote:
> > And we have a winner.
> >
> > Thanks, Phil.
> >
> > If you need any help from the core libraries committee, just ask; we'll
> > support your efforts however we can.
> >
> > -Edward
> >
> > On Tue, Feb 17, 2015 at 2:25 PM, Phil Ruffwind 
> wrote:
> >
> > > > Is anybody interested in nominating themselves for this role?
> > >
> > > I would be interested in this.  I'm generally quite meticulous :) and
> > > I'm familiar with the APIs of both POSIX and Win32, albeit more so
> > > with POSIX.
> > >
> > > --
> > > Phil
> > >
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: [core libraries] tyConHash -- quick fix?

2015-03-11 Thread Edward Kmett
I like your idea of using tyConHash for the Int version and
tyConFingerprint to refer to the Fingerprint.

The former fits more closely with the usage elsewhere, e.g. hashUnique,
hashStableName.

-Edward

On Wed, Mar 11, 2015 at 5:39 AM, Simon Peyton Jones 
wrote:

>  data TyCon = TyCon {
>    tyConHash    :: {-# UNPACK #-} !Fingerprint, -- ^ @since 4.8.0.0
>    tyConPackage :: String,                      -- ^ @since 4.5.0.0
>    tyConModule  :: String,                      -- ^ @since 4.5.0.0
>    tyConName    :: String                       -- ^ @since 4.5.0.0
>  }
>
>
>
> Friends,
>
> Is tyConHash a good name here?  Wouldn’t tyConFingerprint be better?
>
> · Hash functions usually yield an Int.
>
> · tyConFingerprint :: TyCon -> Fingerprint makes the name match
> the type.
>
> · If we had fingerprintHash :: Fingerprint -> Int, then we might want
> tyConHash :: TyCon -> Int
> tyConHash = fingerprintHash . tyConFingerprint
>
>
>
> This is new in 7.10, so we could fix it now with no trouble.
>
> Simon
>
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "haskell-core-libraries" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to haskell-core-libraries+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: wither the Platform

2015-03-22 Thread Edward Kmett
The original reason for the cabal hack that prevented it from trying to
reinstall template-haskell is that almost every time someone did this it
broke, silently. Then five packages later something would use template
haskell, and you'd get completely nonsensical error messages, and someone
_else_ would get the bug report. Sure, there might have been a scenario in
which an expert working on GHC might want to reinstall template-haskell
to get a new point release, but TH has never worked across
multiple GHC versions, and old versions shipped with very wide bounds.

Now, of course, maintainers and the trustees have the ability to
retroactively narrow bounds (and you've already done so for
template-haskell), so this view is dated. template-haskell should just be
reinstallable like everything else now.

-Edward

On Sun, Mar 22, 2015 at 6:24 AM, Herbert Valerio Riedel  wrote:

> On 2015-03-22 at 11:17:21 +0100, Erik Hesselink wrote:
>
> [...]
>
> > I do this for template-haskell, since it's not possible to reinstall
> > but cabal would occasionally try it. I can imagine it would work well
> > to prevent the scenario you describe with network.
>
> Why isn't it possible to reinstall TH (unless you also need to depend on
> the `ghc` package)? We even explicitly allowed template-haskell to be
> reinstallable again in Cabal as there didn't seem any reason to forbid
> it anymore (it's no different than e.g. `bytestring` which is
> reinstallable as well):
>
>
> https://github.com/haskell/cabal/commit/ffd67e5e630766906e6f4c6655c067a79f739150
>
> Cheers,
>   hvr
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: HP 2015.2.0.0 and GHC 7.10

2015-03-25 Thread Edward Kmett
On Mon, Mar 23, 2015 at 4:35 AM, Sven Panne  wrote:

> 2015-03-23 6:13 GMT+01:00 Mark Lentczner :
> > [...] exceptions & multipart - needed by cgi - is exceptions now
> subsumed by
> > something in transformers? [...]
>
> Coincidentally, Edward Kmett pointed me to the exceptions package
> while I was trying to generalize some of my packages from using plain
> IO to MonadIO. Alas, the transformers package still doesn't subsume
> the exceptions package, but IMHO it really should. Looking at the
> import list of e.g. Control.Monad.Catch alone is already indicating
> that. :-)


transformers remains rather rigidly locked into Haskell 98/2010.

mtl uses comparatively few extensions.

exceptions uses rank-3 types in the API, which is something we currently
don't do in transformers or the mtl.
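
From memory, the culprit is the mask method: its argument itself takes a
rank-2 polymorphic restore function, which pushes the whole signature to
rank 3. A sketch (see Control.Monad.Catch for the authoritative class):

{-# LANGUAGE RankNTypes #-}

class Monad m => MonadMaskLike m where
  maskLike :: ((forall a. m a -> m a) -> m b) -> m b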


> BTW: System.Console.Haskeline.MonadException has something
> similar, but far less complete, too, but that's really a strange place
> for such a feature. How did it end up there?
>

Haskeline makes a few weird choices, e.g. the opacity of the InputT type
pretty much renders the library very difficult to use the moment you need
to do something that the package doesn't anticipate: work with InputT
in a transformer stack and expect any instances to exist, handle exceptions
around _it_ in turn, lift monad transformers over it yourself, etc. =( I
have more code for working around this aspect of Haskeline than I do for
working with it. But it appears, in this case, Judah needed it for working
with InputT, and chose to implement that by lifting
transformer-by-transformer, since internally InputT is made by wrapping up
an mtl-based type in a newtype.

-Edward


> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: wither the Platform

2015-03-27 Thread Edward Kmett
On Fri, Mar 27, 2015 at 10:09 AM, Mark Lentczner 
wrote:

> But that is true of GHC as well. We need to stop having the attitude of
> "Platform is for newcomers / GHC is for heavyweights". It is perfectly fine
> to announce "GHC 7.10.1 is out - you can install it from Platform 7.10.1
> which is a complete installer for your OS with core and standard libraries,
> and tools; or if you prefer you can get the minimal binary compiler build.
> As always, not all packages on Hackage will be compatible." Then our
> recommendations on to users on IRC are about which version is best for
> their needs, not "don't install platform, you won't be able to get lens to
> compile..."
>

The lens package (alongside every other package I maintain that is incurred
as a dependency of lens) has very deliberately supported all Haskell Platform
releases for at least 3 current major GHC releases, often at great expense
to the API.

No offense, but I don't think lens is the culprit here.

-Edward
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: HP 2015.2.0.0 and GHC 7.10

2015-03-27 Thread Edward Kmett
I'm personally a rather vocal proponent of the new OpenGL API changes.

I'd also in general favor a policy of greater trust when it comes to
library authors factoring out bits of their packages even once they become
part of the platform. We all want our code to work together.

The hand-wringing we've had over the splitting off of multipart from cgi
and ObjectName or StateVar from OpenGL because designers of packages like
sdl2 want to be able to support a common API without incurring a needless
OpenGL dependency is largely indicative of why some folks get scared of
their packages being included in the platform.

And, e.g. aeson's scientific dependency is needed to ensure data going
through the API doesn't lose precision, and due to stackage almost everyone
has adapted to its presence for over a year. Removing it would do nobody
any good. Let's bless it and move on.

-Edward

On Fri, Mar 27, 2015 at 10:41 AM, Mark Lentczner 
wrote:

> NO MOAR BIKESHEDS!
>
> I don't want to release in the platform an API that is out of date the day
> we release it. So we either go with the new (and I'm trusting Sven to vouch
> for 'it's the right API, we'll support this for the next year or so') - or
> we drop OpenGL* from the platform release - or we do "with and without"
> releases.
>
> Votes?
>
> ___
> Libraries mailing list
> librar...@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: HP 2015.2.0.0 and GHC 7.10

2015-03-27 Thread Edward Kmett
To ship `gl`, we wound up having to rename all of the modules to make it
work on Windows.

Alas, the module name conventions used by `OpenGLRaw` are longer.

-Edward



On Fri, Mar 27, 2015 at 1:16 PM, Randy Polen 
wrote:

> I am helping Mark with the Haskell Platform, doing the Windows builds (32-
> and
> 64-bit).  I want you to be aware of a problem I am encountering, and
> solicit
> suggestions and possible help.
>
> In building for HP 2015.2.0.0 on Windows 7, 64-bit (haven't gotten to
> 32-bit
> yet but likely the same problem will occur), I seem to be hitting the 32K
> limit
> for the length of arguments to a process, encountered while cabal is
> invoking
> haddock to build the docs for the OpenGLRaw package.  For HP2014.2.0.0, the
> argument list was ~25K (from looking at my old build logs) but now is ~36K,
> which exceeds the maximum for CreateProcess (not a limit of the
> command-line,
> but of the OS call itself).
>
> Is there a way to build haddock docs for a single package but in multiple
> haddock invocations (maybe building a .haddock file for portions, then
> combining them, with the goal that the command line is kept short)?  Seems
> this
> would also require a corresponding cabal change, as cabal is the invocator
> when
> this happens.
>
>
> Barring any existing mechanism, the typical solution to this problem on the
> Windows OS is (when possible, of course) to modify the program to accept a
> "response file" of command-line arguments.  In this case, we could add an
> option to haddock to accept either a complete "response file" (i.e.,
> allowing
> *all* options and arguments to come from a file) or just a file containing
> the
> files to process.  Either of these changes to haddock are rather trivial to
> write (but adding another option implies more testing, documentation, other
> cases to handle, etc.).  Since haddock ships with the ghc release, that's
> another wrinkle for this particular release.  The other implication of
> such a
> solution is that cabal would need a change to utilize this change for it
> to be
> effective, checking haddock's version for support of this new
> haddock-flag, and
> either use it if the haddock version supports it, or do it optionally
> (which
> implies a new flag for cabal's haddock sub-command).  This change to cabal
> is
> also rather trivial to implement (this is not to imply insensitivity to the
> incurred cost of each line of code, nor to the added burden of user-visible
> changes such as a command-line option).
>
>
> (Less desirable possibilities, mentioned only for completeness: skip the
> documentation for OpenGLRaw for this version of the Haskell Platform;
> split up
> the OpenGLRaw package itself in some way.)
>
>
> Other possible solutions and work-arounds?  Thoughts on either using
> haddock in
> a different way (and the cabal change that would be required to break up
> the
> doc build into multiple steps for a single package)?  Thoughts on the
> "response
> file" solution?
>
>
> Randy
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: traverse_

2015-03-31 Thread Edward Kmett
We deliberately took no more symbols than we needed into the Prelude in 7.10 as
part of the Foldable/Traversable Proposal. There are multiple combinators
in Data.Foldable and Data.Traversable that we do not export. traverse_ is
one of them as, strictly speaking, traverse_ was a symbol we didn't have to
take.

If we had, would anybody have complained any more loudly? Not sure... but it
was a deliberate choice not to bring into Prelude any symbols that weren't
already there, beyond those that were part of the definition of a class or
needed to define instances that already existed.

-Edward

On Mon, Mar 30, 2015 at 11:33 PM, Fumiaki Kinoshita 
wrote:

> Well, I see. It'd be nice.
>
> That aside, the absence of traverse_ doesn't seem to be intended (even the
> documentation for mapM_ says "mapM_ is just traverse_"!)
>
> 2015-03-30 16:54 GMT+09:00 Herbert Valerio Riedel :
>
>> On 2015-03-30 at 07:05:56 +0200, Fumiaki Kinoshita wrote:
>>
>> [...]
>>
>> > I found out that (<>) (in Data.Monoid) is missing, also. It would be
>> nice
>> > to reexamine Prelude to export things we want to export.
>>
> >> Fwiw, (<>) was actually left-out as it wasn't required (it's just an
>> alias for `mappend`), *and* to keep our options open (or at least not
>> make it more difficult) in terms of possible migration-plans available
>> for the case we'd be moving 'Semigroup' to base/Prelude at some point in
>> the future.
>>
>>
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: traverse_

2015-03-31 Thread Edward Kmett
I have no objection to having the discussion about widening the set of
symbols to shave off warts like this in 7.12.

-Edward

On Tue, Mar 31, 2015 at 10:11 AM, Fumiaki Kinoshita 
wrote:

> I understand the ground. It seems reasonable not to add symbols facilely.
>
> But in this case, the "too specialized" version is exported while more
> fundamental one is not.
> Although folks (including me) use mapM_ mostly today, someday we will like
> to have traverse_, I guess.
>
> 2015-03-31 19:41 GMT+09:00 Edward Kmett :
>
>> We deliberately took no more symbols than we needed in 7.10 from Prelude
>> as part of the Foldable/Traversable Proposal. There are multiple
>> combinators in Data.Foldable and Data.Traversable that we do not export.
>> traverse_ is one of them as, strictly speaking, traverse_ was a symbol we
>> didn't have to take.
>>
>> If we had would anybody have complained any more loudly? Not sure... but
>> it was a deliberate choice to not bring in any symbols into Prelude that
>> weren't already there that weren't part of the definition of a class or
>> needed to define instances that already existed.
>>
>> -Edward
>>
>> On Mon, Mar 30, 2015 at 11:33 PM, Fumiaki Kinoshita 
>> wrote:
>>
>>> Well, I see. It'd be nice.
>>>
>>> That aside, the absence of traverse_ doesn't seem to be intended (even
>>> the documentation for mapM_ says "mapM_ is just traverse_"!)
>>>
>>> 2015-03-30 16:54 GMT+09:00 Herbert Valerio Riedel :
>>>
>>>> On 2015-03-30 at 07:05:56 +0200, Fumiaki Kinoshita wrote:
>>>>
>>>> [...]
>>>>
>>>> > I found out that (<>) (in Data.Monoid) is missing, also. It would be
>>>> nice
>>>> > to reexamine Prelude to export things we want to export.
>>>>
>>>> Fwiw, (<>) was actually left-out as it wasn't required (it's just a an
>>>> alias for `mappend`), *and* to keep our options open (or at least not
>>>> make it more difficult) in terms of possible migration-plans available
>>>> for the case we'd be moving 'Semigroup' to base/Prelude at some point in
>>>> the future.
>>>>
>>>>
>>>
>>> ___
>>> ghc-devs mailing list
>>> ghc-devs@haskell.org
>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>>>
>>>
>>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


[haskell.org Google Summer of Code] Call for Mentors

2015-04-06 Thread Edward Kmett
We have had a rather large pool of potential students apply for this year's
Google Summer of Code, but, ultimately, Google won't let us ask for a slot
unless we have a potential mentor assigned in advance.

On top of that, one thing we try to do with each project is, wherever
possible, to assign both a primary and a backup mentor, so the available
mentoring pool is drawn a little thin. Many hands make for light work,
though: If you've mentored or thought about mentoring in years past, I'd
encourage you to sign up on google-melange for the Google Summer of Code at:

https://www.google-melange.com/gsoc/homepage/google/gsoc2015

and request a connection to haskell.org as a Mentor.

Once you've done this you can help us vote on proposals, and should
something seem appropriate to you, you can flag yourself as available as a
potential mentor or backup mentor for one (or more) of the projects.

We have a couple of weeks left to rate proposals and request slots, but
it'd be good to make as much progress as we can this week.

If you have any questions, feel free to reach out to me, or to Shachaf
Ben-Kiki or Gershom Bazerman who have been helping out with organizational
issues this year. We also have a #haskell-gsoc channel on irc.freenode.net
if you have questions about what is involved.

Thank you for your time and consideration,
-Edward Kmett
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Tracking bugs for libraries

2015-05-05 Thread Edward Kmett


> On May 5, 2015, at 7:30 AM, Jan Stolarek  wrote:
> 
> I just noticed that bugs for several of our libraries can either be tracked 
> on Trac (there is a 
> corresponding entry in the "Component" field) or on github (issue tracking 
> for that library is 
> enabled in the repo). These libraries are: directory, hoopl, old-time, 
> pretty, process, random 
> and unix. I don't like the idea of spreading bug reports in two separate 
> places, unless there is 
> a good reason for doing this (eg. because we actually use our own copies of 
> these libraries). So, 
> is this duplication intended or accidental?
> 

As of a couple of months ago, we updated 

https://wiki.haskell.org/Library_submissions

to include the definitive source for where issues should be tracked on a 
package by package basis for all of the core libraries. As we spread 
maintainership of these packages around, we found more and more people 
preferred using github issue tracking to the legacy trac.

There may well wind up being parallel Trac items for some issues in a separate
project issue tracker, but the primary reason for such would be when GHC has to
respond to an external change. Most (if not all) of the remaining duplicate 
issues were pushed out to the corresponding external trackers.

As we clear out the remaining issues, we might look at removing these as 
component selections in the trac.

-Edward

> Janek
> 
> ---
> Politechnika Łódzka
> Lodz University of Technology
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Tracking bugs for libraries

2015-05-05 Thread Edward Kmett
Hoopl didn't exist when the page was first created. I'm not sure if it
should be considered a "core library" or not, honestly. If so, then it
probably belongs on that page. If not, then it is probably worth
documenting more clearly somewhere where its issues are tracked.

-Edward

On Wed, May 6, 2015 at 1:53 AM, Jan Stolarek  wrote:

> > As of a couple of months ago, we updated
> >
> > https://wiki.haskell.org/Library_submissions
> >
> > to include the definitive source for where issues should be tracked on a
> > package by package basis for all of the core libraries. As we spread
> > maintainership of these packages around, we found more and more people
> > preferred using github issue tracking to the legacy trac.
> That makes perfect sense. Thanks for clarifying. However, Hoopl is not
> included on the wiki page.
> Is that accidental or intentional omission?
>
> Janek
>
>
> >
> > There may well wind up with parallel trac items for some items in a
> > separate project issue tracker, but the primary reason for such would be
> > when ghc has to respond to an external change. Most (if not all) of the
> > remaining duplicate issues were pushed out to the corresponding external
> > trackers.
> >
> > As we clear out the remaining issues, we might look at removing these as
> > component selections in the trac.
> >
> > -Edward
> >
> > > Janek
> > >
> > > ---
> > > Politechnika Łódzka
> > > Lodz University of Technology
> > > ___
> > > ghc-devs mailing list
> > > ghc-devs@haskell.org
> > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
>
>
> ---
> Politechnika Łódzka
> Lodz University of Technology
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: MonadFail proposal (MFP): Moving fail out of Monad

2015-06-09 Thread Edward Kmett
+1 from me for both the spirit and the substance of this proposal. We've
been talking about this in the abstract for a while now (since ICFP 2013 or
so) and as concrete plans go, this strikes me as straightforward and
implementable.

-Edward

On Tue, Jun 9, 2015 at 10:43 PM, David Luposchainsky <
dluposchain...@googlemail.com> wrote:

>
> Hello *,
>
> the subject says it all. After we successfully put `Applicative =>`
> into Monad, it is time to remove something in return: `fail`.
>
> Like with the AMP, I wrote up the proposal in Markdown
> format on Github, which you can find below as a URL, and in
> verbatim copy at the end of this email. It provides an
> overview of the intended outcome, which design decisions
> we had to take, and what our initial plan for the transition
> looks like. There are also some issues left open for
> discussion.
>
> https://github.com/quchen/articles/blob/master/monad_fail.md
>
> Here's a short abstract:
>
> - Move `fail` from `Monad` into a new class `MonadFail`.
> - Code using failable patterns will receive a more
>   restrictive `MonadFail` constraint. Code without this
>   constraint will be safe to use for all Monads.
> - Transition will take at least two GHC releases.
>   GHC 7.12 will include the new class, and generate
>   warnings asking users to make their failable patterns
>   compliant.
> - Stackage showed an upper bound of less than 500 breaking
>   code fragments when compiled with the new desugaring.
>
> For more details, refer to the link or the paste at the end.
>
>
> Let's get going!
>
> David aka quchen
>
>
>
>
>
> ===
> ===
> ===
>
>
>
>
>
> `MonadFail` proposal (MFP)
> ==========================
>
> A couple of years ago, we proposed to make `Applicative` a superclass of
> `Monad`, which successfully killed the single most ugly thing in Haskell
> as of GHC 7.10.
>
> Now, it's time to tackle the other major issue with `Monad`: `fail` being a
> part of it.
>
> You can contact me as usual via IRC/Freenode as *quchen*, or by email to
> *dluposchainsky at the email service of Google*. This file will also be
> posted
> on the ghc-devs@ and libraries@ mailing lists, as well as on Reddit.
>
>
>
> Overview
> --------
>
> - **The problem** - reason for the proposal
> - **MonadFail class** - the solution
> - **Discussion** - explaining our design choices
> - **Adapting old code** - how to prepare current code to transition smoothly
> - **Estimating the breakage** - how much stuff we will break (spoiler: not much)
> - **Transitional strategy** - how to break as little as possible while transitioning
> - **Current status**
>
>
>
>
> The problem
> -----------
>
> Currently, the `<-` symbol is unconditionally desugared as follows:
>
> ```haskell
> do pat <- computation   >>>   let f pat = more
>    more                 >>>       f _ = fail "..."
>                         >>>   in  computation >>= f
> ```
>
> The problem with this is that `fail` cannot (!) be sensibly implemented for
> many monads, for example `State`, `IO`, `Reader`. In those cases it
> defaults to
> `error`. As a consequence, in current Haskell, you can not use
> `Monad`-polymorphic code safely, because although it claims to work for all
> `Monad`s, it might just crash on you. This kind of implicit non-totality
> baked
> into the class is *terrible*.
>
> The goal of this proposal is adding the `fail` only when necessary and
> reflecting that in the type signature of the `do` block, so that it can be
> used
> safely, and more importantly, is guaranteed not to be used if the type
> signature does not say so.
>
>
>
> `MonadFail` class
> -----------------
>
> To fix this, introduce a new typeclass:
>
> ```haskell
> class Monad m => MonadFail m where
> fail :: String -> m a
> ```
>
> Desugaring can now be changed to produce this constraint when necessary.
> For
> this, we have to decide when a pattern match can not fail; if this is the
> case,
> we can omit inserting the `fail` call.
>
> The most trivial examples of unfailable patterns are of course those that
> match
> anywhere unconditionally,
>
> ```haskell
> do x <- action   >>>   let f x = more
>    more           >>>   in  action >>= f
> ```
>
> In particular, the programmer can assert any pattern be unfailable by
> making it
> irrefutable using a prefix tilde:
>
> ```haskell
> do ~pat <- action   >>>   let f ~pat = more
>    more              >>>   in  action >>= f
> ```
>
> A class of patterns that are conditionally failable is `newtype`s and
> single-constructor `data` types, which are unfailable by themselves, but
> may fail if matching on their fields is done with failable patterns.
>
> ```haskell
> data Newtype a = Newtype a
>
> - -- "x" cannot fail
> do Newtype x <-

Re: MonadFail proposal (MFP): Moving fail out of Monad

2015-06-09 Thread Edward Kmett
I can give a couple of "rather academic" issues that the status quo causes:

An example of where this has bit us in the hindquarters in the past is that
the old Error class based instance for Monad (Either a) from the mtl
incurred a constraint on the entire Monad instance in order to support
'fail'.

This ruled out many applications for the Either monad, e.g. apo/gapo are
based on the real (Either e) monad, just as para/zygo are based on the
"real" ((,) e) comonad. This rather complicated the use of recursion
schemes and in fact was what drove me to write what turned into the
"either" package in the first place.

Now we don't try to support 'fail' at all for that Monad. Under this
proposal though, one _could_ add a MonadFail instance that incurred a
rather ad hoc constraint on the left hand side of the sum without
encumbering the Monad itself.
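
Concretely, something along these lines becomes possible -- hypothetical
code, with the class renamed to avoid asserting the final API; the IsString
constraint is exactly the kind of ad hoc requirement I mean:

import Data.String (IsString (fromString))

class Monad m => MonadFail' m where
  fail' :: String -> m a

-- The Monad instance for Either e stays unconstrained; only code that
-- actually invokes failure pays for the constraint:
instance IsString e => MonadFail' (Either e) where
  fail' = Left . fromString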

In general you have no way of knowing that you stick to the product-like
structure of the Monad in the current ecosystem, because 'fail' is always
there: you can get to values in the Monad you couldn't reach with just
return and (>>=).

Ideally you'd have (forall m. Monad m => m a) being isomorphic to a; this
can be useful for ensuring we can plumb user-defined effects through code:

http://comonad.com/reader/2011/searching-infinity/

but in Haskell as it exists today you always have to worry about it
invoking a call to fail, and having a special form of _distinguishable_
bottom available from that computation, so it is really more like `Either
String a`.

Can you just say "don't do that?"

Sure, but it is the moral equivalent of programming with nulls all over
your code.

-Edward

On Wed, Jun 10, 2015 at 12:26 AM, Johan Tibell 
wrote:

> Thanks for putting this together.
>
> The proposal says:
>
> "As a consequence, in current Haskell, you can not use Monad-polymorphic
> code safely, because although it claims to work for all Monads, it might
> just crash on you. This kind of implicit non-totality baked into the class
> is terrible."
>
> Is this actually a problem in practice? Is there any code we can point to
> that suffers because of the current state of affairs? Could it be included
> in the proposal?
>
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: MFP updates: ideas worth discussing

2015-06-17 Thread Edward Kmett
There is a bit of a knee-jerk reaction that we should go to something
simpler than Monad as a superclass constraint for MonadFail, but I think
most of those reasons fall apart or at least lose much of their weight upon
deeper inspection.

Ultimately, I'm not concerned about interactions between ApplicativeDo
notation and fail.

Any automatic desugaring into 'fail' will be in a context which is
necessarily incurring a monad constraint.

E.g.

do
   Just x <- m
   ...

has to pick up the Monad constraint anyways to deal with the binding!

This leaves only code that does something like.

foo = x <*> fail y

which is hand written to invoke fail.

Given that the entire "tree" of the Applicative is available for
inspection, and that fail can't depend on any context internal to the
Applicative and remain 'just Applicative', I have a hard time foreseeing any
real applications lost by continuing to assume a context of:

class Monad m => MonadFail m

and there is a lot of value in the simple context.

Most of the value in ApplicativeDo notation comes from the opportunities
for increased parallelism, not so much from the reduced constraints on the
resulting code, and as we can see above, it'll never arise during the
desguaring in a place that wouldn't incur a Monad constraint anyways.

Even getting rid of the Monad constraint w/ ApplicativeDo is going to
require gymnastics around `return`.

-Edward

P.S. On an unrelated note, for the record, I'm very strongly +1 on a
MonadFail instance for IO. There we use throwIO explicitly, so it is even
able to be handled and caught locally. The set of things you can do in IO
is large enough to support and reason about explicit failure.

P.P.S. I think if we extend the proposal to include an explicit member of
the class for pattern match failure with the code we currently have lurking
in the compiler for producing the string from the context, then most of the
concerns raised by folks who would prefer to use a heavier weight -- but
vastly harder to standardize -- exception mechanism would also be addressed
in practice.
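
For the record, a sketch of what that extension might look like; the extra
member and its name are invented here purely for illustration:

import Prelude hiding (fail)

class Monad m => MonadFail m where
  fail :: String -> m a

  -- The desugarer would call this for pattern-match failures, passing the
  -- location/context string GHC already knows how to construct:
  patternMatchFail :: String -> m a
  patternMatchFail = fail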

On Tue, Jun 16, 2015 at 11:07 AM, David Luposchainsky <
dluposchain...@googlemail.com> wrote:

>
> MonadFail proposal update 1
> ===========================
>
>
> Rendered version of this text:
> https://github.com/quchen/articles/blob/master/monad_fail_update1.md
>
> Original MFP:
> https://github.com/quchen/articles/blob/master/monad_fail.md
>
>
> Short summary
> -------------
>
> A week has passed since I posted the MFP, and the initial discussion is
> mostly
> over. Here are my observations:
>
> - Everyone agrees that `fail` should not be in `Monad`.
> - Almost everyone agrees that it should be thrown out of it.
> - Some would prefer to see the special desugaring be gone entirely.
> - The name `MonadFail` is controversial, because of a potential
>   `Applicative` constraint.
> - We're still unsure about whether `IO` should get a `MonadFail`
>   instance, but the bias seems to be towards "yes".
>
>
>
> New ideas worth thinking about
> ------------------------------
>
> ### Special desugaring or not
>
> Johann suggested an optional warning whenever something desugars to use
> `fail`.
> I think that's an idea we should think about. It is easily implemented in
> the
> compiler, and would probably behave similar to -fwarn-unused-do-binds in
> practice: notation that is not wrong, but might not be what the programmer
> intended.
>
>
> ### Errors vs. Exceptions
>
> Henning is concerned about the confusion between exceptions and programming
> errors. In his words,
>
> > We should clearly decide what "fail" is intended for - for programming
> > errors or for exceptions.
>
> What I see clashing with his point is backwards compatibility. Removing the
> `String` argument breaks all explicit invocations of `fail`. Unfortunately,
> we're not in a position to break very much. If someone has a better idea
> I'd
> love to hear about it though.
>
>
> ### ApplicativeDo
>
> ApplicativeDo is somewhere out there on the horizon, and we're not sure
> yet how
> much `fail` makes sense in the context of `Applicative`. An Applicative
> computation is statically determined in its shape, so it either always or
> never
> fails. Depending on previous results would introduce the `Monad` constraint
> anyway.
>
>
>
> Probing status
> --------------
>
> Henning has started to look at the impact of the proposal when explicit
> invocations of `fail` are considered as well, something I have not done in
> my original survey. Luckily, things don't look too bad: Lens and its forest of
> dependencies can be fixed in around ten trivial changes, for example.
>
>
> Greetings,
> David/quchen

Re: Abstract FilePath Proposal

2015-06-28 Thread Edward Kmett
Worse, there are situations where you absolutely _have_ to be able to use
\\?\ encoding of a path on Windows to read, modify or delete files with
"impossible names" that were created by other means.

e.g. filenames like AUX, which had traditional roles under DOS, cause weird
interactions, as do files created with "impossibly long names" -- which
can happen in the wild when you move directories around, etc.

I'm weakly in favor of the proposal precisely because it is the first
version of this concept that I've seen that DOESN'T try to get too clever
about adding all sorts of normalization, and it seems to be the simplest
move that would enable us to do something correctly in the future,
regardless of what that correct thing winds up being.

-Edward

On Sun, Jun 28, 2015 at 8:09 AM, David Turner  wrote:

> Hi,
>
> I think it'd be more robust to handle normalisation when converting from
> String/Text to FilePath (and combining things with () and so on) rather
> than in the underlying representation.
>
> It's absolutely crucial that you can ask the OS for a filename (which it
> gives you as a sequence of bytes) and then pass that exact same sequence of
> bytes back to the OS without any normalisation or other useful alterations
> having taken place.
>
> You can do some deeply weird stuff in Windows by starting an absolute path
> with \\?\, including apparently using '.' and '..' as the name of a
> filesystem component:
>
> Because it turns off automatic expansion of the path string, the "\\?\"
> prefix also allows the use of ".." and "." in the path names, which can be
> useful if you are attempting to perform operations on a file with these
> otherwise reserved relative path specifiers as part of the fully qualified
> path.
>
>
> (from
> https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx
> )
>
> I don't fancy shaking all the corner cases out of this. An explicit
> 'normalise' function seems ok, but baking normalisation into the type
> itself seems bad.
>
> Cheers,
>
> David
>
>
> On 28 June 2015 at 11:03, Boespflug, Mathieu  wrote:
>
>> Hi Neil,
>>
>> why does the proposal *not* include normalization?
>>
>> There are four advantages that I see to making FilePath a datatype:
>>
>> 1. it makes it possible to implement the correct semantics for some
>> systems (including POSIX),
>> 2. it allows for information hiding, which in turn helps modularity,
>> 3. the type is distinct from any other type, hence static checks are
>> stronger,
>> 4. it becomes possible to quotient values over some arbitrary set of
>> identities that makes sense. i.e. in the case of FilePath, arguably
>> "foo/bar//baz" *is* "foo/bar/baz" *is* "foo//bar/baz" for all intents
>> and purposes, so it is not useful to distinguish these three ways of
>> writing down the same path (and in fact in practice distinguishing
>> them leads to subtle bugs). That is, the Eq instance compares
>> FilePath's modulo a few laws.
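>>
>> (A sketch of what (4) could look like in code; the FP wrapper and its Eq
>> instance are hypothetical illustrations, not a concrete proposal:)
>>
>>   import System.FilePath.Posix (normalise)
>>
>>   newtype FP = FP String
>>
>>   instance Eq FP where
>>     FP a == FP b = normalise a == normalise b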
>>
>> Do you propose to forego (4)? If so why so?
>>
>> If we're going through a deprecation process, could we do so once, by
>> getting the notion of path equality we want right the first time?
>> Contrary to type indexing FilePath, it seems to me that the design
>> space for path identities is much smaller. Essentially, exactly the
>> ones here:
>> https://hackage.haskell.org/package/filepath-1.1.0.2/docs/System-FilePath-Posix.html#v:normalise
>> .
>>
>> Best,
>>
>> Mathieu
>>
>>
>> On 27 June 2015 at 12:12, Neil Mitchell  wrote:
>> > Hi Niklas,
>> >
>> > The function writeFile takes a FilePath. We could fork base or tell
>> > everyone to use writeFile2, but in practice everyone will keep using
>> > writeFile, and thus String for FilePath. This approach is the only
>> > thing we could figure that made sense.
>> >
>> > Henning: we do not propose normalisation on initialisation. For ASCII
>> > strings fromFilePath . toFilePath will be id. It might also be for
>> > unicode on some/all platforms. Of course, you can write your own
>> > FilePath creator that does normalisation on construction.
>> >
>> > Thanks, Neil
>> >
>> >
>> > On Saturday, 27 June 2015, Niklas Larsson  wrote:
>> >>
>> >> Hi!
>> >>
>> >> Instead of trying to minimally patch the existing API and still
>> >> breaking loads of code, why not make a new API that doesn't have to
>> >> compromise, and deprecate the old one?
>> >>
>> >> Niklas
>> >> 
>> >> From: Herbert Valerio Riedel
>> >> Sent: 2015-06-26 18:09
>> >> To: librar...@haskell.org; ghc-devs@haskell.org
>> >> Subject: Abstract FilePath Proposal
>> >>
>> >>
>> >> Hello *,
>> >>
>> >> What?
>> >> =
>> >>
>> >> We (see From: & CC: headers) propose, plain and simple, to turn the
>> >> currently defined type-synonym
>> >>
>> >>   type FilePath = String
>> >>
>> >> into an abstract/opaque data type instead.
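>> >>
>> >> (Schematically, something like the following sketch, where the
>> >> representation is hidden and toFilePath/fromFilePath are names
>> >> floated elsewhere in this thread, not a fixed API:)
>> >>
>> >>   data FilePath                      -- representation hidden
>> >>
>> >>   toFilePath   :: String -> FilePath
>> >>   fromFilePath :: FilePath -> String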
>> >>
>> >> Why/How/When?
>> >> =
>> >>

Re: Handling overflow and division by zero

2015-06-28 Thread Edward Kmett
You should be able to reduce the bit-twiddling a great deal IIRC in the
word case.

SW a + SW b
  | c <- a + b, c >= min a b = SW c
  | otherwise = throw Overflow

There is a similar trick that escapes me at the moment for the signed case.
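For concreteness, a minimal standalone sketch of that unsigned trick
(SafeWord and the names here are made up, not part of safeint):

import Control.Exception (ArithException (Overflow), throw)
import Data.Word (Word64)

newtype SafeWord = SW Word64 deriving (Eq, Ord, Show)

-- Unsigned addition wraps around exactly when the truncated sum is
-- smaller than either operand, so one comparison detects overflow.
plusSW :: SafeWord -> SafeWord -> SafeWord
plusSW (SW a) (SW b)
  | c >= a    = SW c
  | otherwise = throw Overflow
  where
    c = a + b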


On Sun, Jun 28, 2015 at 6:15 PM, Nikita Karetnikov 
wrote:

> Haskell is often marketed as a safe (or safer) language, but there's
> an issue that makes it less safe as it could be.  I'm talking about
> arithmetic overflows and division by zero.  The safeint package tries
> to address this, but it only supports the Int type because (as I
> understand it) there are no useful primitives for other common types
> defined in Data.Int and Data.Word.
>
> I've tried adding Int64 support to safeint just to see how it would work
> without primops.  Here's a snippet (I haven't tested this code well, so
> it may be wrong, sorry about that):
>
> shiftRUnsigned :: Word64 -> Int -> Word64
> shiftRUnsigned = shiftR
>
> --
> http://git.haskell.org/ghc.git/blob/HEAD:/compiler/codeGen/StgCmmPrim.hs#l930
> plusSI64 :: SafeInt64 -> SafeInt64 -> SafeInt64
> plusSI64 (SI64 a) (SI64 b) = if c == 0 then SI64 r else overflowError
>   where
>     r = a + b
>     c = (fromIntegral $ (complement (a `xor` b)) .&. (a `xor` r))
>           `shiftRUnsigned` ((finiteBitSize a) - 1)
>
> --
> http://git.haskell.org/ghc.git/blob/HEAD:/compiler/codeGen/StgCmmPrim.hs#l966
> minusSI64 :: SafeInt64 -> SafeInt64 -> SafeInt64
> minusSI64 (SI64 a) (SI64 b) = if c == 0 then SI64 r else overflowError
>   where
>     r = a - b
>     c = (fromIntegral $ (a `xor` b) .&. (a `xor` r))
>           `shiftRUnsigned` ((finiteBitSize a) - 1)
>
> -- https://stackoverflow.com/a/1815371
> timesSI64 :: SafeInt64 -> SafeInt64 -> SafeInt64
> timesSI64 (SI64 a) (SI64 b) =
>   let x = a * b
>   in if a /= 0 && x `div` a /= b
>  then overflowError
>  else SI64 x
>
> I may be wrong, but my understanding is that new primops could reduce
> overhead here.  If so, would a patch adding them be accepted?  Are
> there any caveats?
>
> In the safeint package, would it be reasonable to return an Either
> value instead of throwing an exception?  Or would it be too much?
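>
> (Sketch of the Either-returning variant I have in mind, reusing the same
> carry computation as above; the name is invented:)
>
> plusSI64E :: SafeInt64 -> SafeInt64 -> Either ArithException SafeInt64
> plusSI64E (SI64 a) (SI64 b)
>   | c == 0    = Right (SI64 r)
>   | otherwise = Left Overflow
>   where
>     r = a + b
>     c = (fromIntegral $ (complement (a `xor` b)) .&. (a `xor` r))
>           `shiftRUnsigned` ((finiteBitSize a) - 1)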
>
> I haven't created a wiki page or ticket because I don't know much, so
> I want to get some feedback before doing so.  That would be my first
> patch to GHC (if ever), so maybe I'm not the best candidate, but I've
> been thinking about it for too long to ignore. :\


ArrayArrays

2015-08-20 Thread Edward Kmett
Would it be possible to add unsafe primops to add Array# and SmallArray#
entries to an ArrayArray#? The fact that the ArrayArray# entries are all
directly unlifted avoiding a level of indirection for the containing
structure is amazing, but I can only currently use it if my leaf level data
can be 100% unboxed and distributed among ByteArray#s. It'd be nice to be
able to have the ability to put SmallArray# a stuff down at the leaves to
hold lifted contents.

I accept fully that if I name the wrong type when I go to access one of the
fields it'll lie to me, but I suppose it'd do that if I tried to use one of
the members that held a nested ArrayArray# as a ByteArray# anyways, so it
isn't like there is a safety story preventing this.

I've been hunting for ways to try to kill the indirection problems I get
with Haskell and mutable structures, and I could shoehorn a number of them
into ArrayArrays if this worked.

Right now I'm stuck paying for 2 or 3 levels of unnecessary indirection
compared to C/Java, and this could reduce that pain to just 1 level of
unnecessary indirection.
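(Concretely, the primops being asked for would look something like this --
hypothetical names and types in the style of the existing ArrayArray#
operations, not actual GHC.Prim entries:)

writeSmallArrayArrayArray#
  :: MutableArrayArray# s -> Int# -> SmallArray# a
  -> State# s -> State# s
readSmallArrayArrayArray#
  :: MutableArrayArray# s -> Int#
  -> State# s -> (# State# s, SmallArray# a #)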

-Edward


Re: ArrayArrays

2015-08-20 Thread Edward Kmett
When (ab)using them for this purpose, SmallArrayArray's would be very handy
as well.

Consider right now if I have something like an order-maintenance structure
I have:

data Upper s = Upper {-# UNPACK #-} !(MutableByteArray s) {-# UNPACK #-}
!(MutVar s (Upper s)) {-# UNPACK #-} !(MutVar s (Upper s))

data Lower s = Lower {-# UNPACK #-} !(MutVar s (Upper s)) {-# UNPACK #-}
!(MutableByteArray s) {-# UNPACK #-} !(MutVar s (Lower s)) {-# UNPACK #-}
!(MutVar s (Lower s))

The former contains, logically, a mutable integer and two pointers, one for
forward and one for backwards. The latter is basically the same thing with
a mutable reference up pointing at the structure above.

On the heap this is an object that points to a structure for the bytearray,
and points to another structure for each mutvar which each point to the
other 'Upper' structure. So there is a level of indirection smeared over
everything.

So this is a pair of doubly linked lists with an upward link from the
structure below to the structure above.

Converted into ArrayArray#s I'd get

data Upper s = Upper (MutableArrayArray# s)

w/ the first slot being a pointer to a MutableByteArray#, and the next 2
slots pointing to the previous and next previous objects, represented just
as their MutableArrayArray#s. I can use sameMutableArrayArray# on these for
object identity, which lets me check for the ends of the lists by tying
things back on themselves.

and below that

data Lower s = Lower (MutableArrayArray# s)

is similar, with an extra MutableArrayArray slot pointing up to an upper
structure.

I can then write a handful of combinators for getting out the slots in
question. While this gains a level of indirection between the wrapper that
puts it in * and the MutableArrayArray# s in #, that indirection can be
basically erased by GHC.
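(For instance, a "next" accessor in that style can be a thin wrapper over
the existing DPH-era primop; a sketch, where the slot layout and the lifted
wrapper are my own choices:)

{-# LANGUAGE MagicHash, UnboxedTuples #-}
import Control.Monad.Primitive (PrimMonad, PrimState, primitive)
import GHC.Prim

data Upper s = Upper (MutableArrayArray# s)  -- as above

-- Slot layout assumed here: 0 = the MutableByteArray#, 1 = prev, 2 = next.
nextUpper :: PrimMonad m => Upper (PrimState m) -> m (Upper (PrimState m))
nextUpper (Upper m) = primitive $ \s ->
  case readMutableArrayArrayArray# m 2# s of
    (# s', n #) -> (# s', Upper n #)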

Unlike before I don't have several separate objects on the heap for each
thing. I only have 2 now. The MutableArrayArray# for the object itself, and
the MutableByteArray# that it references to carry around the mutable int.

The only pain points are

1.) the aforementioned limitation that currently prevents me from stuffing
normal boxed data through a SmallArray or Array into an ArrayArray leaving
me in a little ghetto disconnected from the rest of Haskell,

and

2.) the lack of SmallArrayArray's, which could let us avoid the card
marking overhead. These objects are all small, 3-4 pointers wide. Card
marking doesn't help.

Alternately I could just try to do really evil things and convert the whole
mess to SmallArrays and then figure out how to unsafeCoerce my way to
glory, stuffing the #'d references to the other arrays directly into the
SmallArray as slots, removing the limitation we see here by aping the
MutableArrayArray# API, but that gets really, really dangerous!

I'm pretty much willing to sacrifice almost anything on the altar of speed
here, but I'd like to be able to let the GC move them and collect them
which rules out simpler Ptr and Addr based solutions.

-Edward

On Thu, Aug 20, 2015 at 9:01 PM, Manuel M T Chakravarty <
c...@cse.unsw.edu.au> wrote:

> That’s an interesting idea.
>
> Manuel
>
> > Edward Kmett :
> >
> > Would it be possible to add unsafe primops to add Array# and SmallArray#
> entries to an ArrayArray#? The fact that the ArrayArray# entries are all
> directly unlifted avoiding a level of indirection for the containing
> structure is amazing, but I can only currently use it if my leaf level data
> can be 100% unboxed and distributed among ByteArray#s. It'd be nice to be
> able to have the ability to put SmallArray# a stuff down at the leaves to
> hold lifted contents.
> >
> > I accept fully that if I name the wrong type when I go to access one of
> the fields it'll lie to me, but I suppose it'd do that if I tried to use
> one of the members that held a nested ArrayArray# as a ByteArray# anyways,
> so it isn't like there is a safety story preventing this.
> >
> > I've been hunting for ways to try to kill the indirection problems I get
> with Haskell and mutable structures, and I could shoehorn a number of them
> into ArrayArrays if this worked.
> >
> > Right now I'm stuck paying for 2 or 3 levels of unnecessary indirection
> compared to c/java and this could reduce that pain to just 1 level of
> unnecessary indirection.
> >
> > -Edward


Re: ArrayArrays

2015-08-21 Thread Edward Kmett
On Fri, Aug 21, 2015 at 9:49 AM, Ryan Yates  wrote:

> Hi Edward,
>
> I've been working on removing indirection in STM and I added a heap
> object like SmallArray, but with a mix of words and pointers (as well
> as a header with metadata for STM).  It appears to work well now, but
> it is missing the type information.  All the pointers have the same
> type which works fine for your Upper.  In my case I use it to
> represent a red-black tree node [1].
>

This would be perfect for my purposes.


> Also all the structures I make are fixed size and it would be nice if
> the compiler could treat that fixed size like a constant in code
> generation.


To make the fixed-size thing work without an extra couple of size
parameters in the arguments, you'd want to be able to build an info table
for each generated size. That sounds messy.


> I don't know what the right design is or what would be
> needed, but it seems simple enough to give the right typing
> information to something like this and basically get a mutable struct.
> I'm talking about this work at HIW and really hope to find someone
> interested in extending this expressiveness to let us write something
> that looks clear in Haskell, but gives the heap representation that we
> really need for performance.


I'll be there. Let's talk.


> From the RTS perspective I don't think there are any obstacles.
>

FWIW- I was able to get some code put together that let me scribble
unlifted SmallMutableArray#s directly into other SmallMutableArray#s, which
nicely "just works" as long as you fix up all the fields that are supposed
to be arrays before you ever dare use them.

writeSmallMutableArraySmallArray#
  :: SmallMutableArray# s Any -> Int# -> SmallMutableArray# s Any
  -> State# s -> State# s
writeSmallMutableArraySmallArray# m i a s =
  unsafeCoerce# writeSmallArray# m i a s
{-# INLINE writeSmallMutableArraySmallArray# #-}

readSmallMutableArraySmallArray#
  :: SmallMutableArray# s Any -> Int#
  -> State# s -> (# State# s, SmallMutableArray# s Any #)
readSmallMutableArraySmallArray# m i s =
  unsafeCoerce# readSmallArray# m i s
{-# INLINE readSmallMutableArraySmallArray# #-}

With some support for typed 'Field's I can write code now that looks like:

order :: PrimMonad m
      => Upper (PrimState m) -> Int
      -> Order (PrimState m) -> Order (PrimState m)
      -> m (Order (PrimState m))
order p a l r = st $ do
  this <- primitive $ \s -> case unsafeCoerce# newSmallArray# 4# a s of
    (# s', b #) -> (# s', Order b #)
  set parent this p
  set next this l
  set prev this r
  return this

and in there basically build my own little strict, mutable universe, and
with some careful monitoring of the core make sure that the little Order
wrappers at the fringes get removed.

Here I'm using one of the slots as a pointer to a boxed Int for testing,
rather than as a pointer to a MutableByteArray that holds the Int.

-Edward


Re: ArrayArrays

2015-08-27 Thread Edward Kmett
nter chase.

But if, as Ryan suggested, we had a heap object we could construct that
had n words with unsafe access and m pointers to other heap objects, one
that could put itself on the mutable list when any of those pointers
changed, then I could shed this last factor of two in all circumstances.

Prototype
-

Over the last few days I've put together a small prototype implementation
with a few non-trivial imperative data structures for things like Tarjan's
link-cut trees, the list labeling problem and order-maintenance.

https://github.com/ekmett/structs

Notable bits:

Data.Struct.Internal.LinkCut
<https://github.com/ekmett/structs/blob/9ff2818f888aff4789b7a41077a674a10d15e6ee/src/Data/Struct/Internal/LinkCut.hs>
provides an implementation of link-cut trees in this style.

Data.Struct.Internal
<https://github.com/ekmett/structs/blob/9ff2818f888aff4789b7a41077a674a10d15e6ee/src/Data/Struct/Internal.hs>
provides the rather horrifying guts that make it go fast.

Once compiled with -O or -O2, if you look at the core, almost all the
references to the LinkCut or Object data constructor get optimized away,
and we're left with beautiful strict code directly mutating out underlying
representation.

At the very least I'll take this email and turn it into a short article.

-Edward

On Thu, Aug 27, 2015 at 9:00 AM, Simon Peyton Jones 
wrote:

> Just to say that I have no idea what is going on in this thread.  What is
> ArrayArray?  What is the issue in general?  Is there a ticket? Is there a
> wiki page?
>
>
>
> If it’s important, an ab-initio wiki page + ticket would be a good thing.
>
>
>
> Simon
>
>
>
> *From:* ghc-devs [mailto:ghc-devs-boun...@haskell.org] *On Behalf Of *Edward
> Kmett
> *Sent:* 21 August 2015 05:25
> *To:* Manuel M T Chakravarty
> *Cc:* Simon Marlow; ghc-devs
> *Subject:* Re: ArrayArrays
>
>
>
> When (ab)using them for this purpose, SmallArrayArray's would be very
> handy as well.
>
>
>
> Consider right now if I have something like an order-maintenance structure
> I have:
>
>
>
> data Upper s = Upper {-# UNPACK #-} !(MutableByteArray s) {-# UNPACK #-}
> !(MutVar s (Upper s)) {-# UNPACK #-} !(MutVar s (Upper s))
>
>
>
> data Lower s = Lower {-# UNPACK #-} !(MutVar s (Upper s)) {-# UNPACK #-}
> !(MutableByteArray s) {-# UNPACK #-} !(MutVar s (Lower s)) {-# UNPACK #-}
> !(MutVar s (Lower s))
>
>
>
> The former contains, logically, a mutable integer and two pointers, one
> for forward and one for backwards. The latter is basically the same thing
> with a mutable reference up pointing at the structure above.
>
>
>
> On the heap this is an object that points to a structure for the
> bytearray, and points to another structure for each mutvar which each point
> to the other 'Upper' structure. So there is a level of indirection smeared
> over everything.
>
>
>
> So this is a pair of doubly linked lists with an upward link from the
> structure below to the structure above.
>
>
>
> Converted into ArrayArray#s I'd get
>
>
>
> data Upper s = Upper (MutableArrayArray# s)
>
>
>
> w/ the first slot being a pointer to a MutableByteArray#, and the next 2
> slots pointing to the previous and next previous objects, represented just
> as their MutableArrayArray#s. I can use sameMutableArrayArray# on these for
> object identity, which lets me check for the ends of the lists by tying
> things back on themselves.
>
>
>
> and below that
>
>
>
> data Lower s = Lower (MutableArrayArray# s)
>
>
>
> is similar, with an extra MutableArrayArray slot pointing up to an upper
> structure.
>
>
>
> I can then write a handful of combinators for getting out the slots in
> question, while it has gained a level of indirection between the wrapper to
> put it in * and the MutableArrayArray# s in #, that one can be basically
> erased by ghc.
>
>
>
> Unlike before I don't have several separate objects on the heap for each
> thing. I only have 2 now. The MutableArrayArray# for the object itself, and
> the MutableByteArray# that it references to carry around the mutable int.
>
>
>
> The only pain points are
>
>
>
> 1.) the aforementioned limitation that currently prevents me from stuffing
> normal boxed data through a SmallArray or Array into an ArrayArray leaving
> me in a little ghetto disconnected from the rest of Haskell,
>
>
>
> and
>
>
>
> 2.) the lack of SmallArrayArray's, which could let us avoid the card
> marking overhead. These objects are all small, 3-4 pointers wide. Card
> marking doesn't help.
>
>
>
> Alternately I could just try to do really evil things and convert the
> wh

Re: ArrayArrays

2015-08-27 Thread Edward Kmett
On Thu, Aug 27, 2015 at 1:24 PM, Edward Z. Yang  wrote:

> It seems to me that we should take a page from OCaml's playbook
> and add support for native mutable fields in objects, because
> this is essentially what a mix of words and pointers is.
>

That actually doesn't work as well as one might hope.

We currently treat data constructor closures as so much tissue paper around
a present. We tear them open, rip out all their contents, scatter them
throughout our code and then we build a whole new data constructor closure
when we're done, or we just leave them suspended in closures awaiting
someone to demand we finally make a new data constructor.

Half the time we don't even give back the data constructor closure and push
it into update frames; we just give back the items on the stack.

With the machinery I mentioned above I get a world where every time I
access an object I can know it is evaluated for real, so this means I'm not
stuck 'entering an unknown closure', and getting it to give me back a slab
of memory that we know is a real data constructor that I can bang away on
mutable entries in.

In a world where things in * could hold mutable pointers we have to care a
lot more about object identity in deeply uncomfortable ways.

With what I've implemented I only care about object identity between things
in # that are gcptrs. The garbage collector may move them around, but it
doesn't put in thunks anywhere.

-Edward


Re: ArrayArrays

2015-08-28 Thread Edward Kmett
I posted a summary article on "what this lets you do" to

https://www.fpcomplete.com/user/edwardk/unlifted-structures

I can see about making a more proposal/feature-oriented summary for the
Haskell Wiki. It may have to wait until after ICFP though.

-Edward

On Fri, Aug 28, 2015 at 5:42 AM, Simon Peyton Jones 
wrote:

> At the very least I'll take this email and turn it into a short article.
>
> Yes, please do make it into a wiki page on the GHC Trac, and maybe make a
> ticket for it.
>
>
> Thanks
>
>
>
> Simon
>
>
>
> *From:* Edward Kmett [mailto:ekm...@gmail.com]
> *Sent:* 27 August 2015 16:54
> *To:* Simon Peyton Jones
> *Cc:* Manuel M T Chakravarty; Simon Marlow; ghc-devs
> *Subject:* Re: ArrayArrays
>
>
>
> An ArrayArray# is just an Array# with a modified invariant. It points
> directly to other unlifted ArrayArray#'s or ByteArray#'s.
>
>
>
> While those live in #, they are garbage collected objects, so this all
> lives on the heap.
>
>
>
> They were added to make some of the DPH stuff fast when it has to deal
> with nested arrays.
>
>
>
> I'm currently abusing them as a placeholder for a better thing.
>
>
>
> The Problem
>
> -
>
>
>
> Consider the scenario where you write a classic doubly-linked list in
> Haskell.
>
>
>
> data DLL = DLL (IORef (Maybe DLL)) (IORef (Maybe DLL))
>
>
>
> Chasing from one DLL to the next requires following 3 pointers on the heap.
>
>
>
> DLL ~> IORef (Maybe DLL) ~> MutVar# RealWorld (Maybe DLL) ~> Maybe DLL ~>
> DLL
>
>
>
> That is 3 levels of indirection.
>
>
>
> We can trim one by simply unpacking the IORef with -funbox-strict-fields
> or UNPACK
>
>
>
> We can trim another by adding a 'Nil' constructor for DLL and worsening
> our representation.
>
>
>
> data DLL = DLL !(IORef DLL) !(IORef DLL) | Nil
>
>
>
> but now we're still stuck with a level of indirection
>
>
>
> DLL ~> MutVar# RealWorld DLL ~> DLL
>
>
>
> This means that every operation we perform on this structure will be about
> half of the speed of an implementation in most other languages assuming
> we're memory bound on loading things into cache!
>
>
>
> Making Progress
>
> --
>
>
>
> I have been working on a number of data structures where the indirection
> of going from something in * out to an object in # which contains the real
> pointer to my target and coming back effectively doubles my runtime.
>
>
>
> We go out to the MutVar# because we are allowed to put the MutVar# onto
> the mutable list when we dirty it. There is a well defined write-barrier.
>
>
>
> I could change out the representation to use
>
>
>
> data DLL = DLL (MutableArray# RealWorld DLL) | Nil
>
>
>
> I can just store two pointers in the MutableArray# every time, but this
> doesn't help _much_ directly. It has reduced the amount of distinct
> addresses in memory I touch on a walk of the DLL from 3 per object to 2.
>
>
>
> I still have to go out to the heap from my DLL and get to the array object
> and then chase it to the next DLL and chase that to the next array. I do
> get my two pointers together in memory though. I'm paying for a card
> marking table as well, which I don't particularly need with just two
> pointers, but we can shed that with the "SmallMutableArray#" machinery
> added back in 7.10, which is just the old array code as a new data type,
> which can speed things up a bit when you don't have very big arrays:
>
>
>
> data DLL = DLL (SmallMutableArray# RealWorld DLL) | Nil
>
>
>
> But what if I wanted my object itself to live in # and have two mutable
> fields and be able to share the same write barrier?
>
>
>
> An ArrayArray# points directly to other unlifted array types. What if we
> have one # -> * wrapper on the outside to deal with the impedance mismatch
> between the imperative world and Haskell, and then just let the
> ArrayArray#'s hold other arrayarrays.
>
>
>
> data DLL = DLL (MutableArrayArray# RealWorld)
>
>
>
> now I need to make up a new Nil, which I can just make be a special
> MutableArrayArray# I allocate on program startup. I can even abuse pattern
> synonyms. Alternately I can exploit the internals further to make this
> cheaper.
>
>
>
> Then I can use the readMutableArrayArray# and writeMutableArrayArray#
> calls to directly access the preceding and next entry in the linked list.
>
>
>
> So now we have one DLL wrapper which just 'bootstraps me' int

Re: ArrayArrays

2015-08-28 Thread Edward Kmett
Some form of MutableStruct# with a known number of words and a known number
of pointers is basically what Ryan Yates was suggesting above, but where
the word counts were stored in the objects themselves.

Given that it'd have a couple of words for those counts it'd likely want to
be something we build in addition to MutVar# rather than a replacement.

On the other hand, if we had to fix those numbers and build info tables
that knew them, and typechecker support, for instance, it'd get rather
invasive.

Also, a number of things that we can do with the 'sized' versions above,
like working with evil unsized C-style arrays directly inline at the end of
the structure, cease to be possible, so it isn't even a pure win if we did
the engineering effort.

I think 90% of the needs I have are covered just by adding the one
primitive. The last 10% gets pretty invasive.

-Edward

On Fri, Aug 28, 2015 at 5:30 PM, Ryan Newton  wrote:

> I like the possibility of a general solution for mutable structs (like Ed
> said), and I'm trying to fully understand why it's hard.
>
> So, we can't unpack MutVar into constructors because of object identity
> problems. But what about directly supporting an extensible set of unlifted
> MutStruct# objects, generalizing (and even replacing) MutVar#? That may be
> too much work, but is it problematic otherwise?
>
> Needless to say, this is also critical if we ever want best in class
> lockfree mutable structures, just like their Stm and sequential
> counterparts.
>
> On Fri, Aug 28, 2015 at 4:43 AM Simon Peyton Jones 
> wrote:
>
>> At the very least I'll take this email and turn it into a short article.
>>
>> Yes, please do make it into a wiki page on the GHC Trac, and maybe make a
>> ticket for it.
>>
>>
>> Thanks
>>
>>
>>
>> Simon
>>
>>
>>
>> *From:* Edward Kmett [mailto:ekm...@gmail.com]
>> *Sent:* 27 August 2015 16:54
>> *To:* Simon Peyton Jones
>> *Cc:* Manuel M T Chakravarty; Simon Marlow; ghc-devs
>> *Subject:* Re: ArrayArrays
>>
>>
>>
>> An ArrayArray# is just an Array# with a modified invariant. It points
>> directly to other unlifted ArrayArray#'s or ByteArray#'s.
>>
>>
>>
>> While those live in #, they are garbage collected objects, so this all
>> lives on the heap.
>>
>>
>>
>> They were added to make some of the DPH stuff fast when it has to deal
>> with nested arrays.
>>
>>
>>
>> I'm currently abusing them as a placeholder for a better thing.
>>
>>
>>
>> The Problem
>>
>> -
>>
>>
>>
>> Consider the scenario where you write a classic doubly-linked list in
>> Haskell.
>>
>>
>>
>> data DLL = DLL (IORef (Maybe DLL)) (IORef (Maybe DLL))
>>
>>
>>
>> Chasing from one DLL to the next requires following 3 pointers on the
>> heap.
>>
>>
>>
>> DLL ~> IORef (Maybe DLL) ~> MutVar# RealWorld (Maybe DLL) ~> Maybe DLL ~>
>> DLL
>>
>>
>>
>> That is 3 levels of indirection.
>>
>>
>>
>> We can trim one by simply unpacking the IORef with -funbox-strict-fields
>> or UNPACK
>>
>>
>>
>> We can trim another by adding a 'Nil' constructor for DLL and worsening
>> our representation.
>>
>>
>>
>> data DLL = DLL !(IORef DLL) !(IORef DLL) | Nil
>>
>>
>>
>> but now we're still stuck with a level of indirection
>>
>>
>>
>> DLL ~> MutVar# RealWorld DLL ~> DLL
>>
>>
>>
>> This means that every operation we perform on this structure will be
>> about half of the speed of an implementation in most other languages
>> assuming we're memory bound on loading things into cache!
>>
>>
>>
>> Making Progress
>>
>> --
>>
>>
>>
>> I have been working on a number of data structures where the indirection
>> of going from something in * out to an object in # which contains the real
>> pointer to my target and coming back effectively doubles my runtime.
>>
>>
>>
>> We go out to the MutVar# because we are allowed to put the MutVar# onto
>> the mutable list when we dirty it. There is a well defined write-barrier.
>>
>>
>>
>> I could change out the representation to use
>>
>>
>>
>> data DLL = DLL (MutableArray# RealWorld DLL) | Nil
>>
>>
>>
>> I can just store two pointers in the MutableArray# every time, but this
>

Re: ArrayArrays

2015-08-28 Thread Edward Kmett
I think both are useful, but the one you suggest requires a lot more
plumbing and doesn't subsume all of the usecases of the other.

-Edward

On Fri, Aug 28, 2015 at 5:51 PM, Ryan Newton  wrote:

> So that primitive is an array-like thing (same pointed type, unbounded
> length) with extra payload.
>
> I can see how we can do without structs if we have arrays, especially with
> the extra payload at front. But wouldn't the general solution for structs
> be one that allows new user data type defs for # types?
>
>
>
> On Fri, Aug 28, 2015 at 4:43 PM Edward Kmett  wrote:
>
>> Some form of MutableStruct# with a known number of words and a known
>> number of pointers is basically what Ryan Yates was suggesting above, but
>> where the word counts were stored in the objects themselves.
>>
>> Given that it'd have a couple of words for those counts it'd likely want
>> to be something we build in addition to MutVar# rather than a replacement.
>>
>> On the other hand, if we had to fix those numbers and build info tables
>> that knew them, and typechecker support, for instance, it'd get rather
>> invasive.
>>
>> Also, a number of things that we can do with the 'sized' versions above,
>> like working with evil unsized c-style arrays directly inline at the end of
>> the structure cease to be possible, so it isn't even a pure win if we did
>> the engineering effort.
>>
>> I think 90% of the needs I have are covered just by adding the one
>> primitive. The last 10% gets pretty invasive.
>>
>> -Edward
>>
>> On Fri, Aug 28, 2015 at 5:30 PM, Ryan Newton  wrote:
>>
>>> I like the possibility of a general solution for mutable structs (like
>>> Ed said), and I'm trying to fully understand why it's hard.
>>>
>>> So, we can't unpack MutVar into constructors because of object identity
>>> problems. But what about directly supporting an extensible set of unlifted
>>> MutStruct# objects, generalizing (and even replacing) MutVar#? That may be
>>> too much work, but is it problematic otherwise?
>>>
>>> Needless to say, this is also critical if we ever want best in class
>>> lockfree mutable structures, just like their Stm and sequential
>>> counterparts.
>>>
>>> On Fri, Aug 28, 2015 at 4:43 AM Simon Peyton Jones <
>>> simo...@microsoft.com> wrote:
>>>
>>>> At the very least I'll take this email and turn it into a short article.
>>>>
>>>> Yes, please do make it into a wiki page on the GHC Trac, and maybe make
>>>> a ticket for it.
>>>>
>>>>
>>>> Thanks
>>>>
>>>>
>>>>
>>>> Simon
>>>>
>>>>
>>>>
>>>> *From:* Edward Kmett [mailto:ekm...@gmail.com]
>>>> *Sent:* 27 August 2015 16:54
>>>> *To:* Simon Peyton Jones
>>>> *Cc:* Manuel M T Chakravarty; Simon Marlow; ghc-devs
>>>> *Subject:* Re: ArrayArrays
>>>>
>>>>
>>>>
>>>> An ArrayArray# is just an Array# with a modified invariant. It points
>>>> directly to other unlifted ArrayArray#'s or ByteArray#'s.
>>>>
>>>>
>>>>
>>>> While those live in #, they are garbage collected objects, so this all
>>>> lives on the heap.
>>>>
>>>>
>>>>
>>>> They were added to make some of the DPH stuff fast when it has to deal
>>>> with nested arrays.
>>>>
>>>>
>>>>
>>>> I'm currently abusing them as a placeholder for a better thing.
>>>>
>>>>
>>>>
>>>> The Problem
>>>>
>>>> -
>>>>
>>>>
>>>>
>>>> Consider the scenario where you write a classic doubly-linked list in
>>>> Haskell.
>>>>
>>>>
>>>>
>>>> data DLL = DLL (IORef (Maybe DLL)) (IORef (Maybe DLL))
>>>>
>>>>
>>>>
>>>> Chasing from one DLL to the next requires following 3 pointers on the
>>>> heap.
>>>>
>>>>
>>>>
>>>> DLL ~> IORef (Maybe DLL) ~> MutVar# RealWorld (Maybe DLL) ~> Maybe DLL
>>>> ~> DLL
>>>>
>>>>
>>>>
>>>> That is 3 levels of indirection.
>>>>
>>>>
>>>>
>>>> We can trim one by simply unpacking the IORef

Re: ArrayArrays

2015-08-28 Thread Edward Kmett
Well, on the plus side you'd save 16 bytes per object, which adds up if
they were small enough and there are enough of them. You get a bit better
locality of reference in terms of what fits in the first cache line of them.

-Edward

On Fri, Aug 28, 2015 at 6:14 PM, Ryan Newton  wrote:

> Yes. And for the short term I can imagine places we will settle with
> arrays even if it means tracking lengths unnecessarily and unsafeCoercing
> pointers whose types don't actually match their siblings.
>
> Is there anything to recommend the hacks mentioned for fixed sized array
> objects *other* than using them to fake structs? (Much to derecommend, as
> you mentioned!)
>
> On Fri, Aug 28, 2015 at 3:07 PM Edward Kmett  wrote:
>
>> I think both are useful, but the one you suggest requires a lot more
>> plumbing and doesn't subsume all of the usecases of the other.
>>
>> -Edward
>>
>> On Fri, Aug 28, 2015 at 5:51 PM, Ryan Newton  wrote:
>>
>>> So that primitive is an array like thing (Same pointed type, unbounded
>>> length) with extra payload.
>>>
>>> I can see how we can do without structs if we have arrays, especially
>>> with the extra payload at front. But wouldn't the general solution for
>>> structs be one that allows new user data type defs for # types?
>>>
>>>
>>>
>>> On Fri, Aug 28, 2015 at 4:43 PM Edward Kmett  wrote:
>>>
>>>> Some form of MutableStruct# with a known number of words and a known
>>>> number of pointers is basically what Ryan Yates was suggesting above, but
>>>> where the word counts were stored in the objects themselves.
>>>>
>>>> Given that it'd have a couple of words for those counts it'd likely
>>>> want to be something we build in addition to MutVar# rather than a
>>>> replacement.
>>>>
>>>> On the other hand, if we had to fix those numbers and build info tables
>>>> that knew them, and typechecker support, for instance, it'd get rather
>>>> invasive.
>>>>
>>>> Also, a number of things that we can do with the 'sized' versions
>>>> above, like working with evil unsized c-style arrays directly inline at the
>>>> end of the structure cease to be possible, so it isn't even a pure win if
>>>> we did the engineering effort.
>>>>
>>>> I think 90% of the needs I have are covered just by adding the one
>>>> primitive. The last 10% gets pretty invasive.
>>>>
>>>> -Edward
>>>>
>>>> On Fri, Aug 28, 2015 at 5:30 PM, Ryan Newton 
>>>> wrote:
>>>>
>>>>> I like the possibility of a general solution for mutable structs (like
>>>>> Ed said), and I'm trying to fully understand why it's hard.
>>>>>
>>>>> So, we can't unpack MutVar into constructors because of object
>>>>> identity problems. But what about directly supporting an extensible set of
>>>>> unlifted MutStruct# objects, generalizing (and even replacing) MutVar#?
>>>>> That may be too much work, but is it problematic otherwise?
>>>>>
>>>>> Needless to say, this is also critical if we ever want best in class
>>>>> lockfree mutable structures, just like their Stm and sequential
>>>>> counterparts.
>>>>>
>>>>> On Fri, Aug 28, 2015 at 4:43 AM Simon Peyton Jones <
>>>>> simo...@microsoft.com> wrote:
>>>>>
>>>>>> At the very least I'll take this email and turn it into a short
>>>>>> article.
>>>>>>
>>>>>> Yes, please do make it into a wiki page on the GHC Trac, and maybe
>>>>>> make a ticket for it.
>>>>>>
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>>
>>>>>>
>>>>>> Simon
>>>>>>
>>>>>>
>>>>>>
>>>>>> *From:* Edward Kmett [mailto:ekm...@gmail.com]
>>>>>> *Sent:* 27 August 2015 16:54
>>>>>> *To:* Simon Peyton Jones
>>>>>> *Cc:* Manuel M T Chakravarty; Simon Marlow; ghc-devs
>>>>>> *Subject:* Re: ArrayArrays
>>>>>>
>>>>>>
>>>>>>
>>>>>> An ArrayArray# is just an Array# with a modified invariant. It points
>>>>>> directly to other unlifted ArrayArray#'s or B

Re: ArrayArrays

2015-08-28 Thread Edward Kmett
Also there are 4 different "things" here, basically depending on two
independent questions:

a.) if you want to shove the sizes into the info table, and
b.) if you want cardmarking.

Versions with/without cardmarking for different sizes can be done pretty
easily, but as noted, the infotable variants are pretty invasive.
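
(Spelled out, the four variants:)

  1. sizes stored in the object,     with card marking
  2. sizes stored in the object,     without card marking
  3. sizes stored in the info table, with card marking
  4. sizes stored in the info table, without card marking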

-Edward

On Fri, Aug 28, 2015 at 6:36 PM, Edward Kmett  wrote:

> Well, on the plus side you'd save 16 bytes per object, which adds up if
> they were small enough and there are enough of them. You get a bit better
> locality of reference in terms of what fits in the first cache line of them.
>
> -Edward
>
> On Fri, Aug 28, 2015 at 6:14 PM, Ryan Newton  wrote:
>
>> Yes. And for the short term I can imagine places we will settle with
>> arrays even if it means tracking lengths unnecessarily and unsafeCoercing
>> pointers whose types don't actually match their siblings.
>>
>> Is there anything to recommend the hacks mentioned for fixed sized array
>> objects *other* than using them to fake structs? (Much to derecommend, as
>> you mentioned!)
>>
>> On Fri, Aug 28, 2015 at 3:07 PM Edward Kmett  wrote:
>>
>>> I think both are useful, but the one you suggest requires a lot more
>>> plumbing and doesn't subsume all of the usecases of the other.
>>>
>>> -Edward
>>>
>>> On Fri, Aug 28, 2015 at 5:51 PM, Ryan Newton  wrote:
>>>
>>>> So that primitive is an array like thing (Same pointed type, unbounded
>>>> length) with extra payload.
>>>>
>>>> I can see how we can do without structs if we have arrays, especially
>>>> with the extra payload at front. But wouldn't the general solution for
>>>> structs be one that allows new user data type defs for # types?
>>>>
>>>>
>>>>
>>>> On Fri, Aug 28, 2015 at 4:43 PM Edward Kmett  wrote:
>>>>
>>>>> Some form of MutableStruct# with a known number of words and a known
>>>>> number of pointers is basically what Ryan Yates was suggesting above, but
>>>>> where the word counts were stored in the objects themselves.
>>>>>
>>>>> Given that it'd have a couple of words for those counts it'd likely
>>>>> want to be something we build in addition to MutVar# rather than a
>>>>> replacement.
>>>>>
>>>>> On the other hand, if we had to fix those numbers and build info
>>>>> tables that knew them, and typechecker support, for instance, it'd get
>>>>> rather invasive.
>>>>>
>>>>> Also, a number of things that we can do with the 'sized' versions
>>>>> above, like working with evil unsized c-style arrays directly inline at 
>>>>> the
>>>>> end of the structure cease to be possible, so it isn't even a pure win if
>>>>> we did the engineering effort.
>>>>>
>>>>> I think 90% of the needs I have are covered just by adding the one
>>>>> primitive. The last 10% gets pretty invasive.
>>>>>
>>>>> -Edward
>>>>>
>>>>> On Fri, Aug 28, 2015 at 5:30 PM, Ryan Newton 
>>>>> wrote:
>>>>>
>>>>>> I like the possibility of a general solution for mutable structs
>>>>>> (like Ed said), and I'm trying to fully understand why it's hard.
>>>>>>
>>>>>> So, we can't unpack MutVar into constructors because of object
>>>>>> identity problems. But what about directly supporting an extensible set 
>>>>>> of
>>>>>> unlifted MutStruct# objects, generalizing (and even replacing) MutVar#?
>>>>>> That may be too much work, but is it problematic otherwise?
>>>>>>
>>>>>> Needless to say, this is also critical if we ever want best in class
>>>>>> lockfree mutable structures, just like their Stm and sequential
>>>>>> counterparts.
>>>>>>
>>>>>> On Fri, Aug 28, 2015 at 4:43 AM Simon Peyton Jones <
>>>>>> simo...@microsoft.com> wrote:
>>>>>>
>>>>>>> At the very least I'll take this email and turn it into a short
>>>>>>> article.
>>>>>>>
>>>>>>> Yes, please do make it into a wiki page on the GHC Trac, and maybe
>>>>>>> make a ticket for it.
>>>>>>>

Re: ArrayArrays

2015-08-28 Thread Edward Kmett
They just segfault at this level. ;)

Sent from my iPhone

> On Aug 28, 2015, at 7:25 PM, Ryan Newton  wrote:
> 
> You presumably also save a bounds check on reads by hard-coding the sizes?
> 
>> On Fri, Aug 28, 2015 at 3:39 PM, Edward Kmett  wrote:
>> Also there are 4 different "things" here, basically depending on two 
>> independent questions: 
>> 
>> a.) if you want to shove the sizes into the info table, and 
>> b.) if you want cardmarking.
>> 
>> Versions with/without cardmarking for different sizes can be done pretty 
>> easily, but as noted, the infotable variants are pretty invasive.
>> 
>> -Edward
>> 
>>> On Fri, Aug 28, 2015 at 6:36 PM, Edward Kmett  wrote:
>>> Well, on the plus side you'd save 16 bytes per object, which adds up if 
>>> they were small enough and there are enough of them. You get a bit better 
>>> locality of reference in terms of what fits in the first cache line of them.
>>> 
>>> -Edward
>>> 
>>>> On Fri, Aug 28, 2015 at 6:14 PM, Ryan Newton  wrote:
>>>> Yes. And for the short term I can imagine places we will settle with 
>>>> arrays even if it means tracking lengths unnecessarily and unsafeCoercing 
>>>> pointers whose types don't actually match their siblings. 
>>>> 
>>>> Is there anything to recommend the hacks mentioned for fixed sized array 
>>>> objects *other* than using them to fake structs? (Much to derecommend, as 
>>>> you mentioned!)
>>>> 
>>>>> On Fri, Aug 28, 2015 at 3:07 PM Edward Kmett  wrote:
>>>>> I think both are useful, but the one you suggest requires a lot more 
>>>>> plumbing and doesn't subsume all of the usecases of the other.
>>>>> 
>>>>> -Edward
>>>>> 
>>>>>> On Fri, Aug 28, 2015 at 5:51 PM, Ryan Newton  wrote:
>>>>>> So that primitive is an array like thing (Same pointed type, unbounded 
>>>>>> length) with extra payload. 
>>>>>> 
>>>>>> I can see how we can do without structs if we have arrays, especially 
>>>>>> with the extra payload at front. But wouldn't the general solution for 
>>>>>> structs be one that allows new user data type defs for # types?
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> On Fri, Aug 28, 2015 at 4:43 PM Edward Kmett  wrote:
>>>>>>> Some form of MutableStruct# with a known number of words and a known 
>>>>>>> number of pointers is basically what Ryan Yates was suggesting above, 
>>>>>>> but where the word counts were stored in the objects themselves.
>>>>>>> 
>>>>>>> Given that it'd have a couple of words for those counts it'd likely 
>>>>>>> want to be something we build in addition to MutVar# rather than a 
>>>>>>> replacement.
>>>>>>> 
>>>>>>> On the other hand, if we had to fix those numbers and build info tables 
>>>>>>> that knew them, and typechecker support, for instance, it'd get rather 
>>>>>>> invasive.
>>>>>>> 
>>>>>>> Also, a number of things that we can do with the 'sized' versions 
>>>>>>> above, like working with evil unsized c-style arrays directly inline at 
>>>>>>> the end of the structure cease to be possible, so it isn't even a pure 
>>>>>>> win if we did the engineering effort.
>>>>>>> 
>>>>>>> I think 90% of the needs I have are covered just by adding the one 
>>>>>>> primitive. The last 10% gets pretty invasive.
>>>>>>> 
>>>>>>> -Edward
>>>>>>> 
>>>>>>>> On Fri, Aug 28, 2015 at 5:30 PM, Ryan Newton  
>>>>>>>> wrote:
>>>>>>>> I like the possibility of a general solution for mutable structs (like 
>>>>>>>> Ed said), and I'm trying to fully understand why it's hard. 
>>>>>>>> 
>>>>>>>> So, we can't unpack MutVar into constructors because of object 
>>>>>>>> identity problems. But what about directly supporting an extensible 
>>>>>>>> set of unlifted MutStruct# objects, generalizing (and even replacing) 
>>>>>>>> MutVar#? That may

Re: ArrayArrays

2015-08-28 Thread Edward Kmett
I'd love to have that last 10%, but it's a lot of work to get there and more
importantly I don't know quite what it should look like.

On the other hand, I do have a pretty good idea of how the primitives above
could be banged out and tested in a long evening, well in time for 7.12.
And as noted earlier, those remain useful even if a nicer typed version
with an extra level of indirection to the sizes is built up after.

The rest sounds like a good graduate student project for someone who has
graduate students lying around. Maybe somebody at Indiana University who
has an interest in type theory and parallelism can find us one. =)

-Edward

On Fri, Aug 28, 2015 at 8:48 PM, Ryan Yates  wrote:

> I think from my perspective, the motivation for getting the type
> checker involved is primarily bringing this to the level where users
> could be expected to build these structures.  it is reasonable to
> think that there are people who want to use STM (a context with
> mutation already) to implement a straight forward data structure that
> avoids extra indirection penalty.  There should be some places where
> knowing that things are field accesses rather than array indexing
> could be helpful, but I think GHC is good right now about handling
> constant offsets.  In my code I don't do any bounds checking as I know
> I will only be accessing my arrays with constant indexes.  I make
> wrappers for each field access and leave all the unsafe stuff in
> there.  When things go wrong though, the compiler is no help.  Maybe
> template Haskell that generates the appropriate wrappers is the right
> direction to go.
> There is another benefit for me when working with these as arrays in
> that it is quite simple and direct (given the hoops already jumped
> through) to play with alignment.  I can ensure two pointers are never
> on the same cache-line by just spacing things out in the array.
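>
> (E.g., with 8-byte slots, giving each logical object a stride of 8 array
> slots keeps any two objects' fields a full 64-byte cache line apart; a
> trivial sketch, names mine:)
>
>   slotOf :: Int -> Int -> Int
>   slotOf obj field = obj * 8 + field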
>
> On Fri, Aug 28, 2015 at 7:33 PM, Edward Kmett  wrote:
> > They just segfault at this level. ;)
> >
> > Sent from my iPhone
> >
> > On Aug 28, 2015, at 7:25 PM, Ryan Newton  wrote:
> >
> > You presumably also save a bounds check on reads by hard-coding the
> sizes?
> >
> > On Fri, Aug 28, 2015 at 3:39 PM, Edward Kmett  wrote:
> >>
> >> Also there are 4 different "things" here, basically depending on two
> >> independent questions:
> >>
> >> a.) if you want to shove the sizes into the info table, and
> >> b.) if you want cardmarking.
> >>
> >> Versions with/without cardmarking for different sizes can be done pretty
> >> easily, but as noted, the infotable variants are pretty invasive.
> >>
> >> -Edward
> >>
> >> On Fri, Aug 28, 2015 at 6:36 PM, Edward Kmett  wrote:
> >>>
> >>> Well, on the plus side you'd save 16 bytes per object, which adds up if
> >>> they were small enough and there are enough of them. You get a bit
> better
> >>> locality of reference in terms of what fits in the first cache line of
> them.
> >>>
> >>> -Edward
> >>>
> >>> On Fri, Aug 28, 2015 at 6:14 PM, Ryan Newton 
> wrote:
> >>>>
> >>>> Yes. And for the short term I can imagine places we will settle with
> >>>> arrays even if it means tracking lengths unnecessarily and
> unsafeCoercing
> >>>> pointers whose types don't actually match their siblings.
> >>>>
> >>>> Is there anything to recommend the hacks mentioned for fixed sized
> array
> >>>> objects *other* than using them to fake structs? (Much to
> derecommend, as
> >>>> you mentioned!)
> >>>>
> >>>> On Fri, Aug 28, 2015 at 3:07 PM Edward Kmett 
> wrote:
> >>>>>
> >>>>> I think both are useful, but the one you suggest requires a lot more
> >>>>> plumbing and doesn't subsume all of the usecases of the other.
> >>>>>
> >>>>> -Edward
> >>>>>
> >>>>> On Fri, Aug 28, 2015 at 5:51 PM, Ryan Newton 
> >>>>> wrote:
> >>>>>>
> >>>>>> So that primitive is an array like thing (Same pointed type,
> unbounded
> >>>>>> length) with extra payload.
> >>>>>>
> >>>>>> I can see how we can do without structs if we have arrays,
> especially
> >>>>>> with the extra payload at front. But wouldn't the general solution
> for
> >>>>>> structs be one that allows new user data type defs for # types?

Re: ArrayArrays

2015-08-31 Thread Edward Kmett
Works for me.

On Mon, Aug 31, 2015 at 10:14 PM, Johan Tibell 
wrote:

> Works for me.
>
> On Mon, Aug 31, 2015 at 3:50 PM, Ryan Yates  wrote:
>
>> Any time works for me.
>>
>> Ryan
>>
>> On Mon, Aug 31, 2015 at 6:11 PM, Ryan Newton  wrote:
>> > Dear Edward, Ryan Yates, and other interested parties --
>> >
>> > So when should we meet up about this?
>> >
>> > May I propose the Tues afternoon break for everyone at ICFP who is
>> > interested in this topic?  We can meet out in the coffee area and
>> congregate
>> > around Edward Kmett, who is tall and should be easy to find ;-).
>> >
>> > I think Ryan is going to show us how to use his new primops for combined
>> > array + other fields in one heap object?
>> >
>> > On Sat, Aug 29, 2015 at 9:24 PM Edward Kmett  wrote:
>> >>
>> >> Without a custom primitive it doesn't help much there, you have to
>> store
>> >> the indirection to the mask.
>> >>
>> >> With a custom primitive it should cut the on heap root-to-leaf path of
>> >> everything in the HAMT in half. A shorter HashMap was actually one of
>> the
>> >> motivating factors for me doing this. It is rather astoundingly
>> difficult to
>> >> beat the performance of HashMap, so I had to start cheating pretty
>> badly. ;)
>> >>
>> >> -Edward
>> >>
>> >> On Sat, Aug 29, 2015 at 5:45 PM, Johan Tibell 
>> >> wrote:
>> >>>
>> >>> I'd also be interested to chat at ICFP to see if I can use this for my
>> >>> HAMT implementation.
>> >>>
>> >>> On Sat, Aug 29, 2015 at 3:07 PM, Edward Kmett 
>> wrote:
>> >>>>
>> >>>> Sounds good to me. Right now I'm just hacking up composable accessors
>> >>>> for "typed slots" in a fairly lens-like fashion, and treating the
>> set of
>> >>>> slots I define and the 'new' function I build for the data type as
>> its API,
>> >>>> and build atop that. This could eventually graduate to
>> template-haskell, but
>> >>>> I'm not entirely satisfied with the solution I have. I currently
>> distinguish
>> >>>> between what I'm calling "slots" (things that point directly to
>> another
>> >>>> SmallMutableArrayArray# sans wrapper) and "fields" which point
>> directly to
>> >>>> the usual Haskell data types because unifying the two notions meant
>> that I
>> >>>> couldn't lift some coercions out "far enough" to make them vanish.
>> >>>>
>> >>>> I'll be happy to run through my current working set of issues in
>> person
>> >>>> and -- as things get nailed down further -- in a longer lived medium
>> than in
>> >>>> personal conversations. ;)
>> >>>>
>> >>>> -Edward
>> >>>>
>> >>>> On Sat, Aug 29, 2015 at 7:59 AM, Ryan Newton 
>> wrote:
>> >>>>>
>> >>>>> I'd also love to meet up at ICFP and discuss this.  I think the
>> array
>> >>>>> primops plus a TH layer that lets (ab)use them many times without
>> too much
>> >>>>> marginal cost sounds great.  And I'd like to learn how we could be
>> either
>> >>>>> early users of, or help with, this infrastructure.
>> >>>>>
>> >>>>> CC'ing in Ryan Scot and Omer Agacan who may also be interested in
>> >>>>> dropping in on such discussions @ICFP, and Chao-Hong Chen, a Ph.D.
>> student
>> >>>>> who is currently working on concurrent data structures in Haskell,
>> but will
>> >>>>> not be at ICFP.
>> >>>>>
>> >>>>>
>> >>>>> On Fri, Aug 28, 2015 at 7:47 PM, Ryan Yates 
>> >>>>> wrote:
>> >>>>>>
>> >>>>>> I completely agree.  I would love to spend some time during ICFP
>> and
>> >>>>>> friends talking about what it could look like.  My small array for
>> STM
>> >>>>>> changes for the RTS can be seen here [1].  It is on a branch
>> somewhere
>> >>>>>> between 7.8 and 7.10 and includes irrelevant STM bits and some

Re: ArrayArrays

2015-09-07 Thread Edward Kmett
I volunteered to write something up with the caveat that it would take me a
while after the conference ended to get time to do so.

I'll see what I can do.

-Edward

On Mon, Sep 7, 2015 at 9:59 AM, Simon Peyton Jones 
wrote:

> It was fun to meet and discuss this.
>
>
>
> Did someone volunteer to write a wiki page that describes the proposed
> design?  And, I earnestly hope, also describes the menagerie of currently
> available array types and primops so that users can have some chance of
> picking the right one?!
>
>
>
> Thanks
>
>
>
> Simon
>
>
>
> *From:* ghc-devs [mailto:ghc-devs-boun...@haskell.org] *On Behalf Of *Ryan
> Newton
> *Sent:* 31 August 2015 23:11
> *To:* Edward Kmett; Johan Tibell
> *Cc:* Simon Marlow; Manuel M T Chakravarty; Chao-Hong Chen; ghc-devs;
> Ryan Scott; Ryan Yates
> *Subject:* Re: ArrayArrays
>
>
>
> Dear Edward, Ryan Yates, and other interested parties --
>
>
>
> So when should we meet up about this?
>
>
>
> May I propose the Tues afternoon break for everyone at ICFP who is
> interested in this topic?  We can meet out in the coffee area and
> congregate around Edward Kmett, who is tall and should be easy to find ;-).
>
>
>
> I think Ryan is going to show us how to use his new primops for combined
> array + other fields in one heap object?
>
>
>
> On Sat, Aug 29, 2015 at 9:24 PM Edward Kmett  wrote:
>
> Without a custom primitive it doesn't help much there, you have to store
> the indirection to the mask.
>
>
>
> With a custom primitive it should cut the on heap root-to-leaf path of
> everything in the HAMT in half. A shorter HashMap was actually one of the
> motivating factors for me doing this. It is rather astoundingly difficult
> to beat the performance of HashMap, so I had to start cheating pretty
> badly. ;)
>
>
>
> -Edward
>
>
>
> On Sat, Aug 29, 2015 at 5:45 PM, Johan Tibell 
> wrote:
>
> I'd also be interested to chat at ICFP to see if I can use this for my
> HAMT implementation.
>
>
>
> On Sat, Aug 29, 2015 at 3:07 PM, Edward Kmett  wrote:
>
> Sounds good to me. Right now I'm just hacking up composable accessors for
> "typed slots" in a fairly lens-like fashion, and treating the set of slots
> I define and the 'new' function I build for the data type as its API, and
> build atop that. This could eventually graduate to template-haskell, but
> I'm not entirely satisfied with the solution I have. I currently
> distinguish between what I'm calling "slots" (things that point directly to
> another SmallMutableArrayArray# sans wrapper) and "fields" which point
> directly to the usual Haskell data types because unifying the two notions
> meant that I couldn't lift some coercions out "far enough" to make them
> vanish.
>
>
>
> I'll be happy to run through my current working set of issues in person
> and -- as things get nailed down further -- in a longer lived medium than
> in personal conversations. ;)
>
>
>
> -Edward
>
>
>
> On Sat, Aug 29, 2015 at 7:59 AM, Ryan Newton  wrote:
>
> I'd also love to meet up at ICFP and discuss this.  I think the array
> primops plus a TH layer that lets (ab)use them many times without too much
> marginal cost sounds great.  And I'd like to learn how we could be either
> early users of, or help with, this infrastructure.
>
>
>
> CC'ing in Ryan Scot and Omer Agacan who may also be interested in dropping
> in on such discussions @ICFP, and Chao-Hong Chen, a Ph.D. student who is
> currently working on concurrent data structures in Haskell, but will not be
> at ICFP.
>
>
>
>
>
> On Fri, Aug 28, 2015 at 7:47 PM, Ryan Yates  wrote:
>
> I completely agree.  I would love to spend some time during ICFP and
> friends talking about what it could look like.  My small array for STM
> changes for the RTS can be seen here [1].  It is on a branch somewhere
> between 7.8 and 7.10 and includes irrelevant STM bits and some
> confusing naming choices (sorry), but should cover all the details
> needed to implement it for a non-STM context.  The biggest surprise
> for me was following small array too closely and having a word/byte
> offset miss-match [2].
>
> [1]:
> https://github.com/fryguybob/ghc/compare/ghc-htm-bloom...fryguybob:ghc-htm-mut
> [2]: https://ghc.haskell.org/trac/ghc/ticket/10413
>
> Ryan
>
>
> On Fri, Aug 28, 2015 at 10:09 PM, Edward Kmett  wrote:
> > I'd love to have that last 10%, but it's a lot of work to get there and
> more
> > importantly I don't know quite what it should look like.
> >
> > On the oth

Re: ArrayArrays

2015-09-07 Thread Edward Kmett
I had a brief discussion with Richard during the Haskell Symposium about
how we might be able to let parametricity help a bit in reducing the space
of necessary primops to a slightly more manageable level.

Notably, it'd be interesting to explore the ability to allow parametricity
over the portion of # that is just a gcptr.

We could do this if the levity polymorphism machinery was tweaked a bit.
You could envision the ability to abstract over things in both * and the
subset of # that are represented by a gcptr, then modifying the existing
array primitives to be parametric in that choice of levity for their
argument so long as it was of a "heap object" levity.

This could make the menagerie of ways to pack {Small}{Mutable}Array{Array}#
references into a {Small}{Mutable}Array{Array}# actually typecheck
soundly, reducing the need for folks to descend into the use of the more
evil structure primitives we're talking about, and letting us keep a few
more principles around us.
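
For concreteness, here is a sketch of the status quo this would clean up.
This is an illustration, not code from any particular package: the helper
and its name are invented, but writeMutableArrayArrayArray# and
unsafeCoerce# are the real primops.

{-# LANGUAGE MagicHash #-}
import GHC.Prim

-- Storing a MutableArray# in an ArrayArray# slot today means lying to
-- the type system, because the primop only accepts another
-- MutableArrayArray#:
writeMutableArraySlot
  :: MutableArrayArray# s -> Int# -> MutableArray# s a
  -> State# s -> State# s
writeMutableArraySlot marr i a s =
  writeMutableArrayArrayArray# marr i (unsafeCoerce# a) s

Under the parametric levity sketched above, the honest type would just be
the generalized writeArray#.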

Then, in cases like `atomicModifyMutVar#` where the argument needs to
actually be in * rather than just a gcptr, due to the constructed field
selectors it introduces on the heap, we could keep the existing, less
polymorphic type.

-Edward

On Mon, Sep 7, 2015 at 9:59 AM, Simon Peyton Jones 
wrote:

> It was fun to meet and discuss this.
>
>
>
> Did someone volunteer to write a wiki page that describes the proposed
> design?  And, I earnestly hope, also describes the menagerie of currently
> available array types and primops so that users can have some chance of
> picking the right one?!
>
>
>
> Thanks
>
>
>
> Simon
>
>
>
> *From:* ghc-devs [mailto:ghc-devs-boun...@haskell.org] *On Behalf Of *Ryan
> Newton
> *Sent:* 31 August 2015 23:11
> *To:* Edward Kmett; Johan Tibell
> *Cc:* Simon Marlow; Manuel M T Chakravarty; Chao-Hong Chen; ghc-devs;
> Ryan Scott; Ryan Yates
> *Subject:* Re: ArrayArrays
>
>
>
> Dear Edward, Ryan Yates, and other interested parties --
>
>
>
> So when should we meet up about this?
>
>
>
> May I propose the Tues afternoon break for everyone at ICFP who is
> interested in this topic?  We can meet out in the coffee area and
> congregate around Edward Kmett, who is tall and should be easy to find ;-).
>
>
>
> I think Ryan is going to show us how to use his new primops for combined
> array + other fields in one heap object?
>
>
>
> On Sat, Aug 29, 2015 at 9:24 PM Edward Kmett  wrote:
>
> Without a custom primitive it doesn't help much there, you have to store
> the indirection to the mask.
>
>
>
> With a custom primitive it should cut the on heap root-to-leaf path of
> everything in the HAMT in half. A shorter HashMap was actually one of the
> motivating factors for me doing this. It is rather astoundingly difficult
> to beat the performance of HashMap, so I had to start cheating pretty
> badly. ;)
>
>
>
> -Edward
>
>
>
> On Sat, Aug 29, 2015 at 5:45 PM, Johan Tibell 
> wrote:
>
> I'd also be interested to chat at ICFP to see if I can use this for my
> HAMT implementation.
>
>
>
> On Sat, Aug 29, 2015 at 3:07 PM, Edward Kmett  wrote:
>
> Sounds good to me. Right now I'm just hacking up composable accessors for
> "typed slots" in a fairly lens-like fashion, and treating the set of slots
> I define and the 'new' function I build for the data type as its API, and
> build atop that. This could eventually graduate to template-haskell, but
> I'm not entirely satisfied with the solution I have. I currently
> distinguish between what I'm calling "slots" (things that point directly to
> another SmallMutableArrayArray# sans wrapper) and "fields" which point
> directly to the usual Haskell data types because unifying the two notions
> meant that I couldn't lift some coercions out "far enough" to make them
> vanish.
>
>
>
> I'll be happy to run through my current working set of issues in person
> and -- as things get nailed down further -- in a longer lived medium than
> in personal conversations. ;)
>
>
>
> -Edward
>
>
>
> On Sat, Aug 29, 2015 at 7:59 AM, Ryan Newton  wrote:
>
> I'd also love to meet up at ICFP and discuss this.  I think the array
> primops plus a TH layer that lets us (ab)use them many times without too much
> marginal cost sounds great.  And I'd like to learn how we could be either
> early users of, or help with, this infrastructure.
>
>
>
> CC'ing in Ryan Scott and Omer Agacan who may also be interested in dropping
> in on such discussions @ICFP, and Chao-Hong Chen, a Ph.D. student who is
> currently working on concurrent data structures in Haskell, but will not 

Re: ArrayArrays

2015-09-07 Thread Edward Kmett
Indeed. I can CAS today with appropriately coerced primitives.
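
For example (an illustrative sketch, assuming the real casArray# and
unsafeCoerce# primops; the helper name is invented):

{-# LANGUAGE MagicHash, UnboxedTuples #-}
import GHC.Prim

-- CAS on an ArrayArray# slot by viewing the array as a MutableArray#
-- of unlifted pointers:
casArrayArraySlot
  :: MutableArrayArray# s -> Int#
  -> ArrayArray#     -- expected old value
  -> ArrayArray#     -- new value
  -> State# s -> (# State# s, Int#, ArrayArray# #)
casArrayArraySlot marr i old new s =
  casArray# (unsafeCoerce# marr) i old new s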

-Edward

On Mon, Sep 7, 2015 at 4:27 PM, Ryan Newton  wrote:

> Ah, incidentally that introduces an interesting difference between
> atomicModify and CAS.  CAS should be able to work on mutable locations in
> that subset of # that are represented by a gcptr, whereas Edward pointed
> out that atomicModify cannot.
>
> (Indeed, to use lock-free algorithms with these new unboxed mutable
> structures we'll need CAS on the slots.)
>
> On Mon, Sep 7, 2015 at 4:16 PM, Edward Kmett  wrote:
>
>> I had a brief discussion with Richard during the Haskell Symposium about
>> how we might be able to let parametricity help a bit in reducing the space
>> of necessary primops to a slightly more manageable level.
>>
>> Notably, it'd be interesting to explore the ability to allow
>> parametricity over the portion of # that is just a gcptr.
>>
>> We could do this if the levity polymorphism machinery was tweaked a bit.
>> You could envision the ability to abstract over things in both * and the
>> subset of # that are represented by a gcptr, then modifying the existing
>> array primitives to be parametric in that choice of levity for their
>> argument so long as it was of a "heap object" levity.
>>
>> This could make the menagerie of ways to pack
>> {Small}{Mutable}Array{Array}# references into a
>> {Small}{Mutable}Array{Array}# actually typecheck soundly, reducing the
>> need for folks to descend into the use of the more evil structure
>> primitives we're talking about, and letting us keep a few more principles
>> around us.
>>
>> Then, in cases like `atomicModifyMutVar#` where the argument needs to
>> actually be in * rather than just a gcptr, due to the constructed field
>> selectors it introduces on the heap, we could keep the existing, less
>> polymorphic type.
>>
>> -Edward
>>
>> On Mon, Sep 7, 2015 at 9:59 AM, Simon Peyton Jones > > wrote:
>>
>>> It was fun to meet and discuss this.
>>>
>>>
>>>
>>> Did someone volunteer to write a wiki page that describes the proposed
>>> design?  And, I earnestly hope, also describes the menagerie of currently
>>> available array types and primops so that users can have some chance of
>>> picking the right one?!
>>>
>>>
>>>
>>> Thanks
>>>
>>>
>>>
>>> Simon
>>>
>>>
>>>
>>> *From:* ghc-devs [mailto:ghc-devs-boun...@haskell.org] *On Behalf Of *Ryan
>>> Newton
>>> *Sent:* 31 August 2015 23:11
>>> *To:* Edward Kmett; Johan Tibell
>>> *Cc:* Simon Marlow; Manuel M T Chakravarty; Chao-Hong Chen; ghc-devs;
>>> Ryan Scott; Ryan Yates
>>> *Subject:* Re: ArrayArrays
>>>
>>>
>>>
>>> Dear Edward, Ryan Yates, and other interested parties --
>>>
>>>
>>>
>>> So when should we meet up about this?
>>>
>>>
>>>
>>> May I propose the Tues afternoon break for everyone at ICFP who is
>>> interested in this topic?  We can meet out in the coffee area and
>>> congregate around Edward Kmett, who is tall and should be easy to find ;-).
>>>
>>>
>>>
>>> I think Ryan is going to show us how to use his new primops for combined
>>> array + other fields in one heap object?
>>>
>>>
>>>
>>> On Sat, Aug 29, 2015 at 9:24 PM Edward Kmett  wrote:
>>>
>>> Without a custom primitive it doesn't help much there, you have to store
>>> the indirection to the mask.
>>>
>>>
>>>
>>> With a custom primitive it should cut the on heap root-to-leaf path of
>>> everything in the HAMT in half. A shorter HashMap was actually one of the
>>> motivating factors for me doing this. It is rather astoundingly difficult
>>> to beat the performance of HashMap, so I had to start cheating pretty
>>> badly. ;)
>>>
>>>
>>>
>>> -Edward
>>>
>>>
>>>
>>> On Sat, Aug 29, 2015 at 5:45 PM, Johan Tibell 
>>> wrote:
>>>
>>> I'd also be interested to chat at ICFP to see if I can use this for my
>>> HAMT implementation.
>>>
>>>
>>>
>>> On Sat, Aug 29, 2015 at 3:07 PM, Edward Kmett  wrote:
>>>
>>> Sounds good to me. Right now I'm just hacking up composable accessors
>>> for "typed slots" in a fairly lens-like fashion, and treating the set of
>>

Re: ArrayArrays

2015-09-07 Thread Edward Kmett
Assume we had the ability to talk about Levity in a new way and instead of
just:

data Levity = Lifted | Unlifted

type * = TYPE 'Lifted
type # = TYPE 'Unlifted

we instead had a more nuanced notion of TYPE parameterized on another data
type:

data Levity = Lifted | Unlifted
data Param = Composite | Simple Levity

and we parameterized TYPE with a Param rather than Levity.

Existing strange representations can continue to live in TYPE 'Composite

(# Int# , Double #) :: TYPE 'Composite

and we don't support parametricity in there, just like, currently we don't
allow parametricity in #.

We can include the undefined example from Richard's talk:

undefined :: forall (v :: Param). v

and ultimately lift it into his pi type when it is available just as before.

But we could consider TYPE ('Simple 'Unlifted) as a form of 'parametric
#' covering unlifted things we're willing to allow polymorphism over
because they are just pointers to something in the heap, that just happens
to not be able to be _|_ or a thunk.

In this setting (recalling that, above, I modified Richard's TYPE to take
a Param instead of a Levity) we can define a type alias for things that
live as a simple pointer to a heap-allocated object:

type GC (l :: Levity) = TYPE ('Simple l)
type * = GC 'Lifted

and then we can look at existing primitives generalized:

Array# :: forall (l :: Levity) (a :: GC l). a -> GC 'Unlifted
MutableArray# :: forall (l :: Levity) (a :: GC l). * -> a -> GC 'Unlifted
SmallArray# :: forall (l :: Levity) (a :: GC l). a -> GC 'Unlifted
SmallMutableArray# :: forall (l :: Levity) (a :: GC l). * -> a -> GC
'Unlifted
MutVar# :: forall (l :: Levity) (a :: GC l). * -> a -> GC 'Unlifted
MVar# :: forall (l :: Levity) (a :: GC l). * -> a -> GC 'Unlifted

Weak#, StablePtr#, StableName#, etc. all can take similar modifications.

Recall that an ArrayArray# was just an Array# hacked up to be able to hold
onto the subset of # that is collectable.

Almost all of the operations on these data types can work on the more
general kind of argument.

newArray# :: forall (s :: *) (l :: Levity) (a :: GC l). Int# -> a -> State#
s -> (# State# s, MutableArray# s a #)

writeArray# :: forall (s :: *) (l :: Levity) (a :: GC l). MutableArray# s a
-> Int# -> a -> State# s -> State# s

readArray# :: forall (s :: *) (l :: Levity) (a :: GC l). MutableArray# s a
-> Int# -> State# s -> (# State# s, a #)

etc.

Only a couple of our existing primitives _can't_ generalize this way. The
one that leaps to mind is atomicModifyMutVar#, which would need to stay
constrained to only work on arguments in *, because of the way it operates.

With that we can still talk about

MutableArray# s Int

but now we can also talk about:

MutableArray# s (MutableArray# s Int)

without the layer of indirection through a box in * and without an
explosion of primops. The same newFoo, readFoo, writeFoo machinery works
for both kinds.
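
As a sketch of the payoff, all hypothetical of course (none of this
typechecks today; it assumes the GC machinery above):

nested :: State# s -> (# State# s, MutableArray# s (MutableArray# s Int) #)
nested s0 =
  case newArray# 4# (0 :: Int) s0 of
    (# s1, inner #) ->
      -- inner :: MutableArray# s Int is unlifted, yet is a perfectly
      -- good element under the generalized newArray#:
      newArray# 16# inner s1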

The struct machinery doesn't get to take advantage of this, but it would
let us clean house elsewhere in Prim and drastically improve the range of
applicability of the existing primitives with nothing more than a small
change to the levity machinery.

I'm not attached to any of the names above, I coined them just to give us a
concrete thing to talk about.

Here I'm only proposing we extend machinery in GHC.Prim this way, but an
interesting 'now that the barn door is open' question is to consider that
our existing Haskell data types often admit a similar form of parametricity
and nothing in principle prevents this from working for Maybe or [] and
once you permit inference to fire across all of GC l then it seems to me
that you'd start to get those same capabilities there as well when
LevityPolymorphism was turned on.

-Edward

On Mon, Sep 7, 2015 at 5:56 PM, Simon Peyton Jones 
wrote:

> This could make the menagerie of ways to pack
> {Small}{Mutable}Array{Array}# references into a
> {Small}{Mutable}Array{Array}#' actually typecheck soundly, reducing the
> need for folks to descend into the use of the more evil structure
> primitives we're talking about, and letting us keep a few more principles
> around us.
>
>
>
> I’m lost. Can you give some concrete examples that illustrate how levity
> polymorphism will help us?
>
>
> Simon
>
>
>
> *From:* Edward Kmett [mailto:ekm...@gmail.com]
> *Sent:* 07 September 2015 21:17
> *To:* Simon Peyton Jones
> *Cc:* Ryan Newton; Johan Tibell; Simon Marlow; Manuel M T Chakravarty;
> Chao-Hong Chen; ghc-devs; Ryan Scott; Ryan Yates
> *Subject:* Re: ArrayArrays
>
>
>
> I had a brief discussion with Richard during the Haskell Symposium about
> how we might be able to let parametricity help a bit in reducing the space
> of neces

Re: ArrayArrays

2015-09-08 Thread Edward Kmett
Once you start to include all the other primitive types there is a bit more
of an explosion. MVar#, TVar#, MutVar#, Small variants, etc. can all be
modified to carry unlifted content.

Being able to be parametric over that choice would permit a number of
things in user land to do the same thing with an open-ended set of design
possibilities that are rather hard to contemplate in advance. e.g. being
able to abstract over them could let you just use a normal (,) to carry
around unlifted parametric data types or being able to talk about [MVar# s
a] drastically reducing the number of one off data types we need to invent.

If you can talk about the machinery mentioned above then you can have
typeclasses parameterized on an argument that could be either unlifted or
lifted.

I'm not willing to fight too hard for it, but it feels more like the
"right" solution than retaining a cut-and-paste copy of the same code and
bifurcating further on each argument for which you want such a degree of
freedom.

As such it seems like a pretty big win for a comparatively minor change to
the levity polymorphism machinery.

-Edward

On Tue, Sep 8, 2015 at 3:40 AM, Simon Marlow  wrote:

> This would be very cool, however it's questionable whether it's worth it.
>
> Without any unlifted kind, we need
>  - ArrayArray#
>  - a set of new/read/write primops for every element type,
>either built-in or made from unsafeCoerce#
>
> With the unlifted kind, we would need
>  - ArrayArray#
>  - one set of new/read/write primops
>
> With levity polymorphism, we would need
>  - none of this, Array# can be used
>
> So having an unlifted kind already kills a lot of the duplication,
> polymorphism only kills a bit more.
>
> Cheers
> Simon
>
> On 08/09/2015 00:14, Edward Kmett wrote:
>
>> Assume we had the ability to talk about Levity in a new way and instead
>> of just:
>>
>> data Levity = Lifted | Unlifted
>>
>> type * = TYPE 'Lifted
>> type # = TYPE 'Unlifted
>>
>> we instead had a more nuanced notion of TYPE parameterized on another
>> data type:
>>
>> data Levity = Lifted | Unlifted
>> data Param = Composite | Simple Levity
>>
>> and we parameterized TYPE with a Param rather than Levity.
>>
>> Existing strange representations can continue to live in TYPE 'Composite
>>
>> (# Int# , Double #) :: TYPE 'Composite
>>
>> and we don't support parametricity in there, just like, currently we
>> don't allow parametricity in #.
>>
>> We can include the undefined example from Richard's talk:
>>
>> undefined :: forall (v :: Param). v
>>
>> and ultimately lift it into his pi type when it is available just as
>> before.
>>
>> But we could consider TYPE ('Simple 'Unlifted) as a form of
>> 'parametric #' covering unlifted things we're willing to allow
>> polymorphism over because they are just pointers to something in the
>> heap, that just happens to not be able to be _|_ or a thunk.
>>
>> In this setting (recalling that, above, I modified Richard's TYPE to
>> take a Param instead of a Levity) we can define a type alias for things
>> that live as a simple pointer to a heap-allocated object:
>>
>> type GC (l :: Levity) = TYPE ('Simple l)
>> type * = GC 'Lifted
>>
>> and then we can look at existing primitives generalized:
>>
>> Array# :: forall (l :: Levity) (a :: GC l). a -> GC 'Unlifted
>> MutableArray# :: forall (l :: Levity) (a :: GC l). * -> a -> GC 'Unlifted
>> SmallArray# :: forall (l :: Levity) (a :: GC l). a -> GC 'Unlifted
>> SmallMutableArray# :: forall (l :: Levity) (a :: GC l). * -> a -> GC
>> 'Unlifted
>> MutVar# :: forall (l :: Levity) (a :: GC l). * -> a -> GC 'Unlifted
>> MVar# :: forall (l :: Levity) (a :: GC l). * -> a -> GC 'Unlifted
>>
>> Weak#, StablePtr#, StableName#, etc. all can take similar modifications.
>>
>> Recall that an ArrayArray# was just an Array# hacked up to be able to
>> hold onto the subset of # that is collectable.
>>
>> Almost all of the operations on these data types can work on the more
>> general kind of argument.
>>
>> newArray# :: forall (s :: *) (l :: Levity) (a :: GC l). Int# -> a ->
>> State# s -> (# State# s, MutableArray# s a #)
>>
>> writeArray# :: forall (s :: *) (l :: Levity) (a :: GC l). MutableArray#
>> s a -> Int# -> a -> State# s -> State# s
>>
>> readArray# :: forall (s :: *) (l :: Levity) (a :: GC l). MutableArray# s
>> a -> Int# -> State# s -

Re: Unlifted data types

2015-09-09 Thread Edward Kmett
I think ultimately the two views of levity that we've been talking diverge
along the same lines as the pi vs forall discussion from your Levity
polymorphism talk.

I've been focused entirely on situations where forall suffices, and no
distinction is needed in how you compile for both levities.

Maybe could be polymorphic using a mere forall in the levity of the boxed
argument it carries as it doesn't care what it is, it never forces it,
pattern matching on it just gives it back when you pattern match on it.

Eq or Ord could just as easily work over anything boxed. The particular Eq
_instance_ needs to care about the levity.

Most of the combinators for working with Maybe do need to care about that
levity however.

e.g. consider fmap in Functor, the particular instances would care. Because
you ultimately wind up using fmap to build 'f a' values and those need to
know how the let binding should work. There seems to be a pi at work there.
Correct operational behavior would depend on the levity.

But if we look at what inference should probably grab for the levity of
Functor:

you'd get:

class Functor (l :: Levity) (l' :: Levity) (f :: GC l -> GC l') where
   fmap :: forall (a :: GC l) (b :: GC l). (a -> b) -> f a -> f b

Based on the notion that, given current practices, f would cause us to
pick a common kind for a and b, and the results of 'f'. How (and whether)
we decided to default to * unless annotated in various situations would
drive this closer and closer to the existing Functor by default.

These are indeed distinct functors with distinct operational behavior, and
we could implement each of them by supplying separate instances, as the
levity would take part in the instance resolution like any other kind
argument.
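
To make that concrete (hypothetical syntax throughout; UList is an
invented list type living in GC 'Unlifted):

instance Functor 'Lifted 'Lifted [] where
  fmap _ []     = []
  fmap f (x:xs) = f x : fmap f xs    -- f x may be left as a thunk

instance Functor 'Unlifted 'Unlifted UList where
  fmap _ UNil         = UNil
  fmap f (UCons x xs) = UCons (f x) (fmap f xs)
    -- f x must be evaluated before UCons can store it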

Whether we could expect an average Haskeller to be willing to do so is an
entirely different matter.

-Edward


On Wed, Sep 9, 2015 at 12:44 PM, Dan Doel  wrote:

> On Wed, Sep 9, 2015 at 9:03 AM, Richard Eisenberg 
> wrote:
> > No functions (excepting `error` and friends) are truly levity
> polymorphic.
>
> I was talking with Ed Kmett about this yesterday, and he pointed out
> that this isn't true. There are a significant array of levity
> polymorphic functions having to do with reference types. They simply
> shuffle around pointers with the right calling convention, and don't
> really care what levity their arguments are, because they're just
> operating uniformly either way. So if we had:
>
> MVar# :: forall (l :: Levity). * -> TYPE (Boxed l) -> TYPE (Boxed
> Unlifted)
>
> then:
>
> takeMVar :: forall s (l :: Levity) (a :: TYPE (Boxed l)). MVar# s
> l a -> State# s -> (# State# s, a #)
> putMVar :: forall s (l :: Levity) (a :: Type (Boxed l)). MVar# s l
> a -> a -> State# s -> State# s
>
> are genuinely parametric in l. And the same is true for MutVar#,
> Array#, MutableArray#, etc.
>
> I think data type constructors are actually parametric, too (ignoring
> data with ! in them for the moment; the underlying constructors of
> those). Using a constructor just puts the pointers for the fields in
> the type, and matching on a constructor gives them back. They don't
> need to care whether their fields are lifted or not, they just
> preserve whatever the case is.
>
> But this:
>
> > We use levity polymorphism in the types to get GHC to use its existing
> type inference to infer strictness. By the time type inference is done, we
> must ensure that no levity polymorphism remains, because the code generator
> won't be able to deal with it.
>
> Is not parametric polymorphism; it is ad-hoc polymorphism. It even has
> the defaulting step from type classes. Except the ad-hoc has been
> given the same notation as the genuinely parametric, so you can no
> longer identify the latter. (I'm not sure I'm a great fan of the
> ad-hoc part anyway, to be honest.)
>
> -- Dan
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Deriving Contravariant and Profunctor

2015-09-11 Thread Edward Kmett
Actually it is trickier than you'd think.

With "Functor" you can pretend that contravariance doesn't exist.

With both profunctor and contravariant it is necessarily part of the puzzle.

data Compose f g a = Compose (f (g a))

* are both f and g contravariant leading to a functor?
* is f contravariant and g covariant leading to a contravariant functor?
* is f covariant and g contravariant leading to a contravariant functor?

data Wat p f a b = Wat (p (f a) b)

is p a Profunctor or a Bifunctor? is f Contravariant or a Functor?
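
Concretely, the competing readings collide on the same instance head. An
illustrative sketch, using contramap from the contravariant package:

import Data.Functor.Contravariant (Contravariant (..))

instance (Functor f, Functor g) => Functor (Compose f g) where
  fmap f (Compose x) = Compose (fmap (fmap f) x)

instance (Contravariant f, Contravariant g) => Functor (Compose f g) where
  fmap f (Compose x) = Compose (contramap (contramap f) x)

Instance selection only ever sees Functor (Compose f g), so these two can
never coexist, and no deriving mechanism can choose between them.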

We investigated adding TH code-generation for the contravariant package,
and ultimately rejected it on these grounds.

https://github.com/ekmett/contravariant/issues/17

-Edward



On Fri, Sep 11, 2015 at 12:49 PM, David Feuer  wrote:

> Would it be possible to add mechanisms to derive Contravariant and
> Profunctor instances? As with Functor, each algebraic datatype can
> only have one sensible instance of each of these.
>
> David Feuer
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Deriving Contravariant and Profunctor

2015-09-11 Thread Edward Kmett
They'd all act the same assuming any or all of the instances existed, but
GHC can't backtrack and figure out which way to get there; it'll only look
at the instance head.

-Edward

On Fri, Sep 11, 2015 at 2:22 PM, David Feuer  wrote:

> Oh, I see... you get horrible overlap problems there. Blech! I guess
> they'll all act the same (modulo optimized <$ and such), but GHC can't
> know that and will see them as forever incoherent.
>
> On Fri, Sep 11, 2015 at 1:52 PM, Edward Kmett  wrote:
> > Actually it is trickier than you'd think.
> >
> > With "Functor" you can pretend that contravariance doesn't exist.
> >
> > With both profunctor and contravariant it is necessarily part of the
> puzzle.
> >
> > data Compose f g a = Compose (f (g a))
> >
> > * are both f and g contravariant leading to a functor?
> > * is f contravariant and g covariant leading to a contravariant functor?
> > * is f covariant and g contravariant leading to a contravariant functor?
> >
> > data Wat p f a b = Wat (p (f a) b)
> >
> > is p a Profunctor or a Bifunctor? is f Contravariant or a Functor?
> >
> > We investigated adding TH code-generation for the contravariant package,
> and
> > ultimately rejected it on these grounds.
> >
> > https://github.com/ekmett/contravariant/issues/17
> >
> > -Edward
> >
> >
> >
> > On Fri, Sep 11, 2015 at 12:49 PM, David Feuer 
> wrote:
> >>
> >> Would it be possible to add mechanisms to derive Contravariant and
> >> Profunctor instances? As with Functor, each algebraic datatype can
> >> only have one sensible instance of each of these.
> >>
> >> David Feuer
> >> ___
> >> ghc-devs mailing list
> >> ghc-devs@haskell.org
> >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> >
> >
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: MonadFail decisions

2015-10-13 Thread Edward Kmett
On Tue, Oct 13, 2015 at 10:29 AM, Simon Peyton Jones 
wrote:

> Dear Edward and Core Libraries Committee
>
>
>
> Can you tell us what plan you want to execute for MonadFail?
> Specifically, in https://wiki.haskell.org/MonadFail_Proposal
>
> · Is the specification in 1.3 what you have agreed?
>
The main concern I have with section 1.3 is the statement about view
pattern desugaring. It really should just borrow the failability of the
pattern part. The view pattern component adds nothing.
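
That is, with ViewPatterns (an illustrative pair; parse and getInput are
hypothetical names):

do (parse -> Just z) <- getInput   -- wants MonadFail, but only because
   ...                             -- the sub-pattern 'Just z' is refutable

do (parse -> z) <- getInput        -- total sub-pattern: no constraint
   ...

The (parse ->) part can never fail on its own, so it should contribute
nothing to the desugaring.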

Getting better introspection on pattern synonyms to avoid them becoming a
huge source of MonadFail constraints would be good as well.

We can in theory incorporate improvements on this front gradually however.

> · Is the transition strategy in 1.7 exactly what you want?
>
> The "3 release policy" informs the design here a bit. Notably it puts off
warnings for this for a while, as we can't warn about it in 8.0 (or really
even 8.2) which means that at least in 8.0 this is almost entirely a
library change. Under that policy we can't do the warnings until 8.4 and
cut-over til 8.6.

For 8.0 the change to 1.7 basically comes down to "don't turn on the
warnings by default yet".

> We can’t implement 8.0 without being sure of the specification!  The
> current Phab is
>
> https://phabricator.haskell.org/D1248
>
>
>
> Also, David, did our conversation at HX help you get un-stuck?
>
>
-Edward


>
>
> Thanks
>
>
>
> Simon
>
>
>
> *From:* haskell-core-librar...@googlegroups.com [mailto:
> haskell-core-librar...@googlegroups.com] *On Behalf Of *Edward Kmett
> *Sent:* 13 October 2015 01:43
> *To:* core-libraries-commit...@haskell.org
> *Subject:* [core libraries] Prelude: 3 Release Policy
>
>
>
> Recently there has been a bunch of chatter about ways to mitigate the
> amount of CPP pushed on users by changes to the Prelude.
>
>
>
> In particular the discussion has been around the idea of trying to ensure
> that it is possible to write code in _some_ way without CPP that can run
> backwards for 3 releases of GHC, preferably in a -Wall safe manner. The
> approach they may have to use may not be the idiomatic way, but in general
> it should exist.
>
>
>
> Gershom ran around at the Haskell Exchange sounding folks out about this
> idea, and it seems to codify a reasonable tension between the "change
> nothing" and "change everything" camps. The feedback thus far seems to be
> noises of "grumbling acceptance" rather than the current state of outright
> panic that we might do anything at any time.
>
>
>
> I'm personally considering this a very high priority for all changes to
> Prelude going forward.
>
>
>
> The 3 years forms a backward-facing window, not a guarantee against future
> change, but we should of course try to let folks know what is coming with a
> decent time horizon so folks can look forward as well. That is a separate
> concern, though.
>
>
>
> I'm not ready to give the "3 release policy" outright veto power over new
> functionality, but at least if we have two plans that can in the end yield
> the same end state, we should definitely err on the side of the plan that
> falls within these guidelines, and be very well ready to explain to a
> rather irate community when we violate this rubric. It shouldn't be done
> lightly, if at all, if we can help it!
>
>
>
> All in all it is a fairly weak guarantee, but it does have some impact on
> current items under consideration.
>
>
>
> Off the top of my head:
>
>
>
> * A number of new members for Floating were passed by proposal back before
> 7.10 went out the door, but haven't found their way into base yet: expm1,
> log1p, etc. are absolutely required for decent precision numerics. When the
> proposal passed we ultimately decided _not_ to include default definitions
> for these to force authors to implement them explicitly. Under the
> guidelines here, the plan would likely have to include default definitions
> for these to start when introducing them in 8.0. Then in 8.4 we could in
> theory remove the defaults and remain in compliance with the letter of the
> law here or introduce an ad hoc warning about lack of implementation, and
> remove the defaults in 8.6, depending on how gradual an introduction we
> wanted to give. We wouldn't be able to do the warnings in 8.2, however, and
> remain within the letter of the law, and we wouldn't be able to introduce
> them without defaults without violating the no-warnings guideline.
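
(For reference, the defaults in question would be the usual naive
definitions, which is exactly what makes them a precision trap. An
illustrative sketch; expm1 and log1p are the real proposed members:

expm1Default, log1pDefault :: Floating a => a -> a
expm1Default x = exp x - 1      -- catastrophic cancellation near 0
log1pDefault x = log (1 + x)    -- likewise

Hence the original decision to force explicit implementations.)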
>
>
>
> * MonadFail reform proposal wouldn't be able to start issuing warnings
> about missing instances until 8.4 even if we 

Re: MonadFail decisions

2015-10-16 Thread Edward Kmett
The current intention is to go ahead with MonadFail.

It sounds like we'll need to delay the warnings themselves until around 8.4.

We can add them, but not turn them on by default in the short term. This
has the knock-on effect of delaying the whole plan a release or two, but
otherwise the plan is very actionable.

A lot of the opposition comes from fear that we 'might do anything at any
time'. If we're up front about what is coming and give sufficient notice
and the ability for folks to maintain a reasonably wide backwards
compatibility window without needing to dip into CPP or suppress warnings
them most of those fears go away.

-Edward

On Fri, Oct 16, 2015 at 12:09 PM, David Luposchainsky <
dluposchain...@googlemail.com> wrote:

> On 13.10.2015 16:29, Simon Peyton Jones wrote:
> > Also, David, did our conversation at HX help you get un-stuck?
>
> Hi Simon,
>
> yes, it was definitely a good pointer. On the other hand, I found the
> Haskell
> Exchange to be quite a frustrating event with respect to current events:
> there
> was a load of very loud, but in my opinion very wrong, categorical
> opposition to
> breaking changes in general.
> I spent quite a bit of time worrying about MonadFail in the past, but
> right now
> I'd like to wait for a "tentative yes" from the CLC before I keep going,
> because
> I'm really not sure the mob is going to make me throw away my patch.
> Granted, a
> lot of the discussion is about MRP, but many of the points brought up
> there are
> equally valid against the MFP.
>
> David
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: MonadFail decisions

2015-10-16 Thread Edward Kmett
Hi David,

I took the time to update the MonadFail wiki page to include the timeline
currently under consideration, lengthening it to finish
in 8.8 to comply with the "3 release policy" and to ensure that folks
always have a notification of pending breaking changes.

I included a couple of personal comments about the desugaring in 1.3 where
we could do better. The improvements in 1.3 could be made any time over the
8.0 and 8.2 releases before we start expecting people to cut over in 8.4
without impact.

As for the "mob", please keep in mind that the vast majority of feedback
about the MonadFail proposal has been positive and draw heart from that.
Many of the folks who were against the Foldable/Traversable generalizations
(e.g. Lennart) are heavily in favor of MFP.

-Edward






On Fri, Oct 16, 2015 at 12:09 PM, David Luposchainsky <
dluposchain...@googlemail.com> wrote:

> On 13.10.2015 16:29, Simon Peyton Jones wrote:
> > Also, David, did our conversation at HX help you get un-stuck?
>
> Hi Simon,
>
> yes, it was definitely a good pointer. On the other hand, I found the
> Haskell
> Exchange to be quite a frustrating event with respect to current events:
> there
> was a load of very loud, but in my opinion very wrong, categorical
> opposition to
> breaking changes in general.
> I spent quite a bit of time worrying about MonadFail in the past, but
> right now
> I'd like to wait for a "tentative yes" from the CLC before I keep going,
> because
> I'm really not sure the mob is going to make me throw away my patch.
> Granted, a
> lot of the discussion is about MRP, but many of the points brought up
> there are
> equally valid against the MFP.
>
> David
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: MonadFail decisions

2015-10-16 Thread Edward Kmett
Not a bad idea. I think Herbert was talking about calling it -Wcompat or
something.

On Fri, Oct 16, 2015 at 1:06 PM, Howard B. Golden  wrote:

> On Friday, October 16, 2015 9:22 AM, Edward Kmett wrote:
>
>
> > It sounds like we'll need to delay the warnings themselves until around
> > 8.4.
>
> I propose an optional generic flag -fearly-warning (pun slightly intended)
> to get _all_ warnings of planned changes.
>
> Howard
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Taking a step back

2015-10-20 Thread Edward Kmett
Johan,

Thank you so much for all of your contributions to the community.

I confess, there are days when I find myself lost in maintenance hell that
I feel a desire to throw in the towel as well. (If Eric Mertens and others
hadn't picked up so much of the slack on my own projects I'm afraid I
likely would have reached the point of gravitational collapse long ago.)

I'm terribly sorry to hear that recent attempts to mitigate the impact of
changes, such as the three release policy (which was inspired by comments
you made), haven't been enough to assuage your fears and discontent about
the current direction things are heading.

We are all poorer for the loss of your guidance.

-Edward

On Tue, Oct 20, 2015 at 9:59 AM, Johan Tibell 
wrote:

> Friends,
>
> I'm taking a step back from day-to-day library work.
>
> There are two main reasons I use Haskell: on one hand I find writing
> Haskell educational and fun. On the other I hope to make it a viable
> alternative to existing mainstream languages. With recent changes to our
> core libraries, and the general direction these are moving in, I believe
> we're moving away from becoming a viable alternative to those mainstream
> languages.
>
> This has some practical implications for how I spend my Haskell hacking
> time. Much of what I do is maintaining and working on libraries that are
> needed for real world usage, but that aren't that interesting to work on.
> I've lost the motivation to work on these.
>
> I've decided to take a step back from the core maintenance work on cabal,
> network, containers, and a few others* starting now. I've already found
> replacement maintainers for these.
>
> I still plan to hack on random side projects, including GHC, and to
> continue coming to Haskell events and conference, just with a shorter bug
> backlog to worry about. :)
>
> -- Johan Tibell
>
> * For now I will still hack on unordered-containers and ekg, as there are
> some things I'd like to experiment with there.
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Temporarily pinning a thread to a capability

2015-10-27 Thread Edward Kmett
Would anything go wrong with a thread id if I pinned it to a capability
after the fact?

I could in theory do so just by setting

tso->flags |= TSO_LOCKED

and then disabling this later by restoring the TSO flags.

I can't think of anything but I figured folks here might be able to think
of invariants I don't know about.

Usage scenario:

I have a number of things where I can't afford a map from a ThreadId# or
even its internal id to a per-thread value for bounded wait-free
structures.

On the other hand, I can afford one entry per capability and to make a
handful of primitives that can't be preempted, letting me use normal
writes, not even a CAS, to update the capability-local variable in a
primitive (indexing into an array of size based on the number of
capabilities). This lets me bound the amount of "helpers" to consider by
the capability count rather than the potentially much larger and much more
variable number of live threads.
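
Shape-wise, the scheme looks like this (an illustration in ordinary IO;
the real version needs the read-modify-write to happen inside a
non-preemptible primitive, which plain IO does not give you):

import Control.Concurrent (myThreadId, threadCapability)
import Data.Array.IO (IOArray, readArray, writeArray)

-- one slot per capability, located via threadCapability
bumpMySlot :: IOArray Int Int -> IO ()
bumpMySlot slots = do
  (cap, _pinned) <- threadCapability =<< myThreadId
  n <- readArray slots cap
  writeArray slots cap (n + 1)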

However, I may need to access this stuff in "pure" code that wasn't written
with my needs in mind, so I need to at least temporarily pin the current
thread to a fixed capability for the duration when that happens.

This isn't perfect: it won't react to a growing number of capabilities
nicely in the future, but it does handle a lot of things I can't do now at
all without downgrading to lock-free and starving a lot of computations, so
I'm hoping the answer is "it all works". =)

-Edward
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Unlifted data types

2015-10-27 Thread Edward Kmett
The idea of treating !S as a subtype of S and then relying on the potential
for new impredicativity machinery to let us just talk about how !S <= S
makes me really happy.

data Nat = Z | S !Nat

Pattern matching on S could give back the tighter type !Nat rather than Nat
for the argument, and if we ever have to show that to a user, the
'approximation' machinery would show it to users as Nat, concealing this
implementation detail. Similarly matching with an as-pattern as part of a
pattern that evaluates could do the same.

The constructor is a bit messier. It should really behave as S :: Nat ->
Nat rather than S :: !Nat -> Nat, because that matches existing
behavior. Then the exposed constructor
would force the argument before storing it away, just like we do today and
we could recover via a sort of peephole optimization the elimination of the
jump into the closure to evaluate when it is fed something known to be of
type !Nat by some kind of "case/(!)-coercion" rule in the optimizer.
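
Spelled out (hypothetical machinery, of course):

predNat :: Nat -> Nat
predNat n = case n of
  S m -> m     -- m :: !Nat here, silently weakened via !Nat <= Nat
  Z   -> Z

Pattern matching hands back the tighter type, and the subtyping coercion,
being an identity at runtime, erases the difference for ordinary
consumers.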

I'm partial to those parts of the idea and think it works pretty well.

I'm not sure how well it mixes with all the other discussions on levity
polymorphism though. Notably: Trying to get to having !Nat live in an
Unlifted kind, while Nat has a different kind seems likely to cause all
sorts of headaches. =/

-Edward

On Tue, Oct 27, 2015 at 7:42 PM, Dan Doel  wrote:

> Hello,
>
> I've added a section with my notes on the minimal semantics required
> to address what Haskell lacks with respect to strict types.
>
> Ed Kmett pointed me to some stuff that I think may fix all the
> problems with the !T sort of solution. It builds on the new constraint
> being considered for handling impredicativity. The quick sketch goes
> like this. Given the declaration:
>
> data Nat = Z | S !Nat
>
> then:
>
> Nat :: *
> !Nat :: Unlifted
> S :: Nat -> Nat
>
> But we also have:
>
> !Nat <~ Nat
>
> and the witness of this is just an identity function, because all
> values of type !Nat are legitimate values of type Nat. Then we can
> have:
>
> case n of
>   S m -> ...
>   Z -> ...
>
> where m has type !Nat, but we can still call `S m` and the like,
> because !Nat <~ Nat. If we do use `S m`, the S call will do some
> unnecessary evaluation of m, but this can (hopefully) be fixed with an
> optimization based on knowing that m has type !Nat, which we are
> weakening to Nat.
>
> Thoughts?
>
> -- Dan
>
>
> On Thu, Oct 8, 2015 at 8:36 AM, Richard Eisenberg 
> wrote:
> >
> > On Oct 8, 2015, at 6:02 AM, Simon Peyton Jones 
> wrote:
> >
> >> What's the wiki page?
> >
> > https://ghc.haskell.org/trac/ghc/wiki/UnliftedDataTypes
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Unlifted data types

2015-10-28 Thread Edward Kmett
On Wed, Oct 28, 2015 at 5:05 AM, Simon Peyton Jones 
wrote:

> I'm out of bandwidth at the moment, but let me just remark that this is
> swampy territory. It's easy to mess up.
>
> A particular challenge is polymorphism:
>
>   map :: forall a b. (a->b) -> [a] -> [b]
>   map f (x:xs) = (f x) : (map f xs)
>
> In the compiled code for map, is a thunk built for (f x), or is it
> evaluated eagerly?  Well, if you can instantiate the 'b' with !Int, say,
> then it should be evaluated eagerly. But if you instantiate with Int, then
> build a thunk.   So, we really need two compiled versions of 'map'.  Or
> perhaps four if we take 'b' into account.  In general an exponential number.
>

> That's one reason that GHC doesn't let you instantiate a polymorphic type
> variable with an unlifted type, even if it is boxed.
>

This is one of the things we'd like to be able to fix. Right now I have a
small explosion of code going on that is being duplicated over and over to
parameterize over different unlifted types.

In the discussions about levity/lifting so far Dan and I have been trying
to tease apart what cases can be handled "forall" style rather than "pi"
style to borrow the split from Richard's presentation, just to get at a
sense of what really could be talked about without needing different
calling conventions, despite lifting. There are situations where we are
truly polymorphic in lifting, e.g. (==) from Eq and compare from Ord don't
care if the arguments of type 'a' are lifted or not.

Until you go to write a function application that returns a value of that
type. If all you do is rearrange them then that machinery can be parametric
in the choice. `map` on the other hand, cares about the selection because
of the `f x` application.

(Similarly, `min` and `max` from Ord do care about the convention on hand.)

One could see a world wherein you could parameterize such an instance on
levity explicitly, but it is pretty exhausting to think about.


> Another big issue is that *any* mixture of subtyping and (Haskell-style)
> parametric polymorphism gets very complicated very fast.  Especially when
> you add higher kinds.  (Then you need variance annotations, and before long
> you want variance polymorphism.)  I'm extremely dubious about adding
> subtyping to Haskell.  That's one reason Scala is so complicated.
>

I was actually quite surprised to see a subtyping relationship rear its
head in:

https://ghc.haskell.org/trac/ghc/attachment/wiki/ImpredicativePolymorphism/Impredicative-2015/impredicativity.pdf

> But re-imagining GHC is good too.  Swampy territory it may be, but it's
> also important, and there really *ought* to be a more seamless way of combining
> strictness and laziness.


I'm somewhat dubious of most approaches that try to mix strictness and
laziness under one umbrella. That is why trying to tease out the small
handful of cases where we are truly parametric in levity seems interesting.
Finding out some situations existed where we really don't care if a type is
lifted or not was eye opening to me personally, at least.

-Edward


> |  -Original Message-
> |  From: Dan Doel [mailto:dan.d...@gmail.com]
> |  Sent: 27 October 2015 23:42
> |  To: Richard Eisenberg
> |  Cc: Simon Peyton Jones; ghc-devs
> |  Subject: Re: Unlifted data types
> |
> |  Hello,
> |
> |  I've added a section with my notes on the minimal semantics required to
> |  address what Haskell lacks with respect to strict types.
> |
> |  Ed Kmett pointed me to some stuff that I think may fix all the problems
> with
> |  the !T sort of solution. It builds on the new constraint being
> considered
> |  for handling impredicativity. The quick sketch goes like this. Given the
> |  declaration:
> |
> |  data Nat = Z | S !Nat
> |
> |  then:
> |
> |  Nat :: *
> |  !Nat :: Unlifted
> |  S :: Nat -> Nat
> |
> |  But we also have:
> |
> |  !Nat <~ Nat
> |
> |  and the witness of this is just an identity function, because all
> values of
> |  type !Nat are legitimate values of type Nat. Then we can
> |  have:
> |
> |  case n of
> |S m -> ...
> |Z -> ...
> |
> |  where m has type !Nat, but we can still call `S m` and the like, because
> |  !Nat <~ Nat. If we do use `S m`, the S call will do some unnecessary
> |  evaluation of m, but this can (hopefully) be fixed with an optimization
> |  based on knowing that m has type !Nat, which we are weakening to Nat.
> |
> |  Thoughts?
> |
> |  -- Dan
> |
> |
> |  On Thu, Oct 8, 2015 at 8:36 AM, Richard Eisenberg 
> wrote:
> |  >
> |  > On Oct 8, 2015, at 6:02 AM, Simon Peyton Jones  >
> |  wrote:
> |  >
> |  >> What's the wiki page?
> |  >
> |  > https://ghc.haskell.org/trac/ghc/wiki/UnliftedDataTypes
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mai

Re: Unlifted data types

2015-10-28 Thread Edward Kmett
On Wed, Oct 28, 2015 at 9:19 AM, Richard Eisenberg 
wrote:

> I don't have terribly much to add, but I do want to counter one point:
>
> On Oct 28, 2015, at 5:48 AM, Edward Kmett  wrote:
> >  There are situations where we are truly polymorphic in lifting, e.g.
> (==) from Eq and compare from Ord don't care if the arguments of type 'a'
> are lifted or not.
>
> But these do, I think. In running code, if (==) is operating over a lazy
> type, it has to check if the pointer points to a thunk. If (==) is
> operating over a strict one, it can skip the check. This is not a big
> difference, but it *is* a difference.
>

Yes, but this is the job of the particular instance. Remember the instance
gets to know the type it is working at, and its corresponding levity.

class Eq (l :: Levity) (t :: Type l) where
  (==) :: t -> t -> Bool

instance Eq @Unlifted (SmallMutableArray# s a) where
  (==) = sameSmallMutableArray#

instance Eq @Lifted [a] where
  (==) = ...

Your objection arises for things like

instance Eq @l (Foo @l)

Where the same code has to execute with different levities, but if I can't
even case or seq on a value with polymorphic levity, and can't construct
such a value but merely pass it around then such code is still sound. It
isn't safe to write functions that return values of polymorphic levity. I
can however hand them back as (# a #). This is how we handle indexing into
an array today.

If we had a Maybe that was levity polymorphic in its argument

Maybe :: forall (l :: Levity). Type l -> Type Lifted

instance Eq @l a => Eq @Lifted (Maybe @l a) where
  Just a == Just b = a == b
  _ == _ = False

is still okay under these rules: it never case analyzes a value of
polymorphic levity, never seq's it. Neither of those things is legal
because you can't 'enter' the closure.

If it was levity polymorphic in the result type

Maybe :: forall (i :: Levity) (j :: Levity). Type i -> Type j

then your objection comes true.

I can't naively write:

instance Eq @i a => Eq @j (Maybe @i @j a) where
  Just a == Just b = a == b
  _ == _ = False

without compiling the same code twice, because of the act of case analysis.

If we don't have real 'strict data types' in Lifted this situation never
arises though.

Even if we do I can write separate:

instance Eq @i a => Eq @Lifted (Maybe @i @Lifted a)
instance Eq @i a => Eq @Unlifted (Maybe @i @Unlifted a)

instances, unless we can do better by writing a class constraint on the
levity that we can use in a clever way here.

I'm mostly concerned with the middle case where we don't overload data
types on their levity, and try to recover the ability to talk about strict
data types by other more explicit means, but rather permit them to accept
arguments of different levities. There none of the code I care about
actually needs to act differently based on levity.

Anything that results in a function space there has to care about levity,
but until a type occurs on the right hand side of an (->) or I go to seq a
value of that type or case analyze it, I don't need to care whether it's
lifted or unlifted.

With Dan's (!S) then things get more complicated in ways I don't fully
understand the ramifications of yet, as you might be able to lift some of
those restrictions.

A little more thinking about this has led here: The distinction isn't
> really forall vs. pi. That is, in the cases where the levity matters, we
> don't really need to pi-quantify. Instead, it's exactly like type classes.
>

In many ways pi comes down to doing typeclass-like things: you're tracking
information from the type system. The vehicle we have for doing that today
is typeclasses. I've been thinking about anything that i have that actually
needs the "pi" there as a form of "constraint" passing all along, with the
constraint being whatever introspection you need to allow on the type to
carry on.

> Imagine, for a moment, that we have an alternate universe where strict is
> the default, and we have
>
> > data Lazy a = Thunk (() -> a) | WHNF a
>
> The WHNF is a bit of a lie, because this representation would mean that
> the contents of a WHNF are fully evaluated. But let's not get hung up on
> that point.
>
> Then, we have
>
> > type family BaseType a where
> >   BaseType (Lazy a) = a
> >   BaseType a = a
> >
> > class Forceable a where
> >   force :: a -> BaseType a
> >
> > instance Forceable a where
> >   force = id
> >
> > instance Forceable (Lazy a) where
> >  force (Thunk f) = f ()
> >  force (WHNF a) = a
>
> Things that need to behave differently depending on strictness just take a
> dictionary of the Forceable class. Equivalently, they make a runtime
> decision of whether to for

Re: Temporarily pinning a thread to a capability

2015-10-28 Thread Edward Kmett
If the number of capabilities is increased or decreased while everything I
have here is running, I'm going to have to blow up the world anyways.

Basically I'll need to rely on an invariant that setNumCapabilities is
called before you spin up these Par-like computations.

-Edward

On Wed, Oct 28, 2015 at 4:28 PM, Ryan Yates  wrote:

> A thread with TSO_LOCKED can be migrated if the number of capabilities
> decreases.
>
> Ryan
>
> On Tue, Oct 27, 2015 at 11:35 PM, Edward Kmett  wrote:
>
>> Would anything go wrong with a thread id if I pinned it to a capability
>> after the fact?
>>
>> I could in theory do so just by setting
>>
>> tso->flags |= TSO_LOCKED
>>
>> and then disabling this later by restoring the TSO flags.
>>
>> I can't think of anything but I figured folks here might be able to think
>> of invariants I don't know about.
>>
>> Usage scenario:
>>
>> I have a number of things where I can't afford a map from a ThreadId# or
>> even its internal id to a per-thread value for bounded wait-free
>> structures.
>>
>> On the other hand, I can afford one entry per capability and to make a
>> handful of primitives that can't be preempted, letting me use normal
>> writes, not even a CAS, to update the capability-local variable in a
>> primitive (indexing into an array of size based on the number of
>> capabilities). This lets me bound the amount of "helpers" to consider by
>> the capability count rather than the potentially much larger and much more
>> variable number of live threads.
>>
>> However, I may need to access this stuff in "pure" code that wasn't
>> written with my needs in mind, so I need to at least temporarily pin the
>> current thread to a fixed capability for the duration when that happens.
>>
>> This isn't perfect: it won't react to a growing number of capabilities
>> nicely in the future, but it does handle a lot of things I can't do now at
>> all without downgrading to lock-free and starving a lot of computations, so
>> I'm hoping the answer is "it all works". =)
>>
>> -Edward
>>
>> ___
>> ghc-devs mailing list
>> ghc-devs@haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>>
>>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: too many lines too long

2015-11-10 Thread Edward Kmett
Heck, I've been able to use 132 columns since my VT-220 days. ;)

-Edward

On Mon, Nov 9, 2015 at 5:45 PM, Simon Peyton Jones 
wrote:

> In my view 80 chars is too short.  It was justified in the days of
> 80-column CRTs, but that just isn't a restriction any more.   I routinely
> edit in a much wider window.
>
> Clearly there's a judgement call here.  But I'd prefer 120 cols say.
>
> Simon
>
> -Original Message-
> From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of Richard
> Eisenberg
> Sent: 09 November 2015 21:03
> To: ghc-devs Devs 
> Subject: too many lines too long
>
> Hi devs,
>
> We seem to be uncommitted to the ideal of 80-character lines. Almost every
> patch on Phab I look through has a bunch of "line too long" lint errors. No
> one seems to do much about these. And Phab's very very loud indication of a
> lint error makes reviewing the code harder.
>
> I like the ideal of 80-character lines. I aim for this ideal in my
> patches, falling short sometimes, of course. But I think the current
> setting of requiring everyone to "explain" away their overlong lines during
> `arc diff` and then trying hard to ignore the lint errors during code
> review is wrong. And it makes us all inured to more serious lint errors.
>
> How about this: after `arc diff` is run, it will count the number of
> overlong lines before and after the patch. If there are more after, have
> the last thing `arc diff` outputs be a stern telling-off of the dev, along
> the lines of
>
> > Before your patch, 15 of the edited lines were over 80 characters.
> > Now, a whopping 28 of them are. Can't you do better? Please?
>
> Would this be ignored more or followed more? Who knows. But it would sure
> be less annoying. :)
>
> What do others think?
>
> Thanks,
> Richard
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
>
> https://na01.safelinks.protection.outlook.com/?url=http%3a%2f%2fmail.haskell.org%2fcgi-bin%2fmailman%2flistinfo%2fghc-devs&data=01%7c01%7csimonpj%40064d.mgd.microsoft.com%7cebcdeaa0675a490898dc08d2e94927cc%7c72f988bf86f141af91ab2d7cd011db47%7c1&sdata=6IXQEBFIJnDRWCSKmNxdVsWQm2bqPVPn133kblshukU%3d
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Pre-Proposal: Introspective Template Haskell

2015-11-11 Thread Edward Kmett
In practice I find that almost every piece of template-haskell code I've
written gets broken by something every other release of GHC, so it hasn't
exactly been a shining beacon of backwards compatibility thus far.

Invariably it is always missing _something_ that I need, and anything that
ties it to a more canonical form like this would be a very good thing.

I'd strongly support this move.
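
The shims themselves are always small, which is what makes the churn so
galling. A typical one, illustrative, for the 2.10 change that collapsed
Pred into Type:

{-# LANGUAGE CPP #-}
import Language.Haskell.TH

mkClassP :: Name -> [Type] -> Pred
#if MIN_VERSION_template_haskell(2,10,0)
mkClassP n = foldl AppT (ConT n)
#else
mkClassP = ClassP
#endif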

A sample just from my current working directory:

haskell> grep -r MIN_VERSION_template_haskell */src

bifunctors/src/Data/Bifunctor/TH/Internal.hs:#if MIN_VERSION_template_haskell(2,10,0)
bifunctors/src/Data/Bifunctor/TH/Internal.hs:#if MIN_VERSION_template_haskell(2,7,0)
bifunctors/src/Data/Bifunctor/TH/Internal.hs:#if MIN_VERSION_template_haskell(2,10,0)
bifunctors/src/Data/Bifunctor/TH/Internal.hs:#if MIN_VERSION_template_haskell(2,8,0)
bifunctors/src/Data/Bifunctor/TH/Internal.hs:#if MIN_VERSION_template_haskell(2,8,0)
bifunctors/src/Data/Bifunctor/TH/Internal.hs:#if MIN_VERSION_template_haskell(2,8,0)
bifunctors/src/Data/Bifunctor/TH.hs:#ifndef MIN_VERSION_template_haskell
bifunctors/src/Data/Bifunctor/TH.hs:#if __GLASGOW_HASKELL__ < 710 && MIN_VERSION_template_haskell(2,8,0)
bifunctors/src/Data/Bifunctor/TH.hs:#if MIN_VERSION_template_haskell(2,7,0)
bifunctors/src/Data/Bifunctor/TH.hs:#if MIN_VERSION_template_haskell(2,7,0)
bifunctors/src/Data/Bifunctor/TH.hs:#if MIN_VERSION_template_haskell(2,7,0)
bifunctors/src/Data/Bifunctor/TH.hs:#if MIN_VERSION_template_haskell(2,7,0)
bifunctors/src/Data/Bifunctor/TH.hs:#if MIN_VERSION_template_haskell(2,7,0)
bifunctors/src/Data/Bifunctor/TH.hs:# if __GLASGOW_HASKELL__ >= 710 || !(MIN_VERSION_template_haskell(2,8,0))
bifunctors/src/Data/Bifunctor/TH.hs:#if MIN_VERSION_template_haskell(2,7,0)
free/src/Control/Monad/Free/TH.hs:#if MIN_VERSION_template_haskell(2,10,0)
lens/src/Control/Lens/Internal/FieldTH.hs:#if MIN_VERSION_template_haskell(2,8,0)
lens/src/Control/Lens/Internal/TH.hs:#ifndef MIN_VERSION_template_haskell
lens/src/Control/Lens/Internal/TH.hs:#define MIN_VERSION_template_haskell(x,y,z) (defined(__GLASGOW_HASKELL__) && __GLASGOW_HASKELL__ >= 706)
lens/src/Control/Lens/Internal/TH.hs:#if MIN_VERSION_template_haskell(2,9,0)
lens/src/Control/Lens/Plated.hs:#if !(MIN_VERSION_template_haskell(2,8,0))
lens/src/Control/Lens/TH.hs:#ifndef MIN_VERSION_template_haskell
lens/src/Control/Lens/TH.hs:#define MIN_VERSION_template_haskell(x,y,z) (defined(__GLASGOW_HASKELL__) && __GLASGOW_HASKELL__ >= 706)
lens/src/Control/Lens/TH.hs:#if !(MIN_VERSION_template_haskell(2,7,0))
lens/src/Control/Lens/TH.hs:#if MIN_VERSION_template_haskell(2,10,0)
lens/src/Control/Lens/TH.hs:#if !(MIN_VERSION_template_haskell(2,7,0))
lens/src/Language/Haskell/TH/Lens.hs:#ifndef MIN_VERSION_template_haskell
lens/src/Language/Haskell/TH/Lens.hs:#define MIN_VERSION_template_haskell(x,y,z) 1
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,9,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,8,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,9,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,10,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,10,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,8,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,9,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,10,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,9,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,8,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,8,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,10,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,8,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,10,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,8,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,8,0)
lens/src/Language/Haskell/TH/Lens.hs:#if !MIN_VERSION_template_haskell(2,10,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,9,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,8,0)
lens/src/Language/Haskell/TH/Lens.hs:#if !MIN_VERSION_template_haskell(2,10,0)
lens/src/Language/Haskell/TH/Lens.hs:#if !MIN_VERSION_template_haskell(2,10,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,8,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,8,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,8,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,9,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,10,0)
lens/src/Language/Haskell/TH/Lens.hs:#if MIN_VERSION_template_haskell(2,10,0)
lens/sr
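
For context, the sort of shim those guards protect tends to look like this
(an illustrative sketch, not lifted from any of the packages above; it
papers over template-haskell 2.10 merging Pred into Type):

{-# LANGUAGE CPP #-}
module THCompat where

import Language.Haskell.TH

-- Build an equality constraint on either side of the 2.10 split.
equalityPred :: Type -> Type -> Pred
#if MIN_VERSION_template_haskell(2,10,0)
-- Pred is now a synonym for Type; equality is the EqualityT constructor.
equalityPred t1 t2 = AppT (AppT EqualityT t1) t2
#else
-- Pred is its own datatype with a dedicated EqualP constructor.
equalityPred t1 t2 = EqualP t1 t2
#endif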

Re: Pre-Proposal: Introspective Template Haskell

2015-11-11 Thread Edward Kmett
On Wed, Nov 11, 2015 at 12:50 PM, Richard Eisenberg 
wrote:

>
> This is a very good point. We would want to bless some API that would
> remain stable. Then, clients that go around that API get what they deserve.
> A starting point for the stable API would be today's template-haskell (less
> some unsafe features, like exposing NameG).
>

As a data point, in a couple of packages I wind up forced into using
mkNameG_v and mkNameG_tc in order to avoid incurring a dependency on a
stage2 compiler today. Removing them would force me to drop support for
stage1-only platforms offered by some Linux distributions.

If you're going to drop support for it, please consider offering me some
horrible back door to get at the functionality that I can't currently
replace by other means.
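
For reference, the kind of use in question looks roughly like this (a
sketch with hypothetical names, not code from those packages; getting the
package and module strings right is the caller's problem):

import Language.Haskell.TH.Syntax (Name, mkNameG_v, mkNameG_tc)

-- Manufacture global names directly, without the 'fromMaybe quote
-- syntax, which needs a TH-capable (stage2) compiler to run.
fromMaybeName :: Name
fromMaybeName = mkNameG_v "base" "Data.Maybe" "fromMaybe"

maybeTyConName :: Name
maybeTyConName = mkNameG_tc "base" "Data.Maybe" "Maybe"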

-Edward
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Pre-Proposal: Introspective Template Haskell

2015-11-11 Thread Edward Kmett
That would be a sufficient "horrible backdoor" for me. :)

-Edward

> On Nov 11, 2015, at 3:03 PM, Richard Eisenberg  wrote:
> 
> 
>> On Nov 11, 2015, at 2:25 PM, Edward Kmett  wrote:
>> 
>> As a data point, in a couple of packages I wind up forced into using 
>> mkNameG_v and mkNameG_tc in order to avoid incurring a dependency on a 
>> stage2 compiler today. Removing them would force me to drop support for 
>> stage1-only platforms offered by some Linux distributions.
>> 
>> If you're going to drop support for it, please consider offering me some 
>> horrible back door to get at the functionality that I can't currently 
>> replace by other means.
> 
> I've had to use these functions, too, mostly because TH didn't export the 
> functionality I needed. But this wouldn't be problematic in the new scenario: 
> just depend on the ghc package instead of template-haskell. Then you can do 
> whatever you like. :)
> 
>> -Edward
> 
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Allow ambiguous types (with warning) by default

2015-12-05 Thread Edward Kmett
So you are saying you want users to write a ton of code whose signatures
can never be called, and only catch it when they go to try to actually use
it in a concrete situation much later?

I don't really see how this would be a better default.

When and if users see the problem later, they have to worry about whether
they are doing something wrong at the definition site or the call site.
With the status quo it complains at the right time, so you aren't left
flailing around trying to fix a call site that can never be fixed.

-Edward

On Sat, Dec 5, 2015 at 5:38 PM, David Feuer  wrote:

> The ambiguity check produces errors that are quite surprising to the
> uninitiated. When the check is suppressed, the errors at use sites are
> typically much easier to grasp. On the other hand, there's obviously a lot
> of value to catching mistakes as soon as possible. Would it be possible to
> turn that into a warning by default?
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Allow ambiguous types (with warning) by default

2015-12-05 Thread Edward Kmett
If you aren't the one writing the code that can't be called, you may never
see the warning. It'll be tucked away in a cabal or stack build log
somewhere.

-Edward

On Sun, Dec 6, 2015 at 12:06 AM, David Feuer  wrote:

> No, I want it to *warn* by default. If I write
>
> foo :: something that will fail the ambiguity check
> bar = something that uses foo in a (necessarily) ambiguous way
>
> the current default leads me to do this:
>
> 1. Attempt to compile. Get an ambiguity error on foo whose exact cause
> is hard for me to see.
> 2. Enable AllowAmbiguousTypes and recompile. Get an error on bar whose
> exact cause is completely obvious, and that makes it perfectly clear
> what I need to do to fix foo.
> 3. Fix foo, and disable AllowAmbiguousTypes.
>
> I'd much rather go with
>
> 1. Attempt to compile. Get an ambiguity *warning* on foo whose exact
> cause is hard for me to see, but also an error on bar whose exact
> cause is completely obvious, and that makes it perfectly clear what I
> need to do to fix foo.
> 2. Fix foo.
>
> Simple example of how it is currently:
>
> > let foo :: Num a => F a; foo = undefined; bar :: Int; bar = foo
>
> <interactive>:14:12:
> Couldn't match expected type ‘F a’ with actual type ‘F a0’
> NB: ‘F’ is a type function, and may not be injective
> The type variable ‘a0’ is ambiguous
> In the ambiguity check for the type signature for ‘foo’:
>   foo :: forall a. Num a => F a
> To defer the ambiguity check to use sites, enable AllowAmbiguousTypes
> In the type signature for ‘foo’: foo :: Num a => F a
>
> Couldn't match what with what? Huh? Where did a0 come from?
>
> > :set -XAllowAmbiguousTypes
> > let foo :: Num a => F a; foo = undefined; bar :: Int; bar = foo
>
> <interactive>:16:61:
> Couldn't match expected type ‘Int’ with actual type ‘F a0’
> The type variable ‘a0’ is ambiguous
> In the expression: foo
> In an equation for ‘bar’: bar = foo
>
> Aha! That's the problem! It doesn't know what a0 is! How can I tell it
> what a0 is? Oh! I can't, because foo doesn't give me a handle on it.
> Guess I have to fix foo.
>
> I'd really, really like to get *both* of those messages in one go,
> with the first one preferably explaining itself a bit better.
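
(For reference, the session above assumes a non-injective type family; a
self-contained version of the example:)

{-# LANGUAGE TypeFamilies #-}
type family F a

foo :: Num a => F a  -- rejected by the ambiguity check by default:
foo = undefined      -- 'a' appears only under the type family F

bar :: Int
bar = foo            -- with -XAllowAmbiguousTypes, the error moves here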
>
> On Sat, Dec 5, 2015 at 11:51 PM, Edward Kmett  wrote:
> > So you are saying you want users to write a ton of code whose signatures
> > can never be called, and only catch it when they go to try to actually
> > use it in a concrete situation much later?
> >
> > I don't really see how this would be a better default.
> >
> > When and if users see the problem later, they have to worry about whether
> > they are doing something wrong at the definition site or the call site.
> > With the status quo it complains at the right time, so you aren't left
> > flailing around trying to fix a call site that can never be fixed.
> >
> > -Edward
> >
> > On Sat, Dec 5, 2015 at 5:38 PM, David Feuer  wrote:
> >>
> >> The ambiguity check produces errors that are quite surprising to the
> >> uninitiated. When the check is suppressed, the errors at use sites are
> >> typically much easier to grasp. On the other hand, there's obviously a lot
> >> of value to catching mistakes as soon as possible. Would it be possible to
> >> turn that into a warning by default?
> >>
> >>
> >> ___
> >> ghc-devs mailing list
> >> ghc-devs@haskell.org
> >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> >>
> >
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Kinds of type synonym arguments

2015-12-21 Thread Edward Kmett
I brought up the subject of allowing newtypes in kind # (or even in any
kind that ends in * or # after a chain of ->'s, to get more powerful
Coercible instances) at ICFP this year, and Simon seemed to think it'd be a
pretty straightforward modification to the typechecker.

I confess, he's likely waiting for me to actually sit down and give the
idea a nice writeup. ;)

This would be good for many things, especially when it comes to improving
the type safety of various custom C-- tricks.
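
For a sense of what this buys, here is a sketch of the declaration Ömer
asks about below (at the time of this thread it was just an idea; much
later it shipped as GHC 8.10's UnliftedNewtypes extension):

{-# LANGUAGE MagicHash, UnliftedNewtypes #-}
module UnliftedNT where

import GHC.Exts (Int#)

-- Accepted under UnliftedNewtypes: Blah2 lives in kind TYPE 'IntRep
-- rather than *, so its values are passed around unboxed.
newtype Blah2 = Blah2 Int#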

-Edward

On Sun, Dec 20, 2015 at 2:14 PM, Ömer Sinan Ağacan 
wrote:

> I have another related question: What about allowing primitive types
> in newtypes?
>
> λ:4> newtype Blah1 = Blah1 Int
> λ:5> newtype Blah2 = Blah2 Int#
>
> <interactive>:5:23: error:
> • Expecting a lifted type, but ‘Int#’ is unlifted
> • In the type ‘Int#’
>   In the definition of data constructor ‘Blah2’
>   In the newtype declaration for ‘Blah2’
>
> Ideally the second definition should be OK, and the kind of Blah2 should
> be #. Is this too hard to do?
>
> 2015-12-16 17:22 GMT-05:00 Richard Eisenberg :
> >
> > On Dec 16, 2015, at 2:06 PM, Ömer Sinan Ağacan  wrote:
> >>
> >> In any case, this is not that big a deal. When I read the code I thought
> >> this should be a trivial change, but apparently it's not.
> >
> > No, it's not. Your example (`f :: (Int#, b) -> b`) still has an unboxed
> > thing in a boxed tuple. Boxed tuples simply can't (currently) hold
> > unboxed things. And changing that is far from trivial. It's not the
> > polymorphism that's the problem -- it's the unboxed thing in a boxed
> > tuple.
> >
> > Richard
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Reify and separating renamer+TH from type-checking

2016-01-13 Thread Edward Kmett
Worse, 'reify' is in many cases the very reason folks are using
template-haskell in the first place: to build instances or classes based on
properties of data types declared above the splice in the current module.
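
A minimal sketch of that pattern (illustrative names; the DataD shape
matched below is the template-haskell >= 2.11 one, and older versions
differ):

{-# LANGUAGE TemplateHaskell #-}
module ReifyDemo where

import Language.Haskell.TH
import Language.Haskell.TH.Syntax (lift)

data T = A | B | C

-- The splice can only count T's constructors by calling 'reify' on a
-- type declared earlier in the same module, which is why splices need
-- type information to be available when they run.
numConsOfT :: Int
numConsOfT = $(do TyConI (DataD _ _ _ _ cons _) <- reify ''T
                  lift (length cons))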

On Fri, Jan 8, 2016 at 2:40 PM, Edward Z. Yang  wrote:

> I implemented the refactoring to run the renamer and TH splices all
> first before doing any type-checking, but actually there's a problem:
> Template Haskell splices can call 'reify', which needs the type
> information in order to supply the information about the identifiers
> in question.  I can't think of any good way around this problem.
>
> Edward
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

