Re: Optimizing "counting" GADTs

2016-05-25 Thread David Feuer
Partially. Unfortunately, bidirectional pattern synonyms tie the types of
the pattern synonyms to the types of the smart constructors for no good
reason, making them (currently) inappropriate. But fixing that problem
would offer one way to this optimization, I think.
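For concreteness, here is a minimal sketch of the limitation I mean (my own
illustration, not code from the wiki page): as of GHC 8.0 a single pattern
signature covers both directions, so the builder in the `where` clause cannot
be given a type of its own.

{-# LANGUAGE PatternSynonyms, ViewPatterns #-}

-- Matching needs (Eq a, Num a); building only needs Num a, but there is
-- no way to say so.
pattern Two :: (Eq a, Num a) => a
pattern Two <- ((== 2) -> True)
  where Two = 2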
On May 25, 2016 8:37 PM, "Carter Schonwald" 
wrote:

could this be simulated/modeled with pattern synonyms?

On Wed, May 25, 2016 at 7:51 PM, David Feuer  wrote:

> I've started a wiki page,
> https://ghc.haskell.org/trac/ghc/wiki/OptimizeCountingGADTs , to consider
> optimizing GADTs that look like natural numbers but that possibly have
> "heavy zeros". Please take a look.
>


Optimizing "counting" GADTs

2016-05-25 Thread David Feuer
I've started a wiki page,
https://ghc.haskell.org/trac/ghc/wiki/OptimizeCountingGADTs , to consider
optimizing GADTs that look like natural numbers but that possibly have
"heavy zeros". Please take a look.


Re: Unpacking single-field, single-strict-constructor GADTs and existentials

2016-05-25 Thread David Feuer
I've started a wiki page at
https://ghc.haskell.org/trac/ghc/wiki/NewtypeOptimizationForGADTS

On Wed, May 25, 2016 at 3:27 AM, Simon Peyton Jones
 wrote:
> I'm not following the details of this discussion.  Would it be possible to 
> write a compact summary, with the key examples, in the appropriate ticket?
>
> I think that https://ghc.haskell.org/trac/ghc/ticket/10016 is one such 
> ticket, but I think there may be more than one issue at stake here.  For 
> example, #10016 can be done in a strongly typed way in Core; but #1965 cannot 
> (so far as I know).
>
> It could also help to have a wiki page to summarise the cluster of issues, 
> pointing to the appropriate tickets for individual cases.
>
> An articulate summary will make it much more likely that progress is made! 
> Thanks.
>
> Simon
>
> | -Original Message-
> | From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of David 
> Feuer
> | Sent: 24 May 2016 23:14
> | To: Carter Schonwald 
> | Cc: ghc-devs 
> | Subject: Re: Unpacking single-field, single-strict-constructor GADTs and
> | existentials
> |
> | Unboxing, per se, is not required; only newtype optimization. I
> | believe Ed would probably have mentioned something when I discussed
> | the issue with him if he'd already had an idea for hacking around it.
> | Instead, he said he wanted the feature too.
> |
> | On Tue, May 24, 2016 at 6:03 PM, Carter Schonwald
> |  wrote:
> | > Phrased differently: there's a subclass of existential data types which
> | have
> | > a well behaved unboxed memory layout?
> | >
> | > @ David : have you tried simulating this in userland using eds structs /
> | > structures lib?
> | >
> | > On Tuesday, May 24, 2016, David Feuer  wrote:
> | >>
> | >> I should mention that while this does not require UNPACKing sum types (or
> | >> any of the difficult trade-offs that involves), it lets programmers
> | >> accomplish such unpacking by hand under sufficiently general conditions 
> to
> | >> be quite useful in practice. As long as the set of types involved is
> | closed,
> | >> it'll do.
> | >>
> | >> David Feuer  writes:
> | >>
> | >> > Given
> | >> >
> | >> > data Big a = B1 !(Small1 a) | B2 !(Small2 a) | B3 !(Small3 a), where 
> the
> | >> > Small types are (possibly recursive) sums, it's generally possible to
> | >> > express that as something like
> | >> >
> | >> > data Selector = One | Two | Three
> | >> > data Big a = forall (x :: Selector) .
> | >> >Big !(BigG x a)
> | >> > data BigG x a where
> | >> >   GB1a :: some -> fields -> BigG 'One a
> | >> >   GB1b :: fields -> BigG 'One a
> | >> >   GB2a :: whatever -> BigG 'Two a
> | >> >   GB3a :: yeah -> BigG 'Three a
> | >> >
> | >> > Making one big GADT from all the constructors of the "small" types, and
> | >> > then wrapping it up in an existential. That's what I meant about
> | >> > "unpacking". But for efficiency purposes, that wrapper needs the 
> newtype
> | >> > optimization.
> | >>
> | >> Yes, but you'd need to unbox a sum in this case, no? I think this is the
> | >> first issue that you need to solve before you can talk about dealing
> | >> with the polymorphism issue (although hopefully Ömer will make progress
> | >> on this for 8.2).
> | >>
> | >> Cheers,
> | >>
> | >> - Ben


Re: Unpacking single-field, single-strict-constructor GADTs and existentials

2016-05-25 Thread David Feuer
#1965 *as modified by comments #7 and #9* is pretty much what I'm hoping for.

On Wed, May 25, 2016 at 3:27 AM, Simon Peyton Jones
 wrote:
> I'm not following the details of this discussion.  Would it be possible to 
> write a compact summary, with the key examples, in the appropriate ticket?
>
> I think that https://ghc.haskell.org/trac/ghc/ticket/10016 is one such 
> ticket, but I think there may be more than one issue at stake here.  For 
> example, #10016 can be done in a strongly typed way in Core; but #1965 cannot 
> (so far as I know).
>
> It could also help to have a wiki page to summarise the cluster of issues, 
> pointing to the appropriate tickets for individual cases.
>
> An articulate summary will make it much more likely that progress is made! 
> Thanks.
>
> Simon
>
> | -Original Message-
> | From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of David 
> Feuer
> | Sent: 24 May 2016 23:14
> | To: Carter Schonwald 
> | Cc: ghc-devs 
> | Subject: Re: Unpacking single-field, single-strict-constructor GADTs and
> | existentials
> |
> | Unboxing, per se, is not required; only newtype optimization. I
> | believe Ed would probably have mentioned something when I discussed
> | the issue with him if he'd already had an idea for hacking around it.
> | Instead, he said he wanted the feature too.
> |
> | On Tue, May 24, 2016 at 6:03 PM, Carter Schonwald
> |  wrote:
> | > Phrased differently: there's a subclass of existential data types which
> | have
> | > a well behaved unboxed memory layout?
> | >
> | > @ David : have you tried simulating this in userland using eds structs /
> | > structures lib?
> | >
> | > On Tuesday, May 24, 2016, David Feuer  wrote:
> | >>
> | >> I should mention that while this does not require UNPACKing sum types (or
> | >> any of the difficult trade-offs that involves), it lets programmers
> | >> accomplish such unpacking by hand under sufficiently general conditions 
> to
> | >> be quite useful in practice. As long as the set of types involved is
> | closed,
> | >> it'll do.
> | >>
> | >> David Feuer  writes:
> | >>
> | >> > Given
> | >> >
> | >> > data Big a = B1 !(Small1 a) | B2 !(Small2 a) | B3 !(Small3 a), where 
> the
> | >> > Small types are (possibly recursive) sums, it's generally possible to
> | >> > express that as something like
> | >> >
> | >> > data Selector = One | Two | Three
> | >> > data Big a = forall (x :: Selector) .
> | >> >Big !(BigG x a)
> | >> > data BigG x a where
> | >> >   GB1a :: some -> fields -> BigG 'One a
> | >> >   GB1b :: fields -> BigG 'One a
> | >> >   GB2a :: whatever -> BigG 'Two a
> | >> >   GB3a :: yeah -> BigG 'Three a
> | >> >
> | >> > Making one big GADT from all the constructors of the "small" types, and
> | >> > then wrapping it up in an existential. That's what I meant about
> | >> > "unpacking". But for efficiency purposes, that wrapper needs the 
> newtype
> | >> > optimization.
> | >>
> | >> Yes, but you'd need to unbox a sum in this case, no? I think this is the
> | >> first issue that you need to solve before you can talk about dealing
> | >> with the polymorphism issue (although hopefully Ömer will make progress
> | >> on this for 8.2).
> | >>
> | >> Cheers,
> | >>
> | >> - Ben


Re: Unpacking single-field, single-strict-constructor GADTs and existentials

2016-05-24 Thread David Feuer
Unboxing, per se, is not required; only newtype optimization. I
believe Ed would probably have mentioned something when I discussed
the issue with him if he'd already had an idea for hacking around it.
Instead, he said he wanted the feature too.

On Tue, May 24, 2016 at 6:03 PM, Carter Schonwald
 wrote:
> Phrased differently: there's a subclass of existential data types which have
> a well behaved unboxed memory layout?
>
> @ David : have you tried simulating this in userland using eds structs /
> structures lib?
>
> On Tuesday, May 24, 2016, David Feuer  wrote:
>>
>> I should mention that while this does not require UNPACKing sum types (or
>> any of the difficult trade-offs that involves), it lets programmers
>> accomplish such unpacking by hand under sufficiently general conditions to
>> be quite useful in practice. As long as the set of types involved is closed,
>> it'll do.
>>
>> David Feuer  writes:
>>
>> > Given
>> >
>> > data Big a = B1 !(Small1 a) | B2 !(Small2 a) | B3 !(Small3 a), where the
>> > Small types are (possibly recursive) sums, it's generally possible to
>> > express that as something like
>> >
>> > data Selector = One | Two | Three
>> > data Big a = forall (x :: Selector) .
>> >Big !(BigG x a)
>> > data BigG x a where
>> >   GB1a :: some -> fields -> BigG 'One a
>> >   GB1b :: fields -> BigG 'One a
>> >   GB2a :: whatever -> BigG 'Two a
>> >   GB3a :: yeah -> BigG 'Three a
>> >
>> > Making one big GADT from all the constructors of the "small" types, and
>> > then wrapping it up in an existential. That's what I meant about
>> > "unpacking". But for efficiency purposes, that wrapper needs the newtype
>> > optimization.
>>
>> Yes, but you'd need to unbox a sum in this case, no? I think this is the
>> first issue that you need to solve before you can talk about dealing
>> with the polymorphism issue (although hopefully Ömer will make progress
>> on this for 8.2).
>>
>> Cheers,
>>
>> - Ben


Re: Unpacking single-field, single-strict-constructor GADTs and existentials

2016-05-24 Thread David Feuer
I should mention that while this does not require UNPACKing sum types (or
any of the difficult trade-offs that involves), it lets programmers
accomplish such unpacking by hand under sufficiently general conditions to
be quite useful in practice. As long as the set of types involved is
closed, it'll do.
David Feuer  writes:

> Given
>
> data Big a = B1 !(Small1 a) | B2 !(Small2 a) | B3 !(Small3 a), where the
> Small types are (possibly recursive) sums, it's generally possible to
> express that as something like
>
> data Selector = One | Two | Three
> data Big a = forall (x :: Selector) .
>Big !(BigG x a)
> data BigG x a where
>   GB1a :: some -> fields -> BigG 'One a
>   GB1b :: fields -> BigG 'One a
>   GB2a :: whatever -> BigG 'Two a
>   GB3a :: yeah -> BigG 'Three a
>
> Making one big GADT from all the constructors of the "small" types, and
> then wrapping it up in an existential. That's what I meant about
> "unpacking". But for efficiency purposes, that wrapper needs the newtype
> optimization.

Yes, but you'd need to unbox a sum in this case, no? I think this is the
first issue that you need to solve before you can talk about dealing
with the polymorphism issue (although hopefully Ömer will make progress
on this for 8.2).

Cheers,

- Ben


Re: Unpacking single-field, single-strict-constructor GADTs and existentials

2016-05-24 Thread David Feuer
No, because the pattern matching semantics are different. Matching on
the constructor *must* force the contents to maintain type safety.
It's really strict data with the newtype optimization, rather than a
bona fide newtype.
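A minimal sketch of the distinction (NT and ST are hypothetical names, just
for illustration):

newtype NT a = NT (Maybe a)    -- matching (NT x) never forces the Maybe
data    ST a = ST !(Maybe a)   -- matching (ST x) forces the Maybe to WHNF

The wrapper I want has the heap representation of NT but the matching
behaviour of ST: forcing on match is what guarantees the constructor was
really applied, and hence that the type equalities it packages up have
actually been established.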

On Tue, May 24, 2016 at 4:18 PM, Ben Gamari  wrote:
> David Feuer  writes:
>
>> Not really. It's really just the newtype optimization, although it's not a
>> newtype.
>
> Ahh, I see. Yes, you are right. I was being silly.
>
> However, in this case wouldn't it make more sense to just call it a newtype?
>
> Cheers,
>
> - Ben


Re: Unpacking single-field, single-strict-constructor GADTs and existentials

2016-05-24 Thread David Feuer
Not really. It's really just the newtype optimization, although it's not a
newtype.
On May 24, 2016 12:43 PM, "Ben Gamari"  wrote:

> David Feuer  writes:
>
> > Given
> >
> > data Big a = B1 !(Small1 a) | B2 !(Small2 a) | B3 !(Small3 a), where the
> > Small types are (possibly recursive) sums, it's generally possible to
> > express that as something like
> >
> > data Selector = One | Two | Three
> > data Big a = forall (x :: Selector) .
> >Big !(BigG x a)
> > data BigG x a where
> >   GB1a :: some -> fields -> BigG 'One a
> >   GB1b :: fields -> BigG 'One a
> >   GB2a :: whatever -> BigG 'Two a
> >   GB3a :: yeah -> BigG 'Three a
> >
> > Making one big GADT from all the constructors of the "small" types, and
> > then wrapping it up in an existential. That's what I meant about
> > "unpacking". But for efficiency purposes, that wrapper needs the newtype
> > optimization.
>
> Yes, but you'd need to unbox a sum in this case, no? I think this is the
> first issue that you need to solve before you can talk about dealing
> with the polymorphism issue (although hopefully Ömer will make progress
> on this for 8.2).
>
> Cheers,
>
> - Ben
>


Re: Unpacking single-field, single-strict-constructor GADTs and existentials

2016-05-24 Thread David Feuer
Given

data Big a = B1 !(Small1 a) | B2 !(Small2 a) | B3 !(Small3 a), where the
Small types are (possibly recursive) sums, it's generally possible to
express that as something like

data Selector = One | Two | Three
data Big a = forall (x :: Selector) .
   Big !(BigG x a)
data BigG x a where
  GB1a :: some -> fields -> BigG 'One a
  GB1b :: fields -> BigG 'One a
  GB2a :: whatever -> BigG 'Two a
  GB3a :: yeah -> BigG 'Three a

Making one big GADT from all the constructors of the "small" types, and
then wrapping it up in an existential. That's what I meant about
"unpacking". But for efficiency purposes, that wrapper needs the newtype
optimization.
On May 24, 2016 4:16 AM, "Ben Gamari"  wrote:

> David Feuer  writes:
>
> > Data.IntMap could be cleaned up some if single-field, single strict
> > constructor GADTs/existentials could be unpacked even when wrapping a sum
> > type. We could then have
> >
> > data Status = E | NE
> > data IntMap' (s :: Status) a where
> >   Bin :: ... -> ... -> !(IntMap' NE a) -> !(IntMap' NE a) -> IntMap' NE a
> >   Tip :: ... -> a -> IntMap' NE a
> >   Nil :: IntMap' E a
> > data IntMap a =
> >   forall s . IM {-# UNPACK #-} !(IntMap' s a)
> >
> I'm not sure I understand how the existential helps you unpack this sum.
> Surely I'm missing something.
>
> > The representation would be the same as that of a newtype, but the
> pattern
> > matching semantics would be strict. In the GADT case, this would
> > essentially allow any fixed concrete datatype to serve directly as a
> > witness for an arbitrary set of type equalities demanded on construction.
> >
> > Is there any hope something like this could happen?
>
> Ignoring the sum issue for a moment:
>
> My understanding is that it ought to be possible to unpack at least
> single-constructor types in an existentially quantified datacon,
> although someone needs to step up to do it. A closely related issue
> (existentials in newtypes) was discussed by dons in a Stack Overflow
> question [1] quite some time ago.
>
> As far as I understand as long as the existentially-quantified argument
> is unconstrained (therefore there is no need to carry a dictionary) and
> of kind * (therefore has a uniform representation) there is no reason
> why unpacking shouldn't be possible.
>
> The case that you cite looks to be even easier since the existential is
> a phantom so there is no need to represent it at all. It seems to me
> like it might not be so difficult to treat this case in particular.
> It's possible all that is necessary would be to adjust the unpackability
> criteria in MkId.
>
> It actually looks like there's a rather closely related ticket already
> open, #10016.
>
> Cheers,
>
> - Ben
>
>
> [1]
> http://stackoverflow.com/questions/5890094/is-there-a-way-to-define-an-existentially-quantified-newtype-in-ghc-haskell
>


Unpacking single-field, single-strict-constructor GADTs and existentials

2016-05-19 Thread David Feuer
Data.IntMap could be cleaned up some if single-field, single strict
constructor GADTs/existentials could be unpacked even when wrapping a sum
type. We could then have

data Status = E | NE
data IntMap' (s :: Status) a where
  Bin :: ... -> ... -> !(IntMap' NE a) -> !(IntMap' NE a) -> IntMap' NE a
  Tip :: ... -> a -> IntMap' NE a
  Nil :: IntMap' E a
data IntMap a =
  forall s . IM {-# UNPACK #-} !(IntMap' s a)

The representation would be the same as that of a newtype, but the pattern
matching semantics would be strict. In the GADT case, this would
essentially allow any fixed concrete datatype to serve directly as a
witness for an arbitrary set of type equalities demanded on construction.

Is there any hope something like this could happen?
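For reference, a slightly more concrete version of the sketch above, filling
in the fields roughly the way Data.IntMap defines them (Prefix, Mask and Key
are all Int there); this is only an illustration of the intended layout:

{-# LANGUAGE DataKinds, ExistentialQuantification, GADTs, KindSignatures #-}

type Prefix = Int
type Mask   = Int
type Key    = Int

data Status = E | NE

data IntMap' (s :: Status) a where
  Bin :: !Prefix -> !Mask
      -> !(IntMap' 'NE a) -> !(IntMap' 'NE a) -> IntMap' 'NE a
  Tip :: !Key -> a -> IntMap' 'NE a
  Nil :: IntMap' 'E a

-- With the proposed optimization, this strict single-field wrapper would
-- share the representation of a newtype while still forcing on match.
data IntMap a = forall s. IM !(IntMap' s a)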


Re: suboptimal ghc code generation in IO vs equivalent pure code case

2016-05-14 Thread David Feuer
Well, a few weeks ago Bertram Felgenhauer came up with a version of IO that
acts more like lazy ST. That could be just the thing. He placed it in the
public domain/CC0 and told me I could put it up on Hackage if I want. I'll
try to do that this week, but no promises. I could forward his email if you
just want to try it out.
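To give a flavour of the idea (this is not Bertram's code, just a sketch of
the usual interleaving trick, under the hypothetical name lazyMapIO):

import System.IO.Unsafe (unsafeInterleaveIO)

-- Delay the recursive call so the result list is produced on demand,
-- much as lazy ST would produce it.
lazyMapIO :: (a -> IO b) -> [a] -> IO [b]
lazyMapIO _ []       = return []
lazyMapIO g (x : xs) = do
  y  <- g x
  ys <- unsafeInterleaveIO (lazyMapIO g xs)
  return (y : ys)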
On May 14, 2016 2:31 PM, "Harendra Kumar"  wrote:

> The difference seems to be entirely due to memory pressure. At list size
> 1000 both pure version and IO version perform equally. But as the size of
> the list increases the pure version scales linearly while the IO version
> degrades exponentially. Here are the execution times per list element in ns
> as the list size increases:
>
> Size of list    Pure    IO
> 1000            8.7     8.3
> 10000           8.7     18
> 100000          8.8     63
> 1000000         9.3     786
>
> This seems to be due to increased GC activity in the IO case. The GC stats
> for list size 1 million are:
>
> IO case:   %GC time  66.1%  (61.1% elapsed)
> Pure case:   %GC time   2.6%  (3.3% elapsed)
>
> Not sure if there is a way to write this code in IO monad which can reduce
> this overhead.
>
> -harendra
>
>
> On 14 May 2016 at 17:10, Harendra Kumar  wrote:
> >
> > You are right about the way IO code is generated because of the ordering
> semantics. The IO version builds the list entirely in a recursive fashion
> before returning it while the pure code builds it lazily. I wrote very
> simple versions to keep things simpler:
> >
> > Pure version:
> >
> > f [] = []
> > f (x : xs) = x : f xs
> >
> >
> > time 11.08 ms   (10.86 ms .. 11.34 ms)
> > Measured for a million elements in the list
> >
> >  104,041,264 bytes allocated in the heap
> >   16,728 bytes copied during GC
> >   35,992 bytes maximum residency (1 sample(s))
> >   21,352 bytes maximum slop
> >1 MB total memory in use (0 MB lost due to fragmentation)
> >
> >
> > IO version:
> > f [] = return []
> > f (x : xs) = do
> > rest <- f xs
> > return $ x : rest
> >
> > time 79.66 ms   (75.49 ms .. 82.55 ms)
> >
> >  208,654,560 bytes allocated in the heap
> >  121,045,336 bytes copied during GC
> >   27,679,344 bytes maximum residency (8 sample(s))
> >  383,376 bytes maximum slop
> >   66 MB total memory in use (0 MB lost due to fragmentation)
> >
> > Even though this is a small program not doing much and therefore
> enhancing even small differences to a great degree, I am not sure if I can
> completely explain the difference in slowness of the order of 7.5x by just
> the recursive vs lazy building of the list. I am wondering if there is
> anything that is worth further investigating and improving here.
> >
> > -harendra
> >
> > On 12 May 2016 at 05:41, Dan Doel  wrote:
> > >
> > > On Tue, May 10, 2016 at 4:45 AM, Harendra Kumar
> > >  wrote:
> > > > Thanks Dan, that helped. I did notice and suspect the update frame
> and the
> > > > unboxed tuple but given my limited knowledge about ghc/core/stg/cmm
> I was
> > > > not sure what is going on. In fact I thought that the intermediate
> tuple
> > > > should get optimized out since it is required only because of the
> realworld
> > > > token which is not real. But it might be difficult to see that at
> this
> > > > level?
> > >
> > > The token exists as far as the STG level is concerned, I think,
> > > because that is the only thing ensuring that things happen in the
> > > right order. And the closure must be built to have properly formed
> > > STG. So optimizing away the closure building would have to happen at a
> > > level lower than STG, and I guess there is no such optimization. I'm
> > > not sure how easy it would be to do.
> > >
> > > > What you are saying may be true for the current implementation but
> in theory
> > > > can we eliminate the intermediate closure?
> > >
> > > I don't know. I'm a bit skeptical that building this one closure is
> > > the only thing causing your code to be a factor of 5 slower. For
> > > instance, another difference in the core is that the recursive call
> > > corresponding to the result s2 happens before allocating the
> > > additional closure. That is the case statement that unpacks the
> > > unboxed tuple. And the whole loop happens this way, so it is actually
> > > building a structure corresponding to the entire output list in memory
> > > rather eagerly.
> > >
> > > By contrast, your pure function is able to act in a streaming fashion,
> > > if consumed properly, where only enough of the result is built to keep
> > > driving the rest of the program. It probably runs in constant space,
> > > while your IO-based loop has a footprint linear in the size of the
> > > input list (in addition to having slightly more overhead per character
> > > because of the one extra thunk), because it is a more eager program.
> > >
> > > And having an asymptotically larger memory footprint i

Can we do something slightly nicer about (^)?

2016-04-26 Thread David Feuer
Every time someone writes, say, x^20, the literal exponent defaults to
Integer. This is the wrong default whenever the literal is in the Word
range. Fixing this goes beyond the capabilities of RULES pragmas, but I
imagine it would be a fairly simple thing to accomplish in the internal
rule language.
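A small example of the defaulting I mean (illustrative only):

-- (^) :: (Num a, Integral b) => a -> b -> a
big :: Double
big = 2 ^ 20             -- the literal 20 defaults to Integer

bigW :: Double
bigW = 2 ^ (20 :: Word)  -- the representation we'd prefer when it fits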

David


Re: Pattern synonym type flexibility

2016-04-20 Thread David Feuer
Think about it this way. The matching aspect of a pattern synonym is
basically about defining a function to a somewhat weird type, except of
course optimized and maybe a bit more general.

data HList :: [*] -> * where
  HNil :: HList '[]
  HCons :: a -> HList as -> HList (a ': as)

data PatternResult :: ([*] -> Constraint) -> * where
  PatternResult :: provides ts => HList ts -> PatternResult provides

type Matcher requires provides =
  forall x . requires x => x -> PatternResult provides

The smart constructor side of a pattern synonym is much, much simpler! It's
just a regular old Haskell value! The only special bit is that it's
treated, syntactically, as a constructor. There's simply nothing else worth
saying about it, so the less said the better.
On Apr 20, 2016 1:48 PM, "David Feuer"  wrote:

> I don't know what that means. There's no way to enforce duality at the
> term level. Enforcing it at the type level prevents me from doing what I
> want and serves no apparent purpose. Remember that pattern synonyms are all
> about providing nice syntax, not adding essential expressiveness.
> On Apr 20, 2016 1:41 PM, "Carter Schonwald" 
> wrote:
>
> Shouldn't the design simply be both directions are the dual of the other,
> and pure in some sense ?
>
>
> On Wednesday, April 20, 2016, David Feuer  wrote:
>
>> To some degree, it probably could be. But I believe that imposing any
>> substantial relationship between the smart constructor and the pattern
>> synonym is likely to fall squarely into the category of things that are
>> subtle, hard, and almost completely useless. In the arrangement I
>> suggested, people would be free to do some things that "don't make sense",
>> and that doesn't bother me in the least.
>> On Apr 20, 2016 1:27 PM, "Carter Schonwald" 
>> wrote:
>>
>>> Would that duality be related to the given vs wanted constraints ?
>>>
>>> On Wednesday, April 20, 2016, David Feuer  wrote:
>>>
>>>> As far as I can tell from the 7.10 documentation, it's impossible to
>>>> make a bidirectional pattern synonym used as a constructor have a
>>>> different type signature than when used as a pattern. Has this been
>>>> improved in 8.0? I really want something like
>>>>
>>>> class FastCons x xs | xs -> x where
>>>>   fcons :: x -> xs -> xs
>>>> class FastViewL x xs | xs -> x where
>>>>   fviewl :: xs -> ViewL x xs
>>>>
>>>> pattern x :<| xs <- (fviewl -> ConsL x xs) where
>>>>   x :<| xs = fcons x xs
>>>>
>>>> This would allow users to learn just *one* name, :<|, that they can
>>>> use for sequences that are consable or viewable even if they may not
>>>> be the other.
>>>>
>>>> If this is not yet possible, then I think the most intuitive approach
>>>> is to sever the notions of "pattern synonym" and "smart constructor".
>>>> So I'd write
>>>>
>>>> pattern x :<| xs <- (fviewl -> ConsL x xs)
>>>> constructor (:<|) = fcons
>>>>
>>>> The current syntax could easily be desugared to produce *both* a
>>>> pattern synonym and a smart constructor in the bidirectional case.


Re: Pattern synonym type flexibility

2016-04-20 Thread David Feuer
I don't know what that means. There's no way to enforce duality at the term
level. Enforcing it at the type level prevents me from doing what I want
and serves no apparent purpose. Remember that pattern synonyms are all
about providing nice syntax, not adding essential expressiveness.
On Apr 20, 2016 1:41 PM, "Carter Schonwald" 
wrote:

Shouldn't the design simply be both directions are the dual of the other,
and pure in some sense ?


On Wednesday, April 20, 2016, David Feuer  wrote:

> To some degree, it probably could be. But I believe that imposing any
> substantial relationship between the smart constructor and the pattern
> synonym is likely to fall squarely into the category of things that are
> subtle, hard, and almost completely useless. In the arrangement I
> suggested, people would be free to do some things that "don't make sense",
> and that doesn't bother me in the least.
> On Apr 20, 2016 1:27 PM, "Carter Schonwald" 
> wrote:
>
>> Would that duality be related to the given vs wanted constraints ?
>>
>> On Wednesday, April 20, 2016, David Feuer  wrote:
>>
>>> As far as I can tell from the 7.10 documentation, it's impossible to
>>> make a bidirectional pattern synonym used as a constructor have a
>>> different type signature than when used as a pattern. Has this been
>>> improved in 8.0? I really want something like
>>>
>>> class FastCons x xs | xs -> x where
>>>   fcons :: x -> xs -> xs
>>> class FastViewL x xs | xs -> x where
>>>   fviewl :: xs -> ViewL x xs
>>>
>>> pattern x :<| xs <- (fviewl -> ConsL x xs) where
>>>   x :<| xs = fcons x xs
>>>
>>> This would allow users to learn just *one* name, :<|, that they can
>>> use for sequences that are consable or viewable even if they may not
>>> be the other.
>>>
>>> If this is not yet possible, then I think the most intuitive approach
>>> is to sever the notions of "pattern synonym" and "smart constructor".
>>> So I'd write
>>>
>>> pattern x :<| xs <- (fviewl -> ConsL x xs)
>>> constructor (:<|) = fcons
>>>
>>> The current syntax could easily be desugared to produce *both* a
>>> pattern synonym and a smart constructor in the bidirectional case.


Re: Pattern synonym type flexibility

2016-04-20 Thread David Feuer
To some degree, it probably could be. But I believe that imposing any
substantial relationship between the smart constructor and the pattern
synonym is likely to fall squarely into the category of things that are
subtle, hard, and almost completely useless. In the arrangement I
suggested, people would be free to do some things that "don't make sense",
and that doesn't bother me in the least.
On Apr 20, 2016 1:27 PM, "Carter Schonwald" 
wrote:

> Would that duality be related to the given vs wanted constraints ?
>
> On Wednesday, April 20, 2016, David Feuer  wrote:
>
>> As far as I can tell from the 7.10 documentation, it's impossible to
>> make a bidirectional pattern synonym used as a constructor have a
>> different type signature than when used as a pattern. Has this been
>> improved in 8.0? I really want something like
>>
>> class FastCons x xs | xs -> x where
>>   fcons :: x -> xs -> xs
>> class FastViewL x xs | xs -> x where
>>   fviewl :: xs -> ViewL x xs
>>
>> pattern x :<| xs <- (fviewl -> ConsL x xs) where
>>   x :<| xs = fcons x xs
>>
>> This would allow users to learn just *one* name, :<|, that they can
>> use for sequences that are consable or viewable even if they may not
>> be the other.
>>
>> If this is not yet possible, then I think the most intuitive approach
>> is to sever the notions of "pattern synonym" and "smart constructor".
>> So I'd write
>>
>> pattern x :<| xs <- (fviewl -> ConsL x xs)
>> constructor (:<|) = fcons
>>
>> The current syntax could easily be desugared to produce *both* a
>> pattern synonym and a smart constructor in the bidirectional case.


Pattern synonym type flexibility

2016-04-20 Thread David Feuer
As far as I can tell from the 7.10 documentation, it's impossible to
make a bidirectional pattern synonym used as a constructor have a
different type signature than when used as a pattern. Has this been
improved in 8.0? I really want something like

class FastCons x xs | xs -> x where
  fcons :: x -> xs -> xs
class FastViewL x xs | xs -> x where
  fviewl :: xs -> ViewL x xs

pattern x :<| xs <- (fviewl -> ConsL x xs) where
  x :<| xs = fcons x xs

This would allow users to learn just *one* name, :<|, that they can
use for sequences that are consable or viewable even if they may not
be the other.
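For instance, here is roughly how I'd expect the classes above to be
instantiated for Data.Sequence (a sketch; ViewL here is the two-parameter
view type from the signature above, repeated so the sketch stands alone):

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances #-}
import qualified Data.Sequence as Seq

data ViewL x xs = EmptyL | ConsL x xs

class FastCons x xs | xs -> x where
  fcons :: x -> xs -> xs
class FastViewL x xs | xs -> x where
  fviewl :: xs -> ViewL x xs

instance FastCons a (Seq.Seq a) where
  fcons = (Seq.<|)

instance FastViewL a (Seq.Seq a) where
  fviewl s = case Seq.viewl s of
    Seq.EmptyL    -> EmptyL
    x Seq.:< rest -> ConsL x rest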

If this is not yet possible, then I think the most intuitive approach
is to sever the notions of "pattern synonym" and "smart constructor".
So I'd write

pattern x :<| xs <- (fviewl -> ConsL x xs)
constructor (:<|) = fcons

The current syntax could easily be desugared to produce *both* a
pattern synonym and a smart constructor in the bidirectional case.


Could we promote unlifted tuples?

2016-03-19 Thread David Feuer
At present, currying and uncurrying at the type level doesn't seem to
work terribly well. In particular, the kinds

(a, b) -> c

and

a -> b -> c

aren't really isomorphic, because (a, b) can be stuck. This makes some
things (like expressing Atkey-style indexed functors in terms of
McBride-style ones) rather awkward, and difficult or impossible to
really get quite right. The thought came to me that maybe we could
allow unlifted tuples to be promoted. Then something of the kind (# a,
b #) would unconditionally unify with '(# a, b #). The restrictions on
how values of unlifted tuple types can be used would presumably
translate directly to restrictions on how types of unlifted tuple kind
can be used.
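A tiny example of the stuckness problem, as I understand it (Fst and Stuck
are made-up names for illustration):

{-# LANGUAGE DataKinds, KindSignatures, TypeFamilies #-}

type family Fst (p :: (*, *)) :: * where
  Fst '(a, b) = a

-- An open family with no instances: any application of it is stuck.
type family Stuck :: (*, *)

-- Fst Stuck is also stuck. Stuck has kind (*, *), yet it never reduces to
-- a promoted pair, so nothing that matches on '(a, b) can make progress on
-- it; a promoted unlifted tuple kind would not admit such inhabitants.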

David Feuer


Pattern synonym thoughts

2016-02-24 Thread David Feuer
There are two features I think would make pattern synonyms even nicer:

1. If a pattern synonym is defined in the same module as one of the type
constructors in the type of thing it matches, then it should be possible to
export it "attached" to one or more of those constructors (normally but not
necessarily the outermost one). An example of how this might look:

module Foo (A (.., S))
data A = X | Y
pattern S = X

Someone writing a module importing Foo wouldn't even need to know that S
was a pattern synonym--they'd just

import Foo (A (..))

and be done with it. The (.., S) here means "export all A's constructors,
along with the pattern synonym S".

2. Pattern synonym groups

This vague idea attacks a long-standing efficiency concern. It's not fully
formed in my mind, but I'd love to be able to write groups of pattern
synonyms that all use a *guaranteed shared* computed view of the underlying
structure. Once the first synonym in the group is encountered, the view is
calculated and held until the pattern match completes or the last synonym
in the group is rejected.
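For example (a sketch using Data.Sequence; (:<|) and Empty here are my own
synonyms, not something the library exports):

{-# LANGUAGE PatternSynonyms, ViewPatterns #-}
import qualified Data.Sequence as Seq

-- Each synonym applies Seq.viewl independently, so a case expression that
-- tries (:<|) and then Empty may compute the view twice. A "group" would
-- let the two matches share one computed view.
pattern x :<| xs <- (Seq.viewl -> x Seq.:< xs)
pattern Empty <- (Seq.viewl -> Seq.EmptyL)

uncons :: Seq.Seq a -> Maybe (a, Seq.Seq a)
uncons s = case s of
  x :<| xs -> Just (x, xs)
  Empty    -> Nothing
  _        -> Nothing   -- GHC can't see that the two synonyms are exhaustive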


Re: Missing definitions of associated types

2016-02-18 Thread David Feuer
You make a good point about people who use overlapping instances deserving
whatever they get (I'd personally love to see that whole mess removed and
replaced with something less intrusive). The bit that most severely breaks
my intuition here is that under normal, well-behaved circumstances, every
instance of a class with associated data has its own distinct associated
type(s). That is, there is a one-to-many relationship between instances and
types. When a definition is missing, that breaks, and the relationship may
become many-to-many. I suppose we may need to settle for this as long as
overlapping instances in their present form are around.

On Feb 18, 2016 12:49 PM, "Reid Barton"  wrote:

> Well, I see your point; but you also can't define a
>
> On Thu, Feb 18, 2016 at 12:00 PM, David Feuer 
> wrote:
>
>> It seems to me that a missing associated type definition should be an
>> error, by default, rather than a warning. The current behavior under those
>> circumstances strikes me as very strange, particularly for data families
>> and particularly in the presence of overlapping.
>>
>
>> This compiles with just a warning because Assoc Char *falls through* to
>> the general case. WAT? This breaks all my intuition about what associated
>> types are supposed to be about.
>>
>>
> Well, I see your point; but you also can't give a definition for Assoc
> Char in the Foo Char instance, because open data family instances are not
> allowed to overlap. So if failing to give a definition for an associated
> data family is an error, then it's impossible to use overlapping instances
> with classes that have associated data families. Is that your intention?
>
> I don't have a strong opinion here. I'm mildly inclined to say that people
> using overlapping instances have already signed themselves up for weird
> things happening, and we may as well let them do whatever other weird
> things they want.
>
> Regards,
> Reid Barton
>


Missing definitions of associated types

2016-02-18 Thread David Feuer
It seems to me that a missing associated type definition should be an
error, by default, rather than a warning. The current behavior under those
circumstances strikes me as very strange, particularly for data families
and particularly in the presence of overlapping.

{-# LANGUAGE TypeFamilies #-}
class Foo a where
  data Assoc a
  foo :: proxy a -> Assoc a

instance {-# OVERLAPPABLE #-} Foo a where
  data Assoc a = AssocGeneral
  foo _ = AssocGeneral

instance {-# OVERLAPS #-} Foo Char where
  foo _ = AssocGeneral

blah :: Assoc Char
blah = foo (Proxy :: Proxy Char)

This compiles with just a warning because Assoc Char *falls through* to the
general case. WAT? This breaks all my intuition about what associated types
are supposed to be about.


Re: Dropping bzip2 release tarballs?

2016-02-01 Thread David Feuer
Does this really strain storage infrastructure? There are only a few
blobs per release. If that's really a problem, sufficiently ancient
ones can presumably be pruned down to a single format without too many
complaints (e.g., if someone wants GHC 7.6, they may not be able to
have their choice of format). Bandwidth seems an entirely legitimate
concern, but thankfully a symmetric one—most users will want to
download the smallest available format, and those who are willing to
pay the extra time to download another likely have a good reason.

On Wed, Jan 13, 2016 at 11:19 AM, Ben Gamari  wrote:
> tl;dr do you rely on the .bz2 release tarballs on downloads.haskell.org?
>   If so, let us know!
>
>
> Hello everyone,
>
> As you may have noticed, a few releases ago we started producing
> xz-compressed binary distributions in addition to the usual bzip2
> tarballs.
>
> While preparing 8.0.1-rc1 it was suggested that we move to distributing
> xz tarballs exclusively. Not only would this move reduce storage and
> bandwidth demands on our infrastructure but it would also simplify the
> job of producing the distributions. Indeed there is plenty of precedent
> for projects who have moved exclusively to xz (the Linux kernel and git
> being two examples).
>
> Of course, these reasons alone aren't sufficient to abandon those who
> might rely on our bzip2 tarballs. If you feel strongly that we should
> continue to distribute bzip2 tarballs, please let us know.
>
> Thanks!
>
> - Your friendly GHC packaging gnomes
>


Re: Type class for sanity

2016-01-25 Thread David Feuer
You're correct. Please forget that name.
On Jan 25, 2016 12:33 PM, "wren romano"  wrote:

> On Mon, Jan 25, 2016 at 7:34 AM, Richard Eisenberg 
> wrote:
> > But I suggest a different name. Ground? Terminating? NormalForm?
> Irreducible? ValueType? I don't love any of these, but I love Sane less.
>
> I'm also strongly opposed to using "sane". That name is ableist and
> quite offputting to many Haskellers I know (myself included). To say
> nothing of the fact that the name doesn't provide any useful
> understanding of what precisely it's supposed to be categorizing.
>
> --
> Live well,
> ~wren


Re: Type class for sanity

2016-01-25 Thread David Feuer
I don't care about the name at all. Unstuck? Would we want to distinguish
between whnf (e.g., Proxy Any) and nf, or is only nf sufficiently useful?
On Jan 25, 2016 7:34 AM, "Richard Eisenberg"  wrote:

> +1
>
> This would be very easy to implement, too.
>
> But I suggest a different name. Ground? Terminating? NormalForm?
> Irreducible? ValueType? I don't love any of these, but I love Sane less.
>
> On Jan 24, 2016, at 4:24 PM, David Feuer  wrote:
>
> > Since type families can be stuck, it's sometimes useful to restrict
> > things to sane types. At present, the most convenient way I can see to
> > do this in general is with Typeable:
> >
> > type family Foo x where
> >  Foo 'True = Int
> >
> > class Typeable (Foo x) => Bar x where
> >  blah :: proxy x -> Foo x
> >
> > This will prevent anyone from producing the bogus instance
> >
> > instance Bar 'False where
> >  blah _ = undefined
> >
> > Unfortunately, the Typeable constraint carries runtime overhead. One
> > possible way around this, I think, is with a class that does just
> > sanity checking and nothing more:
> >
> > class Sane (a :: k)
> > instance Sane Int
> > instance Sane Char
> > instance Sane 'False
> > instance Sane 'True
> > instance Sane '[]
> > instance Sane '(:)
> > instance Sane (->)
> > instance Sane 'Just
> > instance Sane 'Nothing
> > instance (Sane f, Sane x) => Sane (f x)
> >
> > To really do its job properly, Sane would need to have instances for
> > all sane types and no more. An example of an insane instance of Sane
> > would be
> >
> > instance Sane (a :: MyKind)
> >
> > which would include stuck types of kind MyKind.
> >
> > Would it be useful to add such an automatic-only class to GHC?
> >
> > David


Re: Type class for sanity

2016-01-24 Thread David Feuer
See https://typesandkinds.wordpress.com/2015/09/09/what-are-type-families/
for some discussion. A type family application is stuck if it can't reduce
further and has not reached a proper type. Given the aforementioned type
family Foo, the application Foo 'False is stuck. It's a type of kind *, and
it's uninhabited (but not as nicely uninhabited as Void--it offers no ex
falso). This actually turns out to be useful for some things. GHC offers

type family Any :: k where {}

which is, at least,

1. A safe intermediate target for unsafeCoerce
2. An utterly unsatisfiable  constraint (see the definition of Bottom in
the GitHub master of the reflection package)

But sometimes you want to know something's *not* a stuck type family. See
the issue I filed earlier today at
https://github.com/kwf/ComonadSheet/issues/6 for an example--the code tries
to make a certain instance impossible to produce, but the possibility of
stuckness defeats it as its currently written.
On Jan 25, 2016 1:01 AM, "Jeffrey Brown"  wrote:

> "Stuck type" is proving difficult to Google. Do you recommend any
> references?
>
> On Sun, Jan 24, 2016 at 1:24 PM, David Feuer 
> wrote:
>
>> Since type families can be stuck, it's sometimes useful to restrict
>> things to sane types. At present, the most convenient way I can see to
>> do this in general is with Typeable:
>>
>> type family Foo x where
>>   Foo 'True = Int
>>
>> class Typeable (Foo x) => Bar x where
>>   blah :: proxy x -> Foo x
>>
>> This will prevent anyone from producing the bogus instance
>>
>> instance Bar 'False where
>>   blah _ = undefined
>>
>> Unfortunately, the Typeable constraint carries runtime overhead. One
>> possible way around this, I think, is with a class that does just
>> sanity checking and nothing more:
>>
>> class Sane (a :: k)
>> instance Sane Int
>> instance Sane Char
>> instance Sane 'False
>> instance Sane 'True
>> instance Sane '[]
>> instance Sane '(:)
>> instance Sane (->)
>> instance Sane 'Just
>> instance Sane 'Nothing
>> instance (Sane f, Sane x) => Sane (f x)
>>
>> To really do its job properly, Sane would need to have instances for
>> all sane types and no more. An example of an insane instance of Sane
>> would be
>>
>> instance Sane (a :: MyKind)
>>
>> which would include stuck types of kind MyKind.
>>
>> Would it be useful to add such an automatic-only class to GHC?
>>
>> David
>>
>
>
>
> --
> Jeffrey Benjamin Brown
>


Type class for sanity

2016-01-24 Thread David Feuer
Since type families can be stuck, it's sometimes useful to restrict
things to sane types. At present, the most convenient way I can see to
do this in general is with Typeable:

type family Foo x where
  Foo 'True = Int

class Typeable (Foo x) => Bar x where
  blah :: proxy x -> Foo x

This will prevent anyone from producing the bogus instance

instance Bar 'False where
  blah _ = undefined

Unfortunately, the Typeable constraint carries runtime overhead. One
possible way around this, I think, is with a class that does just
sanity checking and nothing more:

class Sane (a :: k)
instance Sane Int
instance Sane Char
instance Sane 'False
instance Sane 'True
instance Sane '[]
instance Sane '(:)
instance Sane (->)
instance Sane 'Just
instance Sane 'Nothing
instance (Sane f, Sane x) => Sane (f x)

To really do its job properly, Sane would need to have instances for
all sane types and no more. An example of an insane instance of Sane
would be

instance Sane (a :: MyKind)

which would include stuck types of kind MyKind.

Would it be useful to add such an automatic-only class to GHC?

David


Non-exported class constraints in errors

2015-12-08 Thread David Feuer
The latest implementation of Data.Constraint.Forall uses

type family Forall (p :: k -> Constraint) :: Constraint where
  Forall p = Forall_ p
class p (Skolem p) => Forall_ (p :: k -> Constraint)
instance p (Skolem p) => Forall_ (p :: k -> Constraint)

The trouble is that errors relating to Forall are reported with Skolem
verbiage, making them incomprehensible to users -- the Skolem type family
is not exported. It would be nice if GHC could keep track of what led to
the p (Skolem p) constraint, and report that. The sanest idea I've come up
with thus far is a "buck stops here" pragma. When GHC attempts to solve a
constraint so marked, it will hold onto that fact, and report any failures
against that. So

{-# TheBuckStopsHere Forall #-}

would make GHC take note when trying to solve Forall p, and report that it
failed to solve Forall p, rather than that it failed to solve constraints
further down the line.


RE: Allow ambiguous types (with warning) by default

2015-12-08 Thread David Feuer
Sure, I'll open the ticket. It may well turn out that there's a less
invasive way to accomplish my ultimate goal (more useful error messages for
ambiguous types) without allowing the cascade you're worried about. In

foo :: Num a => F a -> G a

Something more like the following would be much better than what we
currently get:

The type variable `a' in the type of `foo :: Num a => F a -> G a' is
ambiguous. This means that code attempting to use `foo' will not be able to
give it a signature determining `a'. In particular: `a' appears only as an
argument to the type families `F' and `G', which may not be injective.
Therefore a signature at the use site fixing `F a' and `G a' to particular
types will not fix `a' to a particular type.
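For concreteness, the sort of definition I have in mind (F and G stand in
for arbitrary, possibly non-injective type families):

{-# LANGUAGE TypeFamilies #-}

type family F a
type family G a

-- Rejected by the ambiguity check under the default settings; with
-- AllowAmbiguousTypes it is accepted and every use site then fails.
foo :: Num a => F a -> G a
foo = undefined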
On Dec 7, 2015 8:31 AM, "Simon Peyton Jones"  wrote:

> |  OK, fine. Is there a way to make it an error, but keep checking the
> |  rest of the module? My goal is *get both messages if possible*, within
> |  a module. I'm not tied to any particular mechanism of doing so.
>
> Yes it'd be possible.  A bit fiddly, but certainly possible.
>
> Of course, doing so can lead to a cascade of other errors, but in this
> case you seem to actively want those follow-on errors.
>
> Would you like to open a ticket with a few illustrative examples to
> motivate your proposal?
>
> Simon
>
> |
> |  On Sun, Dec 6, 2015 at 12:13 AM, Edward Kmett 
> |  wrote:
> |  > If you aren't the one writing the code that can't be called you may
> |  > never see the warning. It'll be tucked away in a cabal or stack
> |  build
> |  > log somewhere.
> |  >
> |  > -Edward
> |  >
> |  > On Sun, Dec 6, 2015 at 12:06 AM, David Feuer 
> |  wrote:
> |  >>
> |  >> No, I want it to *warn* by default. If I write
> |  >>
> |  >> foo :: something that will fail the ambiguity check bar = something
> |  >> that uses foo in a (necessarily) ambiguous way
> |  >>
> |  >> the current default leads me to do this:
> |  >>
> |  >> 1. Attempt to compile. Get an ambiguity error on foo whose exact
> |  >> cause is hard for me to see.
> |  >> 2. Enable AllowAmbiguousTypes and recompile. Get an error on bar
> |  >> whose exact cause is completely obvious, and that makes it
> |  perfectly
> |  >> clear what I need to do to fix foo.
> |  >> 3. Fix foo, and disable AllowAmbiguousTypes.
> |  >>
> |  >> I'd much rather go with
> |  >>
> |  >> 1. Attempt to compile. Get an ambiguity *warning* on foo whose
> |  exact
> |  >> cause is hard for me to see, but also an error on bar whose exact
> |  >> cause is completely obvious, and that makes it perfectly clear what
> |  I
> |  >> need to do to fix foo.
> |  >> 2. Fix foo.
> |  >>
> |  >> Simple example of how it is currently:
> |  >>
> |  >> > let foo :: Num a => F a; foo = undefined; bar :: Int; bar = foo
> |  >>
> |  >> <interactive>:14:12:
> |  >> Couldn't match expected type ‘F a’ with actual type ‘F a0’
> |  >> NB: ‘F’ is a type function, and may not be injective
> |  >> The type variable ‘a0’ is ambiguous
> |  >> In the ambiguity check for the type signature for ‘foo’:
> |  >>   foo :: forall a. Num a => F a
> |  >> To defer the ambiguity check to use sites, enable
> |  AllowAmbiguousTypes
> |  >> In the type signature for ‘foo’: foo :: Num a => F a
> |  >>
> |  >> Couldn't match what with what? Huh? Where did a0 come from?
> |  >>
> |  >> > :set -XAllowAmbiguousTypes
> |  >> > let foo :: Num a => F a; foo = undefined; bar :: Int; bar = foo
> |  >>
> |  >> <interactive>:16:61:
> |  >> Couldn't match expected type ‘Int’ with actual type ‘F a0’
> |  >> The type variable ‘a0’ is ambiguous
> |  >> In the expression: foo
> |  >> In an equation for ‘bar’: bar = foo
> |  >>
> |  >> Aha! That's the problem! It doesn't know what a0 is! How can I tell
> |  >> it what a0 is? Oh! I can't, because foo doesn't give me a handle on
> |  it.
> |  >> Guess I have to fix foo.
> |  >>
> |  >> I'd really, really like to get *both* of those messages in one go,
> |  >> with the first one preferably explaining itself a bit better.
> |  >>
> |  >> On Sat, Dec 5, 2015 at 11:51 PM, Edward Kmett 
> |  wrote:
> |  >> > So you are saying you want users to write a ton of code that
> |  >> > happens to 

Re: Allow ambiguous types (with warning) by default

2015-12-05 Thread David Feuer
OK, fine. Is there a way to make it an error, but keep checking the
rest of the module? My goal is *get both messages if possible*, within
a module. I'm not tied to any particular mechanism of doing so.

On Sun, Dec 6, 2015 at 12:13 AM, Edward Kmett  wrote:
> If you aren't the one writing the code that can't be called you may never
> see the warning. It'll be tucked away in a cabal or stack build log
> somewhere.
>
> -Edward
>
> On Sun, Dec 6, 2015 at 12:06 AM, David Feuer  wrote:
>>
>> No, I want it to *warn* by default. If I write
>>
>> foo :: something that will fail the ambiguity check
>> bar = something that uses foo in a (necessarily) ambiguous way
>>
>> the current default leads me to do this:
>>
>> 1. Attempt to compile. Get an ambiguity error on foo whose exact cause
>> is hard for me to see.
>> 2. Enable AllowAmbiguousTypes and recompile. Get an error on bar whose
>> exact cause is completely obvious, and that makes it perfectly clear
>> what I need to do to fix foo.
>> 3. Fix foo, and disable AllowAmbiguousTypes.
>>
>> I'd much rather go with
>>
>> 1. Attempt to compile. Get an ambiguity *warning* on foo whose exact
>> cause is hard for me to see, but also an error on bar whose exact
>> cause is completely obvious, and that makes it perfectly clear what I
>> need to do to fix foo.
>> 2. Fix foo.
>>
>> Simple example of how it is currently:
>>
>> > let foo :: Num a => F a; foo = undefined; bar :: Int; bar = foo
>>
>> <interactive>:14:12:
>> Couldn't match expected type ‘F a’ with actual type ‘F a0’
>> NB: ‘F’ is a type function, and may not be injective
>> The type variable ‘a0’ is ambiguous
>> In the ambiguity check for the type signature for ‘foo’:
>>   foo :: forall a. Num a => F a
>> To defer the ambiguity check to use sites, enable AllowAmbiguousTypes
>> In the type signature for ‘foo’: foo :: Num a => F a
>>
>> Couldn't match what with what? Huh? Where did a0 come from?
>>
>> > :set -XAllowAmbiguousTypes
>> > let foo :: Num a => F a; foo = undefined; bar :: Int; bar = foo
>>
>> <interactive>:16:61:
>> Couldn't match expected type ‘Int’ with actual type ‘F a0’
>> The type variable ‘a0’ is ambiguous
>> In the expression: foo
>> In an equation for ‘bar’: bar = foo
>>
>> Aha! That's the problem! It doesn't know what a0 is! How can I tell it
>> what a0 is? Oh! I can't, because foo doesn't give me a handle on it.
>> Guess I have to fix foo.
>>
>> I'd really, really like to get *both* of those messages in one go,
>> with the first one preferably explaining itself a bit better.
>>
>> On Sat, Dec 5, 2015 at 11:51 PM, Edward Kmett  wrote:
>> > So you are saying you want users to write a ton of code that happens to
>> > have
>> > signatures that can never be called and only catch it when they go to
>> > try to
>> > actually use it in a concrete situation much later?
>> >
>> > I don't really show how this would be a better default.
>> >
>> > When and if users see the problem later they have to worry about if they
>> > are
>> > doing something wrong at the definition site or the call site. With the
>> > status quo it complains at the right time that you aren't going to sit
>> > there
>> > flailing around trying to fix a call site that can never be fixed.
>> >
>> > -Edward
>> >
>> > On Sat, Dec 5, 2015 at 5:38 PM, David Feuer 
>> > wrote:
>> >>
>> >> The ambiguity check produces errors that are quite surprising to the
>> >> uninitiated. When the check is suppressed, the errors at use sites are
>> >> typically much easier to grasp. On the other hand, there's obviously a
>> >> lot
>> >> of value to catching mistakes as soon as possible. Would it be possible
>> >> to
>> >> turn that into a warning by default?
>> >>
>> >>


Re: Allow ambiguous types (with warning) by default

2015-12-05 Thread David Feuer
No, I want it to *warn* by default. If I write

foo :: something that will fail the ambiguity check
bar = something that uses foo in a (necessarily) ambiguous way

the current default leads me to do this:

1. Attempt to compile. Get an ambiguity error on foo whose exact cause
is hard for me to see.
2. Enable AllowAmbiguousTypes and recompile. Get an error on bar whose
exact cause is completely obvious, and that makes it perfectly clear
what I need to do to fix foo.
3. Fix foo, and disable AllowAmbiguousTypes.

I'd much rather go with

1. Attempt to compile. Get an ambiguity *warning* on foo whose exact
cause is hard for me to see, but also an error on bar whose exact
cause is completely obvious, and that makes it perfectly clear what I
need to do to fix foo.
2. Fix foo.

Simple example of how it is currently:
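(For reference, a minimal definition of F that reproduces the session below;
this is my own reconstruction, not part of the original report:

{-# LANGUAGE TypeFamilies #-}
type family F a
type instance F Int = Int

With that in scope, the ambiguity error and the use-site error come out as
shown.)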

> let foo :: Num a => F a; foo = undefined; bar :: Int; bar = foo

<interactive>:14:12:
Couldn't match expected type ‘F a’ with actual type ‘F a0’
NB: ‘F’ is a type function, and may not be injective
The type variable ‘a0’ is ambiguous
In the ambiguity check for the type signature for ‘foo’:
  foo :: forall a. Num a => F a
To defer the ambiguity check to use sites, enable AllowAmbiguousTypes
In the type signature for ‘foo’: foo :: Num a => F a

Couldn't match what with what? Huh? Where did a0 come from?

> :set -XAllowAmbiguousTypes
> let foo :: Num a => F a; foo = undefined; bar :: Int; bar = foo

<interactive>:16:61:
Couldn't match expected type ‘Int’ with actual type ‘F a0’
The type variable ‘a0’ is ambiguous
In the expression: foo
In an equation for ‘bar’: bar = foo

Aha! That's the problem! It doesn't know what a0 is! How can I tell it
what a0 is? Oh! I can't, because foo doesn't give me a handle on it.
Guess I have to fix foo.

I'd really, really like to get *both* of those messages in one go,
with the first one preferably explaining itself a bit better.

On Sat, Dec 5, 2015 at 11:51 PM, Edward Kmett  wrote:
> So you are saying you want users to write a ton of code that happens to have
> signatures that can never be called and only catch it when they go to try to
> actually use it in a concrete situation much later?
>
> I don't really see how this would be a better default.
>
> When and if users see the problem later they have to worry about if they are
> doing something wrong at the definition site or the call site. With the
> status quo it complains at the right time that you aren't going to sit there
> flailing around trying to fix a call site that can never be fixed.
>
> -Edward
>
> On Sat, Dec 5, 2015 at 5:38 PM, David Feuer  wrote:
>>
>> The ambiguity check produces errors that are quite surprising to the
>> uninitiated. When the check is suppressed, the errors at use sites are
>> typically much easier to grasp. On the other hand, there's obviously a lot
>> of value to catching mistakes as soon as possible. Would it be possible to
>> turn that into a warning by default?
>>
>>
>> ___
>> ghc-devs mailing list
>> ghc-devs@haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: ambiguous type stuff

2015-12-05 Thread David Feuer
I think I didn't explain myself well enough. I'm not talking about expanded
defaulting, although that may be tied up with the same mechanisms. Perhaps
the best thing is just to work on the error message text for certain
ambiguous type situations. Notably, situations where adding a proxy
argument or similar is needed to avoid the error tend to be pretty
mysterious until you realize that the system is not complaining because it
can't determine a type, but rather because there is no way to pinpoint that
type at the use site.

If F is a type family then

f :: Foo x => F x

has a perfectly sensible translation, namely

f :: forall x . Foo x -> F x

The problem is purely one of inference on the Haskell side: there is no way
to add a signature *at the call site* that determines what value of x to
pass to f. The error message about x being ambiguous because F is a type
family has to be read a bit backwards. At a *use site*, it'll be pretty
clear-cut what's ambiguous, and it will soon become clear why it can't be
pinned down. Perhaps it would pay to change the error message to explain
why the type signature renders the term unusable.
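
(To make the use-site problem concrete, here is a minimal sketch of the proxy
workaround alluded to above; the names and the Int instance are invented
purely for illustration:

{-# LANGUAGE TypeFamilies #-}
import Data.Proxy (Proxy (..))

type family F x
type instance F Int = Bool

class Foo x where
  -- The Proxy argument gives call sites a handle on x:
  fP :: Proxy x -> F x

instance Foo Int where
  fP _ = True

useIt :: Bool
useIt = fP (Proxy :: Proxy Int)  -- unambiguous: the Proxy pins x down

Without the Proxy argument there is simply no place at the use site to say
which x is meant, which is the situation the error message is really about.)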
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Allow ambiguous types (with warning) by default

2015-12-05 Thread David Feuer
The ambiguity check produces errors that are quite surprising to the
uninitiated. When the check is suppressed, the errors at use sites are
typically much easier to grasp. On the other hand, there's obviously a lot
of value to catching mistakes as soon as possible. Would it be possible to
turn that into a warning by default?
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Context for typed holes

2015-10-22 Thread David Feuer
I just closed mine as a duplicate of yours.
On Oct 23, 2015 1:55 AM, "Andres Löh"  wrote:

> Actually, #9091 was the one I was really looking for ... reported by
> me. See also the discussion about "given" vs. "wanted" constraints.
>
> Cheers,
>   Andres
>
> On Fri, Oct 23, 2015 at 7:48 AM, David Feuer 
> wrote:
> > I opened https://ghc.haskell.org/trac/ghc/ticket/10954 for this. #9479,
> by
> > Dominique Devriese, is complementary--she wants instance information for
> a
> > *hole* with an ambiguous type.
> >
> > On Oct 23, 2015 1:28 AM, "Andres Löh"  wrote:
> >>
> >> Hi.
> >>
> >> On Oct 23, 2015 01:15, "Manuel M T Chakravarty" 
> >> wrote:
> >> >
> >> > I think, this is a good point. Maybe you should make a ticket for it.
> >>
> >> #9479, I think.
> >>
> >> Cheers,
> >> Andres
> >>
> >> >> David Feuer :
> >> >>
> >> >> Unless something has changed really recently that I've missed, the
> >> >> typed holes messages are missing some really important information:
> instance
> >> >> information for types in scope. When I am trying to fill in a hole,
> I look
> >> >> to the "relevant bindings" to show me what pieces I have available
> to use.
> >> >> Those pieces don't include contexts! Is there something
> fundamentally hard
> >> >> about adding this information? I'd only want instance information
> for type
> >> >> variables--providing it for concrete types would make too much
> noise. I'd
> >> >> also want information on equality constraints, of course.
> >> >>
> >> >> ___
> >> >> ghc-devs mailing list
> >> >> ghc-devs@haskell.org
> >> >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> >> >
> >> >
> >> >
> >> > ___
> >> > ghc-devs mailing list
> >> > ghc-devs@haskell.org
> >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> >> >
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Context for typed holes

2015-10-22 Thread David Feuer
I opened https://ghc.haskell.org/trac/ghc/ticket/10954 for this. #9479, by
Dominique Devriese, is complementary--she wants instance information for a
*hole* with an ambiguous type.
On Oct 23, 2015 1:28 AM, "Andres Löh"  wrote:

> Hi.
>
> On Oct 23, 2015 01:15, "Manuel M T Chakravarty" 
> wrote:
> >
> > I think, this is a good point. Maybe you should make a ticket for it.
>
> #9479, I think.
>
> Cheers,
> Andres
>
> >> David Feuer :
> >>
> >> Unless something has changed really recently that I've missed, the
> typed holes messages are missing some really important information:
> instance information for types in scope. When I am trying to fill in a
> hole, I look to the "relevant bindings" to show me what pieces I have
> available to use. Those pieces don't include contexts! Is there something
> fundamentally hard about adding this information? I'd only want instance
> information for type variables--providing it for concrete types would make
> too much noise. I'd also want information on equality constraints, of
> course.
> >>
> >> ___
> >> ghc-devs mailing list
> >> ghc-devs@haskell.org
> >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> >
> >
> >
> > ___
> > ghc-devs mailing list
> > ghc-devs@haskell.org
> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> >
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Coercion logic

2015-10-22 Thread David Feuer
No, I've not tested against head. I'd not heard anything new about
that! That sounds exciting. Sorry about the noise if it's all finished
already.

David

On Thu, Oct 22, 2015 at 9:57 AM, Richard Eisenberg  wrote:
> The Coercible solver has evolved steadily. It should know that (Coercible a b 
> <=> Coercible b a). Do you have a concrete example of where it's not doing 
> this? Have you tested against HEAD?
>
> Thanks,
> Richard
>
> On Oct 22, 2015, at 9:56 AM, David Feuer  wrote:
>
>> At present, any time we write a function with a `Coercible`
>> constraint, we must take great care to choose `Coercible a b` or
>> `Coercible b a` depending on which will ultimately lead to fewer silly
>> conversions. This is particularly sad because the whole Coercible
>> mechanism guarantees that these have exactly the same run-time
>> representation, and because People Wiser Than Me believe Coercible
>> should *always* remain symmetric. My (admittedly reptilian) brain
>> wonders what it would take to tell the type checker that
>>
>> forall a b . Coercible a b ~ Coercible b a
>>
>> and have it over with.
>>
>> David Feuer
>> ___
>> ghc-devs mailing list
>> ghc-devs@haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Coercion logic

2015-10-22 Thread David Feuer
At present, any time we write a function with a `Coercible`
constraint, we must take great care to choose `Coercible a b` or
`Coercible b a` depending on which will ultimately lead to fewer silly
conversions. This is particularly sad because the whole Coercible
mechanism guarantees that these have exactly the same run-time
representation, and because People Wiser Than Me believe Coercible
should *always* remain symmetric. My (admittedly reptilian) brain
wonders what it would take to tell the type checker that

forall a b . Coercible a b ~ Coercible b a

and have it over with.

David Feuer
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Moving forall with coerce

2015-10-19 Thread David Feuer
It appears, as far as I can tell, that GHC can't move a forall past an
-> with coerce. I was playing around with the MonadTrans instance for
Codensity, wanting (essentially) to write

lift = coerce (>>=)

This is legal:

instance MonadTrans Codensity where
  lift = frob
frob :: forall m a . Monad m => m a -> Codensity m a
frob = coerce (blah (Mag (>>=)) :: m a -> Mag2 m a)
newtype Mag m a = Mag (forall b . m a -> (a -> m b) -> m b)
newtype Mag2 m a = Mag2 (forall b . (a -> m b) -> m b)
blah :: Mag m a -> m a -> Mag2 m a
blah (Mag p) q = Mag2 (p q)

The problem is that there doesn't *seem* to be a way to implement blah
as a coercion. GHC doesn't recognize that

Mag m a

has the same representation as

m a -> Mag2 m a

The essential difference, as far as I can see, is that the `forall b`
is shifted across `m a ->`. It seems that this really shouldn't be an
issue, because

1. b is not free in `m a`
2. Type lambdas all compile away

Would it be worth trying to extend the Coercible mechanism to deal
with these kinds of situations? Or is there already some way to do it
that I've missed?

David
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Context for typed holes

2015-10-08 Thread David Feuer
Unless something has changed really recently that I've missed, the typed
holes messages are missing some really important information: instance
information for types in scope. When I am trying to fill in a hole, I look
to the "relevant bindings" to show me what pieces I have available to use.
Those pieces don't include contexts! Is there something fundamentally hard
about adding this information? I'd only want instance information for type
variables--providing it for concrete types would make too much noise. I'd
also want information on equality constraints, of course.
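
(A tiny example of the gap, assuming nothing beyond the Prelude; load it to
see the hole message:

f :: Monoid m => m -> m
f x = _fill x

The "Relevant bindings" section reports x :: m, but nothing tells me that a
Monoid m context--mempty, mappend--is available, which is exactly the
information I need to fill the hole.)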
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


MIN_VERSION macros

2015-09-25 Thread David Feuer
Cabal defines MIN_VERSION_* macros that allow CPP in a Haskell source file
to get information about the versions of the packages that module is being
compiled against. Unfortunately, these macros are not available when not
compiling with cabal, so packages must either

1. Insist on cabal compilation. This is not very friendly to developers.
2. Make "pessimistic" assumptions, assuming that all the packages are old.
This makes it annoying to test new features while also leading to
compilation or run-time failures when packages have removed or changed
features.
3. Attempt to guess the version based on the GHC version. This works
reasonably well for base, ghc-prim, containers, etc., but not so well/at
all for others.

Would there be some way to get GHC itself to provide these macros to all
modules that request CPP?
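
(For anyone unfamiliar with the macros, a typical use looks something like
this; the package and version bound are only illustrative:

{-# LANGUAGE CPP #-}
#if MIN_VERSION_containers(0,5,0)
import qualified Data.Map.Strict as Map
#else
import qualified Data.Map as Map
#endif

Without cabal, MIN_VERSION_containers is not defined at all and the #if
fails to preprocess, which is precisely the problem described above.)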

David Feuer
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Deriving Contravariant and Profunctor

2015-09-11 Thread David Feuer
Oh, I see... you get horrible overlap problems there. Blech! I guess
they'll all act the same (modulo optimized <$ and such), but GHC can't
know that and will see them as forever incoherent.
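
(Spelling that out for the Compose example Edward gives below, using the
class from the contravariant package; this is only a sketch of the conflict:

import Data.Functor.Contravariant (Contravariant (..))

newtype Compose f g a = Compose (f (g a))

-- Covariant outside, contravariant inside:
instance (Functor f, Contravariant g) => Contravariant (Compose f g) where
  contramap h (Compose x) = Compose (fmap (contramap h) x)

-- Contravariant outside, covariant inside:
instance (Contravariant f, Functor g) => Contravariant (Compose f g) where
  contramap h (Compose x) = Compose (contramap (fmap h) x)

Both instances are individually sensible, but they share the same instance
head, so GHC rejects the pair as duplicates; picking either one silently
rules out the other, which is the incoherence in question.)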

On Fri, Sep 11, 2015 at 1:52 PM, Edward Kmett  wrote:
> Actually it is trickier than you'd think.
>
> With "Functor" you can pretend that contravariance doesn't exist.
>
> With both profunctor and contravariant it is necessarily part of the puzzle.
>
> data Compose f g a = Compose (f (g a))
>
> * are both f and g contravariant leading to a functor?
> * is f contravariant and g covariant leading to a contravariant functor?
> * is f covariant and g contravariant leading to a contravariant functor?
>
> data Wat p f a b = Wat (p (f a) b)
>
> is p a Profunctor or a Bifunctor? is f Contravariant or a Functor?
>
> We investigated adding TH code-generation for the contravariant package, and
> ultimately rejected it on these grounds.
>
> https://github.com/ekmett/contravariant/issues/17
>
> -Edward
>
>
>
> On Fri, Sep 11, 2015 at 12:49 PM, David Feuer  wrote:
>>
>> Would it be possible to add mechanisms to derive Contravariant and
>> Profunctor instances? As with Functor, each algebraic datatype can
>> only have one sensible instance of each of these.
>>
>> David Feuer
>> ___
>> ghc-devs mailing list
>> ghc-devs@haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Deriving Contravariant and Profunctor

2015-09-11 Thread David Feuer
Would it be possible to add mechanisms to derive Contravariant and
Profunctor instances? As with Functor, each algebraic datatype can
only have one sensible instance of each of these.

David Feuer
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Access to class defaults and derived instances

2015-08-24 Thread David Feuer
I'm not sure if it really could work out at all. The concept is that I want
the newtype wrapper to get the class defaults the wrapped type would have
gotten (whether the wrapped type is actually a class instance or not).
On Aug 24, 2015 8:39 AM, "Richard Eisenberg"  wrote:

> I have a hard time fully understanding this request without more context.
> But I do think I understand the last paragraph. And it seems bound to
> create class incoherence. What if someone else *does* write that orphan
> instance you're avoiding writing?
>
> Richard
>
> On Aug 22, 2015, at 12:54 PM, David Feuer  wrote:
>
> From time to time, a library lacks an instance for something that I want.
> For example, I may need to convert
>
> data Foo = Bar (Vector Baz)
>
> to FishFood, but (to avoid unreasonable dependencies) Vector doesn't have
> a ToFishFood instance, so I can't just write
>
> instance ToFishFood Foo
>
> and (using Generic magic) be done with it. Instead, I must write the
> instance completely by hand, which could be painful. I *could* write an
> orphan instance, but orphans are evil.
>
> What I wish I could do:
>
> newtype Vec a = Vec (Vector a)
>
> instance ToFishFood a => (newtype Vec) a where
>   -- if needed
>   toFishFood (v :: Vector a) = ...
>
> That is, I want to write a super-secret orphan instance for Vector and
> transfer it to Vec via GND precisely when it is legal to do so. The secret
> instance could itself be derived (if the constructors are visible) or could
> make use of default member definitions.
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Access to class defaults and derived instances

2015-08-22 Thread David Feuer
From time to time, a library lacks an instance for something that I want.
For example, I may need to convert

data Foo = Bar (Vector Baz)

to FishFood, but (to avoid unreasonable dependencies) Vector doesn't have a
ToFishFood instance, so I can't just write

instance ToFishFood Foo

and (using Generic magic) be done with it. Instead, I must write the
instance completely by hand, which could be painful. I *could* write an
orphan instance, but orphans are evil.

What I wish I could do:

newtype Vec a = Vec (Vector a)

instance ToFishFood a => (newtype Vec) a where
  -- if needed
  toFishFood (v :: Vector a) = ...

That is, I want to write a super-secret orphan instance for Vector and
transfer it to Vec via GND precisely when it is legal to do so. The secret
instance could itself be derived (if the constructors are visible) or could
make use of default member definitions.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: MonadFail proposal (MFP): Moving fail out of Monad

2015-06-11 Thread David Feuer
Pattern matching on `undefined` is not like pattern match failure.
Single-constructor types are only special if they're unlifted:
`newtype` and GHC's unboxed tuples are the only examples I know of,
and you can't use unboxed tuples in this context.
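
(Concretely, the distinction I mean; a standalone sketch:

newtype N = N Int
data    D = D Int

fromN :: N -> Int
fromN (N x) = x  -- the match itself never forces anything; fromN undefined
                 -- only diverges when x is used

fromD :: D -> Int
fromD (D x) = x  -- here the match forces the argument, so fromD undefined
                 -- diverges at the pattern, because undefined /= D undefined

So a newtype pattern is genuinely unfailable, while a single-constructor data
pattern is not.)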

On Thu, Jun 11, 2015 at 11:28 AM, Wolfgang Jeltsch
 wrote:
> Are you sure that desugaring works this way? If yes, this should be
> considered a bug and be fixed, I would say. It is very illogical.
>
> All the best,
> Wolfgang
>
> Am Donnerstag, den 11.06.2015, 16:23 +0100 schrieb David Turner:
>> AIUI the point about ⊥ and (⊥, ⊥) being different doesn't matter here:
>> a bind for a single-constructor datatype never desugars in a way that
>> uses fail (which isn't to say that it can't be undefined)
>>
>> For instance:
>>
>> runErrorT (do { (_,_) <- return undefined; return () } :: ErrorT String IO 
>> ())
>>
>> throws an exception, even though the bind is in ErrorT where fail just
>> returns left:
>>
>> runErrorT (do { fail "oops"; return () } :: ErrorT String IO ())
>>
>> => Left "oops"
>>
>> Hope that helps, and hope I understand correctly!
>>
>> David
>>
>>
>> On 11 June 2015 at 16:08, Wolfgang Jeltsch  
>> wrote:
>> > Hi David,
>> >
>> > thank you very much for this proposal. I think having fail in Monad is
>> > just plain wrong, and I am therefore very happy to see it being moved
>> > out.
>> >
>> > I have some remarks, though:
>> >
>> >> A class of patterns that are conditionally failable are `newtype`s,
>> >> and single constructor `data` types, which are unfailable by
>> >> themselves, but may fail if matching on their fields is done with
>> >> failable paterns.
>> >
>> > The part about single-constructor data types is not true. A
>> > single-constructor data type has a value ⊥ that is different from
>> > applying the data constructor to ⊥’s. For example, ⊥ and (⊥, ⊥) are two
>> > different values. Matching ⊥ against the pattern (_, _) fails, matching
>> > (⊥, ⊥) against (_, _) succeeds. So single-constructor data types are not
>> > different from all other data types in this respect. The dividing line
>> > really runs between data types and newtypes. So only matches against
>> > patterns C p where C is a newtype constructor and p is unfailable should
>> > be considered unfailable.
>> >
>> >>   - Applicative `do` notation is coming sooner or later, `fail` might
>> >> be useful in this more general scenario. Due to the AMP, it is
>> >> trivial to change the `MonadFail` superclass to `Applicative`
>> >> later. (The name will be a bit misleading, but it's a very small
>> >> price to pay.)
>> >
>> > I think it would be very misleading having a MonadFail class that might
>> > have instances that are not monads, and that this is a price we should
>> > not pay. So we should not name the class MonadFail. Maybe, Fail would be
>> > a good name.
>> >
>> >> I think we should keep the `Monad` superclass for three main reasons:
>> >>
>> >>   - We don't want to see `(Monad m, MonadFail m) =>` all over the place.
>> >
>> > But exactly this will happen if we change the superclass of (Monad)Fail
>> > from Monad to Applicative. So it might be better to impose a more
>> > light-weight constraint in the first place. Functor m might be a good
>> > choice.
>> >
>> > All the best,
>> > Wolfgang
>> >
>> > ___
>> > ghc-devs mailing list
>> > ghc-devs@haskell.org
>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
>
> ___
> Libraries mailing list
> librar...@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: [Haskell-cafe] Generalized Newtype Deriving not allowed in Safe Haskell

2015-04-10 Thread David Feuer
I think a module exporting some but not all data constructors of a type is
fundamentally broken behavior. I would generally be in favor of prohibiting
it altogether, and I would be strongly opposed to letting continued support
for it break anything else.
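(By "some but not all data constructors" I mean export lists like the
following; the module is made up, purely to fix ideas:

module Opaque (T (MkA), fromT) where  -- exports MkA but deliberately hides MkB

data T = MkA Int | MkB [Int]

fromT :: T -> [Int]
fromT (MkA x)  = [x]
fromT (MkB xs) = xs

Clients can build and match MkA but can only reach MkB values through fromT,
which is exactly the half-abstraction that interacts so badly with GND and
Coercible.)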
On Apr 10, 2015 9:05 AM, "Douglas McClean" 
wrote:

> I don't think that 2+3 is equivalent to 2', because an explicit import
> list or hiding list could've brought only some of the datatype's
> constructors into visibility.
>
> On Fri, Apr 10, 2015 at 8:07 AM, Richard Eisenberg 
> wrote:
>
>> Here's an idea: For a module to be Safe, then for each exported datatype,
>> one of the following must hold:
>> 1) The datatype comes with a role annotation.
>> 2) The module exports all of the datatype's constructors.
>> 3) If the datatype is defined in a place other than the current module,
>> the current module exports no fewer data constructors than are exported in
>> the datatype's defining module.
>>
>> Why?
>> 1) The role annotation, even if it has no effect, shows that the
>> programmer has considered roles. Any mistake here is clearly the
>> programmer's fault.
>> 2) This datatype is clearly meant not to be abstract. `Coercible` then
>> gives clients no more power than they already have.
>> 3) This is subtler. It is a common idiom to export a datatype's
>> constructors from a package-internal module, but then never to export the
>> constructors beyond the package. If such a datatype has a role annotation
>> (in its defining module, of course), then we're fine, even if it is
>> exported abstractly later. However, suppose we are abstractly re-exporting
>> a datatype that exports its constructors from its defining module. If there
>> is no role annotation on the datatype, we're in trouble and should fail.
>> BUT, if the datatype were exported abstractly in its defining module, then
>> we don't need to fail on re-export, because nothing has changed.
>>
>>
>> Actually, we could simplify the conditions. Change (2) to:
>>
>> 2') The module exports all of the datatype's visible constructors.
>>
>> I think explaining in terms of separate rules (2) and (3) is a little
>> clearer, because the re-export case is slightly subtle, and this subtlety
>> can be lost in (2').
>>
>> This proposal would require tracking (in interface files, too) whether or
>> not a datatype comes with a role annotation. This isn't hard, though. It
>> might even help in pretty-printing.
>>
>>
>> An alternative would be to have a way of setting roles differently on
>> export than internally. I don't think this breaks the type system, but it's
>> yet another thing to specify and support. And we'd have to consider the
>> possibility that some module will import a datatype from multiple
>> re-exporting modules, each with different ascribed role annotations. Is
>> this an error? Does GHC take some sort of least upper bound? I prefer not
>> to go here, but there's nothing terribly wrong with this approach.
>>
>> Richard
>>
>> On Apr 10, 2015, at 9:37 AM, David Terei  wrote:
>>
>> > I'll prepare a patch for the userguide soon.
>> >
>> > As for something better, yes I think we can and should. It's on my
>> > todo list :) Basically, the new-GND design has all the mechanisms to
>> > be safe, but sadly the defaults are rather worrying. Without explicit
>> > annotations from the user, module abstractions are broken. This is why
>> > we left GND out of Safe Haskell for the moment as it is a subtle and
>> > easy mistake to make.
>> >
>> > If the module contained explicit role annotations then it could be
>> > allowed. The discussion in
>> > https://ghc.haskell.org/trac/ghc/ticket/8827 has other solutions that
>> > I prefer, such as only exporting the Coerce instance if all the
>> > constructors are exported, it seems that the ship sailed on these
>> > bigger changes sadly.
>> >
>> > Cheers,
>> > David
>> >
>> > On 9 April 2015 at 00:56, Simon Peyton Jones 
>> wrote:
>> >> There is a long discussion on
>> https://ghc.haskell.org/trac/ghc/ticket/8827
>> >> about whether the new Coercible story makes GND ok for Safe Haskell.
>> At a
>> >> type-soundness level, definitely yes.  But there are other
>> less-clear-cut
>> >> issues like “breaking abstractions” to consider.  The decision on the
>> ticket
>> >> (comment:36) seems to be: GND stays out of Safe Haskell for now, but
>> there
>> >> is room for a better proposal.
>> >>
>> >>
>> >>
>> >> I don’t have an opinion myself. David Terei and David Mazieres are in
>> the
>> >> driving seat, but I’m sure they’ll be responsive to user input.
>> >>
>> >>
>> >>
>> >> However, I think the user manual may not have kept up with #8827.  The
>> >> sentence “GeneralizedNewtypeDeriving — It can be used to violate
>> constructor
>> >> access control, by allowing untrusted code to manipulate protected data
>> >> types in ways the data type author did not intend, breaking invariants
>> they
>> >> have established.”  vanished from the 7.8 user manual (links below).
>> Maybe
>> >> it 

More at-use-site extension pragmas please

2015-03-15 Thread David Feuer
For people working on libraries intended to be portable but
using/offering GHC specials when compiled with GHC, it's nice to be
able to verify quickly that nothing relies on a GHC extension that
shouldn't. For example, I just submitted a pull request to add an
IsString instance to Data.Sequence:

instance IsString (Seq Char) where
  fromString = fromList

This instance requires FlexibleInstances to compile. Since that option
is global, GHC will no longer complain about non-standard instance
heads *anywhere* in the (large) module. The obvious long-term
solution, I believe, is to add {-# FlexibleInstance #-} (and {-#
FlexibleContext #-}) pragmas. Well, that or get some sort of
termination checker into Haskell 2015!
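
(The status quo, for contrast; the module name is invented and the instance
is of course an orphan here, but it shows how one declaration forces a
module-wide pragma:

{-# LANGUAGE FlexibleInstances #-}  -- applies to every declaration below

module SeqString () where

import Data.String (IsString (..))
import Data.Sequence (Seq, fromList)

-- The only declaration that actually needs the extension: the instance head
-- mentions the concrete type Char rather than a type variable.
instance IsString (Seq Char) where
  fromString = fromList

An at-use-site pragma would let the rest of the module keep the stricter
default checking.)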

David
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Proposal: Turn on ScopedTypeVariables by default

2015-02-23 Thread David Feuer
I know this will be controversial, because it can break (weird) code and
because it's not Haskell 2010, but hey, you can't make brain salad without
breaking a few heads. ScopedTypeVariables is just awesome for two
fundamental reasons:

1. It lets you write type signatures for more things.
2. It lets you write more precise type signatures for many things.

As a consequence of those two,

3. It helps you get much better error messages from the type checker.

And for all that,

4. It's really easy to use.
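
For anyone who hasn't used it, a tiny example of points 1 and 2 (a sketch,
nothing more):

{-# LANGUAGE ScopedTypeVariables #-}

mapPair :: forall a b. (a -> b) -> (a, a) -> (b, b)
mapPair f (x, y) = (g x, g y)
  where
    -- Without the extension this local signature cannot be written at all:
    -- a and b would be read as fresh variables instead of the ones bound by
    -- the forall above.
    g :: a -> b
    g = f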

What do other people think?
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Behavior change of Data.Char

2015-02-20 Thread David Feuer
I don't think so. There's no guarantee that future versions will maintain
it, and I don't know that we want to take responsibility for continually
checking on that.

David

On Feb 20, 2015 Simon Peyton Jones wrote:
> It'd be good to document this condition/invariant in the Haddocks,
wouldn't it?!
>
> Simon
>
> |  -Original Message-
> |  From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of
> |  Herbert Valerio Riedel
> |  Sent: 19 February 2015 10:42
> |  To: Kazu Yamamoto
> |  Cc: librar...@haskell.org; ghc-devs@haskell.org
> |  Subject: Re: Behavior change of Data.Char
> |
> |  On 2015-02-19 at 06:19:18 +0100, Kazu Yamamoto () wrote:
> |  > It seems to me that some characters of GHC 7.10.1RC2 behave
> |  > differently from those of GHC 7.8.4:
> |  >
> |  >                        7.8.4   7.10.1RC2
> |  > isLower (chr 170)      True    False
> |
> |  Fwiw, the motivation for that particular change may be (I'm just
> |  guessing here) to have the following condition hold:
> |
> |\c -> isLower c `implies` (not . isLower . toUpper) c
> |
> |  i.e. if something is 'lower-case', then applying 'toUpper' should
> |  result in a character that is not 'lower-case' anymore. This didn't
> |  hold with 7.8.4's Unicode 6, but now holds with 7.10.1's Unicode 7
> |  definitions.
> |
> |  Cheers,
> |hvr
> |  ___
> |  ghc-devs mailing list
> |  ghc-devs@haskell.org
> |  http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Behavior change of Data.Char

2015-02-18 Thread David Feuer
7.10 uses a newer version of Unicode, which could explain differences.

On Thu, Feb 19, 2015 at 12:19 AM, Kazu Yamamoto  wrote:
> Hi,
>
> It seems to me that some characters of GHC 7.10.1RC2 behave
> differently from those of GHC 7.8.4:
>
>                           7.8.4   7.10.1RC2
> isLower (chr 170)         True    False
> isSymbol (chr 182)        True    False
> isPunctuation (chr 182)   False   True
>
> Is this intentional?
>
> I noticed this because I received a bug report:
>
> https://github.com/kazu-yamamoto/word8/issues/3
>
> As you can see, 167 also behaves differently.
>
> --Kazu
> ___
> Libraries mailing list
> librar...@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Merge FlexibleContexts with FlexibleInstances?

2015-02-05 Thread David Feuer
In my limited experience thus far, it seems to me that a substantial
majority of modules that start out needing one of these end up needing the
other one too. They appear to be two sides of the same coin, each allowing
for (slightly) more powerful termination checking. Should the two just be
made synonyms, to cut down a tiny bit on the boilerplate LANGUAGE pragmas?
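
A small example of the pairing I keep running into (the class is made up):

{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE FlexibleContexts #-}

class Pretty a where
  pretty :: a -> String

-- Needs FlexibleInstances: the head applies the class to (Maybe Char), not
-- to a constructor applied to distinct type variables.
instance Pretty (Maybe Char) where
  pretty = maybe "" (: [])

-- Needs FlexibleContexts: the constraint mentions a non-variable type.
prettyPair :: Pretty (Maybe Char) => Maybe Char -> Maybe Char -> String
prettyPair x y = pretty x ++ " " ++ pretty y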
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: What is the story behind the type of undefined?

2015-02-01 Thread David Feuer
Yes it does. Thanks. For the sake of consistency, I'd rather even have
separate functions with funny-looking types than hidden magic. That
is, we could hypothetically have

undefined# :: forall (a :: #) . a
error# :: forall (a :: #) . String -> a

There's no mechanism in Haskell to create things with such types, but
at least that would make the strange types explicit. Of course, if
someone can do something better, well, better is better.

On Sun, Feb 1, 2015 at 2:07 PM, Adam Gundry  wrote:
> Hi David,
>
> See Note [Error and friends have an "open-tyvar" forall] in MkCore. The
> short answer is that error and undefined are treated magically by GHC:
> the actual type of undefined is
>
> forall (a :: OpenKind) . a
>
> and both * and # are subkinds of OpenKind.
>
> (There is a plan to get rid of this subkinding in favour of normal
> polymorphism, but it hasn't been implemented yet. See
> https://ghc.haskell.org/trac/ghc/wiki/NoSubKinds for more details.)
>
> Hope this helps,
>
> Adam
>
>
> On 01/02/15 18:54, David Feuer wrote:
>> If I define
>>
>> {-# LANGUAGE MagicHash #-}
>>
>> g :: Int# -> Int
>> g 3# = 3
>>
>> myUndefined = undefined
>>
>> then this gives a sensible type error about a kind mismatch:
>>
>> usual :: Int
>> usual = g myUndefined
>>
>> but this, oddly enough, compiles:
>>
>> peculiar :: Int
>> peculiar = g undefined
>>
>> GHCi and the definition in GHC.Error agree that
>>
>> undefined :: a
>>
>> So why am I allowed to use it as a type of kind #?
>
>
>
> --
> Adam Gundry, Haskell Consultant
> Well-Typed LLP, http://www.well-typed.com/
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


What is the story behind the type of undefined?

2015-02-01 Thread David Feuer
If I define

{-# LANGUAGE MagicHash #-}

g :: Int# -> Int
g 3# = 3

myUndefined = undefined

then this gives a sensible type error about a kind mismatch:

usual :: Int
usual = g myUndefined

but this, oddly enough, compiles:

peculiar :: Int
peculiar = g undefined

GHCi and the definition in GHC.Error agree that

undefined :: a

So why am I allowed to use it as a type of kind #?
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: GHC support for the new "record" package

2015-01-26 Thread David Feuer
>> I don't think anyone is suggesting adding any of lens, are they?  Which
>> bits did you think were being suggested for addition?
>>
> I was mostly referring to the use of the (a -> f b) -> s -> f t form.

All right. If nobody's suggesting it, I'll suggest it. Is it really
that evil? Why does it occupy such a strange place off to the side of
the rest of the Haskell ecosystem?

David
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: vectorisation code?

2015-01-19 Thread David Feuer
Richard Eisenberg wrote:
> Here's an alternate suggestion: in SimplCore, keep the call to vectorise
around, but commented out (not just with CPP, for better syntax
highlighting). Include a Note explaining what `vectorise` does and why it's
not there at the moment. However, move the actual vectorisation code
somewhere else in the repo, outside of the source directories (`utils`? a
new `attic` directory?).

I don't know too much about git, but I would think we'd want to remove it
from master and add a commit putting it back in to a dph branch.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Milestones

2015-01-09 Thread David Feuer
I took the liberty of pushing back the milestones for a few tickets
that looked unlikely to be acted on for 7.10.1, based on a combination
of severity, recent activity, and perceived intrusiveness. If anyone
objects, please move them back.

#9314: Each object file in a static archive file (.a) is loaded into
its own mmap()ed page
#8440: Get rid of the remaining static flags
#8634: Relax functional dependency coherence check ("liberal coverage
condition")

David
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: seq#: do we actually need it as a primitive?

2015-01-08 Thread David Feuer
On Thu, Jan 8, 2015 at 8:42 AM, Roman Cheplyaka  wrote:

> Also, where can I find the 'instance Monad IO' as understood by GHC?
> grep didn't find one.

It's in GHC.Base.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


seq#: do we actually need it as a primitive?

2015-01-07 Thread David Feuer
I've read about the inlining issues surrounding
Control.Exception.evaluate that seem to have prompted the creation of
seq#, but I'm still missing something. Isn't `seq# a s` the same as
  let !a' = a in (# s, a' #) ?
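
(In other words, a user-level stand-in with the semantics I have in mind; a
sketch only, not a claim about how the primop is actually implemented:

{-# LANGUAGE MagicHash, UnboxedTuples, BangPatterns #-}
import GHC.Exts (State#)

seqish :: a -> State# s -> (# State# s, a #)
seqish a s = let !a' = a in (# s, a' #)

If that really is equivalent, the primop seems redundant.)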

David
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: The future of the Haskell98 and Haskell2010 packages

2014-11-18 Thread David Feuer
I think you're right, and that's a strong reason to come up with an update
to the Haskell Report. Include in it, at least:

-- Big-ticket items
0. Monoid
1. Foldable, Traversable
2. Applicative
3. Applicative => Monad
-- side notes
4. inits = map reverse . scanl (flip (:)) []  -- efficiency—not optimal but
not hilariously bad
5. unwords = intercalate " "  -- increased, more intuitive laziness

On Tue, Nov 18, 2014 at 11:44 AM, Richard Eisenberg 
wrote:

> I support this direction. But I disagree with one statement you've made:
>
> On Nov 18, 2014, at 11:07 AM, Austin Seipp  wrote:
> > To be clear: GHC can still typecheck, compile, and efficiently execute
> > Haskell 2010 code. It is merely the distribution of compatible
> > packages that has put us in something of a bind.
>
> GHC 7.10 will not be able to compile a Haskell2010-compliant Monad
> instance. In fact, as far as I can see, there is no way to write a Monad
> instance that is both portable to other Haskell compilers and acceptable to
> GHC 7.10. I think this point should be included in the manual (if I'm
> right).
>
> This makes me a little sad, but I don't disagree with any of the decisions
> we've made along the way.
>
> Richard
> ___
> Glasgow-haskell-users mailing list
> glasgow-haskell-us...@haskell.org
> http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


RE: [GHC] #9781: Make list monad operations fuse

2014-11-11 Thread David Feuer
Note also that there are fairly clear reasons that fusion is flakier than
many other optimizations. In particular, it requires the compiler to do
things that seem *weird*, and that in most other cases are just *bad
ideas*. At least, it requires:

1. Inlining things that look large and/or expensive. This is done in the
hope that they will fuse with other things, producing a whole that is
smaller and cheaper than the sum of its parts. It is done with the
knowledge that these things will (in most cases) be written back to smaller
cheaper forms if they don't fuse. That is, *our* knowledge of that—the
compiler has absolutely no way to know what shenanigans we're up to.

2. Refraining from floating things that look like they should be floated.
GHC likes to pull constants and such out because doing so improves sharing
and also improves other analyses. But if it pulls our producer away from
our consumer, they will not fuse. I think Simon's simplifier changes a few
months ago helped with this issue, but I don't know that it is (or can ever
be) resolved completely.
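To make point 1 concrete, here is roughly the shape of the map fusion
framework in GHC.Base (paraphrased from memory, so details may differ from
the real source):

import GHC.Exts (build)

mapFB :: (elt -> lst -> lst) -> (a -> elt) -> a -> lst -> lst
mapFB c f = \x ys -> c (f x) ys
{-# INLINE [0] mapFB #-}

{-# RULES
"map/sketch"     [~1] forall f xs. map f xs = build (\c n -> foldr (mapFB c f) n xs)
"mapList/sketch" [1]  forall f.    foldr (mapFB (:) f) [] = map f
  #-}

The first rule is the "inline something that looks expensive" gamble; the
second writes map back to its cheap direct form if nothing has fused by
phase 1.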
On Nov 11, 2014 11:54 AM, "David Feuer"  wrote:

>
> On Nov 11, 2014 6:04 AM, "Simon Peyton Jones" 
> wrote:
> > It’s true that, particularly for fusion, inlining can make a huge
> difference.  And GHC really does need help… it’s extremely hard for it to
> make the “right” choice all the time.
>
> The inliner does indeed do amazing things, and list fusion does indeed do
> lovely things for user code. It's just not the most *reliable* optimization
> in the compiler. I don't think there's anything wrong with admitting that
> and trying to avoid relying on it too heavily in *library* code. Kim-Ee was
> right that expanding out mapM by hand bloated the source. I've since
> defined `sequence=mapM id` to resolve that problem, and doing so does not
> hurt the benchmarks—it relies only on inlining id (which is quite reliable)
> and beta-reducing (which is also quite reliable).
>
> > I strongly agree with Kim-Ee that we should not play the game of “optimise
> by randomly mutating the program and pick the version that (today) happens
> to run faster”.  But I don’t think David is doing that.   There is, at
> least a Note: [List comprehensions and inlining].
>
> I've been trying to leave a trail of comments and notes as I go. I may
> need to go further.
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


RE: [GHC] #9781: Make list monad operations fuse

2014-11-11 Thread David Feuer
On Nov 11, 2014 6:04 AM, "Simon Peyton Jones"  wrote:
> It’s true that, particularly for fusion, inlining can make a huge
difference.  And GHC really does need help… it’s extremely hard for it to
make the “right” choice all the time.

The inliner does indeed do amazing things, and list fusion does indeed do
lovely things for user code. It's just not the most *reliable* optimization
in the compiler. I don't think there's anything wrong with admitting that
and trying to avoid relying on it too heavily in *library* code. Kim-Ee was
right that expanding out mapM by hand bloated the source. I've since
defined `sequence=mapM id` to resolve that problem, and doing so does not
hurt the benchmarks—it relies only on inlining id (which is quite reliable)
and beta-reducing (which is also quite reliable).

> I strongly agree with Kim-Ee that we should not play the game of “optimise by
randomly mutating the program and pick the version that (today) happens to
run faster”.  But I don’t think David is doing that.   There is, at least a
Note: [List comprehensions and inlining].

I've been trying to leave a trail of comments and notes as I go. I may need
to go further.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: [GHC] #9781: Make list monad operations fuse

2014-11-11 Thread David Feuer
On Nov 11, 2014 3:56 AM, "Kim-Ee Yeoh"  wrote:
>
> From the patch fragment at
>
> https://phabricator.haskell.org/D455?id=1311#inline-3123
>
> What's the justification for expanding out the definition of mapM from
"sequence . map f" into do-notation and duplicated code?
>
> Observe how mapM now duplicates code from sequence.

One good option might be to redefine sequence in terms of mapM.

> The absence of benchmarks is bad enough. What's worse is that the given
excuse boils down to "Pity the poor compiler! Let's take over its work
instead. Like, just in case, you know."

The excuse is that it actually makes a big difference for nofib. Why? I
would have to guess it relates to what inlines when. The inliner is a
finicky beast. In response to your insults, I will say that although GHC
has beautiful ideas in it, a lot of the details of the optimization passes
and how they fit together *are* a bit of a crapshoot, chosen by benchmarks
rather than theory. Theory sometimes comes up behind and explains why the
benchmarks do what they do, but you can't expect every little change to be
backed up by some deep theoretical reason.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Reviving the LTS Discussions (ALT: A separate LTS branch)

2014-11-07 Thread David Feuer
GHC is an open source project. People work on it because

1. They enjoy it and find it interesting,
2. They need it to work well to support their own software,
3. They're trying to write a paper/get a degree/impress their peers, or, in
very rare cases,
4. Someone pays them to do it.
People are also willing to do some kinds of minor maintenance work because
5. They feel a sense of obligation to the community
but this is not likely, on its own, to keep many people active.

What does this have to do with LTS releases? The fact is that having people
who want an LTS release does not necessarily mean that anyone else should
do much of anything about it. If they don't really care, they're likely to
half-build an LTS process and then get sidetracked.

So what do I think should be done about this? I think "GHC headquarters"
should make a standing offer to any person, group, or company interested in
producing an LTS release: an offer of Trac, and Phabricator, and
Harbormaster, and generally all the infrastructure that GHC already uses.
Also an offer of advice on how to manage releases, deal with common issues,
etc. But a promise of programming power seems likely to be an empty one,
and I don't see the point of trying to push it. If someone wants an LTS
release, they need to either make one themselves or pay someone to do the
job.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: RFC: Dropping Windows XP support

2014-11-07 Thread David Feuer
+1. Windows XP was Microsoft's most successful OS thus far, but it's pretty
much dead now. One potentially related potential concern: how will this
change affect Wine support?

On Fri, Nov 7, 2014 at 1:16 PM, Austin Seipp  wrote:

> Hi all,
>
> This is a quick discussion about the current system requirements for
> Windows builds.
>
> Spurred by a recent[1] LLVM discussion, I'd like to raise the question
> of dropping support for Windows XP, and bumping the minimum required
> version to Windows Vista or even Windows 7.
>
> For one, Microsoft doesn't support XP anymore, so most people are
> moving off it anyway. 'Soon' even XP Embedded will be obsoleted.
>
> But second, Vista and beyond introduced useful new APIs we could use.
> I was digging through the LLVM thread and two came out to me:
>
>  1) We could switch to using slim reader/writer locks, which in some
> workloads may work out better than critical sections (they'll win on
> more read-heavy workloads). The downsides is there's no recursive
> locking but we don't use that anyway (and recursive locks are
> considered bad by many anyway[2]).
>
>  2) We could probably use an actual condition variables API that was
> introduced with Vista. Currently we use a giant EVENT object to
> emulate the API, which could be replaced with the real deal.
>
> Both of these could be nice wins for simplicity and performance I think.
>
> I know there are some corporate users out there who this may impact,
> and users as well. I'd like to know what people think. Particularly
> what version we should standardize on.
>
> FWIW, I don't plan on changing any of this until the 7.12 release at least.
>
> [1] http://article.gmane.org/gmane.comp.compilers.llvm.devel/78419
> [2] http://www.zaval.org/resources/library/butenhof1.html
>
> --
> Regards,
>
> Austin Seipp, Haskell Consultant
> Well-Typed LLP, http://www.well-typed.com/
> ___
> Glasgow-haskell-users mailing list
> glasgow-haskell-us...@haskell.org
> http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


RE: thenIO removal

2014-11-03 Thread David Feuer
Simon Peyton Jones wrote:

>
> It's not a big deal.
>
> You can probably replace both those bindIOName uses with bindMName (i.e
> (>>=)), in TcRnDriver.  That will just make GHCi generate code with uses of
> overloaded (>>=) that must be evaluated, rather than calling bindIO
> directly.  It should work just fine, but it'll make everything a tiny bit
> slower and more indirect.  If it simplified the code a lot, then fine, but
> it doesn't really.  So I'm inclined to leave it.
>

That's fine; I don't know why my search didn't turn that up (perhaps it's
referenced in some weird indirect way). If we're going to keep thenIO, we
should surely define (*>) = (>>) = thenIO, right?
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


thenIO removal

2014-11-02 Thread David Feuer
GHC.Base has a function, thenIO, that isn't used anywhere in the libraries
or compiler, and isn't exported anywhere "public". But for some reason,
it's listed in compiler/prelude/PrelNames.lhs, which causes a validation
failure if I remove it. Is there a reason that a completely unused function
is wired in? Is it a historical artifact, or an optimization that was never
completed, or something else? Should I wipe it out of PrelNames, or do we
want to use it for something?

David
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Understanding core2core optimisation pipeline

2014-10-30 Thread David Feuer
On Thu, Oct 30, 2014 Jan Stolarek wrote:

>
> 2. First pass of full laziness is followed by floating in. At that stage
> we have not yet run the
> demand analysis and yet the code that does the floating-in checks whether
> a binder is one-shot
> (FloatIn.okToFloatInside called by FloatIn.fiExpr AnnLam case). This
> suggests that cardinality
> analysis is done earlier (but when?) and that demand analysis is not the
> same thing as
> cardinality analysis.
>

If you're looking at super-recent code, that could be Joachim Breitner's
work. He's exposed the one-shot stuff at the Haskell level with the
experimental magic oneShot function, intended primarily for use in the
libraries to make foldl-as-foldr and related things be analyzed more
reliably. The old GHC arity analysis combined with his Call Arity get
almost everything right, but there are occasional corner cases where things
go wrong, and when they do the results tend to be extremely bad.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Is USE_REPORT_PRELUDE still useful?

2014-10-28 Thread David Feuer
A lot of code in GHC.List and perhaps elsewhere compiles differently
depending on whether USE_REPORT_PRELUDE is defined. Not all code differing
from the Prelude implementation. Furthermore, I don't know to what extent,
if any, such code actually works these days. Some of it certainly was not
usable for *years* because GHC.List did not import GHC.Num. Should we

1. Convert all those code blocks to comments?

2. Go through everything, check it to make sure it's written as in the
Prelude or has an alternative block, and then actually set up all the
infrastructure so that works?

3. Leave it alone?

My general inclination is to go to 1.

I don't *really* like option 3 for four reasons:

a. It leaves untouched code to rot

b. It forces us to run CPP on files that otherwise have no need for it.

c. It interrupts the flow of the code with stuff that *looks* like real
code (and is highlighted as such) but is actually inactive.

d. It's not hard to accidentally move code into or out of the #ifdef blocks.
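
For readers who haven't looked at GHC.List, the blocks in question look
roughly like this (paraphrased, not copied):

{-# LANGUAGE CPP #-}
import Prelude hiding (and)

and :: [Bool] -> Bool
#ifdef USE_REPORT_PRELUDE
and = foldr (&&) True
#else
and []     = True
and (x:xs) = x && and xs
-- (the real GHC version also carries NOINLINE and RULES pragmas for fusion)
#endif

Option 1 would turn the USE_REPORT_PRELUDE branch into an ordinary comment
next to the GHC definition.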
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Call Arity, oneShot or both

2014-10-28 Thread David Feuer
Simon Peyton Jones wrote:

>
> But since it is plausible that there are cases out there where it might
> help, even if just a little, we could go forward, unless the
> implementation becomes ugly.
>

Based on our experience with Call Arity, it's much more likely that it will
help a lot in a few cases than that it will help a little in a lot of cases.

David
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Call Arity, oneShot, or both

2014-10-27 Thread David Feuer
Joachim Breitner wrote:

> That would be great! But do we have evidence of this user-written code
> that benefits? So far I have only seen relevant improvement due to
> list-fusion of a left-foldish function.
>

I was under the impression that the transformation was much more general
than that, improving various recursive forms. Was I wrong? But aside from
that, I would be astonished if the library authors were the only ones
writing left-accumulating folds!

David
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Call Arity, oneShot, or both

2014-10-26 Thread David Feuer
> There is also the option of combining both. Then we do not get the
> regression, but still the improvement for fft2:

I *definitely* think we should leave Call Arity in place by default unless
and until something strictly better comes along. One very nice feature is
that it works for a lot of user-written code of various kinds without the
user having to do *anything* special. oneShot seems more limited in
applicability, for use primarily in library code. So I would personally
think that it should be added, with an option, of course, to turn it off. I
would also go for documenting it as experimental and provisional.

David
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Improving specialization, redux

2014-10-24 Thread David Feuer
I spoke with Simon today, and I think I have a bit of a better idea now of
what's going on with specialization, and why it sometimes fails to
specialize things as much as it could. Apparently, the replacement of (sel
@ type dict) by sel.type is accomplished by the use of a rewrite rule
generated by the specializer when it specializes an overloaded function
using that (or something similar to that, anyway). The trouble is that as
things are currently done, there's no way in general to get the right
replacement to stick on the RHS of the rule when such forms arise in other
contexts. The type checker constructs dictionaries in type checking, but
afterwards we can't produce new ones.
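
(For readers who haven't poked at the specialiser, the user-facing face of
the same mechanism is the SPECIALIZE pragma; a small illustration, with the
generated names only indicative:

double :: Num a => a -> a
double x = x + x
{-# SPECIALIZE double :: Int -> Int #-}

-- The specialiser generates, roughly, a monomorphic copy
--   $sdouble :: Int -> Int
-- plus a Core-level rewrite rule replacing  double @Int $fNumInt  with
-- $sdouble at call sites. The trouble described above is that when a form
-- like (sel @Int dict) turns up later, in some other context, there is no
-- general way to conjure the dictionary needed to build or apply such a
-- rule.)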

The most likely way to solve this is probably to (somehow) decouple
dictionary creation and selection from type checking so as to expose those
facilities to later phases. The concept would be something vaguely like
this: instead of creating rewrite rules, the specializer would instead
check whether the appropriate dictionary already existed, and create it if
not. The simplifier would then check for a dictionary every time it
encountered a class method whose instance is determined. No, I have no idea
how much work such a change would involve.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Avoiding the hazards of orphan instances without dependency problems

2014-10-22 Thread David Feuer
As far as I can tell, all the ideas for "really" solving the problem are
either half-baked ideas, ideas requiring a complete re-conception of
Haskell (offering both ups and downs), or long term lines of research that
will probably get somewhere good some day, but not today. Yes, it would be
great to get a beautiful modular instance system into Haskell, but unless
I'm missing some development, that's not too likely to happen in a year or
three. That's why I think it would be nice to create a system that will
ease some of the pain without limiting further developments.

On Wed, Oct 22, 2014 at 3:59 PM, Jan Stolarek 
wrote:

> These are certainly good points and I'm far from claiming that I have
> solved all the potential
> problems that may arise (if I had I would probably be implementing this
> right now). But I still
> believe that pragmas are not a good solution, while control of imports and
> exports is. Unless the
> problems turn out to be impossible to overcome.
>
> Janek
>
> Dnia środa, 22 października 2014, David Feuer napisał:
> > You're not the first one to come up with this idea (and I don't know who
> > is). Unfortunately, there are some complications. I'm pretty sure there
> are
> > simpler examples than this, but this is what I could think of. Suppose we
> > have
> >
> > module PotatoModule (Root (..), T (..)) where  -- Does not export
> instance
> > Root T
> > class Root t where
> >   cook :: t -> String
> >
> > data T = T
> > data Weird :: * -> * where
> >   Weird :: Root t => t -> Weird t
> >
> > instance Root T where
> >   cook T = "Boil, then eat straight out of the pot."
> >
> > potato :: Weird T
> > potato = Weird T
> >
> > -- --
> >
> > module ParsnipModule where
> > import PotatoModule
> >
> > instance Root T where
> >   cook T = "Slice into wedges or rounds and put in the soup."
> >
> > parsnip :: Weird T
> > parsnip = Weird T
> >
> > mash :: Weird t -> Weird t -> String
> > mash (Weird x) (Weird y) = cook x ++ cook y
> >
> > mush :: String
> > mush = mash potato parsnip
> >
> > -- --
> >
> > OK, so what happens when we compile mash?  Well, we have a bit of a
> > problem! When we mash the potato and the parsnip, the mash function gets
> > access to two different dictionaries for Root T, and two values of type
> T.
> > There is absolutely nothing to indicate whether we should use the
> > dictionary that's "in the air" because Root T has an instance in
> > ParsnipModule, the dictionary that we pull out of parsnip (which is the
> > same), or the dictionary we pull out of potato (which is different). I
> > think inlining and specialization will make things even stranger and less
> > predictable. In particular, the story of what goes on with inlining gets
> > much harder to understand at the Haskell level: if mash and mush are put
> > into a third module, and potato and parsnip are inlined there, that
> becomes
> > a type error, because there's no visible Root T instance there!
> >
> > On Wed, Oct 22, 2014 at 12:56 PM, Jan Stolarek 
> >
> > wrote:
> > > It seems that my previous mail went unnoticed. Perhaps because I didn't
> > > provide enough
> > > justification for my solution. I'll try to make up for that now.
> > >
> > > First of all let's remind ourselves why orphan instances are a problem.
> > > Let's say package A
> > > defines some data types and package B defines some type classes. Now,
> > > package C might make data
> > > types from A instances of type classes from B. Someone who imports C
> will
> > > have these instances in
> > > scope. But since C defines neither the data types nor the type classes
> it
> > > might be surprising for
> > > the user of C that C makes A data types instances of B type classes. So
> > > we issue a warning that
> > > this is potentially dangerous. Of course person implementing C might
> > > suppress these warnings so
> > > the user of C can end up with unexpected instances without knowing
> > > anything.
> > >
> > > I feel that devising some sort of pragmas to define which orphan
> > > instances are allowed does not
> > > address the heart of the problem. And the heart of the problem is that
> we
> > > can't control importing
> > > and exporting of instances. Pragmas are just a workaround, not a real

Re: Avoiding the hazards of orphan instances without dependency problems

2014-10-22 Thread David Feuer
You're not the first one to come up with this idea (and I don't know who
is). Unfortunately, there are some complications. I'm pretty sure there are
simpler examples than this, but this is what I could think of. Suppose we
have

module PotatoModule (Root (..), T (..)) where  -- Does not export instance
Root T
class Root t where
  cook :: t -> String

data T = T
data Weird :: * -> * where
  Weird :: Root t => t -> Weird t

instance Root T where
  cook T = "Boil, then eat straight out of the pot."

potato :: Weird T
potato = Weird T

-- --

module ParsnipModule where
import PotatoModule

instance Root T where
  cook T = "Slice into wedges or rounds and put in the soup."

parsnip :: Weird T
parsnip = Weird T

mash :: Weird t -> Weird t -> String
mash (Weird x) (Weird y) = cook x ++ cook y

mush :: String
mush = mash potato parsnip

-- --

OK, so what happens when we compile mash?  Well, we have a bit of a
problem! When we mash the potato and the parsnip, the mash function gets
access to two different dictionaries for Root T, and two values of type T.
There is absolutely nothing to indicate whether we should use the
dictionary that's "in the air" because Root T has an instance in
ParsnipModule, the dictionary that we pull out of parsnip (which is the
same), or the dictionary we pull out of potato (which is different). I
think inlining and specialization will make things even stranger and less
predictable. In particular, the story of what goes on with inlining gets
much harder to understand at the Haskell level: if mash and mush are put
into a third module, and potato and parsnip are inlined there, that becomes
a type error, because there's no visible Root T instance there!

On Wed, Oct 22, 2014 at 12:56 PM, Jan Stolarek 
wrote:

> It seems that my previous mail went unnoticed. Perhaps because I didn't
> provide enough
> justification for my solution. I'll try to make up for that now.
>
> First of all let's remind ourselves why orphan instances are a problem.
> Let's say package A
> defines some data types and package B defines some type classes. Now,
> package C might make data
> types from A instances of type classes from B. Someone who imports C will
> have these instances in
> scope. But since C defines neither the data types nor the type classes it
> might be surprising for
> the user of C that C makes A data types instances of B type classes. So we
> issue a warning that
> this is potentially dangerous. Of course person implementing C might
> suppress these warnings so
> the user of C can end up with unexpected instances without knowing
> anything.
>
> I feel that devising some sort of pragmas to define which orphan instances
> are allowed does not
> address the heart of the problem. And the heart of the problem is that we
> can't control importing
> and exporting of instances. Pragmas are just a workaround, not a real
> solution. It would be much
> better if we could just write this (warning, half-baked idea ahead):
>
>   module BazModule ( instance Bar Foo ) where
>
>   import FooModule (Foo (...)) -- import Foo data type from FooModule
>   import BarModule (class Bar) -- import class Bar from BazModule
>
>   instance Bar Foo ...
>
> And then someone importing BazModule can decide to import the instance:
>
>  module User where
>  import FooModule (Foo(..))
>  import BarModule (class Bar)
>  import BazModule (instance Bar Foo)
>
> Of course requiring that classes and instances are exported and imported
> just like everything else
> would be a backwards incompatible change and would therefore require
> effort similar to AMP
> proposal, ie. first release GHC version that warns about upcoming change
> and only enforce the
> change some time later.
>
> Janek
>
> Dnia wtorek, 21 października 2014, RodLogic napisał:
> > One other benefit of multiple files to use a single module name is that
> it
> > would be easy to separate testing code from real code even when testing
> > internal/non-exported functions.
> >
> > On Tue, Oct 21, 2014 at 1:22 PM, John Lato  wrote:
> > > Perhaps you misunderstood my proposal if you think it would prevent
> > > anyone else from defining instances of those classes?  Part of the
> > > proposal was also adding support to the compiler to allow for a
> multiple
> > > files to use a single module name.  That may be a larger technical
> > > challenge, but I think it's achievable.
> > >
> > > I think one key difference is that my proposal puts the onus on class
> > > implementors, and David's puts the onus on datatype implementors, so
> they
> > > certainly are complementary and cou

Re: Avoiding the hazards of orphan instances without dependency problems

2014-10-21 Thread David Feuer
On Oct 21, 2014 1:22 PM, "John Lato"  wrote:
>
> Perhaps you misunderstood my proposal if you think it would prevent
anyone else from defining instances of those classes?  Part of the proposal
was also adding support to the compiler to allow for a multiple files to
use a single module name.  That may be a larger technical challenge, but I
think it's achievable.

You are right; I definitely did not realize this. What happens when files
using the same module name both define instances for the same class and
type(s)? I don't know nearly enough about how these things work to know if
there's a nice way to catch this. Could you explain a bit more about how it
would work? Also, what exactly would be in scope in each of these? Would
adding a file to the module necessitate recompilation of everything
depending on it?

> I think one key difference is that my proposal puts the onus on class
implementors, and David's puts the onus on datatype implementors, so they
certainly are complementary and could co-exist.

Mine puts the onus on either, actually, to support both the pattern of a
maintainer maintaining a class with instances and of one maintaining a type
with instances. To a certain extent these could even be mixed. For example,
a module in base could delegate a number of instances of a certain class,
but we wouldn't want pragmas relating to Hackagy types in there.

One nice thing about my approach is that any program that's correct *with*
the pragma is also correct *without* it—it's entirely negative. In
particular, if someone should come up with a broader/better/ultimate
solution to the orphan instance problem, the pragma could just go away
without breaking anything. Something using multiple files to define one
module inherently requires more support from the future.


Re: Avoiding the hazards of orphan instances without dependency problems

2014-10-21 Thread David Feuer
As I said before, it still doesn't solve the problem I'm trying to solve.
Look at a package like criterion, for example. criterion depends on aeson.
Why? Because statistics depends on it. Why? Because statistics wants a
couple types it defines to be instances of classes defined in aeson. John
Lato's proposal would require the pragma to appear in the relevant aeson
module, and would prevent *anyone* else from defining instances of those
classes. With my proposal, statistics would be able to declare

{-# InstanceIn Statistics.AesonInstances AesonModule.AesonClass
StatisticsType #-}

Then it would split the Statistics.AesonInstances module off into a
statistics-aeson package and accomplish its objective without stepping on
anyone else. We'd get a lot more (mostly tiny) packages, but in exchange
the dependencies would get much thinner.
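
To make that concrete (the module, type, and instance below are invented purely
for illustration, and the pragma is of course only my proposed syntax, not
something GHC accepts today), the split might look like:

-- In the statistics package, next to the type:
module Statistics.Example where

data Summary = Summary [Double]

{-# InstanceIn Statistics.AesonInstances Data.Aeson.ToJSON Summary #-}

-- In the new, tiny statistics-aeson package:
module Statistics.AesonInstances where

import Data.Aeson (ToJSON (..))
import Statistics.Example (Summary (..))

instance ToJSON Summary where
  toJSON (Summary xs) = toJSON xs
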
On Oct 21, 2014 11:52 AM, "Stephen Paul Weber" 
wrote:

> Somebody claiming to be John Lato wrote:
>
>> Thinking about this, I came to a slightly different scheme.  What if we
>> instead add a pragma:
>>
>> {-# OrphanModule ClassName ModuleName #-}
>>
>
> I really like this.  It solve all the real orphan instance cases I've had
> in my libraries.
>
> --
> Stephen Paul Weber, @singpolyma
> See  for how I prefer to be contacted
> edition right joseph
>


Re: Help understanding Specialise.lhs

2014-10-20 Thread David Feuer
I'll let you know as soon as I can what times I'm available
Thursday/Friday. I don't know that the pattern I describe is common (now),
but it's a straightforward application of constraints on GADT constructors.
Whether people *like* such constraints is another question—there seem to be
good reasons to use them and good reasons not to use them. At the moment,
the lack of specialization is a good reason not to.

You'll see the same thing if you look at the Core for the code down below
the line. By the way, I tried experimentally adding {-# SPECIALIZE eval ::
Expr Int -> Int #-} and got a warning about the pragma being used on a
non-overloaded function. In theory, the function is not overloaded, but in
practice it effectively is; I would hope to be able to do that and get a
specialized version like this:

  evalInt :: Expr Int -> Int
  evalInt (N n) = n
  -- No B case, because Int is not Bool
  evalInt (Add a b) = evalNum a `+.Int` evalNum b -- Specialized addition
  evalInt (Mul a b) = evalNum a `*.Int` evalNum b -- Specialized
multiplication
  -- No EqNum case, because Int is not Bool

-- --

{-# LANGUAGE GADTs #-}
module Calc (checkInt, eval) where


data Expr a where
  N :: Num n => n -> Expr n
  B :: Bool -> Expr Bool
  Add :: Num n => Expr n -> Expr n -> Expr n
  Mul :: Num n => Expr n -> Expr n -> Expr n
  EqNum  :: (Num e, Eq e) => Expr e -> Expr e -> Expr Bool

infixl 6 `Add`
infixl 7 `Mul`
infix 4 `EqNum`

eval :: Expr a -> a
eval (N n) = n
eval (B b) = b
eval (Add a b) = evalNum a + evalNum b
eval (Mul a b) = evalNum a * evalNum b
eval (EqNum a b) = evalNum a == evalNum b

{-# SPECIALIZE evalNum :: Expr Int -> Int #-}
evalNum :: Num a => Expr a -> a
evalNum (N n) = n
evalNum (Add a b) = evalNum a + evalNum b
evalNum (Mul a b) = evalNum a * evalNum b

{-# SPECIALIZE check :: Int -> Int -> Int -> Bool #-}
check :: (Eq n, Num n) => n -> n -> n -> Bool
check x y z = eval $ N x `Add` N y `Mul` N z `EqNum` N z `Mul` N y `Add` N x

checkInt :: Int -> Int -> Int -> Bool
checkInt x y z = check x y z


On Mon, Oct 20, 2014 at 12:11 PM, Simon Peyton Jones 
wrote:

>  David
>
>
>
> If you want to suggest a couple of possible alternative 20-min slots in
> work time (London time zone), not Mon-Weds this week, then maybe we can
> find a mutually convenient time.
>
>
>
> Do you have reason to suppose that the pattern you describe below is
> common?  That is, if implemented, would it make a big difference to
> programs we care about?
>
>
>
> Simon
>
>
>
> *From:* David Feuer [mailto:david.fe...@gmail.com]
> *Sent:* 20 October 2014 13:58
> *To:* Simon Peyton Jones
> *Cc:* ghc-devs
> *Subject:* Re: Help understanding Specialise.lhs
>
>
>
> To be super-clear about at least one aspect: I don't want Tidy Core to
> ever contain something that looks like this:
>
> GADTTest.potato
>   :: GHC.Types.Int -> GADTTest.Silly GHC.Types.Int -> GHC.Types.Int
> GADTTest.potato =
>   \ (x_asZ :: GHC.Types.Int)
> (ds_dPR :: GADTTest.Silly GHC.Types.Int) ->
> case ds_dPR of _ { GADTTest.Silly $dNum_aLV ds1_dPS ->
> GHC.Num.+ @ GHC.Types.Int $dNum_aLV x_asZ x_asZ
> }
>
> Here we see GHC.Num.+ applied to GHC.Types.Int and $dNum_aLV.  We
> therefore know that $dNum_aLV must be GHC.Num.$fNumInt, so GHC.Num.+ can
> eat these arguments and produce GHC.Num.$fNumInt_$c+. But for some reason,
> GHC fails to recognize and exploit this fact! I would like help
> understanding why that is, and what I can do to fix it.
>
>
>
> On Mon, Oct 20, 2014 at 7:53 AM, David Feuer 
> wrote:
>
> On Oct 20, 2014 5:05 AM, "Simon Peyton Jones" 
> wrote:
> > I’m unclear what you are trying to achieve with #9701.  I urge you to
> write a clear specification that we all agree about before burning cycles
> hacking code.
>
> What I'm trying to achieve is to make specialization work in a situation
> where it currently does not. It appears that when the type checker
> determines that a GADT carries a certain dictionary, the specializer
> happily uses it *even once the concrete type is completely known*. What we
> would want to do in that case is to replace the use of the GADT-carried
> dictionary with a use of the known dictionary for that type.
>
> > There are a lot of comments at the top of Specialise.lhs.  But it is,
> I’m afraid, a tricky pass.  I could skype.
>
> I would appreciate that. What day/time are you available?
>
>
>


Re: Help understanding Specialise.lhs

2014-10-20 Thread David Feuer
To be super-clear about at least one aspect: I don't want Tidy Core to ever
contain something that looks like this:

GADTTest.potato
  :: GHC.Types.Int -> GADTTest.Silly GHC.Types.Int -> GHC.Types.Int
GADTTest.potato =
  \ (x_asZ :: GHC.Types.Int)
(ds_dPR :: GADTTest.Silly GHC.Types.Int) ->
case ds_dPR of _ { GADTTest.Silly $dNum_aLV ds1_dPS ->
GHC.Num.+ @ GHC.Types.Int $dNum_aLV x_asZ x_asZ
}

Here we see GHC.Num.+ applied to GHC.Types.Int and $dNum_aLV.  We therefore
know that $dNum_aLV must be GHC.Num.$fNumInt, so GHC.Num.+ can eat these
arguments and produce GHC.Num.$fNumInt_$c+. But for some reason, GHC fails
to recognize and exploit this fact! I would like help understanding why
that is, and what I can do to fix it.
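
For anyone wanting to reproduce this, a minimal program that yields Core of
this shape is something like the following (a reconstruction on my part, so
the exact original may differ in details):

{-# LANGUAGE GADTs #-}
module GADTTest where

data Silly a where
  Silly :: Num a => a -> Silly a

potato :: Int -> Silly Int -> Int
potato x (Silly _) = x + x  -- (+) is resolved with the Num dictionary carried by Silly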

On Mon, Oct 20, 2014 at 7:53 AM, David Feuer  wrote:

> On Oct 20, 2014 5:05 AM, "Simon Peyton Jones" 
> wrote:
> > I’m unclear what you are trying to achieve with #9701.  I urge you to
> write a clear specification that we all agree about before burning cycles
> hacking code.
>
> What I'm trying to achieve is to make specialization work in a situation
> where it currently does not. It appears that when the type checker
> determines that a GADT carries a certain dictionary, the specializer
> happily uses it *even once the concrete type is completely known*. What we
> would want to do in that case is to replace the use of the GADT-carried
> dictionary with a use of the known dictionary for that type.
>
> > There are a lot of comments at the top of Specialise.lhs.  But it is,
> I’m afraid, a tricky pass.  I could skype.
>
> I would appreciate that. What day/time are you available?
>


RE: Help understanding Specialise.lhs

2014-10-20 Thread David Feuer
On Oct 20, 2014 5:05 AM, "Simon Peyton Jones"  wrote:
> I’m unclear what you are trying to achieve with #9701.  I urge you to
write a clear specification that we all agree about before burning cycles
hacking code.

What I'm trying to achieve is to make specialization work in a situation
where it currently does not. It appears that when the type checker
determines that a GADT carries a certain dictionary, the specializer
happily uses it *even once the concrete type is completely known*. What we
would want to do in that case is to replace the use of the GADT-carried
dictionary with a use of the known dictionary for that type.

> There are a lot of comments at the top of Specialise.lhs.  But it is, I’m
afraid, a tricky pass.  I could skype.

I would appreciate that. What day/time are you available?


Re: Avoiding the hazards of orphan instances without dependency problems

2014-10-19 Thread David Feuer
OK, so first off, I don't have anything against your pragma; I just think
that something akin to mine would be good to have too. Mine was not
intended to require both class and type to be in scope; if one of them is
not, then it should be given its full name:

{-# InstanceIn Module Foo.Class Type #-}
{-# InstanceIn Module Class Bar.Type #-}

As Edward Kmett explained to me, there are reasons for module authors not
to want to include instances for lens stuff—in particular, they apparently
tend to use a lot of non-portable code, but even aside from that, they may
just not want to have to deal with maintaining that particular code. This
leads to a slew of instances being dumped into lens modules, forcing the
lens package to depend on a bunch of others. What I'm suggesting is that
sticking {-# InstanceIn Data.Text.Lens Strict Data.Text.Lazy.Text
Data.Text.Text #-} into Control.Lens.Iso (and so on) would allow
Data.Text.Lens to be broken off into a separate package, removing the text
dependency from lens.

Note also: I described a way to (try to) support overlapping instances for
mine, but I think it would be valuable to offer mine even without that
feature (dropping the context stuff), if it's just too complex.

On Sun, Oct 19, 2014 at 9:43 PM, John Lato  wrote:

> I fail to see how this doesn't help lens, unless we're assuming no buy-in
> from class declarations.  Also, your approach would require c*n pragmas to
> be declared, whereas mine only requires c.  Also your method seems to
> require having both the class and type in scope, in which case one could
> simply declare the instance in that module anyway.
>
> On Mon, Oct 20, 2014 at 9:29 AM, David Feuer 
> wrote:
>
>> I don't think your approach is flexible enough to accomplish the purpose.
>> For example, it does almost nothing to help lens. Even my approach should,
>> arguably, be extended transitively, allowing the named module to delegate
>> that authority, but such an extension could easily be put off till later.
>> On Oct 19, 2014 7:17 PM, "John Lato"  wrote:
>>
>>> Thinking about this, I came to a slightly different scheme.  What if we
>>> instead add a pragma:
>>>
>>> {-# OrphanModule ClassName ModuleName #-}
>>>
>>> and furthermore require that, if OrphanModule is specified, all
>>> instances can *only* appear in the module where the class is defined, the
>>> involved types are defined, or the given OrphanModule?  We would also need
>>> to add support for the compiler to understand that multiple modules may
>>> appear under the same name, which might be a bit tricky to implement, but I
>>> think it's feasible (perhaps in a restricted manner).
>>>
>>> I think I'd prefer this when implementing orphan instances, and probably
>>> when writing the pragmas as well.
>>>
>>> On Mon, Oct 20, 2014 at 1:02 AM, David Feuer 
>>> wrote:
>>>
>>>> Orphan instances are bad. The standard approach to avoiding the orphan
>>>> hazard is to always put an instance declaration in the module that declares
>>>> the type or the one that declares the class. Unfortunately, this forces
>>>> packages like lens to have an ungodly number of dependencies. Yesterday, I
>>>> had a simple germ of an idea for solving this (fairly narrow) problem, at
>>>> least in some cases: allow a programmer to declare where an instance
>>>> declaration must be. I have no sense of sane syntax, but the rough idea is:
>>>>
>>>> {-# InstanceIn NamedModule [Context =>] C1 T1 [T2 ...] #-}
>>>>
>>>> This pragma would appear in a module declaring a class or type. The
>>>> named module would not have to be available, either now or ever, but
>>>> attempting to declare such an instance in any module *other* than the named
>>>> one would be an error by default, with a flag
>>>> -XAllowForbiddenInstancesAndInviteNasalDemons to turn it off. The optional
>>>> context allows multiple such pragmas to appear in the type/class-declaring
>>>> modules, to allow overlapping instances (all of them declared in advance).
>>>>
>>>> ___
>>>> ghc-devs mailing list
>>>> ghc-devs@haskell.org
>>>> http://www.haskell.org/mailman/listinfo/ghc-devs
>>>>
>>>>
>>>
>


Help understanding Specialise.lhs

2014-10-19 Thread David Feuer
I'm trying to figure out how to address #9701, but I'm having an awfully
hard time figuring out what's going on in Specialise.lhs. I think I get the
vague general idea of what it's supposed to do, based on the notes, but the
actual code is a mystery to me. Is there anyone who might be able to help
me get enough of a sense of it to let me do what I need? Many thanks in
advance.

David Feuer


Re: Avoiding the hazards of orphan instances without dependency problems

2014-10-19 Thread David Feuer
I don't think your approach is flexible enough to accomplish the purpose.
For example, it does almost nothing to help lens. Even my approach should,
arguably, be extended transitively, allowing the named module to delegate
that authority, but such an extension could easily be put off till later.
On Oct 19, 2014 7:17 PM, "John Lato"  wrote:

> Thinking about this, I came to a slightly different scheme.  What if we
> instead add a pragma:
>
> {-# OrphanModule ClassName ModuleName #-}
>
> and furthermore require that, if OrphanModule is specified, all instances
> can *only* appear in the module where the class is defined, the involved
> types are defined, or the given OrphanModule?  We would also need to add
> support for the compiler to understand that multiple modules may appear
> under the same name, which might be a bit tricky to implement, but I think
> it's feasible (perhaps in a restricted manner).
>
> I think I'd prefer this when implementing orphan instances, and probably
> when writing the pragmas as well.
>
> On Mon, Oct 20, 2014 at 1:02 AM, David Feuer 
> wrote:
>
>> Orphan instances are bad. The standard approach to avoiding the orphan
>> hazard is to always put an instance declaration in the module that declares
>> the type or the one that declares the class. Unfortunately, this forces
>> packages like lens to have an ungodly number of dependencies. Yesterday, I
>> had a simple germ of an idea for solving this (fairly narrow) problem, at
>> least in some cases: allow a programmer to declare where an instance
>> declaration must be. I have no sense of sane syntax, but the rough idea is:
>>
>> {-# InstanceIn NamedModule [Context =>] C1 T1 [T2 ...] #-}
>>
>> This pragma would appear in a module declaring a class or type. The named
>> module would not have to be available, either now or ever, but attempting
>> to declare such an instance in any module *other* than the named one would
>> be an error by default, with a flag
>> -XAllowForbiddenInstancesAndInviteNasalDemons to turn it off. The optional
>> context allows multiple such pragmas to appear in the type/class-declaring
>> modules, to allow overlapping instances (all of them declared in advance).
>>
>> ___
>> ghc-devs mailing list
>> ghc-devs@haskell.org
>> http://www.haskell.org/mailman/listinfo/ghc-devs
>>
>>
>


Re: Avoiding the hazards of orphan instances without dependency problems

2014-10-19 Thread David Feuer
Although they have the same nasal-demon-inducing effects,
IncoherentInstances and AllowForbiddenInstances would turn off errors that
result from distinct situations. It's possible that one might want to play
with forbidden instances in development, keeping the standard coherence
checks in place, and then modify an imported module later.
On Oct 19, 2014 1:05 PM, "Brandon Allbery"  wrote:

> On Sun, Oct 19, 2014 at 1:02 PM, David Feuer 
> wrote:
>
>> with a flag -XAllowForbiddenInstancesAndInviteNasalDemons
>>
>
> One could argue this is spelled -XIncoherentInstances
>
> --
> brandon s allbery kf8nh   sine nomine
> associates
> allber...@gmail.com
> ballb...@sinenomine.net
> unix, openafs, kerberos, infrastructure, xmonad
> http://sinenomine.net
>


Avoiding the hazards of orphan instances without dependency problems

2014-10-19 Thread David Feuer
Orphan instances are bad. The standard approach to avoiding the orphan
hazard is to always put an instance declaration in the module that declares
the type or the one that declares the class. Unfortunately, this forces
packages like lens to have an ungodly number of dependencies. Yesterday, I
had a simple germ of an idea for solving this (fairly narrow) problem, at
least in some cases: allow a programmer to declare where an instance
declaration must be. I have no sense of sane syntax, but the rough idea is:

{-# InstanceIn NamedModule [Context =>] C1 T1 [T2 ...] #-}

This pragma would appear in a module declaring a class or type. The named
module would not have to be available, either now or ever, but attempting
to declare such an instance in any module *other* than the named one would
be an error by default, with a flag
-XAllowForbiddenInstancesAndInviteNasalDemons to turn it off. The optional
context allows multiple such pragmas to appear in the type/class-declaring
modules, to allow overlapping instances (all of them declared in advance).


T3064 failures

2014-10-16 Thread David Feuer
I don't know what's going on, but T3064 is giving some substantial
performance trouble, making all the validations fail:

max_bytes_used value is too high:
Expected    T3064(normal) max_bytes_used: 13251728 +/-20%
Lower bound T3064(normal) max_bytes_used: 10601382
Upper bound T3064(normal) max_bytes_used: 15902074
Actual      T3064(normal) max_bytes_used: 17472560
Deviation   T3064(normal) max_bytes_used: 31.9 %
bytes allocated value is too high:
Expected    T3064(normal) bytes allocated:  385145080 +/-5%
Lower bound T3064(normal) bytes allocated:  365887826
Upper bound T3064(normal) bytes allocated:  404402334
Actual      T3064(normal) bytes allocated: 1372924224
Deviation   T3064(normal) bytes allocated:  256.5 %

I don't know what's allocating an extra gigabyte, but unless there's a good
reason, someone probably needs to tighten something up.


Re: oneShot (was Re: FoldrW/buildW issues)

2014-10-07 Thread David Feuer
Yes, and it does a very good job in many cases. In other cases, it's
not as good.

On Tue, Oct 7, 2014 at 7:59 AM, Sophie Taylor  wrote:
> Wait, isn't call arity analysis meant to do this by itself now?
>
> On 7 October 2014 17:05, David Feuer  wrote:
>>
>> Just for the heck of it, I tried out an implementation of scanl using
>> Joachim Breitner's magical oneShot primitive. Using the test
>>
>> scanlA :: (b -> a -> b) -> b -> [a] -> [b]
>> scanlA f a bs = build $ \c n ->
>> a `c`
>> foldr (\b g x -> let b' = f x b in (b' `c` g b'))
>>   (const n)
>>   bs
>>   a
>>
>> scanlB :: (b -> a -> b) -> b -> [a] -> [b]
>> scanlB f a bs = build $ \c n ->
>> a `c`
>> foldr (\b g -> oneShot (\x -> let b' = f x b in (b' `c` g b')))
>>   (const n)
>>   bs
>>   a
>>
>> f :: Int -> Bool
>> f 0 = True
>> f 1 = False
>> {-# NOINLINE f #-}
>>
>> barA = scanlA (+) 0 . filter f
>> barB = scanlB (+) 0 . filter f
>>
>>
>> with -O2 (NOT disabling Call Arity) the Core from barB is really,
>> really beautiful: it's small, there are no lets or local lambdas, and
>> everything is completely unboxed. This is much better than the result
>> of barA, which has a local let, and which doesn't seem to manage to
>> unbox anything. It looks to me like this could be a pretty good tool
>> to have around. It certainly has its limits—it doesn't do anything
>> nice with reverse . reverse  or  reverse . scanl f b . reverse, but it
>> doesn't need to be perfect to be useful. More evaluation, of course,
>> is necessary to make sure it doesn't go wrong when used sanely.
>>
>> David
>> ___
>> ghc-devs mailing list
>> ghc-devs@haskell.org
>> http://www.haskell.org/mailman/listinfo/ghc-devs
>
>


oneShot (was Re: FoldrW/buildW issues)

2014-10-07 Thread David Feuer
Just for the heck of it, I tried out an implementation of scanl using
Joachim Breitner's magical oneShot primitive. Using the test

scanlA :: (b -> a -> b) -> b -> [a] -> [b]
scanlA f a bs = build $ \c n ->
    a `c`
    foldr (\b g x -> let b' = f x b in (b' `c` g b'))
          (const n)
          bs
          a

scanlB :: (b -> a -> b) -> b -> [a] -> [b]
scanlB f a bs = build $ \c n ->
    a `c`
    foldr (\b g -> oneShot (\x -> let b' = f x b in (b' `c` g b')))
          (const n)
          bs
          a

f :: Int -> Bool
f 0 = True
f 1 = False
{-# NOINLINE f #-}

barA = scanlA (+) 0 . filter f
barB = scanlB (+) 0 . filter f


with -O2 (NOT disabling Call Arity) the Core from barB is really,
really beautiful: it's small, there are no lets or local lambdas, and
everything is completely unboxed. This is much better than the result
of barA, which has a local let, and which doesn't seem to manage to
unbox anything. It looks to me like this could be a pretty good tool
to have around. It certainly has its limits—it doesn't do anything
nice with reverse . reverse  or  reverse . scanl f b . reverse, but it
doesn't need to be perfect to be useful. More evaluation, of course,
is necessary to make sure it doesn't go wrong when used sanely.

David


Re: dropWhileEndLE breakage

2014-10-02 Thread David Feuer
Simon Peyton Jones asked
> What's going on here?   No other library module defines this function, except 
> in Cabal!
> Simon

That was my fault; I'm very sorry. I had added that function (similar
to Data.List.dropWhileEnd, but not the same) to
compiler/utils/Util.lhs and to another module that used it, and then
forgot it was not available in libraries/base/GHC/.  Since neither
Phab nor I run Windows, there was little hope of catching the mistake
before it went out. I believe Joachim Breitner has fixed the problem
now by using Data.List.dropWhileEnd to construct that error
message—the difference in behavior doesn't matter there.

David Feuer


Re: FoldrW/buildW issues

2014-09-24 Thread David Feuer
On Sep 12, 2014 2:35 PM, "Joachim Breitner" 
wrote:
> I once experimented with a magic "oneShot :: (a -> b) -> (a -> b)"
> function, semantically the identity, but tell the compiler not to share
> the result of the computation. Using that in the definition of
> foldl-as-foldr, one can get the same effect as Call Arity, but a bit
> more reliable. I need to investigate if that solves the sumConcatInits
> problem.

One nice thing about this idea (which sounds like it must be related to the
"state hack", but is more explicit) is that it presumably applies also to
similar situations in the State and ST monads, when a state transformer is
only used once. Could you explain, perhaps, what compiler transformation
this enables, and how you implemented it? It would be nice if the compiler
could figure this out for itself, but I'm sure that's much too big a
project for 7.10, whereas making sure foldl, scanl, mapAccumL, foldM, sum,
etc., work really reliably seems important. And yes, I do think such a
thing would probably work for scanl and such, assuming GHC's analysis can
use the information properly—they're just state-transforming list producers.
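
For concreteness, my guess at where the oneShot goes (purely a guess on my
part, not necessarily your actual definition) is something like

foldl :: (b -> a -> b) -> b -> [a] -> b
foldl k z0 xs = foldr (\v fn -> oneShot (\z -> fn (k z v))) id xs z0

so that each \z lambda is marked one-shot and GHC can eta-expand and float
work into it without worrying about losing sharing.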

David


Re: ghc-devs Digest, Vol 133, Issue 40

2014-09-23 Thread David Feuer
Simon Peyton Jones wrote:

> anywhere, I think.  You might want a new HsSyn data type for "list with
> possible leading or trailing commas":
>
>   data HsCommadList a
> = HCL
>  Int -- Number of leading commas
>  [a]
>  Int -- Number of trailing commas
>

If we're going to go to the trouble of supporting extra leading and
trailing commas, why not also support extra commas in the middle? That way,

#include "theseexports"
,
#include "thoseexports"

doesn't have to worry about how these and those exports are listed.
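
One shape that would cover that (just a sketch of the idea, not a worked-out
design) would be to record a comma count after every element instead of only
at the two ends:

data HsCommadList a
  = HCL
      Int         -- number of commas before the first element
      [(a, Int)]  -- each element with the number of commas following it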

David


Re: FFI library error building GHC

2014-09-20 Thread David Feuer
Here's what I could find:

[dfeuer@lemur libffi]$ find . -name "*\.a"
./build/inst/lib64/libffi.a
./build/x86_64-unknown-linux-gnu/.libs/libffi.a
./build/x86_64-unknown-linux-gnu/.libs/libffi_convenience.a

On Sat, Sep 20, 2014 at 6:15 AM, Moritz Angermann 
wrote:

> Could you check if you had any other libffi.a or (any other .a) file for
> that matter in the
> libffi folder?  I have the suspicion that the new libffi builds the
> archives for some platforms into different target folders, as someone on
> irc mentioned that for nix, there was a lib64 folder or so, and hence the
> libffi.a was in libffi/build/inst/lib64/libffi.a iirc.
>
> On Sep 20, 2014, at 8:13 AM, David Feuer  wrote:
>
> > I keep getting this error. Can anyone help? I tried removing the file as
> suggested, but it made no difference.
> >
> > "/home/dfeuer/GHC/7.8.3.bin/bin/ghc" -o
> utils/genapply/dist/build/tmp/genapply -hisuf hi -osuf  o -hcsuf hc
> -static  -O -H64m -package pretty -package-db libraries/bootstrapping.conf
>  -i -iutils/genapply/. -iutils/genapply/dist/build
> -iutils/genapply/dist/build/autogen -Iutils/genapply/dist/build
> -Iutils/genapply/dist/build/autogen -no-user-package-db -rtsopts
>   -odir utils/genapply/dist/build -hidir utils/genapply/dist/build -stubdir
> utils/genapply/dist/build-static  -O -H64m -package pretty -package-db
> libraries/bootstrapping.conf   -i -iutils/genapply/.
> -iutils/genapply/dist/build -iutils/genapply/dist/build/autogen
> -Iutils/genapply/dist/build -Iutils/genapply/dist/build/autogen
>  -no-user-package-db -rtsopts  utils/genapply/dist/build/GenApply.o
> > libffi/stamp.ffi.static-shared.install exists, but
> libffi/build/inst/lib/libffi.a does not.
> > Suggest removing libffi/stamp.ffi.static-shared.install.
> > make[1]: *** [libffi/build/inst/lib/libffi.a] Error 1
> > make[1]: *** Waiting for unfinished jobs
> > make: *** [all] Error 2
> >
> > ___
> > ghc-devs mailing list
> > ghc-devs@haskell.org
> > http://www.haskell.org/mailman/listinfo/ghc-devs
>
> —
> Moritz Angermann
> +49 170 54 33 0 74
> mor...@lichtzwerge.de
>
> lichtzwerge GmbH
> Freisinger Landstr. 25
> 85748 Garching b. München
>
> Amtsgericht München HRB 207882
> Geschäftsführung: Moritz Angermann, Ralf Sangl
> USt-Id: DE291948767
>
> This e-mail may contain confidential and/or privileged information.
> If you are not the intended recipient (or have received this e-mail in
> error) please notify the sender immediately and destroy this e-mail.
> Any unauthorized copying, disclosure or distribution of the material in
> this e-mail is strictly forbidden.
>
>


FFI library error building GHC

2014-09-19 Thread David Feuer
I keep getting this error. Can anyone help? I tried removing the file as
suggested, but it made no difference.

"/home/dfeuer/GHC/7.8.3.bin/bin/ghc" -o
utils/genapply/dist/build/tmp/genapply -hisuf hi -osuf  o -hcsuf hc
-static  -O -H64m -package pretty -package-db
libraries/bootstrapping.conf   -i -iutils/genapply/.
-iutils/genapply/dist/build -iutils/genapply/dist/build/autogen
-Iutils/genapply/dist/build -Iutils/genapply/dist/build/autogen
-no-user-package-db -rtsopts  -odir utils/genapply/dist/build -hidir
utils/genapply/dist/build -stubdir utils/genapply/dist/build-static  -O
-H64m -package pretty -package-db libraries/bootstrapping.conf   -i
-iutils/genapply/. -iutils/genapply/dist/build
-iutils/genapply/dist/build/autogen -Iutils/genapply/dist/build
-Iutils/genapply/dist/build/autogen -no-user-package-db
-rtsopts  utils/genapply/dist/build/GenApply.o
libffi/stamp.ffi.static-shared.install exists, but
libffi/build/inst/lib/libffi.a does not.
Suggest removing libffi/stamp.ffi.static-shared.install.
make[1]: *** [libffi/build/inst/lib/libffi.a] Error 1
make[1]: *** Waiting for unfinished jobs
make: *** [all] Error 2


Where can I stick a dead code elimination rule for quotRemInt#?

2014-09-19 Thread David Feuer
As I describe in #9617, GHC's CSE in 7.9 seems to be good enough to let Int
and Integer use

quot x y = fst (x `quotRem` y)
rem x y = snd (x `quotRem` y)

And actually get good results in code that uses both the quotient and the
remainder. I believe the only thing left to be able to actually implement
this is a simple rule to turn a match on quotRemInt# into one on quotInt#
or remInt# if only the quotient or only the remainder is used. My question
is where I could stick such a rule. Can anyone advise me? All I know is
that it has to happen after CSE.
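
In Core-like pseudocode, the transformation I have in mind is just

case quotRemInt# x y of (# q, _ #) -> e   ===>   case quotInt# x y of q -> e
case quotRemInt# x y of (# _, r #) -> e   ===>   case remInt# x y of r -> e

(sketched this way rather than as a RULES pragma, since the left-hand side is
a case expression, not a function application).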


Cleaning up rather silly static arguments

2014-09-15 Thread David Feuer
Aside from anything having to do with the foldrW/buildW stuff, I decided to
try a little experiment using fusing versions of scanl and reverse (implementations at
http://lpaste.net/2416758997739634688 )

When I define

scanr f b = reverse . scanl (flip f) b . reverse

I get this:

scanr1
scanr1 = \ @ a_akP _ eta_Xb -> eta_Xb

scanr
scanr =
  \ @ a_akP @ a1_akQ f_ah8 b_ah9 eta_B1 ->
letrec {
  go_amb
  go_amb =
\ ds_amc eta1_Xa eta2_B2 eta3_Xc ->
  case ds_amc of _ {
[] -> eta1_Xa eta2_B2 eta3_Xc;
: y_amh ys_ami ->
  go_amb
ys_ami
(\ x_an9 eta4_Xj ->
   let {
 b'_ana
 b'_ana = f_ah8 y_amh x_an9 } in
   eta1_Xa b'_ana (: b'_ana eta4_Xj))
eta2_B2
eta3_Xc
  }; } in
go_amb eta_B1 (scanr1) b_ah9 (: b_ah9 ([]))

go_amb takes four arguments, two of which, eta2_B2 and eta3_Xc, are static.
What makes this seem particularly silly is that we already have all the
structure we need to get rid of them—all that remains is to actually delete
them and replace them with the values they take:

scanr1
scanr1 = \ @ a_akP _ eta_Xb -> eta_Xb

scanr
scanr =
  \ @ a_akP @ a1_akQ f_ah8 b_ah9 eta_B1 ->
let {
  listend
  listend = : b_ah9 ([])} in
letrec {
  go_amb
  go_amb =
\ ds_amc eta1_Xa  ->
  case ds_amc of _ {
[] -> eta1_Xa b_ah9 listend;
: y_amh ys_ami ->
  go_amb
ys_ami
(\ x_an9 eta4_Xj ->
   let {
 b'_ana
 b'_ana = f_ah8 y_amh x_an9 } in
   eta1_Xa b'_ana (: b'_ana eta4_Xj))
  }; } in
go_amb eta_B1 (scanr1)

Now I certainly wouldn't claim this is particularly *good* code, but it
seems significantly more reasonable than before.

If I were to try to find a way to get rid of these things, should I try
hacking on the static argument transformation, or would it fit better in
the simplifier, or somewhere else?

David


Re: FoldrW/buildW issues

2014-09-14 Thread David Feuer
Your scanl wrapper might be right for scanl, but it does not satisfy the
condition Joachim proposed. In particular, if we define

(!!) :: [a] -> Int -> a
xs !! n
  | n < 0 = error "Negative index."
  | otherwise = foldrW indexWrap indexCons (error "Large index.") xs n
  where
    indexCons x _ 0 = x
    indexCons _ r n = r (n-1)
    indexWrap = isoSimple

then the simple test

print $ (reverse $ eft 1 1000) !! 50

works just fine. If we replace isoSimple with scanlWrap isoSimple, then the
test fails. That is, this produces wrap and unwrap so that wrap . unwrap is
not much like the identity; it needs to interact with scanlCons in some
fashion to work properly. This does not seem to be at all unusual for
worker/wrapper pairs, but i believe it means we need to find a more general
local correctness criterion than Joachim proposed, if I understood him
correctly.

David


On Sun, Sep 14, 2014 at 2:08 PM, Dan Doel  wrote:

> Which scanl wrapper are you referring to?
>
> The first one I figured out was quite wrong in certain ways. But I think
> the new one is less controversial; it's a lot like the reverse one.
>
> On Sun, Sep 14, 2014 at 1:03 PM, David Feuer 
> wrote:
>
>> Joachim Breitner wrote:
>>
>>> Am Samstag, den 13.09.2014, 00:01 -0400 schrieb David Feuer:
>>> > On Sep 12, 2014 2:35 PM, "Joachim Breitner" 
>>> > wrote:
>>> > > Interesting. I assumed that some wrap.unwrap=id law would hold, or
>>> > at
>>> > > least some moral approximation (e.g. disregarding bottoms in an
>>> > > acceptable manner). But if the wrappers have to do arbitrary stuff
>>> > that
>>> > > can arbitrarily interact with how the producer calls them, this
>>> > becomes
>>> > > a bit less appealing.
>>> >
>>> > No, nothing pleasant like that, I'm afraid. isoSimple is like that of
>>> > course, but once it gets to foldl, the fusion rule is handing the
>>> > builder a wrap/unwrap pair that isn't even close to that.
>>>
>>> and parametricity doesn't help here? Note that due to the forall in the
>>> type of buildW, you can probably reason about what kind of values buildW
>>> can produce, as it can only use whatever the consumer handed to it.
>>> Maybe there is an invariant for that type, and the worker/wrapper pair
>>> is the identity for values that fulfill that invariant.
>>>
>>
>> That seems reasonable, and I suspect without any proof that Takano Akio's
>> wrapper for foldl and Dan Doel's wrapper for reverse probably satisfy it.
>> Scans seem to be more of a challenge. It appears to me that Dan's scanl
>> wrapper probably does *not* satisfy that requirement, and I don't know
>> enough to have much chance of finding one that does.
>>
>> David
>>
>> ___
>> ghc-devs mailing list
>> ghc-devs@haskell.org
>> http://www.haskell.org/mailman/listinfo/ghc-devs
>>
>>
>


Re: FoldrW/buildW issues

2014-09-14 Thread David Feuer
Joachim Breitner wrote:

> Am Samstag, den 13.09.2014, 00:01 -0400 schrieb David Feuer:
> > On Sep 12, 2014 2:35 PM, "Joachim Breitner" 
> > wrote:
> > > Interesting. I assumed that some wrap.unwrap=id law would hold, or
> > at
> > > least some moral approximation (e.g. disregarding bottoms in an
> > > acceptable manner). But if the wrappers have to do arbitrary stuff
> > that
> > > can arbitrarily interact with how the producer calls them, this
> > becomes
> > > a bit less appealing.
> >
> > No, nothing pleasant like that, I'm afraid. isoSimple is like that of
> > course, but once it gets to foldl, the fusion rule is handing the
> > builder a wrap/unwrap pair that isn't even close to that.
>
> and parametricity doesn't help here? Note that due to the forall in the
> type of buildW, you can probably reason about what kind of values buildW
> can produce, as it can only use whatever the consumer handed to it.
> Maybe there is an invariant for that type, and the worker/wrapper pair
> is the identity for values that fulfill that invariant.
>

That seems reasonable, and I suspect without any proof that Takano Akio's
wrapper for foldl and Dan Doel's wrapper for reverse probably satisfy it.
Scans seem to be more of a challenge. It appears to me that Dan's scanl
wrapper probably does *not* satisfy that requirement, and I don't know
enough to have much chance of finding one that does.

David


Re: FoldrW/buildW issues

2014-09-12 Thread David Feuer
On Sep 12, 2014 2:35 PM, "Joachim Breitner" 
wrote:
> Interesting. I assumed that some wrap.unwrap=id law would hold, or at
> least some moral approximation (e.g. disregarding bottoms in an
> acceptable manner). But if the wrappers have to do arbitrary stuff that
> can arbitrarily interact with how the producer calls them, this becomes
> a bit less appealing.

No, nothing pleasant like that, I'm afraid. isoSimple is like that of
course, but once it gets to foldl, the fusion rule is handing the builder a
wrap/unwrap pair that isn't even close to that.

> > 2. Somewhat related to the above, this framework has a huge amount of
> > "wiggle room". There is very little to guide the choice of Wrap type.
>
> I guess that would be resolved by time and experience, if we’d employ
> that scheme. But maybe we don’t.

The only way I would imagine would be if it turned out there were a few
types that could be composed somehow. But when I, experimentally, applied
Dan Doel's scanl wrapper type combined with Simple to (!!), I just got
wrong answers.

> > Do you have any ideas?
>
> Directly related to foldrW, no.
>
> About list fusion and foldl in general, some half-baked.
>
> I once experimented with a magic "oneShot :: (a -> b) -> (a -> b)"
> function, semantically the identity, but tell the compiler not to share
> the result of the computation. Using that in the definition of
> foldl-as-foldr, one can get the same effect as Call Arity, but a bit
> more reliable. I need to investigate if that solves the sumConcatInits
> problem.

How does that work exactly? Where do you stick the oneShot/why is it valid?

> Another idea, probably with the same effect: What happens if we extend
> build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
> to
> buildI :: (forall b. (a -> b -> b) -> b -> (b -> b) -> b) -> [a]
> where the extra argument is the identity, but magically „improves values
> of type b“. So with
>
> enum = buildI $ \c n imp -> go 0
>   where go i = imp $ case i of 100 -> n ; _ -> i `c` go (i+1)
>
> and
>
>foldl f a0 = foldrI (\x k a -> k (f x a)) id (\k a -> k a) a0
>
> we might get good code (but this is half-baked and written as I go).

It sounds a lot like the foldrW/buildW thing again, but maybe you can do
better with it.

> Shouldn’t this be on ghc-dev where others can join an, and people will
> find it in the archives later? I prefer to reserve private mail to,
> well, private matters :-)

If you like.

David


Re: The list fusion lab

2014-09-11 Thread David Feuer
Joachim Breitner wrote:
> Together with John Wiegly at ICFP, I started to create a list
> performance laboratory. You can find it at:
> https://github.com/nomeata/list-fusion-lab

Many thanks to you both! This sounds like an excellent idea. I do hope
someone figures out a way around the criterion dependency shortly.
Alternatively/longer term, I would love to see criterion refactored a
bit—separating benchmarking and analysis into separate packages would
presumably help with its dependency problems, and having the benchmarking
side work with GHC HEAD reliably would be a Good Thing.


What is testsuite/tests/rename/should_fail/rnfail018.hs supposed to test?

2014-09-08 Thread David Feuer
When I compile this with 7.8.3, it gives an error message saying that type
variables a and m are not in scope. If I add them to the forall, it tells
me I need FlexibleContexts. If I add that, then it gives me an error about
an ambiguous type variable. Clearly, something crashed ghc-4.04proto, but
there's no indication of what that was. If this test is still at all
relevant, it probably needs to be updated to target something more narrowly.

The program:

{-# LANGUAGE MultiParamTypeClasses, ExplicitForAll #-}
module ShouldFail where
-- !!! For-all with parens
-- This one crashed ghc-4.04proto; the parens after the for-all fooled it

class Monad m => StateMonad s m where
   getState :: m s

setState0 :: forall b. (StateMonad (a,b) m => m a)
setState0 = getState >>= \ (l,_r) -> return l

David Feuer


Re: Trying to fix an efficiency issue noted in a TODO in SAT.lhs

2014-09-07 Thread David Feuer
Joachim Breitner wrote:

> Did you profile first, and did it show up there? You know, premature
> optimization... so it might be that your fix is a nice improvement and
> useful exercise (and very welcome as such), but without much real-world
> effect.

You're right, of course. I read the comment and figured it made sense, but
it may not really make much. In particular, the lists involved will usually
be short, so O(n^2) may not actually be bad. It would probably be more
valuable for me to start by trying to make it easier to follow what's going
on (and I've submitted one patch to do so for one of the functions) and
think about efficiency later or not at all.

> BTW, code review is easier on Phabricator. Maybe you want to get started
> using that? See https://ghc.haskell.org/trac/ghc/wiki/Phabricator for
> instructions.

I'll take a look. There are a million tools to learn.


Trying to fix an efficiency issue noted in a TODO in SAT.hs

2014-09-06 Thread David Feuer
compiler/simplCore/SAT.hs has a TODO comment about the fact that it does a
fair bit of appending onto the ends of lists, and that should be done
differently. I made an attempt to fix it. The complexity of the recursion,
however, leaves me uncertain as to whether I really did or not. I've
attached a diff and I hope someone will be able to take a look at it. The
only use of Sequence.fromList is source line 172, and the only significant
use of Foldable.toList (aside from pretty-printing) is on source line 402.
Note that the use of Sequence may be temporary—I want to get the right code
structure down before choosing the best data structure.

Thanks,
David Feuer
diff --git a/compiler/simplCore/SAT.lhs b/compiler/simplCore/SAT.lhs
index a0b3151..aae3e69 100644
--- a/compiler/simplCore/SAT.lhs
+++ b/compiler/simplCore/SAT.lhs
@@ -67,10 +67,16 @@ import VarSet
 import Unique
 import UniqSet
 import Outputable
-
 import Data.List
 import FastString
 
+--We're probably not really going to use Data.Sequence
+--this is just a temporary thing to see what we'll
+--actually need.
+import qualified Data.Sequence as S
+import Data.Foldable (toList)
+import Data.Sequence (Seq, (|>), (<|), (><), fromList)
+
 #include "HsVersions.h"
 \end{code}
 
@@ -118,7 +124,7 @@ data Staticness a = Static a | NotStatic
 
 type IdAppInfo = (Id, SATInfo)
 
-type SATInfo = [Staticness App]
+type SATInfo = Seq (Staticness App) -- [Staticness App]
 type IdSATInfo = IdEnv SATInfo
 emptyIdSATInfo :: IdSATInfo
 emptyIdSATInfo = emptyUFM
@@ -129,7 +135,7 @@ pprIdSATInfo id_sat_info = vcat (map pprIdAndSATInfo 
(Map.toList id_sat_info))
 -}
 
 pprSATInfo :: SATInfo -> SDoc
-pprSATInfo staticness = hcat $ map pprStaticness staticness
+pprSATInfo staticness = hcat $ map pprStaticness $ toList staticness
 
 pprStaticness :: Staticness App -> SDoc
 pprStaticness (Static (VarApp _))  = ptext (sLit "SV")
@@ -139,15 +145,22 @@ pprStaticness NotStatic= ptext (sLit "NS")
 
 
 mergeSATInfo :: SATInfo -> SATInfo -> SATInfo
-mergeSATInfo [] _  = []
-mergeSATInfo _  [] = []
-mergeSATInfo (NotStatic:statics) (_:apps) = NotStatic : mergeSATInfo statics 
apps
-mergeSATInfo (_:statics) (NotStatic:apps) = NotStatic : mergeSATInfo statics 
apps
-mergeSATInfo ((Static (VarApp v)):statics)  ((Static (VarApp v')):apps)  = (if 
v == v' then Static (VarApp v) else NotStatic) : mergeSATInfo statics apps
-mergeSATInfo ((Static (TypeApp t)):statics) ((Static (TypeApp t')):apps) = (if 
t `eqType` t' then Static (TypeApp t) else NotStatic) : mergeSATInfo statics 
apps
-mergeSATInfo ((Static (CoApp c)):statics) ((Static (CoApp c')):apps) = (if 
c `coreEqCoercion` c' then Static (CoApp c) else NotStatic) : mergeSATInfo 
statics apps
-mergeSATInfo l  r  = pprPanic "mergeSATInfo" $ ptext (sLit "Left:") <> 
pprSATInfo l <> ptext (sLit ", ")
-<> ptext (sLit "Right:") <> 
pprSATInfo r
+mergeSATInfo l r = S.zipWith mergeSA l r
+  where
+mergeSA NotStatic _ = NotStatic
+mergeSA _ NotStatic = NotStatic
+mergeSA (Static (VarApp v)) (Static (VarApp v'))
+  | v == v'   = Static (VarApp v)
+  | otherwise = NotStatic
+mergeSA (Static (TypeApp t)) (Static (TypeApp t'))
+  | t `eqType` t' = Static (TypeApp t)
+  | otherwise = NotStatic
+mergeSA (Static (CoApp c)) (Static (CoApp c'))
+  | c `coreEqCoercion` c' = Static (CoApp c)
+  | otherwise = NotStatic
+mergeSA _ _  = pprPanic "mergeSATInfo" $ ptext (sLit "Left:")
+   <> pprSATInfo l <> ptext (sLit ", ")
+   <> ptext (sLit "Right:") <> pprSATInfo r
 
 mergeIdSATInfo :: IdSATInfo -> IdSATInfo -> IdSATInfo
 mergeIdSATInfo = plusUFM_C mergeSATInfo
@@ -156,7 +169,7 @@ mergeIdSATInfos :: [IdSATInfo] -> IdSATInfo
 mergeIdSATInfos = foldl' mergeIdSATInfo emptyIdSATInfo
 
 bindersToSATInfo :: [Id] -> SATInfo
-bindersToSATInfo vs = map (Static . binderToApp) vs
+bindersToSATInfo vs = fromList $ map (Static . binderToApp) vs
 where binderToApp v | isId v= VarApp v
 | isTyVar v = TypeApp $ mkTyVarTy v
 | otherwise = CoApp $ mkCoVarCo v
@@ -178,7 +191,7 @@ satTopLevelExpr expr interesting_ids = do
 satExpr :: CoreExpr -> IdSet -> SatM (CoreExpr, IdSATInfo, Maybe IdAppInfo)
 satExpr var@(Var v) interesting_ids = do
 let app_info = if v `elementOfUniqSet` interesting_ids
-   then Just (v, [])
+   then Just (v, S.empty)
else Nothing
 return (var, emptyIdSATInfo, app_info)
 
@@ -195,8 +208,7 @@ satExpr (App fn arg) interesting_ids = do
 case fn_app of
 Nothing -> satRemainder

RE: cons/build and making rules look boring

2014-09-01 Thread David Feuer
The rule to make scanl fuse looks like this:

"scanl" [~1] forall f a bs . scanl f a bs =
  build (\c n -> a `c` foldr (scanlFB f c) (constScanl n) bs a)

I initially tried reversing it with something like this rule (I'm not sure
it was exactly like this, but it was similar):

"scanlList" [1] forall f a bs . a : foldr (scanlFB f (:)) (constScanl [])
bs a
  = scanl f a bs

What I found was that this implementation led to poor performance on a
couple nofib benchmarks (I don't remember which off the top of my head).
When I dug down into them to see how they were using scanl, I found that
they *weren't*, either directly or indirectly. When I compared core2core
and dump-inlinings, I saw that the core looked pretty much the same for a
while, but the inliner was saying different things about how it was making
judgements about what to inline. Eventually, I realized that it was looking
at (:) differently *in general*, and giving it an inlining bonus, because
it appeared as the outermost name in the "scanlList" rule. In at least one
case, that changed its decision about whether to inline something. I only
know this is bad because the benchmarks said so—I don't know the precise
reasons.
On Sep 1, 2014 4:49 AM, "Simon Peyton Jones"  wrote:

> | makes (:) look "interesting" to the inliner. Unfortunately, as I
> | discovered after much extreme puzzlement about why rules relating to
> | scanl were affecting things that had nothing to do with scanl, it
> | turns out that making (:) look interesting is really quite bad, and
> | something that we probably never want to happen.
>
> Can you give a concrete, reproducible example of the problem you
> articulate here?  (In general, a concrete example brings tremendous focus
> to discussions, giving readers something specific to bite on.)
>
> Simon
>
> | -Original Message-
> | From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of David
> | Feuer
> | Sent: 30 August 2014 23:05
> | To: ghc-devs
> | Subject: cons/build and making rules look boring
> |
> | I think I may have figured out at least part of the reason that
> | cons/build gives bad results. I actually ran into a clue when working
> | on scanl. It seems at least part of the problem is that a rule like
> |
> | x : build g = build (\c n -> c x (g c n))
> |
> | makes (:) look "interesting" to the inliner. Unfortunately, as I
> | discovered after much extreme puzzlement about why rules relating to
> | scanl were affecting things that had nothing to do with scanl, it
> | turns out that making (:) look interesting is really quite bad, and
> | something that we probably never want to happen.
> |
> | As a result, the only ways I see to try to make rules like that work
> | properly are
> |
> | 1. If constructors are *always* best treated as boring, and the
> | inliner knows when's a constructor, make it treat them all as boring.
> |
> | 2. Offer a BORINGRULE annotation to indicate that the rule should not
> | make its LHS "interesting", or
> |
> | 3. (I don't like this option much) Make a special case forcing (:) in
> | particular to be boring.
> |
> | David
> | ___
> | ghc-devs mailing list
> | ghc-devs@haskell.org
> | http://www.haskell.org/mailman/listinfo/ghc-devs
>

