Re: Why isn't this Typeable?

2017-09-25 Thread David Feuer
No. What led me down this path is that I was thinking about whether we could 
simplify the representation and reduce the TCB. The as-yet-incomplete ideas I 
had (largely based on the concept of using a constructor name as a 
singletons-style defunctionalization symbol) seem difficult to adapt to the 
generalization I describe, so I wanted to check first how much that matters.

David Feuer
Well-Typed, LLP

-------- Original message --------
From: Richard Eisenberg <r...@cs.brynmawr.edu>
Date: 9/25/17 2:42 PM (GMT-05:00)
To: David Feuer <da...@well-typed.com>
Cc: Ben Gamari <b...@smart-cactus.org>, ghc-devs@haskell.org
Subject: Re: Why isn't this Typeable?
I suppose this is conceivable, but it would complicate the representation and 
solver for TypeReps considerably. Do you have a real use case?

Richard

> On Sep 25, 2017, at 2:28 PM, David Feuer <da...@well-typed.com> wrote:
> 
> My example wasn't quite the one I intended (although I think it should
> work as well, and it's simpler). Here's the sort of example I really intended:
> 
>    data Bar :: forall (j :: forall k. k -> Maybe k) (a :: Type). Proxy (j a) -> Type
> 
> I would expect
> 
>    Bar :: Proxy ('Just Int) -> Type
> 
> or, to abuse notation a bit,
> 
>    Bar @'Just @Int
> 
> to be Typeable. What I'm really suggesting is that we should distinguish 
> between things that are typeable and
> things that can be decomposed into typeable components. We already make a 
> limited distinction
> here. For example, we have
> 
>  'Just :: forall a. a -> Maybe a
> 
> 'Just itself cannot be Typeable, but once it's applied to a kind, it 
> is Typeable.
> 'Just @Int is Typeable even though that (kind) application cannot be broken 
> with App. Similarly, I'd expect
> Foo 'Just to be Typeable even though that (type) application cannot be broken 
> with App (or Fun).
> 
> Putting things in terms of fingerprints, we can offer type-indexed 
> fingerprints
> 
> newtype Finger a = Finger Fingerprint
> 
> for anything we can fingerprint. Is there any difficulty fingerprinting types 
> like Foo 'Just and
> Bar @'Just @Int? Fingerprints are useful for lots of applications where 
> decomposition isn't
> necessary.
> 
> On Sunday, September 24, 2017 1:16:37 PM EDT Richard Eisenberg wrote:
>> The problem is that to get Typeable (Foo 'Just), we need Typeable 'Just. 
>> However, the kind parameter for Typeable 'Just would be (forall a. a -> 
>> Maybe a), making the full constraint Typeable (forall a. a -> Maybe a) 'Just. 
>> And saying that would be impredicative. In other contexts, 'Just *can* be 
>> Typeable, but it's 'Just invisibly instantiated at some monotype for `a`.
>> 
>> So I think that this boils down to impredicativity and that the 
>> implementation is doing the right thing here.
>> 
>> Richard
>> 
>>> On Sep 24, 2017, at 5:45 AM, David Feuer <da...@well-typed.com> wrote:
>>> 
>>> data Foo :: (forall a. a -> Maybe a) -> Type
>>> 
>>> Neither Foo nor Foo 'Just is Typeable. There seems to be a certain sense to 
>>> excluding Foo proper, because it can't be decomposed with Fun. But why not 
>>> Foo 'Just? Is there a fundamental reason, or is that largely an 
>>> implementation artifact?
>>> 
>>> David Feuer
>>> Well-Typed, LLP



Re: Why isn't this Typeable?

2017-09-25 Thread David Feuer
My example wasn't quite the one I intended (although I think it should
work as well, and it's simpler). Here's the sort of example I really intended:

data Bar :: forall (j :: forall k. k -> Maybe k) (a :: Type). Proxy (j a) -> Type

I would expect

Bar :: Proxy ('Just Int) -> Type

or, to abuse notation a bit,

Bar @'Just @Int

to be Typeable. What I'm really suggesting is that we should distinguish 
between things that are typeable and
things that can be decomposed into typeable components. We already make a 
limited distinction
here. For example, we have

  'Just :: forall a. a -> Maybe a

'Just itself cannot be Typeable, but once it's applied to a kind, it 
is Typeable.
'Just @Int is Typeable even though that (kind) application cannot be broken 
with App. Similarly, I'd expect
Foo 'Just to be Typeable even though that (type) application cannot be broken 
with App (or Fun).

Putting things in terms of fingerprints, we can offer type-indexed fingerprints

newtype Finger a = Finger Fingerprint

for anything we can fingerprint. Is there any difficulty fingerprinting types 
like Foo 'Just and
Bar @'Just @Int? Fingerprints are useful for lots of applications where 
decomposition isn't
necessary.
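
To make the idea concrete, here is a minimal sketch of what such an API could
support (testFinger is an illustrative name, not an existing GHC function, and
it assumes fingerprints are computed injectively):

import GHC.Fingerprint (Fingerprint)
import Data.Type.Equality ((:~:)(Refl))
import Unsafe.Coerce (unsafeCoerce)

-- A fingerprint indexed by the type it was computed from.
newtype Finger a = Finger Fingerprint

-- Matching fingerprints would justify recovering type equality,
-- without ever decomposing the types involved.
testFinger :: Finger a -> Finger b -> Maybe (a :~: b)
testFinger (Finger x) (Finger y)
  | x == y    = Just (unsafeCoerce Refl)
  | otherwise = Nothing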

On Sunday, September 24, 2017 1:16:37 PM EDT Richard Eisenberg wrote:
> The problem is that to get Typeable (Foo 'Just), we need Typeable 'Just. 
> However, the kind parameter for Typeable 'Just would be (forall a. a -> Maybe 
> a), making the full constraint Typeable (forall a. a -> Maybe a) 'Just. And 
> saying that would be impredicative. In other contexts, 'Just *can* be 
> Typeable, but it's 'Just invisibly instantiated at some monotype for `a`.
> 
> So I think that this boils down to impredicativity and that the 
> implementation is doing the right thing here.
> 
> Richard
> 
> > On Sep 24, 2017, at 5:45 AM, David Feuer <da...@well-typed.com> wrote:
> > 
> > data Foo :: (forall a. a -> Maybe a) -> Type
> > 
> > Neither Foo nor Foo 'Just is Typeable. There seems to be a certain sense to 
> > excluding Foo proper, because it can't be decomposed with Fun. But why not 
> > Foo 'Just? Is there a fundamental reason, or is that largely an 
> > implementation artifact?
> > 
> > David Feuer
> > Well-Typed, LLP




Re: RTS changes affect runtime when they shouldn’t

2017-09-24 Thread David Feuer
I think changes to the RTS, code generator, and general heap layout are exactly 
where we *do* want to worry about these very low-level details. Changes in type 
checking, desugaring, core-to-core, etc., probably are not, because it's just 
too hard to tease out the relationship between what they do and what 
instructions are emitted in the end.


David Feuer
Well-Typed, LLP

-------- Original message --------
From: Sven Panne
Date: 9/24/17 2:00 PM (GMT-05:00)
To: Joachim Breitner
Cc: ghc-devs@haskell.org
Subject: Re: RTS changes affect runtime when they shouldn’t
2017-09-23 21:06 GMT+02:00 Joachim Breitner :

> what I want to do is to reliably catch regressions.


The main question is: Which kind of regressions do you want to catch? Do
you care about runtime as experienced by the user? Measure the runtime. Do
you care about code size? Measure the code size. etc. etc. Measuring things
like the number of fetched instructions as an indicator for the experienced
runtime is basically a useless exercise, unless you do this on ancient RISC
processors, where each instruction takes a fixed number of cycles.


> What are the odds that a change to the Haskell compiler (in particular to
> Core2Core
> transformations) will cause a significant increase in runtime without a
>  significant increase in instruction count?
> (Honest question, not rhetoric).
>

The odds are actually quite high, especially when you define "significant"
as "changing a few percent" (which we do!). Just a few examples from
current CPUs:

   * If branch prediction does not have enough information to do better, it 
assumes that backward branches are taken (think: loops) and forward
branches are not taken (so you should put "exceptional" code out of the
common, straight-line code). If by some innocent looking change the code
layout changes, you can easily get a very measurable difference in runtime
even if the number of executed instructions stays exactly the same.

   * Even if the number of instructions changes only a tiny bit, it could
be the case that it is just enough to make caching much worse and/or make
the loop stream detector fail to detect a loop.

There are lots of other scenarios, so in a nutshell: Measure what you
really care about, not something you think might be related to that.

As already mentioned in another reply, "perf" can give you very detailed
hints about how well your program uses the pipeline, caches, branch 
prediction etc. Perhaps the performance dashboard should really collect
these, too, this would remove a lot of guesswork.
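
For instance, an illustrative invocation (these are standard predefined perf
events; the program name is hypothetical):

$ perf stat -e cycles,instructions,cache-misses,branch-misses ./MyBenchmark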


Re: Why isn't this Typeable?

2017-09-24 Thread David Feuer
I don't see why Typeable (Foo 'Just) requires that. I'd expect to get back a 
TrTyCon, not a TrApp. Some modifications to the structure of TrTyCon might be 
required.


David Feuer
Well-Typed, LLP

-------- Original message --------
From: Ryan Scott
Date: 9/24/17 10:08 AM (GMT-05:00)
To: ghc-devs@haskell.org
Subject: Re: Why isn't this Typeable?
Trying to conclude Typeable Foo (or, if expanded with
-fprint-explicit-kinds, Typeable ((forall a. a -> Maybe a) -> Type)
Foo) is beyond GHC's capabilities at the moment, as that would require
impredicative polymorphism. This problem has arisen in other contexts 
too—see Trac #13895 [1] for one example.

I don't think you can conclude Typeable (Foo 'Just) either, since that
requires concluding both Typeable Foo and Typeable 'Just, so you
ultimately run into the same problem.

While there is an in-the-works plan to allow a limited form of
impredicativity through explicit use of visible type application [2],
my fear is that that wouldn't be enough to address the problem you've
encountered, since there's no way to visibly apply @((forall a. a ->
Maybe a) -> Type) to Typeable at the moment. To accomplish this, you
would need visible kind application [3].

Ryan S.
-
[1] https://ghc.haskell.org/trac/ghc/ticket/13895
[2] https://ghc.haskell.org/trac/ghc/ticket/11319#comment:11
[3] https://ghc.haskell.org/trac/ghc/ticket/12045


Why isn't this Typeable?

2017-09-24 Thread David Feuer
data Foo :: (forall a. a -> Maybe a) -> Type
Neither Foo nor Foo 'Just is Typeable. There seems to be a certain sense to 
excluding Foo proper, because it can't be decomposed with Fun. But why not Foo 
'Just? Is there a fundamental reason, or is that largely an implementation 
artifact?
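
For reference, a minimal module that triggers the question (a sketch; the
exact extension set may vary by GHC version, and the last line is the one GHC
rejects for want of a Typeable instance):

{-# LANGUAGE DataKinds, PolyKinds, RankNTypes, KindSignatures #-}
import Data.Kind (Type)
import Data.Proxy (Proxy (..))
import Data.Typeable (typeRep)

data Foo :: (forall a. a -> Maybe a) -> Type

main :: IO ()
main = print (typeRep (Proxy :: Proxy (Foo 'Just)))  -- no instance Typeable (Foo 'Just)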
David Feuer
Well-Typed, LLP


Help with #14140

2017-09-07 Thread David Feuer
Could you maybe point me toward where the constant folding is happening in this 
context? I'd like to take a glance and see if I can guess how to upgrade it to 
deal with what we know things *aren't*. Thanks in advance.


David Feuer
Well-Typed, LLP


Re: Type-level generics

2017-09-02 Thread David Feuer
Ah, nice. I was actually exploring the vague general idea behind that approach 
earlier this evening. Magalhães (unsurprisingly) has developed it much, much 
further.


David Feuer
Well-Typed, LLP

-------- Original message --------
From: Ryan Scott
Date: 9/2/17 10:36 PM (GMT-05:00)
To: ghc-devs@haskell.org
Subject: Re: Type-level generics
If you're willing to go a completely different route from
GHC.Generics, then you might be interested in the paper Generic
Programming with Multiple Parameters [1] (whose existence I just
learned of—thanks to Pedro, the author, for pointing it out to me). It
does present a single Generic class that is capable of working over
any number of type parameters, although the interface presented is
significantly more complex than the current GHC.Generics.

The only reason I mention backwards compatibility is that if we are
going to introduce a GHC.Generics 2.0 some day, it'd be nice to have a
way to subsume the old interface with the new one, and fortunately,
the aforementioned paper includes an algorithm for doing so. My hope
was that we'd be able to incorporate these ideas into a design that
also grants the ability to write Generic instances for GADTs, but I
don't think GHC has a fancy enough type system to do this
satisfactorily at the moment.

Ryan S.
-
[1] http://dreixel.net/research/pdf/gpmp_colour.pdf


Re: Type-level generics

2017-08-31 Thread David Feuer
One other thing I should add. We'd really, really like to have isomorphism
evidence:

  toThenFrom :: pr p -> To (From x :: Rep a p) :~: (x :: a)
  fromThenTo :: pr1 a -> pr2 (r :: Rep a p) -> From (To r :: a) :~: (r :: Rep a p)

I believe these would make the To and From families considerably more
useful. Unfortunately, while I'm pretty sure those are completely legit for
any Generic-derived types, I don't think there's ever any way to prove
them in Haskell! Ugh.

On Thursday, August 31, 2017 3:37:15 PM EDT David Feuer wrote:
> I've been thinking for several weeks that it might be useful to offer
> type-level generics. That is, along with
> 
> to :: Rep a x -> a
> from :: a -> Rep a x
> 
> perhaps we should also derive
> 
> type family To (r :: Rep a x) :: a
> type family From (v :: a) :: Rep a x
> 
> This would allow us to use generic programming at the type level.
> For example, we could write a generic ordering family:
> 
> class OrdK (k :: Type) where
>   type Compare (x :: k) (y :: k) :: Ordering
>   type Compare (x :: k) (y :: k) = GenComp (Rep k ()) (From x) (From y)
> 
> instance OrdK Nat where
>   type Compare x y = CmpNat x y
> 
> instance OrdK Symbol where
>   type Compare x y = CmpSymbol x y
> 
> instance OrdK [a] -- No implementation needed!
> 
> type family GenComp k (x :: k) (y :: k) :: Ordering where
>   GenComp (M1 i c f p) ('M1 x) ('M1 y) = GenComp (f p) x y
>   GenComp (K1 i c p) ('K1 x) ('K1 y) = Compare x y
>   GenComp ((x :+: y) p) ('L1 m) ('L1 n) = GenComp (x p) m n
>   GenComp ((x :+: y) p) ('R1 m) ('R1 n) = GenComp (y p) m n
>   GenComp ((x :+: y) p) ('L1 _) ('R1 _) = 'LT
>   GenComp ((x :+: y) p) ('R1 _) ('L1 _) = 'GT
>   GenComp ((x :*: y) p) (x1 ':*: y1) (x2 ':*: y2) =
> PComp (GenComp (x p) x1 x2) (y p) y1 y2
>   GenComp (U1 p) _ _ = 'EQ
>   GenComp (V1 p) _ _ = 'EQ
> 
> type family PComp (c :: Ordering) k (x :: k) (y :: k) :: Ordering where
>   PComp 'EQ k x y = GenComp k x y
>   PComp x _ _ _ = x
> 
> For people who want to play around with the idea, here are the definitions of 
> To and From
> for lists:
> 
>   To ('M1 ('L1 ('M1 'U1))) = '[]
>   To ('M1 ('R1 ('M1 ('M1 ('K1 x) ':*: 'M1 ('K1 xs))))) = x ': xs
>   From '[] = 'M1 ('L1 ('M1 'U1))
>   From (x ': xs) = 'M1 ('R1 ('M1 ('M1 ('K1 x) ':*: 'M1 ('K1 xs))))
> 
> David




Type-level generics

2017-08-31 Thread David Feuer
I've been thinking for several weeks that it might be useful to offer
type-level generics. That is, along with

to :: Rep a x -> a
from :: a -> Rep a x

perhaps we should also derive

type family To (r :: Rep a x) :: a
type family From (v :: a) :: Rep a x

This would allow us to use generic programming at the type level.
For example, we could write a generic ordering family:

class OrdK (k :: Type) where
  type Compare (x :: k) (y :: k) :: Ordering
  type Compare (x :: k) (y :: k) = GenComp (Rep k ()) (From x) (From y)

instance OrdK Nat where
  type Compare x y = CmpNat x y

instance OrdK Symbol where
  type Compare x y = CmpSymbol x y

instance OrdK [a] -- No implementation needed!

type family GenComp k (x :: k) (y :: k) :: Ordering where
  GenComp (M1 i c f p) ('M1 x) ('M1 y) = GenComp (f p) x y
  GenComp (K1 i c p) ('K1 x) ('K1 y) = Compare x y
  GenComp ((x :+: y) p) ('L1 m) ('L1 n) = GenComp (x p) m n
  GenComp ((x :+: y) p) ('R1 m) ('R1 n) = GenComp (y p) m n
  GenComp ((x :+: y) p) ('L1 _) ('R1 _) = 'LT
  GenComp ((x :+: y) p) ('R1 _) ('L1 _) = 'GT
  GenComp ((x :*: y) p) (x1 ':*: y1) (x2 ':*: y2) =
PComp (GenComp (x p) x1 x2) (y p) y1 y2
  GenComp (U1 p) _ _ = 'EQ
  GenComp (V1 p) _ _ = 'EQ

type family PComp (c :: Ordering) k (x :: k) (y :: k) :: Ordering where
  PComp 'EQ k x y = GenComp k x y
  PComp x _ _ _ = x

For people who want to play around with the idea, here are the definitions of 
To and From
for lists:

  To ('M1 ('L1 ('M1 'U1))) = '[]
  To ('M1 ('R1 ('M1 ('M1 ('K1 x) ':*: 'M1 ('K1 xs))))) = x ': xs
  From '[] = 'M1 ('L1 ('M1 'U1))
  From (x ': xs) = 'M1 ('R1 ('M1 ('M1 ('K1 x) ':*: 'M1 ('K1 xs))))
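
And a quick usage sketch of the machinery above (assuming the definitions
above plus DataKinds; the reductions are shown in comments):

type T1 = Compare '[1, 2] '[1, 3]  -- reduces to 'LT via the generic instance
type T2 = Compare '["b"] '["a"]    -- reduces to 'GT via CmpSymbol on the heads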

David


RE: [commit: ghc] master: Adjust test suite stats (a055f24)

2017-08-28 Thread David Feuer
These were regressions from your simplifier refactoring. Yes, we should 
investigate. I'll open a ticket this evening, unless someone else gets to it 
first. But we surely don't want to keep CI red on every commit until then.


David Feuer
Well-Typed, LLP

-------- Original message --------
From: Simon Peyton Jones via ghc-devs <ghc-devs@haskell.org>
Date: 8/28/17 4:57 PM (GMT-05:00)
To: ghc-devs@haskell.org
Subject: RE: [commit: ghc] master: Adjust test suite stats (a055f24)
David, are you sure we want to accept a more than 5% increase in compile time 
without investigation?   What commit caused these increases?  Maybe they are 
readily squashed? Generally we are trying to improve compiler perf not accept 
it getting worse!

Simon

| -Original Message-
| From: ghc-commits [mailto:ghc-commits-boun...@haskell.org] On Behalf Of
| g...@git.haskell.org
| Sent: 28 August 2017 19:34
| To: ghc-comm...@haskell.org
| Subject: [commit: ghc] master: Adjust test suite stats (a055f24)
| 
| Repository : ssh://g...@git.haskell.org/ghc
| 
| On branch  : master
| Link   :
| http://ghc.haskell.org/trac/ghc/changeset/a055f240aeda538c656a59e810870e6a2ccc2db7/ghc
| 
| >---
| 
| commit a055f240aeda538c656a59e810870e6a2ccc2db7
| Author: David Feuer <david.fe...@gmail.com>
| Date:   Mon Aug 28 14:35:19 2017 -0400
| 
| Adjust test suite stats
| 
| T1969 and T12150 were failing (stat too high)
| 
| 
| >---
| 
| a055f240aeda538c656a59e810870e6a2ccc2db7
|  testsuite/tests/perf/compiler/all.T | 6 --
|  1 file changed, 4 insertions(+), 2 deletions(-)
| 
| diff --git a/testsuite/tests/perf/compiler/all.T
| b/testsuite/tests/perf/compiler/all.T
| index 1da2883..cf49981 100644
| --- a/testsuite/tests/perf/compiler/all.T
| +++ b/testsuite/tests/perf/compiler/all.T
| @@ -72,7 +72,7 @@ test('T1969',
|   # 2017-03-24 9261052 (x86/Linux, 64-bit machine)
|   # 2017-04-06 9418680 (x86/Linux, 64-bit machine)
| 
| -   (wordsize(64), 16679176, 15)]),
| +   (wordsize(64), 19199872, 15)]),
|   # 2014-09-10 10463640, 10  # post-AMP-update (somewhat
| stabelish)
| # looks like the peak is around ~10M, but we're
| # unlikely to GC exactly on the peak.
| @@ -87,6 +87,7 @@ test('T1969',
|   # 2017-02-01 19924328 (amd64/Linux) Join points (#12988)
|   # 2017-02-14 16393848 Early inline patch
|   # 2017-03-31 16679176 Fix memory leak in simplifier
| + # 2017-08-25 19199872 Refactor the Mighty Simplifier
| 
|    compiler_stats_num_field('bytes allocated',
|    [(platform('i386-unknown-mingw32'), 301784492, 5),
| @@ -1110,7 +1110,8 @@ test('T12150',
|   [ only_ways(['optasm']),
| compiler_stats_num_field('bytes allocated',
|    [(wordsize(64), 70773000, 5)
| -  # initial:  70773000
| +  # initial:    70773000
| +  # 2017-08-25: 74358208  Refactor the Mighty Simplifier
|    ]),
|   ],
|  compile,
| 


Re: New primitive types?

2017-08-26 Thread David Feuer
Atomic operations, or the lack thereof, don't seem terribly relevant to 
immutable Haskell constructor fields.


David Feuer
Well-Typed, LLP

-------- Original message --------
From: Carter Schonwald
Date: 8/26/17 10:56 PM (GMT-05:00)
To: Florian Weimer, Michal Terepeta
Cc: ghc-devs
Subject: Re: New primitive types?
Which architectures are which?

I assume you mean the dec alpha allowed atomic operations on bytes... but
your phrasing is a teeny bit unclear

On Sat, Aug 19, 2017 at 4:34 AM Florian Weimer  wrote:

> * Michal Terepeta:
>
> > On Tue, Aug 1, 2017 at 8:08 PM Carter Schonwald <
> carter.schonw...@gmail.com>
> > wrote:
> >> One issue with packed fields is that on many architectures you can't
> > quite do subword reads or
> >> writes.  So it might not always be a win.
> >
> > Could you give any examples?
>
> Historic DEC Alpha, now long obsolete.
>
> It is very hard to create compliant and performant implementations of
> Java 5, C 11 or C++ 11 on such architectures.  All these languages
> (and their subsequent revisions) require that naturally aligned
> objects can be accessed independently.  For example, you can't use a
> simple read-modify-write cycle to implement a single-byte store using
> word operations.
>
> That's why such architectures really do not have a future (or even a
> present), except maybe in niche markets such as GPGPU (but even there,
> things are heading towards the de-facto standard memory model).
>


Tying knots with strict constructors

2017-08-26 Thread David Feuer
Once in a while, one desires to tie a recursive knot and is stymied by a
strict data constructor. I recently encountered this problem trying to
improve the implementation of `never` in the `streaming` package.
The Stream type is defined thus:

data Stream f m r = Step !(f (Stream f m r))
                  | Effect (m (Stream f m r))
                  | Return r

It would be nice to be able to write

never :: Applicative f => Stream f m r
never = fix (Step . pure)

Unfortunately, if `pure` is strict (e.g., Identity), this won't work. The
Step wrapper attempts to force `pure never` and then apply the Step worker.
This will never work. The streaming package works around the problem
by representing the `never` stream in a different way, at the potential
cost of some efficiency. In other contexts, there may be no (safe) workaround
at all.

This is terribly frustrating, because it seems almost possible to express what
I want in Core, and even possible to express it in Haskell with a really awful
unsafeCoerce. The nasty version looks like this:

data StreamL f m r = StepL (f (StreamL f m r))
                   | EffectL (m (StreamL f m r))
                   | ReturnL r

never :: forall f m r. Applicative f => Stream f m r
never = case loop of
          StepL x -> x `pseq` unsafeCoerce loop
  where
    loop :: StreamL f m r
    loop = StepL (pure loop)
    {-# NOINLINE loop #-}
{-# NOINLINE never #-}

That is, I make a copy of the Stream type, but with a lazy version of the Step 
constructor,
I tie my knot, I make very sure that the strict field is evaluated, and then I 
unsafeCoerce
the whole thing in a thoroughly unsupported fashion to get back to the right 
type. Ideally,
it would be nice to get GHC to manufacture, and expose to users, bidirectional 
patterns
that offer more access to the raw representation of a type.

Basically, I want to get a bidirectional pattern for Step that:

1. Is lazy when used as a constructor (applying the "worker" constructor 
directly)
2. Is viewed as lazy on matching, so the strictness analysis comes out right.

Using such a feature will presumably always be dangerous (unless someone
does a ton of work to find an efficient way to make it safe), but I'd rather 
have
a reasonable dangerous way to do it than an unreasonable dangerous way,
if that can be accomplished.
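
For comparison, the closest thing expressible today is an explicitly
bidirectional pattern synonym, sketched below against the Stream type above;
it satisfies neither requirement, since the builder still goes through the
strict Step constructor and the match is still seen as a view of Step:

{-# LANGUAGE PatternSynonyms #-}

pattern LazyStep :: f (Stream f m r) -> Stream f m r
pattern LazyStep x <- Step x
  where
    LazyStep x = Step x  -- still forces x: the strictness lives in the real constructor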

Unfortunately, I haven't been able to think of a reasonable design for such a
language feature. Does anyone else have any ideas? Or some other thought about
how such things might be done?

David


dataToTag# documentation

2017-08-01 Thread David Feuer
dataToTag# is documented as getting the tag number of an enumeration,
which is perfectly reasonable because it's designed to support deriving Enum.
But it *appears* to work also for non-enumeration datatypes:

dataToTag# Nothing = 0#
dataToTag# (Just 3) = 1#

Does this actually always work? If so, should that be documented, or is there
a realistic possibility that its behavior will change in the future?
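
A quick way to check the observed behavior (a sketch; whether the results are
guaranteed is exactly the question above):

{-# LANGUAGE MagicHash #-}
import GHC.Exts (Int (I#), dataToTag#)

main :: IO ()
main = do
  print (I# (dataToTag# (Nothing :: Maybe Int)))  -- prints 0
  print (I# (dataToTag# (Just 3 :: Maybe Int)))   -- prints 1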

Additionally: the documentation for dataToTag# urges readers to use
GHC.Base.getTag instead. But dataToTag# is exported from the "public"
GHC.Exts, whereas getTag is not. Should we add getTag to GHC.Exts,
or change the documentation for dataToTag#?

David


Re: Repeated computations under a lambda

2017-07-18 Thread David Feuer
On Tuesday, July 18, 2017 3:55:28 PM EDT Conal Elliott wrote:
> Hi Sebastian,
> 
> Thanks for the reply. It's that I don't want `exampleC` to be eta-expanded.
> Apparently GHC does by default even when doing so moves computation under
> lambda. I've thought otherwise for a very long time.

GHC really likes to eta-expand, because that can be very good for allocation,
unboxing, and I don't know what else. Do you really need to represent the
intermediate result by a *function*? Would it work just to save the Double
itself? I suspect you could likely convince GHC to leave it alone.
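
A hypothetical shape of the problem (not Conal's actual code): when a function
returns a function, eta expansion can move the shared work back under the
lambda.

exampleC :: Double -> (Double -> Double)
exampleC x = let d = expensiveComputation x in \y -> d + y
  where
    expensiveComputation = sqrt . abs  -- illustrative stand-in for real work

-- After eta expansion this behaves like
--   exampleC x y = expensiveComputation x + y
-- recomputing the "shared" value at every application of the inner function.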

David


Re: [ANNOUNCE] GHC 8.2.1 release candidate 3 available

2017-07-09 Thread David Feuer
On Sunday, July 9, 2017 1:05:44 AM EDT Andrés Sicard-Ramírez wrote:
> While testing this RC candidate on Agda we found a compilation error.
> 
> The error was caused by an *unused* module which *is* compiled by this
> RC but it *is not* compiled by previous versions of GHC including
> 8.2.1 RC 2. This module was listed in the `other-modules` field of the
> `test-suite` section in the .cabal file. After removing the offending
> module, we didn't find more problems.
> 
> While I think GHC is doing the right thing here, it is not an obvious
> error on medium size projects like Agda. This is the reason for
> sharing this information.

Are you saying that Cabal is now compiling an unused module with legitimate
errors that GHC detects? If so, that sounds like a Cabal issue rather than a GHC
one. As you haven't offered example code, it's a bit hard to see just what you
mean.


More TRAC ticket statuses?

2017-06-07 Thread David Feuer
There are (at least) two situations that I don't think we currently have a good 
way to track:

1. A new bug has been verified, but we do not yet have an expect_broken test 
case.
2. A bug has been fixed, but we are waiting for a test case.

Lacking (1) means that we have to manually dig through the ticket database and 
try things out in order to knock down tickets that should have already been 
closed. Lacking (2) means that a ticket could potentially get stuck in "new" 
status after someone's already done practically all the work required to fix 
them. What should we do about this?

David


Re: Hunting down a compilation performance regression involving type families

2017-06-06 Thread David Feuer
Edward Kmett has explained that this isn't sufficient when things go higher 
order. His suggested improvement is
    liftCoercion :: Maybe (Coercion a b -> Coercion (f a) (f b))


David Feuer
Well-Typed, LLP

-------- Original message --------
From: Ryan Scott
Date: 6/6/17 1:41 PM (GMT-05:00)
To: Richard Eisenberg
Cc: GHC developers, Eric Mertens
Subject: Re: Hunting down a compilation performance regression involving type families
Hrm. It's a shame that supporting this map/coerce RULE causes such pain.
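
For reference, the rule in question, as it appears in GHC.Base:

{-# RULES "map/coerce" [1] map coerce = coerce #-}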

This makes me wonder: can we get rid of this RULE? Eric Mertens pointed out
a trick [1] that's used in the profunctors library to make mapping coerce
over certain Profunctors more efficient. To adapt this trick for Functor,
we'd need to add another class method:

    class Functor f where
    fmap :: (a -> b) -> f a -> f b
    (<#>) :: Coercible a b => (a -> b) -> f a -> f b
    (<#>) = \f -> \p -> p `seq` fmap f p

Now, when implementing Functor instances, if we are working with a datatype
whose role is representational or phantom, we can make (<#>) really fast:

    data List a = Nil | Cons a (List a)
    instance Functor List where
    fmap = ...
    (<#>) = coerce

Now, instead of relying on (map MkNewtype Nil) to rewrite to Nil, we can
just use (MkNewtype <#> Nil)! No map/coerce RULE necessary :)

OK, I realize that suggesting that we remove the RULE is perhaps a touch
too far. But it does sting that we have to pay hefty compilation penalties
because of its existence...

Ryan S.
-
[1]
http://hackage.haskell.org/package/profunctors-5.2/docs/Data-Profunctor-Unsafe.html#v:-35-

On Wed, May 31, 2017 at 7:25 PM, Richard Eisenberg 
wrote:

>
> > On May 31, 2017, at 5:21 PM, Ryan Scott  wrote:
> > Does you know what might be going on here?
>
> I think so, but I don't know how to fix it.
>
> The commit you found (thank you!) makes simple_opt_expr (the "simple
> optimizer", run directly after desugaring, even with -O0) a little more
> selective in what `case` expressions it throws away. Previous to that
> commit, the optimizer would throw away a `case error "deferred type error"
> of _ -> ...` which is terrible. It seems that you have discovered that we
> are now too timid in throwing away unhelpful cases. It would be interesting
> to know what the newly-retained cases look like, so that we might throw
> them away.
>
> But there remains a mystery: Why do we need this code at all? Reading Note
> [Getting the map/coerce RULE to work] closely, it seems we need to simplify
>
>   forall a b (co :: a ~R# b).
>     let dict = MkCoercible @* @a @b co in
>     case Coercible_SCSel @* @a @b dict of
>       _ [Dead] -> map @a @b (\(x :: a) -> case dict of
>                                MkCoercible (co :: a ~R# b) -> x |> co)
>   = let dict = ... in ...
>
> to
>
>   forall a b (co :: a ~R# b).
>     map @a @b (\(x :: a) -> x |> co)
>   = \(x :: [a]) -> x |> [co]
>
> Part of doing so is to drop the `case Coercible_SCSel ...`, which gets in
> the way. The mystery is why this needs special code -- shouldn't the
> eliminate-case-of-known-constructor do the trick? This would require
> unfolding Coercible_SCSel. Does that happen? It would seem not... but maybe
> it should, which would remove the special-case code that I changed in that
> commit, and quite likely would simplify much more code besides.
>
> So: Is Coercible_SCSel unfolded during simple_opt? If not, what wonderful
> or terrible things happen if we do? If so, why does
> case-of-known-constructor not work here? My guess is that answering these
> questions may solve the original problem, but this guess could be wrong.
>
> Richard
>
>


Re: Trees that Grow in the hsSyn AST

2017-05-30 Thread David Feuer
On Friday, May 26, 2017 9:03:15 AM EDT Simon Peyton Jones wrote:
> 1. Which is better to start with: HsSyn or Core? Intuition suggests this sort 
> of thing could be very helpful for making zapping more reliable and ensuring 
> its efficiency, but there may be better reasons to start with HsSyn.
> 
> Definitely HsSyn.  It’s big, riddled with extra info, and has many purposes 
> for different people.  Core is small, tight, nailed down.  I don’t want to 
> mess with it.

Don't get me wrong. I wasn't suggesting that Core should come first; I have 
absolutely no basis to make any suggestion in that regard. I was just wondering 
what led to the decision to start with HsSyn. Are you suggesting that Core 
wouldn't benefit from these ideas? I surely don't see why not. Information 
about arity and strictness, at least, is introduced in specific compiler 
phases. I believe that some information needed for join points is only 
valid/available between certain phases. Making such things explicit in the 
types seems like it can only help.

> 2. If we're making intrusive changes to representations, would now be a 
> sensible era to consider switching to a different variable representation 
> (unbound, bound, abt, etc)?
> 
> I don’t think so.  The issues are quite orthogonal, and no one (to my 
> knowledge) has proposed any vaguely plausible change to variable bindings 
> that would scale to what GHC does.   In contrast, this stuff is “just” 
> re-engineering.

All right; I figured it wouldn't hurt to ask. May I ask what sorts of scaling 
problems you mean? 

David


RE: Trees that Grow in the hsSyn AST

2017-05-25 Thread David Feuer
I haven't looked in detail yet, but there seem to be good ideas. I have two 
questions:
1. Which is better to start with: HsSyn or Core? Intuition suggests this sort 
of thing could be very helpful for making zapping more reliable and ensuring 
its efficiency, but there may be better reasons to start with HsSyn.
2. If we're making intrusive changes to representations, would now be a 
sensible era to consider switching to a different variable representation 
(unbound, bound, abt, etc)?

David Feuer
Well-Typed, LLP

-------- Original message --------
From: Simon Peyton Jones via ghc-devs
Date: 5/25/17 6:48 PM (GMT-05:00)
To: Alan & Kim Zimmerman, ghc-devs@haskell.org
Subject: RE: Trees that Grow in the hsSyn AST
Folks

Do take a look at this:


* We propose to re-engineer HsSyn itself.  This will touch a lot of code.

* But it’s very neat, and will bring big long-term advantages.

* And we can do it a bit at a time.

The wiki page https://ghc.haskell.org/trac/ghc/wiki/ImplementingTreesThatGrow 
has the details.

It’s entirely an internal change, not a change to GHC’s specification, so it’s 
independent of the GHC proposals process.  But I’d value the opinion of other 
GHC devs.

Alan has done a prototype first step, which worked out rather well.  Rather 
than having
   HsExpr Id
(which we all know means “HsExpr after the typechecker”, but that’s a bit 
inexplicit), we have
   HsExpr GhcTc
meaning “HsExpr after GHC’s Tc pass”.  In some ways this is quite superficial, 
but it unlocks the Trees That Grow machinery.
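
For anyone who hasn't read the paper, here is a minimal sketch of the indexing
style (simplified stand-ins, not GHC's actual definitions):

{-# LANGUAGE TypeFamilies, EmptyDataDecls #-}

-- One index per compiler pass:
data GhcPs  -- output of the parser
data GhcRn  -- output of the renamer
data GhcTc  -- output of the typechecker

-- The identifier type carried by the tree is selected by the pass index:
type family IdP p
type instance IdP GhcPs = String  -- stand-in for RdrName
type instance IdP GhcRn = Word    -- stand-in for Name
type instance IdP GhcTc = Int     -- stand-in for Id

data HsExpr p
  = HsVar (IdP p)
  | HsApp (HsExpr p) (HsExpr p)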

Please ask questions etc.  Alan and Shayan can record the answers in the wiki.  
I’m inclined to go ahead with this, so yell soon if you disagree.

Simon

From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of Alan & Kim 
Zimmerman
Sent: 24 May 2017 22:52
To: ghc-devs@haskell.org
Subject: Trees that Grow in the hsSyn AST

Hi all

You may be aware that Shayan Najd presented the paper  "Trees that Grow"[1] at 
HIW last year.
Based on the following mandate
> As in my previous email to Shayan (attached).  Wiki page, describe goals, 
> design,
> approach.  Point to prototype implementation.  Seek comments.   You can say 
> that
> I am supportive!
>
> Simon

We have set up a Wiki page at [2] describing a prototype implementation of the 
first stage of this for the hsSyn AST, which is to change the polymorphic 
variable from one of RdrName / Name / Id to an index type. This is presented as 
a Phabricator diff at [3].
Please take a look and provide feedback.
Regards
  Alan


[1] 
http://www.jucs.org/jucs_23_1/trees_that_grow/jucs_23_01_0042_0062_najd.pdf
[2] https://ghc.haskell.org/trac/ghc/wiki/ImplementingTreesThatGrow
[3] https://phabricator.haskell.org/D3609


Re: Where do I start if I would like help improve GHC compilation times?

2017-04-09 Thread David Feuer
Be aware that some of the biggest performance problems with TH simply can't be 
fixed without changes to the TH language. For details, see Edward Yang's blog 
post: 
http://blog.ezyang.com/2016/07/what-template-haskell-gets-wrong-and-racket-gets-right/
There was a Reddit thread discussing that post at 
https://www.reddit.com/r/haskell/comments/4tfzah/what_template_haskell_gets_wrong_and_racket_gets/

David Feuer
Well-Typed, LLP

-------- Original message --------
From: Alfredo Di Napoli
Date: 4/9/17 5:37 AM (GMT-05:00)
To: Ben Gamari
Cc: ghc-devs@haskell.org
Subject: Re: Where do I start if I would like help improve GHC compilation times?
Hey Ben,
as promised I’m back to you with something more articulated and hopefully 
meaningful. I do hear you perfectly — probably trying to dive head-first into 
this without at least a rough understanding of the performance hotspots or the 
GHC overall architecture is going to do me more harm than good (I get the 
overall picture and I’m aware of the different stages of the GHC compilation 
pipeline, but it’s far from saying I’m proficient with the architecture as 
whole). I have also read a couple of years ago the GHC chapter on the 
“Architeture of Open Source Applications” book, but I don’t know how much that 
is still relevant. If it is, I guess I should refresh my memory.
I’m currently trying to move on 2 fronts — please advise if I’m a fool flogging 
a dead horse or if I have any hope of getting anything done ;)
1. I’m trying to treat the compiler as a black box (as you advised), 
trying to build a sufficiently large program where GHC is not “as fast as I 
would like” (I know that’s a very lame definition of “slow”, hehe). In 
particular, I have built the stage2 compiler with the “prof” flavour as you 
suggested, and I have chosen 2 examples as a reference “benchmark” for 
performance; DynFlags.hs (which seems to have been mentioned multiple times as 
a GHC perf killer) and the highlighting-kate package as posted here: 
https://ghc.haskell.org/trac/ghc/ticket/9221 . The idea would be to compile 
those with -v +RTS -p -hc -RTS enabled, look at the output from the .prof file 
AND the `-v` flag, find any hotspot, try to change something, recompile, 
observe diff, rinse and repeat. Do you think I have any hope of making progress 
this way? In particular, I think compiling DynFlags.hs is a bit of a dead-end; 
I whipped up this buggy script which escalated into a Behemoth which is 
compiling pretty much half of the compiler once again :D
```#!/usr/bin/env bash
../ghc/inplace/bin/ghc-stage2 --make -j8 -v +RTS -A256M -qb0 -p -h \
  -RTS -DSTAGE=2 -I../ghc/includes -I../ghc/compiler -I../ghc/compiler/stage2 \
  -I../ghc/compiler/stage2/build \
  -i../ghc/compiler/utils:../ghc/compiler/types:../ghc/compiler/typecheck:../ghc/compiler/basicTypes \
  -i../ghc/compiler/main:../ghc/compiler/profiling:../ghc/compiler/coreSyn:../ghc/compiler/iface:../ghc/compiler/prelude \
  -i../ghc/compiler/stage2/build:../ghc/compiler/simplStg:../ghc/compiler/cmm:../ghc/compiler/parser:../ghc/compiler/hsSyn \
  -i../ghc/compiler/ghci:../ghc/compiler/deSugar:../ghc/compiler/simplCore:../ghc/compile/specialise \
  -fforce-recomp -c $@```
I’m running it with `./dynflags.sh ../ghc/compiler/main/DynFlags.hs` but it’s 
taking a long time to compile (20+ mins on my 2014 Mac Pro) because it’s pulling in 
half of the compiler anyway :D I tried to reuse the .hi files from my stage2 
compilation but I failed (GHC was complaining about interface file mismatch). 
Long story short, I don’t think it will be a very agile way to proceed. Am I 
right? Do you have any recommendation in such sense? Do I have any hope to 
compile DynFlags.hs in a way which would make this perf investigation feasible?
The second example (the highlighting-kate package) seems much more promising. 
It takes maybe 1-2 mins on my machine, which is enough to take a look at the 
perf output. Do you think I should follow this second lead? In principle any 
50+ modules package I think would do (better if with a lot of TH ;) ) but this 
seems like a low-entry barrier start.
2. The second path I’m exploring is simply to take a less holistic approach and 
try to dive into a performance ticket like the ones listed here: 
https://www.reddit.com/r/haskell/comments/45q90s/is_anything_being_done_to_remedy_the_soul/czzq6an/
Maybe some are very specific, but it seems like fixing small things and moving forward 
could help give me an understanding of different sub-parts of GHC, which seems 
less intimidating than the black-box approach.
In conclusion, what do you think is the best approach, 1 or 2, both or none? ;)
Thank you!
Alfredo
On 7 April 2017 at 18:30, Alfredo Di Napoli  wrote:
Hey Ben,
thanks for the quite exhaustive reply! I’m on the go right now, but I promise 
to get back to you with a meaningful reply later this weekend ;)
Alfredo
On 7 April 2017 at 18:22, Ben 

Re: testsuite not in GHC 8.2.1-rc1 source tarball ?

2017-04-06 Thread David Feuer
That's not really too surprising to me. The test suite is primarily intended 
for GHC developers, and at present only works reliably when GHC is compiled for 
validation. Including it in the distribution would force users who had no use 
for it to pay for it anyway.


David Feuer
Well-Typed, LLP

-------- Original message --------
From: George Colpitts <george.colpi...@gmail.com>
Date: 4/6/17 9:39 AM (GMT-05:00)
To: David Feuer <da...@well-typed.com>, Jens Petersen <juhpeter...@gmail.com>, Ben Gamari <b...@well-typed.com>
Cc: GHC developers <ghc-devs@haskell.org>
Subject: Re: testsuite not in GHC 8.2.1-rc1 source tarball ?
Thanks Brandon
After downloading the source tarball and doing a build successfully I wanted to 
run the testsuite.
You write "As far as I know, the test suite is normally run from ghc/testsuite."
That directory doesn't exist for me:
bash-3.2$ pwd
/Users/gcolpitts/Downloads/ghc-8.2.0.20170404/ghc
bash-3.2$ ls testsuite
ls: testsuite: No such file or directory
bash-3.2$ 

so I guess the source tarball doesn't contain it and those who do a build can't 
test their build with the testsuite. I was hoping I could do that.
I didn't think the libffi directories were the right place to run from, but they 
were the only testsuite directories that the find command gave me.
Thanks again
George


On Thu, Apr 6, 2017 at 2:10 AM David Feuer <da...@well-typed.com> wrote:
I'm not sure why you're trying to run things from the libffi directory. As far 
as I know, the test suite is normally run from ghc/testsuite.


David Feuer
Well-Typed, LLP

-------- Original message --------
From: George Colpitts <george.colpi...@gmail.com>
Date: 4/5/17 9:17 PM (GMT-05:00)
To: Jens Petersen <juhpeter...@gmail.com>, Ben Gamari <b...@well-typed.com>
Cc: GHC developers <ghc-devs@haskell.org>
Subject: Re: GHC 8.2.1-rc1 source tarball availability
I'd like to run the testsuite on macOS but I am having trouble following the 
documentation at 
https://ghc.haskell.org/trac/ghc/wiki/Building/RunningTests/Running
Do I need to download something in addition to the source tarball or am I 
making some mistake?
Following is what I tried:
bash-3.2$ pwd
/Users/gcolpitts/Downloads/ghc-8.2.0.20170404/libffi
# doc says: The commands on this page can all be executed from the testsuite directory.
bash-3.2$ find . -name testsuite
./libffi/build/testsuite
./libffi/build/x86_64-apple-darwin/testsuite
bash-3.2$ pushd libffi/build/x86_64-apple-darwin/testsuite
~/Downloads/ghc-8.2.0.20170404/libffi/build/x86_64-apple-darwin/testsuite ~/Downloads/ghc-8.2.0.20170404
bash-3.2$ make test
make: *** No rule to make target `test'.  Stop.
bash-3.2$ popd
~/Downloads/ghc-8.2.0.20170404
bash-3.2$ pushd libffi/build/testsuite
~/Downloads/ghc-8.2.0.20170404/libffi/build/testsuite ~/Downloads/ghc-8.2.0.20170404
bash-3.2$ make test
make: *** No rule to make target `test'.  Stop.
bash-3.2$ popd
~/Downloads/ghc-8.2.0.20170404
bash-3.2$ make test
/Applications/Xcode.app/Contents/Developer/usr/bin/make -C testsuite/tests CLEANUP=1 SUMMARY_FILE=../../testsuite_summary.txt
make: *** testsuite/tests: No such file or directory.  Stop.
make: *** [test] Error 2
Thanks
George

On Wed, Apr 5, 2017 at 9:43 PM Jens Petersen <juhpeter...@gmail.com> wrote:
On 4 April 2017 at 13:21, Ben Gamari <b...@well-typed.com> wrote:

I am happy to announce the release of the 8.2.1-rc1 source distribution
to binary packagers.

It seems to build okay for me on Fedora 26 so far.

But the testsuite completely failed in timeout: see 
https://ghc.haskell.org/trac/ghc/ticket/13534

Cheers, Jens




Re: GHC 8.2.1-rc1 source tarball availability

2017-04-05 Thread David Feuer
I'm not sure why you're trying to run things from the libffi directory. As far 
as I know, the test suite is normally run from ghc/testsuite.


David Feuer
Well-Typed, LLP

-------- Original message --------
From: George Colpitts
Date: 4/5/17 9:17 PM (GMT-05:00)
To: Jens Petersen, Ben Gamari
Cc: GHC developers
Subject: Re: GHC 8.2.1-rc1 source tarball availability
I'd like to run the testsuite on macOS but I am having trouble following the 
documentation at 
https://ghc.haskell.org/trac/ghc/wiki/Building/RunningTests/Running
Do I need to download something in addition to the source tarball or am I 
making some mistake?
Following is what I tried:
bash-3.2$ pwd
/Users/gcolpitts/Downloads/ghc-8.2.0.20170404/libffi
# doc says: The commands on this page can all be executed from the testsuite directory.
bash-3.2$ find . -name testsuite
./libffi/build/testsuite
./libffi/build/x86_64-apple-darwin/testsuite
bash-3.2$ pushd libffi/build/x86_64-apple-darwin/testsuite
~/Downloads/ghc-8.2.0.20170404/libffi/build/x86_64-apple-darwin/testsuite ~/Downloads/ghc-8.2.0.20170404
bash-3.2$ make test
make: *** No rule to make target `test'.  Stop.
bash-3.2$ popd
~/Downloads/ghc-8.2.0.20170404
bash-3.2$ pushd libffi/build/testsuite
~/Downloads/ghc-8.2.0.20170404/libffi/build/testsuite ~/Downloads/ghc-8.2.0.20170404
bash-3.2$ make test
make: *** No rule to make target `test'.  Stop.
bash-3.2$ popd
~/Downloads/ghc-8.2.0.20170404
bash-3.2$ make test
/Applications/Xcode.app/Contents/Developer/usr/bin/make -C testsuite/tests CLEANUP=1 SUMMARY_FILE=../../testsuite_summary.txt
make: *** testsuite/tests: No such file or directory.  Stop.
make: *** [test] Error 2
Thanks
George

On Wed, Apr 5, 2017 at 9:43 PM Jens Petersen  wrote:
On 4 April 2017 at 13:21, Ben Gamari  wrote:

I am happy to announce the release of the 8.2.1-rc1 source distribution
to binary packagers.

It seems to build okay for me on Fedora 26 so far.

But the testsuite completely failed in timeout: see 
https://ghc.haskell.org/trac/ghc/ticket/13534

Cheers, Jens




Re: DeriveFoldable treatment of tuples is surprising

2017-03-21 Thread David Feuer
The point is that there are two reasonable ways to do it, and the
deriving mechanism, as a rule, does not make choices between
reasonable alternatives.

On Tue, Mar 21, 2017 at 5:05 PM, Jake McArthur <jake.mcart...@gmail.com> wrote:
> I think it's a question of what one considers consistent. Is it more
> consistent to treat tuples as transparent and consider every component with
> type `a`, or is it more consistent to treat tuples as opaque and reuse the
> existing Foldable instance for tuples even if it might cause a compile time
> error?
>
>
> On Tue, Mar 21, 2017, 4:34 PM David Feuer <david.fe...@gmail.com> wrote:
>>
>> This seems much too weird:
>>
>> *> :set -XDeriveFoldable
>> *> data Foo a = Foo ((a,a),a) deriving Foldable
>> *> length ((1,1),1)
>> 1
>> *> length $ Foo ((1,1),1)
>> 3
>>
>> I've opened Trac #13465 [*] for this. As I write there, I think the
>> right thing is to refuse to derive Foldable for a type whose Foldable
>> instance would currently fold over components of a tuple other than
>> the last one.
>>
>> I could go either way on Traversable instances. One could argue that
>> since all relevant components *must* be traversed, we should just go
>> ahead and do that. Or one could argue that we should be consistent
>> with Foldable and refuse to derive it.
>>
>> What do you all think?
>>
>> [*] https://ghc.haskell.org/trac/ghc/ticket/13465


DeriveFoldable treatment of tuples is surprising

2017-03-21 Thread David Feuer
This seems much too weird:

*> :set -XDeriveFoldable
*> data Foo a = Foo ((a,a),a) deriving Foldable
*> length ((1,1),1)
1
*> length $ Foo ((1,1),1)
3
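
For clarity, the derived instance looks through the tuple and folds every
component of type `a`, roughly like this (a sketch of what GHC generates, not
its literal output):

instance Foldable Foo where
  foldMap f (Foo ((x, y), z)) = f x <> f y <> f z  -- hence length = 3

The alternative would be to reuse the existing Foldable instance for tuples,
which only reaches the final component; that is why length ((1,1),1) is 1.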

I've opened Trac #13465 [*] for this. As I write there, I think the
right thing is to refuse to derive Foldable for a type whose Foldable
instance would currently fold over components of a tuple other than
the last one.

I could go either way on Traversable instances. One could argue that
since all relevant components *must* be traversed, we should just go
ahead and do that. Or one could argue that we should be consistent
with Foldable and refuse to derive it.

What do you all think?

[*] https://ghc.haskell.org/trac/ghc/ticket/13465


Bounding I/O imprecision

2017-03-16 Thread David Feuer
Thinking more about this I/O demand analysis thing. I don't think it's possible 
for the compiler to decide for itself what to fuzz, but a user might know very 
well. Given
m >>= f
there are really two sensible approaches:
1. Want to ensure m is executed even if the action produced by f will diverge.
2. Not care whether m is executed if the compound action will ultimately 
diverge.
Our current I/O hack mixes these two in an unprincipled way.
The second approach, not caring, is certainly the most natural for GHC's 
implementation of I/O: we're defining a function from the real world to a new 
real world and a value; the function is partial and we don't care about the 
details. This is fairly clearly the right way to handle strict ST: if we don't 
get a result at the end, we don't get anything useful and don't care what 
actions get dropped.
The first approach seems to offer a better way to explain I/O and evaluation to 
users, ensuring that evaluation is only performed to the extent necessary for 
execution. This is partially supported by the I/O hack when it triggers. But it 
doesn't always, and it's not entirely clear if we can make it do so.
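
A standard sort of example where the two approaches differ (illustrative only):

cleanupThenFail :: IO ()
cleanupThenFail = putStrLn "flushing logs" >>= \_ -> error "unrecoverable"

-- Approach 1 guarantees "flushing logs" is printed even though the compound
-- action diverges; approach 2 would permit dropping the write entirely.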
Still no real conclusions here; just exploring the problem more.
David Feuer
Well-Typed, LLP


RE: Another strictness analysis wrinkle

2017-03-15 Thread David Feuer
Actually, I just had a thought. What if we ran ST computations with a different 
state token type? Say, State# FakeWorld? Would that let them escape the hack?


David Feuer
Well-Typed, LLP

-------- Original message --------
From: David Feuer <da...@well-typed.com>
Date: 3/15/17 6:38 AM (GMT-05:00)
To: GHC developers <ghc-devs@haskell.org>
Subject: Another strictness analysis wrinkle
I don't see how we can take advantage of this, but IO and ST seem quite 
different from a strictness analysis perspective. The whole I/O hack is 
completely unnecessary for ST. Ugh.

David Feuer
Well-Typed, LLP


Another strictness analysis wrinkle

2017-03-15 Thread David Feuer
I don't see how we can take advantage of this, but IO and ST seem quite 
different from a strictness analysis perspective. The whole I/O hack is 
completely unnecessary for ST. Ugh.

David Feuer
Well-Typed, LLP


RE: Removing core-spec.pdf from repository?

2017-03-13 Thread David Feuer
Kill it! That's terrible practice indeed. Speaking of generated files, it's 
time to check if our Unicode tables are up to date.


David Feuer
Well-Typed, LLP

-------- Original message --------
From: Ben Gamari
Date: 3/13/17 6:57 PM (GMT-05:00)
To: GHC developers
Subject: Removing core-spec.pdf from repository?
Hello everyone,

Currently there is a typeset copy of the Core specification in the GHC
repository. This means any time someone changes the specification the
repository grows by around 300kB. While this isn't the end of the
world, it's generally considered bad form to put generated files under
version control.

Of course, the tools required to typeset the specification (ott and
LaTeX) are non-trivial to install, so there is considerable convenience
that comes from having a typeset version readily available.

I suggest that we remove the PDF from the repository but instead I can
start including it in my nightly documentation builds. Any objections?

Cheers,

- Ben


What should and should not be marked has_side_effects?

2017-03-10 Thread David Feuer
Note [PrimOp can_fail and has_side_effects] in prelude/PrimOp.hs says

> A primop "has_side_effects" if it has some *write* effect, visible
> elsewhere
> 
> - writing to the world (I/O)
> - writing to a mutable data structure (writeIORef)
> - throwing a synchronous Haskell exception
>
> [...]
>
>  * NB3: *Read* effects (like reading an IORef) don't count here,
>    because it doesn't matter if we don't do them, or do them more than
>    once.  *Sequencing* is maintained by the data dependency of the state
>    token.

But this does not actually seem to match what goes on in primops.txt.pp. The 
following, among many other seemingly read-only operations, have  
has_side_effects = True:

readMutVar# (the very example cited!), readArray#, unsafeFreezeArray#, 
unsafeThawArray#, tryReadMVar#, deRefWeak#

So what's the correct story? Do we want to change the note, or change the 
reality? The reason I happen to be looking at this is that I think the current 
arrangement allows us to define unsafeInterleaveIO in a particularly simple 
fashion:

unsafeInterleaveIO = pure . unsafePerformIO

but that's only safe as long as the interleaved IO won't float out and get 
performed before it's forced by normal IO.
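
A small demonstration of the intended semantics of that definition (a sketch;
the primed name is just to avoid clashing with the real function):

import System.IO.Unsafe (unsafePerformIO)

unsafeInterleaveIO' :: IO a -> IO a
unsafeInterleaveIO' = pure . unsafePerformIO  -- the proposed definition

main :: IO ()
main = do
  x <- unsafeInterleaveIO' (putStrLn "deferred action runs" >> pure (3 :: Int))
  putStrLn "before forcing x"
  print x  -- the deferred action should run only here, when x is demanded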

But the unsafeInterleaveIO story seems much less important, in the grand 
scheme of things, than making everything else run fast. If indeed it's 
otherwise safe to mark these read-only ops has_side_effects=False, then I 
imagine we probably should do that.

David Feuer


Getting exceptions right

2017-03-07 Thread David Feuer
I've put together a wiki page describing the issues I think we need to 
address, and laying out the model I think we want to implement for precise 
exceptions. Hopefully, this will help us figure out what we need to do to get a 
better story here.

https://ghc.haskell.org/trac/ghc/wiki/FixingExceptions

Thanks,
David Feuer


RE: Windows build broken

2017-02-28 Thread David Feuer

I can't fix that (no windows) but I just broke all the builds with a -Werror 
mistake and I can fix that one...

David Feuer
Well-Typed, LLP

-------- Original message --------
From: Simon Peyton Jones via ghc-devs
Date: 2/28/17 7:07 PM (GMT-05:00)
To: ghc-devs@haskell.org
Subject: Windows build broken


Windows build is broken again.  Might someone fix?
Simon
 
"inplace/bin/ghc-stage1.exe" -optc-fno-stack-protector -optc-Wall -optc-Werror 
-optc-Wall -optc-Wextra -optc-Wstrict-prototypes -optc-Wmissing-prototypes 
-optc-Wmissing-declarations -optc-Winline -optc-Waggregate-return 
-optc-Wpointer-arith
 -optc-Wmissing-noreturn -optc-Wnested-externs -optc-Wredundant-decls 
-optc-Iincludes -optc-Iincludes/dist 
-optc-Iincludes/dist-derivedconstants/header 
-optc-Iincludes/dist-ghcconstants/header -optc-Irts -optc-Irts/dist/build 
-optc-DCOMPILING_RTS -optc-fno-strict-aliasing
 -optc-fno-common -optc-Irts/dist/build/./autogen -optc-Wno-error=inline 
-optc-O2 -optc-fomit-frame-pointer -optc-g -optc-DRtsWay=\"rts_p\" 
-optc-DWINVER=0x06000100 -static -prof -eventlog  -O0 -H64m -Wall 
-fllvm-fill-undef-with-garbage    -Werror -Iincludes
 -Iincludes/dist -Iincludes/dist-derivedconstants/header 
-Iincludes/dist-ghcconstants/header -Irts -Irts/dist/build -DCOMPILING_RTS 
-this-unit-id rts -dcmm-lint  -i -irts -irts/dist/build -Irts/dist/build 
-irts/dist/build/./autogen -Irts/dist/build/./autogen  
 -O2    -Wnoncanonical-monad-instances  -c rts/Sparks.c -o 
rts/dist/build/Sparks.p_o
rts\Profiling.c: In function 'reportCCSProfiling':
 
rts\Profiling.c:704:9: error:
 error: implicit declaration of function 'writeCCSReportJson' 
[-Werror=implicit-function-declaration]
 writeCCSReportJson(prof_file, stack, totals);
 ^~
    |
704 | writeCCSReportJson(prof_file, stack, totals);
    | ^
 
rts\Profiling.c:704:9: error:
 error: nested extern declaration of 'writeCCSReportJson' 
[-Werror=nested-externs]
    |
704 | writeCCSReportJson(prof_file, stack, totals);
    | ^
cc1.exe: all warnings being treated as errors
`gcc.exe' failed in phase `C Compiler'. (Exit code: 1)
make[1]: *** [rts/ghc.mk:255: rts/dist/build/Profiling.p_o] Error 1
make[1]: *** Waiting for unfinished jobs
make: *** [Makefile:127: all] Error 2
/c/code/HEAD$

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Proposal: make throwIO and throw strict

2017-02-27 Thread David Feuer
It's possible for code to throw an exception that itself throws an
imprecise exception. Such an exception is a bit tricky to catch. For
example:

import Control.Exception

strange = throwIO (undefined :: SomeException) `catch` \ex ->
  case () of
    _ | Just _ <- (fromException ex :: Maybe IOError) -> print "IOError"
      | otherwise -> print "Something else"

You might think that this would catch the exception and print
"Something else", but in fact it does not. If others think this is as
surprising as I do, then I think we should make throwIO and throw
strict, so an exception will never itself be bottom. Using

throwIO' !e = throwIO e

in the code above instead of throwIO allows the exception to be caught.

A more conservative approach might be to just force the result of
toException before calling raise#, but this only works when users use an
explicit type signature to fix the expression type, rather than an exception
constructor.
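
For concreteness, a sketch of that conservative variant (throwIOStrict is a
hypothetical name; this assumes throwIO keeps roughly its current shape,
IO (raiseIO# (toException e))):

{-# LANGUAGE BangPatterns, MagicHash #-}
import Control.Exception (Exception, toException)
import GHC.Exts (raiseIO#)
import GHC.IO (IO (..))

-- Force the result of toException before raising. For
-- (undefined :: SomeException), toException is the identity, so the
-- bang surfaces the bottom at throw time as a catchable exception.
-- For (undefined :: IOError), toException wraps the bottom in a
-- SomeException constructor without forcing it, so the bang never
-- reaches it; that is the limitation described above.
throwIOStrict :: Exception e => e -> IO a
throwIOStrict e = let !se = toException e in IO (raiseIO# se)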

David
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Datacon RULES test

2017-02-23 Thread David Feuer
For good or ill, Simon doesn't want RULES for datacons. T12689 has to be 
removed (leaving T12689a, which is still fine). But I don't know enough about 
what you're doing with T12689broken to know how to make it express the right 
idea after this change. Can you please advise?

Thanks,
David Feuer
Well-Typed LLP
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Status of early-inline branch

2017-02-21 Thread David Feuer
On Tuesday, February 21, 2017 2:09:04 PM EST David Feuer wrote:
> I've been working on trying to get the branch in shape for review and such,
> but I see that you're also still working on it. So we don't tread on each
> other's toes too much, I'm wondering if you could give me a sense of what
> the status is, which commits are firm and which you might change, etc.
> 
> Thanks,
> David Feuer

Oh, and it would probably be easier if you work on top of my rebased version 
(D3167). That will at least avoid having to rehash the same things quite so 
many times, allow perf test results to be useful in the face of Ben's Typeable 
changes, etc.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Status of early-inline branch

2017-02-21 Thread David Feuer
I've been working on trying to get the branch in shape for review and such, 
but I see that you're also still working on it. So we don't tread on each 
other's toes too much, I'm wondering if you could give me a sense of what the 
status is, which commits are firm and which you might change, etc.

Thanks,
David Feuer
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Early inline

2017-02-17 Thread David Feuer
Yes, we definitely want these. Do you want each of these submitted as a
separate differential, *in order*? Or do you want a more complex
mix-and-match? Also, are there any commits you think should be squashed?

On Friday, February 17, 2017 4:41:33 PM EST Simon Peyton Jones via ghc-devs 
wrote:
> Ben, David, Reid
> I have been working for months (on and off, mostly off, but very ON for the
> last week or two) on a very simple idea: the simplifier should inline
> things even in the "gentle" phase. It seems so simple.  And it is: the key
> patch is tiny.
> But it stressed corners of the optimiser that were not stressed before; and
> digging into it showed opportunities I did not know about before. So I 
> have ended up a with a whole series of patches, which are on
> wip/spj-early-inline branch
> 
> 7f14d15c0e5fc2c9a81db3d0f0b01d85857b1d87 Error message wibbles accumulated
> from the preceding patches
> 
> 0499c65d9fa45e7879e1e1264fdaa15274adcba6 Improve SetLevels for join points
> 
> 3b2fc0827ff6cafa34836c2d9dc710b628c990b6 Change -ddump-tc-trace output in
> TcErrors, slightly
> 
> 9ffdf62b0ca72c4f35579f9d6f31a9beebf23025 Improve pretty-printing of types
> 
> 3f346eac06399a79adf48425018ee949cee245bf Add VarSet.anyDVarSet, allDVarSet
> 
> 912e71eb3b4ec91e805ecf2236d1033e55e2933a The Early Inline Patch
> 
> 7188cd13f8e54efa764d52ca016b87b3669b29f5 Small changes to expression sizing
> in CoreUnfold
> 
> bfc6fa3f377d11bdfcdbf82b65bf2f39cb00b90c Fix SetLevels for makeStaticPtr
> 
> 8b1cfea089faacb5b95ffcc3511e05faeabb8076 Extend CSE to handle recursive
> bindings
> 
> 50411995641802568bb27c867afe804f91e0524c Combine identical case alternatives
> in CSE
> 
> 2e077ccc736a0b2a622b7f42b7929966bddb4ded Inline data constructor wrappers in
> phase 2 only
> 
> b868de53dd19f639c1070089ecff21948ff33e0d Make Specialise work with casts
> 
> c767ae5f04a09ef71dcb8f67a17225a52c2cc5d2 Stop uniques ending up in SPEC rule
> names
> 
> b49ed1f0102f93ca7f62632c436b41bd240b501f Occurrence-analyse the result of
> rule firings
> 
> 607a735dfb99bb8f0edf466ccb01e732218c42ec Add -fspec-constr-keen
> 
> 67a0c1872c0515f1f12ea68097a84e02da92f45b Refactor floating of bindings
> (fiBind)
> 
> e90f4d7c6d3003039fa1647a3da3dafcaa75527b More tracing in SpecConstr
> 
> 
> Much to my surprise, we get some jolly nice improvements in compiler perf:
> 
> 3%   perf/compiler/T5837.run       T5837 [stat too good] (normal)
> 
> 7%   perf/compiler/parsing001.run  parsing001 [stat too good] (normal)
> 
> 9%   perf/compiler/T12234.run      T12234 [stat too good] (optasm)
> 
> 35%  perf/compiler/T9020.run       T9020 [stat too good] (optasm)
> 
> 9%   perf/compiler/T3064.run       T3064 [stat too good] (normal)
> 
> 13%  perf/compiler/T9961.run       T9961 [stat too good] (normal)
> 
> 20%  perf/compiler/T13056.run      T13056 [stat too good] (optasm)
> 
> 5%   perf/compiler/T9872d.run      T9872d [stat too good] (normal)
> 
> 5%   perf/compiler/T9872c.run      T9872c [stat too good] (normal)
> 
> 5%   perf/compiler/T9872b.run      T9872b [stat too good] (normal)
> 
> 7%   perf/compiler/T9872a.run      T9872a [stat too good] (normal)
> 
> 5%   perf/compiler/T783.run        T783 [stat too good] (normal)
> 
> 35%  perf/compiler/T12227.run      T12227 [stat too good] (normal)
> 
> 20%  perf/compiler/T1969.run       T1969 [stat too good] (normal)
> 
> 5%   perf/should_run/lazy-bs-alloc.run  lazy-bs-alloc [stat too good] (normal)
> 
> 5%   perf/compiler/T12707.run      T12707 [stat too good] (normal)
> 
> 4%   perf/compiler/T3294.run       T3294 [stat too good] (normal)
> 
> 1.5% perf/space_leaks/T4029.run    T4029 [stat too good] (ghci)
> 
> So what is left?  I have sunk so much time into this and am still not QUITE
> out of the woods.   I was left with
> 
> Unexpected failures:
> 
>    codeGen/should_compile/debug.run  debug [bad stdout] (normal)
> 
>    concurrent/should_run/T4030.run   T4030 [bad exit code] (normal)
> 
> I'm re-validating having pulled from HEAD, but I THINK that's all.  Now
> 
> * I don't know how to Phab these individually
> 
> * I have not sweated through which patch is responsible for which
> perf improvements.  Maybe Gipeda can tell?
> 
> * I have not put each error message change with the correct patch.
> I don't know how much that matters.
> 
> So this is to say: anything you guys can do to help get this actually Done
> would be really helpful.  I'm out of time till Monday at least. It would be
> great to collect those performance improvements!
> Thanks!
> Simon


___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Static data and RULES

2017-02-17 Thread David Feuer
I've never used such rules myself, but when I asked Duncan Coutts about 
whether and how such rules were used in the wild, he said

> Well I've certainly tried to use that in the past.
> A previous version of the cbor lib which used a different
> representation did a lot of matching on constructors to re-arrange input to
> an interpreter, until I discovered that GHC
> actually uses constructor wrappers and that matching on constructors was
> thus not reliable.

He described such rules as "a totally legit thing to want to do". If a 
datatype represents an AST, then rewriting its terms can optimize the 
constructed programs. Of course, it's ultimately up to you. I have no dog in 
this race myself; my concern was for other people's code that could break as a 
result. Certainly such code is already fragile when strict constructors are 
involved, but if people have cleverly figured out that lazy constructors are 
more reliable in that regard, they could be using it. I don't know.

David

On Friday, February 17, 2017 8:06:17 AM EST Simon Peyton Jones via ghc-devs 
wrote:
> > {-# RULES   "L" LCon1 0 = LCon2
> 
> Oh, I missed this entirely. You want to write a rule FOR a data
> constructor?  I thought you just meant one that matches on a data
> constructor. That is, you want (LCon1 0) to rewrite, all by itself, to
> LCon2?  That had never occurred to me as a possibility.  Bizarre. Let’s not
> do that.
> 
> · GHC does not (knowingly) support it today
> 
> · It is a deeply weird thing to do
> 
> · If you want to do it, write your own “smart constructor” mkLCon1,
> that inlines when you say
> 
> mkLCon1 x = LCon1 x
> 
> {-# INLINE [0] mkLCon1 #-}
> 
> {-# RULES "L" mkLCon1 x = LCon2 #-}
> 
> Problem solved.
> Simon
> 
> From: David Feuer [mailto:david.fe...@gmail.com]
> Sent: 17 February 2017 00:30
> To: Simon Peyton Jones <simo...@microsoft.com>
> Cc: ghc-devs <ghc-devs@haskell.org>; Reid Barton <rwbar...@gmail.com>; Ben
> Gamari <bgam...@gmail.com>
> Subject: RE: Static data and RULES
> 
> Let me give an example. Suppose we have
> 
> data L = LCon1 Int | LCon2
> data S = SCon !Int
> 
> {-# RULES
> "L" LCon1 0 = LCon2
> "S" forall x . f (SCon x) = g x
>  #-}
> 
> The immediate problem today is with "S". The SCon wrapper could very well
> inline before the rule has a chance to fire. We'd like to be able to phase
> that inline to give it a chance.
 
> The "L" rule becomes problematic when we try to identify static data the
> simplifier shouldn't have to try to optimize. If it identifies LCon 0 as
> static, the "L" rule will never fire.
 
> On Feb 16, 2017 7:08 PM, "David Feuer" <david.fe...@gmail.com> wrote:
> 
> Semantically,
> the proposed scheme is very nearly equivalent to breaking *every* data
> constructor into a worker and a wrapper, and allowing INLINE and NOINLINE
> pragmas on the wrappers. That would allow terms built only from constructor
> workers and literals to be identified as they're constructed in any stage
> and left alone by the simplifier. It would also allow people using RULES
> that match on constructors to make those work reliably, by making sure the
> bindings they match on don't inline away or get marked static too early. Of
> course, we don't actually need to add more worker/wrapper pairs to do this;
> we can fake that. 
> On Feb 16, 2017 6:53 PM, "Simon Peyton Jones" <simo...@microsoft.com>
> wrote:
> 
> I’m sorry I
> still don’t understand the problem.  Can you give an example?  It all works
> fine today; what will change in the proposed new scheme.  Indeed what IS
> the proposed new scheme? 
> I’m lost
> 
> Simon
> 
> From: David Feuer [mailto:david.fe...@gmail.com]
> Sent: 16 February 2017 23:51
> To: Simon Peyton Jones <simo...@microsoft.com>
> Cc: ghc-devs <ghc-devs@haskell.org>; Reid Barton <rwbar...@gmail.com>;
> Ben Gamari <bgam...@gmail.com>
> Subject: RE: Static data and
> RULES
> 
> Sorry; guess I should have given more background on that. This goes back to
> the performance problems Ben encountered in Typeable. The goal is to avoid
> trying to optimize something over and over that's never ever going to
> change. If we know that a term is made only of static data, we can skip it
> altogether in simplification. Suppose we have
 
> foo = Just (Right [1])
> 
> Then no amount of optimization will ever be useful.

Re: Static data and RULES

2017-02-16 Thread David Feuer
On Friday, February 17, 2017 12:33:12 AM EST Simon Peyton Jones via ghc-devs 
wrote:
> The "L" rule becomes problematic when we try to identify static data the
> simplifier shouldn't have to try to optimize. If it identifies LCon 0 as
> static, the "L" rule will never fire.
 
> Why doesn’t it fire?
> 
> I’m afraid I still do not understand what change is proposed, so I’m finding
> it difficult to see how to fix problems with it.

I'm sorry; I wasn't trying to be obtuse; it's easy to drop context by
mistake. The idea, at least roughly, is to have a "static" flag on each term.
A term is considered static if it's

1. A Core literal,
2. A nullary constructor, or
3. A constructor whose arguments are all static.

Once a term is flagged static, the simplifier simply shouldn't try to
optimize it; doing so is a waste of time.
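
As a toy rendering of that test (over a miniature expression type, not GHC's
actual CoreExpr):

-- Illustrative only: staticness for a tiny expression language.
data Expr = Lit Int | Con String [Expr] | App Expr Expr

isStatic :: Expr -> Bool
isStatic (Lit _)    = True
isStatic (Con _ as) = all isStatic as  -- nullary case falls out: all p [] = True
isStatic _          = False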

The trouble is that rules like "L" can turn things that *look* utterly static 
into other things, through simplification that we then actually need! So we 
need to either try to figure out what's *really* static (which is complicated 
by orphan RULES) or we need to let users say so. I reached for phased INLINE
and NOINLINE pragmas because users are already accustomed to using those to
say "I'm going to match on this with rules". It also struck me as a good way
to deal with the "S" rule that you've apparently found some other way
around.

David
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


RE: Static data and RULES

2017-02-16 Thread David Feuer
Let me give an example. Suppose we have

data L = LCon1 Int | LCon2
data S = SCon !Int

{-# RULES
"L" LCon1 0 = LCon2
"S" forall x . f (SCon x) = g x
 #-}

The immediate problem today is with "S". The SCon wrapper could very well
inline before the rule has a chance to fire. We'd like to be able to phase
that inline to give it a chance.

The "L" rule becomes problematic when we try to identify static data the
simplifier shouldn't have to try to optimize. If it identifies LCon 0 as
static, the "L" rule will never fire.

On Feb 16, 2017 7:08 PM, "David Feuer" <david.fe...@gmail.com> wrote:

> Semantically, the proposed scheme is very nearly equivalent to breaking
> *every* data constructor into a worker and a wrapper, and allowing INLINE
> and NOINLINE pragmas on the wrappers. That would allow terms built only
> from constructor workers and literals to be identified as they're
> constructed in any stage and left alone by the simplifier. It would also
> allow people using RULES that match on constructors to make those work
> reliably, by making sure the bindings they match on don't inline away or
> get marked static too early. Of course, we don't actually need to add more
> worker/wrapper pairs to do this; we can fake that.
>
> On Feb 16, 2017 6:53 PM, "Simon Peyton Jones" <simo...@microsoft.com>
> wrote:
>
>> I’m sorry I still don’t understand the problem.  Can you give an
>> example?  It all works fine today; what will change in the proposed new
>> scheme.  Indeed what IS the proposed new scheme?
>>
>>
>>
>> I’m lost
>>
>>
>>
>> Simon
>>
>>
>>
>> *From:* David Feuer [mailto:david.fe...@gmail.com]
>> *Sent:* 16 February 2017 23:51
>> *To:* Simon Peyton Jones <simo...@microsoft.com>
>> *Cc:* ghc-devs <ghc-devs@haskell.org>; Reid Barton <rwbar...@gmail.com>;
>> Ben Gamari <bgam...@gmail.com>
>> *Subject:* RE: Static data and RULES
>>
>>
>>
>> Sorry; guess I should have given more background on that. This goes back
>> to the performance problems Ben encountered in Typeable. The goal is to
>> avoid trying to optimize something over and over that's never ever going to
>> change. If we know that a term is made only of static data, we can skip it
>> altogether in simplification. Suppose we have
>>
>>
>>
>> foo = Just (Right [1])
>>
>>
>>
>> Then no amount of optimization will ever be useful. But what about RULES?
>> If the outermost pattern in a rule matches on a data constructor, then it's
>> not static anymore! We may be replacing it with something else. So we need
>> a finer mechanism. We *also* need a finer mechanism for strict constructors
>> in general. We need to avoid inlining those too early if they're mentioned
>> in any position in RULES. Trying to make this work right automagically
>> looks a bit tricky in the face of orphan rules and such.
>>
>>
>>
>> On Feb 16, 2017 6:35 PM, "Simon Peyton Jones" <simo...@microsoft.com>
>> wrote:
>>
>> I don’t understand any of this.
>>
>>
>>
>> However, RULES are allowed to match on data constructors and it would be
>> nice to let that keep happening.
>>
>>
>>
>> Why won’t it keep happening?  What is the problem you are trying to
>> solve?  Why does the fast-path make it harder?
>>
>>
>>
>> Maybe open a ticket?
>>
>>
>>
>> Simon
>>
>>
>>
>> *From:* ghc-devs [mailto:ghc-devs-boun...@haskell.org] *On Behalf Of *David
>> Feuer
>> *Sent:* 16 February 2017 22:13
>> *To:* Ben Gamari <bgam...@gmail.com>; Reid Barton <rwbar...@gmail.com>
>> *Cc:* ghc-devs <ghc-devs@haskell.org>
>> *Subject:* Static data and RULES
>>
>>
>>
>> Ben Gamari and Reid Barton are interested in making it cheaper for static
>> data to pass through simplification. The basic idea is that if a term is
>> already made entirely of data constructors and literals, then there's
>> nothing left to optimize.
>>
>>
>>
>> However, RULES are allowed to match on data constructors and it would be
>> nice to let that keep happening. But on the other hand, RULES are
>> apparently (according to Duncan Coutts) already broken for strict data
>> constructors, because they have workers and wrappers.
>>
>>
>>
>> My thought: let's allow phased INLINE and NOINLINE pragmas for data
>> constructors. The default would be INLINE. The ~ phase choice would not be
>> available: once inline, always inline.

RE: Static data and RULES

2017-02-16 Thread David Feuer
Semantically, the proposed scheme is very nearly equivalent to breaking
*every* data constructor into a worker and a wrapper, and allowing INLINE
and NOINLINE pragmas on the wrappers. That would allow terms built only
from constructor workers and literals to be identified as they're
constructed in any stage and left alone by the simplifier. It would also
allow people using RULES that match on constructors to make those work
reliably, by making sure the bindings they match on don't inline away or
get marked static too early. Of course, we don't actually need to add more
worker/wrapper pairs to do this; we can fake that.

On Feb 16, 2017 6:53 PM, "Simon Peyton Jones" <simo...@microsoft.com> wrote:

> I’m sorry I still don’t understand the problem.  Can you give an example?
> It all works fine today; what will change in the proposed new scheme.
> Indeed what IS the proposed new scheme?
>
>
>
> I’m lost
>
>
>
> Simon
>
>
>
> *From:* David Feuer [mailto:david.fe...@gmail.com]
> *Sent:* 16 February 2017 23:51
> *To:* Simon Peyton Jones <simo...@microsoft.com>
> *Cc:* ghc-devs <ghc-devs@haskell.org>; Reid Barton <rwbar...@gmail.com>;
> Ben Gamari <bgam...@gmail.com>
> *Subject:* RE: Static data and RULES
>
>
>
> Sorry; guess I should have given more background on that. This goes back
> to the performance problems Ben encountered in Typeable. The goal is to
> avoid trying to optimize something over and over that's never ever going to
> change. If we know that a term is made only of static data, we can skip it
> altogether in simplification. Suppose we have
>
>
>
> foo = Just (Right [1])
>
>
>
> Then no amount of optimization will ever be useful. But what about RULES?
> If the outermost pattern in a rule matches on a data constructor, then it's
> not static anymore! We may be replacing it with something else. So we need
> a finer mechanism. We *also* need a finer mechanism for strict constructors
> in general. We need to avoid inlining those too early if they're mentioned
> in any position in RULES. Trying to make this work right automagically
> looks a bit tricky in the face of orphan rules and such.
>
>
>
> On Feb 16, 2017 6:35 PM, "Simon Peyton Jones" <simo...@microsoft.com>
> wrote:
>
> I don’t understand any of this.
>
>
>
> However, RULES are allowed to match on data constructors and it would be
> nice to let that keep happening.
>
>
>
> Why won’t it keep happening?  What is the problem you are trying to
> solve?  Why does the fast-path make it harder?
>
>
>
> Maybe open a ticket?
>
>
>
> Simon
>
>
>
> *From:* ghc-devs [mailto:ghc-devs-boun...@haskell.org] *On Behalf Of *David
> Feuer
> *Sent:* 16 February 2017 22:13
> *To:* Ben Gamari <bgam...@gmail.com>; Reid Barton <rwbar...@gmail.com>
> *Cc:* ghc-devs <ghc-devs@haskell.org>
> *Subject:* Static data and RULES
>
>
>
> Ben Gamari and Reid Barton are interested in making it cheaper for static
> data to pass through simplification. The basic idea is that if a term is
> already made entirely of data constructors and literals, then there's
> nothing left to optimize.
>
>
>
> However, RULES are allowed to match on data constructors and it would be
> nice to let that keep happening. But on the other hand, RULES are
> apparently (according to Duncan Coutts) already broken for strict data
> constructors, because they have workers and wrappers.
>
>
>
> My thought: let's allow phased INLINE and NOINLINE pragmas for data
> constructors. The default would be INLINE. The ~ phase choice would not be
> available: once inline, always inline.
>
>
>
> Semantics
>
> ~~
>
>
>
> For all constructors:
>
>
>
> If a constructor is allowed by pragmas to inline in a certain phase, then
> in that phase terms built from it can be considered static. Once static,
> always static.
>
>
>
> If a constructor is not allowed to inline in a certain phase, terms built
> from it will be considered non-static.
>
>
>
> After demand analysis and worker/wrapper, all constructors are considered
> inline.
>
>
>
> For strict constructors:
>
>
>
> A strict constructor wrapper prohibited from inlining in a certain phase
> simply will not.
>
>
>
> Strict constructor wrappers will all be allowed to inline after demand
> analysis and worker/wrapper. This matches the way we now handle wrappers
> actually created in that phase.
>
>
>
> Syntax:
>
>
>
> For GADT syntax, this is easy:
>
>
>
> data Foo ... where
>
>   {-# INLINE [1] Bar #-}
>
>   Bar :: ...
>
>
>
> For traditional syntax, I think it's probably best to pull the pragmas to
> the top:
>
>
>
> {-# NOINLINE Quux #-}
>
> data Baz ... = Quux ... | ...
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


RE: Static data and RULES

2017-02-16 Thread David Feuer
Sorry; guess I should have given more background on that. This goes back to
the performance problems Ben encountered in Typeable. The goal is to avoid
trying to optimize something over and over that's never ever going to
change. If we know that a term is made only of static data, we can skip it
altogether in simplification. Suppose we have

foo = Just (Right [1])

Then no amount of optimization will ever be useful. But what about RULES?
If the outermost pattern in a rule matches on a data constructor, then it's
not static anymore! We may be replacing it with something else. So we need
a finer mechanism. We *also* need a finer mechanism for strict constructors
in general. We need to avoid inlining those too early if they're mentioned
in any position in RULES. Trying to make this work right automagically
looks a bit tricky in the face of orphan rules and such.

On Feb 16, 2017 6:35 PM, "Simon Peyton Jones" <simo...@microsoft.com> wrote:

> I don’t understand any of this.
>
>
>
> However, RULES are allowed to match on data constructors and it would be
> nice to let that keep happening.
>
>
>
> Why won’t it keep happening?  What is the problem you are trying to
> solve?  Why does the fast-path make it harder?
>
>
>
> Maybe open a ticket?
>
>
>
> Simon
>
>
>
> *From:* ghc-devs [mailto:ghc-devs-boun...@haskell.org] *On Behalf Of *David
> Feuer
> *Sent:* 16 February 2017 22:13
> *To:* Ben Gamari <bgam...@gmail.com>; Reid Barton <rwbar...@gmail.com>
> *Cc:* ghc-devs <ghc-devs@haskell.org>
> *Subject:* Static data and RULES
>
>
>
> Ben Gamari and Reid Barton are interested in making it cheaper for static
> data to pass through simplification. The basic idea is that if a term is
> already made entirely of data constructors and literals, then there's
> nothing left to optimize.
>
>
>
> However, RULES are allowed to match on data constructors and it would be
> nice to let that keep happening. But on the other hand, RULES are
> apparently (according to Duncan Coutts) already broken for strict data
> constructors, because they have workers and wrappers.
>
>
>
> My thought: let's allow phased INLINE and NOINLINE pragmas for data
> constructors. The default would be INLINE. The ~ phase choice would not be
> available: once inline, always inline.
>
>
>
> Semantics
>
> ~~
>
>
>
> For all constructors:
>
>
>
> If a constructor is allowed by pragmas to inline in a certain phase, then
> in that phase terms built from it can be considered static. Once static,
> always static.
>
>
>
> If a constructor is not allowed to inline in a certain phase, terms built
> from it will be considered non-static.
>
>
>
> After demand analysis and worker/wrapper, all constructors are considered
> inline.
>
>
>
> For strict constructors:
>
>
>
> A strict constructor wrapper prohibited from inlining in a certain phase
> simply will not.
>
>
>
> Strict constructor wrappers will all be allowed to inline after demand
> analysis and worker/wrapper. This matches the way we now handle wrappers
> actually created in that phase.
>
>
>
> Syntax:
>
>
>
> For GADT syntax, this is easy:
>
>
>
> data Foo ... where
>
>   {-# INLINE [1] Bar #-}
>
>   Bar :: ...
>
>
>
> For traditional syntax, I think it's probably best to pull the pragmas to
> the top:
>
>
>
> {-# NOINLINE Quux #-}
>
> data Baz ... = Quux ... | ...
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Static data and RULES

2017-02-16 Thread David Feuer
The vast majority would have no pragma, or at least wouldn't have an unphased 
NOINLINE pragma, so they would typically inline before demand analysis anyway.


David Feuer
Well-Typed, LLP
 Original message 
From: Joachim Breitner <m...@joachim-breitner.de>
Date: 2/16/17 5:20 PM (GMT-05:00)
To: ghc-devs@haskell.org
Subject: Re: Static data and RULES
Hi,

Am Donnerstag, den 16.02.2017, 17:12 -0500 schrieb David Feuer:
> Strict constructor wrappers will all be allowed to inline after
> demand analysis and worker/wrapper. This matches the way we now
> handle wrappers actually created in that phase.

I am worried that DmdAnal will be less effective if the code it sees does
not have the wrappers of strict constructors already inlined. It may be
that the strictness signature of the wrapper is sufficient to make up
for this, but I am not sure.

Greetings,
Joachim

-- 
Joachim “nomeata” Breitner
  m...@joachim-breitner.de • https://www.joachim-breitner.de/
  XMPP: nome...@joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F
  Debian Developer: nome...@debian.org
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Static data and RULES

2017-02-16 Thread David Feuer
Ben Gamari and Reid Barton are interested in making it cheaper for static
data to pass through simplification. The basic idea is that if a term is
already made entirely of data constructors and literals, then there's
nothing left to optimize.

However, RULES are allowed to match on data constructors and it would be
nice to let that keep happening. But on the other hand, RULES are
apparently (according to Duncan Coutts) already broken for strict data
constructors, because they have workers and wrappers.

My thought: let's allow phased INLINE and NOINLINE pragmas for data
constructors. The default would be INLINE. The ~ phase choice would not be
available: once inline, always inline.

Semantics
~~

For all constructors:

If a constructor is allowed by pragmas to inline in a certain phase, then
in that phase terms built from it can be considered static. Once static,
always static.

If a constructor is not allowed to inline in a certain phase, terms built
from it will be considered non-static.

After demand analysis and worker/wrapper, all constructors are considered
inline.

For strict constructors:

A strict constructor wrapper prohibited from inlining in a certain phase
simply will not.

Strict constructor wrappers will all be allowed to inline after demand
analysis and worker/wrapper. This matches the way we now handle wrappers
actually created in that phase.

Syntax:

For GADT syntax, this is easy:

data Foo ... where
  {-# INLINE [1] Bar #-}
  Bar :: ...

For traditional syntax, I think it's probably best to pull the pragmas to
the top:

{-# NOINLINE Quux #-}
data Baz ... = Quux ... | ...
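
To connect this back to the "S" rule from the follow-up messages in this
thread, usage would look something like this (the pragma on SCon is the
*proposed* syntax, not something GHC accepts today; f and g are stand-ins):

data S = SCon !Int
{-# NOINLINE [1] SCon #-}  -- proposed: wrapper may not inline before phase 1

f :: S -> Int
f (SCon x) = x + 1
{-# NOINLINE [1] f #-}

g :: Int -> Int
g x = x + 1

{-# RULES
"S" forall x . f (SCon x) = g x  -- now guaranteed a window in which to fire
 #-}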
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


eagerlyBlackhole# efficiency thought

2017-02-02 Thread David Feuer
We discussed trying to come up with a primitive

eagerlyBlackhole# :: a -> a
-- meaning
eagerlyBlackhole a = runRW# $ \s ->
  case noDuplicate# s of _ -> a

that would guarantee that the thunk is entered by only one thread. There
are important situations where that check is redundant. Consider this code:

(m >>= \a -> strictToLazyST (f a)) >>= o

The implementation of >>= needs to wrap up the execution of its first
argument in eagerlyBlackhole#. In this case, however, there is no risk that
two threads will execute m, because it's forced within the same outer
eagerlyBlackhole in which it was created. Is there a cheap way to detect
this situation at run-time,  when executing m, to avoid the synchronization
delay? This probably doesn't arise too much for unsafePerformIO, but in
lazy ST there may be a lot of nested no-duplicate thunks.

I looked into trying to fix things up with RULES for *> and such, but I ran
into trouble with eta expansion and in any case there are limits to what
they can do to help >>=, and the whole thing is rather complicated.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Lazy ST vs concurrency

2017-01-31 Thread David Feuer
I think Ben's eagerlyBlackhole is what I called noDup. And it is indeed a bit
tricky to use correctly. It needs the same sort of care around inlining that
unsafePerformIO does. A bit of summary:

1. It's okay to duplicate a runST thunk or to enter it twice, because each
copy will have its own set of references and arrays. This is important,
because we have absolutely no control over what the user will do with such a
thunk.

2. With the exception of a runST thunk, we must never duplicate or
double-enter a thunk if it performs or suspends ST work.

Based on my tests and the intuition I've developed about what's going on
here, (2) breaks down into two pieces:

2a. Any time we perform or suspend ST work, we must use NOINLINE to avoid
duplication.

2b. Any time we suspend ST work, we must set up the thunk involved with
noDuplicate# or similar.

For example, the code I wrote yesterday for the Applicative instance looks
like this:

fm <*> xm = ST $ \ s ->
  let
    {-# NOINLINE res1 #-}
    !res1 = unST fm s
    !(f, s') = res1

    {-# NOINLINE res2 #-}
    res2 = noDup (unST xm s')
    (x, s'') = res2
  in (f x, s'')

I NOINLINE res1. If it were to inline (could that happen?), we'd get

  let
    res2 = noDup (unST xm (snd (unST fm s)))
    (x, s'') = res2
  in (fst (unST fm s) x, s'')

and that would run the fm computation twice. But I don't noDup res1, because
we force it immediately on creation; no one else ever handles it.

I NOINLINE res2 for a similar reason, but I also use noDup on it. The res2
thunk escapes into the wild via x and s'' in the result; we need to make sure
that it is not entered twice.

I believe we can use a few rewrite rules to reduce costs substantially in
some situations. I will add those to the differential.

 Original message 
From: Simon Marlow <marlo...@gmail.com> 
Date: 1/31/17 3:59 AM (GMT-05:00) 
To: Simon Peyton Jones <simo...@microsoft.com> 
Cc: David Feuer <da...@well-typed.com>, ghc-devs@haskell.org 
Subject: Re: Lazy ST vs concurrency 


On 30 January 2017 at 22:56, Simon Peyton Jones <simo...@microsoft.com[1]> 
wrote:


We don’t want to do this on a per-module basis do we, as
-fatomic-eager-blackholing would suggest.  Rather, on a per-thunk basis, no?
Which thunks, precisely?  I think perhaps *precisely thunks one of whose free
variables has type (State# s) for some s.*  These are thunks that consume a
state token, and must do so no more than once.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Constant functions and selectors make for interesting arguments

2017-01-30 Thread David Feuer
Yes, I clearly haven't thought this through enough. For the purpose of
Functor deriving, it looks like we can just derive better. I've been
discussing it with Ryan and Reid, and it looks like we can get that
taken care of.

David

On Mon, Jan 30, 2017 at 5:50 PM, Simon Peyton Jones
<simo...@microsoft.com> wrote:
> What code would you like to get?
>
> I think you are talking about specialising a recursive function 
> ($fFunctorTree_$cfmap in this case) for a particular value of its function 
> argument.  That's a bit like SpecConstr (SpecFun perhaps) and nothing at all 
> like inlining.
>
> Unless I'm missing something.
>
> I think there's a ticket somewhere about extending SpecConstr to work on
> function arguments, but it's tricky to do.
>
> Simon
>
> | -Original Message-
> | From: David Feuer [mailto:da...@well-typed.com]
> | Sent: 30 January 2017 21:42
> | To: ghc-devs@haskell.org; Simon Peyton Jones <simo...@microsoft.com>
> | Cc: David Feuer <david.fe...@gmail.com>
> | Subject: Re: Constant functions and selectors make for interesting
> | arguments
> |
> | Here's an example:
> |
> | data Tree a = Bin (Tree a) a (Tree a) | Tip deriving Functor
> |
> | {-# NOINLINE replace #-}
> | replace :: a -> Tree b -> Tree a
> | replace x t = x <$ t
> |
> | When I compile this with -O2, I get
> |
> | Rec {
> | -- RHS size: {terms: 18, types: 21, coercions: 0}
> | $fFunctorTree_$cfmap
> |   :: forall a_ar2 b_ar3. (a_ar2 -> b_ar3) -> Tree a_ar2 -> Tree b_ar3
> | $fFunctorTree_$cfmap =
> |   \ (@ a_aGb)
> | (@ b_aGc)
> | (f_aFH :: a_aGb -> b_aGc)
> | (ds_dGN :: Tree a_aGb) ->
> | case ds_dGN of _ {
> |   Bin a1_aFI a2_aFJ a3_aFK ->
> | Bin
> |   ($fFunctorTree_$cfmap f_aFH a1_aFI)
> |   (f_aFH a2_aFJ)
> |   ($fFunctorTree_$cfmap f_aFH a3_aFK);
> |   Tip -> Tip
> | }
> | end Rec }
> |
> | $fFunctorTree_$c<$
> |   :: forall a_ar4 b_ar5. a_ar4 -> Tree b_ar5 -> Tree a_ar4
> | $fFunctorTree_$c<$ =
> |   \ (@ a_aGQ) (@ b_aGR) (eta_aGS :: a_aGQ) (eta1_B1 :: Tree b_aGR) ->
> | $fFunctorTree_$cfmap (\ _ -> eta_aGS) eta1_B1
> |
> | replace :: forall a_aqt b_aqu. a_aqt -> Tree b_aqu -> Tree a_aqt
> | replace = $fFunctorTree_$c<$
> |
> | This is no good at all, because replacing the values in the same tree
> | over and over will build up a giant chain of thunks in each node carrying
> | all the previous values. I suppose that inlining per se may not be quite
> | enough to fix this problem, but I suspect there's some way to fix it.
> | Fixing it in Functor deriving would be a start (I can look into that),
> | but fixing it in user code would be quite good too.
> |
> | On Monday, January 30, 2017 9:01:34 PM EST Simon Peyton Jones via ghc-
> | devs
> | wrote:
> | > Functions whose body is no bigger (by the inliner’s metrics) than the
> | > call are always inlined vigorously.  So (\ _ ... _ -> k) replaces a
> | > call by a single variable.  GHC will do that a lot.
> |
> | > These ideas are best backed by use-cases where something good is not
> | > happening.   Do you have some?
> |
> | > Simon
> | >
> | > From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of
> | > David Feuer
> |  Sent: 27 January 2017 16:42
> | > To: ghc-devs <ghc-devs@haskell.org>
> | > Subject: Constant functions and selectors make for interesting
> | > arguments
> | >
> | > GHC's inliner has a notion of "interesting argument" it uses to
> | > encourage inlining of functions called with (I think) dictionary
> | > arguments. I think another class of argument is very interesting, by
> | > being very boring. Any argument that looks like either
> |
> | > \ _ ... (Con _ ... x ... _ ) ... _ -> coerce x
> | >
> | > or
> | >
> | > \ _ ... _ -> k
> | >
> | > Has a pretty good chance of doing a lot of good when inlined, perhaps
> | > plugging a space leak. Would it make sense to try to identify such
> | > functions and consider them interesting for inlining?
> |
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Lazy ST vs concurrency

2017-01-30 Thread David Feuer
On Monday, January 30, 2017 9:50:56 PM EST Simon Marlow wrote:

> Unfortunately the mechanisms we have right now to fix it aren't ideal -
> noDuplicate# is a bigger hammer than we need.

Do you think you could explain this a bit more? What aspect of noDuplicate#
is overkill? What does it guard against that can't happen here?

> All we really need is some
> way to make a thunk atomic, it would require some special entry code to the
> thunk which did atomic eager-blackholing.  Hmm, now that I think about it,
> perhaps we could just have a flag, -fatomic-eager-blackholing.

If it's possible to use a primop to do this "locally", I think it would be 
very nice to get that as well as a global flag. If it affects code generation 
in an inherently global fashion, then of course we'll just have to live with 
that, and lots of NOINLINE.

David
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Constant functions and selectors make for interesting arguments

2017-01-30 Thread David Feuer
Here's an example:

data Tree a = Bin (Tree a) a (Tree a) | Tip deriving Functor

{-# NOINLINE replace #-}
replace :: a -> Tree b -> Tree a
replace x t = x <$ t

When I compile this with -O2, I get

Rec {
-- RHS size: {terms: 18, types: 21, coercions: 0}
$fFunctorTree_$cfmap
  :: forall a_ar2 b_ar3. (a_ar2 -> b_ar3) -> Tree a_ar2 -> Tree b_ar3
$fFunctorTree_$cfmap =
  \ (@ a_aGb)
(@ b_aGc)
(f_aFH :: a_aGb -> b_aGc)
(ds_dGN :: Tree a_aGb) ->
case ds_dGN of _ {
  Bin a1_aFI a2_aFJ a3_aFK ->
Bin
  ($fFunctorTree_$cfmap f_aFH a1_aFI)
  (f_aFH a2_aFJ)
  ($fFunctorTree_$cfmap f_aFH a3_aFK);
  Tip -> Tip
}
end Rec }

$fFunctorTree_$c<$
  :: forall a_ar4 b_ar5. a_ar4 -> Tree b_ar5 -> Tree a_ar4
$fFunctorTree_$c<$ =
  \ (@ a_aGQ) (@ b_aGR) (eta_aGS :: a_aGQ) (eta1_B1 :: Tree b_aGR) ->
$fFunctorTree_$cfmap (\ _ -> eta_aGS) eta1_B1

replace :: forall a_aqt b_aqu. a_aqt -> Tree b_aqu -> Tree a_aqt
replace = $fFunctorTree_$c<$

This is no good at all, because replacing the values in the same tree over and 
over will build up a giant chain of thunks in each node carrying all the 
previous values. I suppose that inlining per se may not be quite enough to fix 
this problem, but I suspect there's some way to fix it. Fixing it in Functor 
deriving would be a start (I can look into that), but fixing it in user code 
would be quite good too.
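
A small driver (hypothetical, just to exercise the pattern; it reuses Tree
and replace from above) makes the buildup visible:

import Data.List (foldl')

sumTree :: Tree Int -> Int
sumTree Tip         = 0
sumTree (Bin l x r) = sumTree l + x + sumTree r

main :: IO ()
main = do
  -- foldl' forces only the outermost Bin at each step; the element and
  -- subtree fields pile up as nested $cfmap/(\ _ -> x) thunks retaining
  -- every intermediate tree until the final one is consumed.
  let t0    = Bin Tip 0 Tip
      leaky = foldl' (\t x -> replace x t) t0 [1 .. 100000 :: Int]
  print (sumTree leaky)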

On Monday, January 30, 2017 9:01:34 PM EST Simon Peyton Jones via ghc-devs 
wrote:
> Functions whose body is no bigger (by the inliner’s metrics) than the call
> are always inlined vigorously.  So (\ _ ... _ -> k) replaces a call by a
> single variable.  GHC will do that a lot.
 
> These ideas are best backed by use-cases where something good is not
> happening.   Do you have some?
 
> Simon
> 
> From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of David
> Feuer
 Sent: 27 January 2017 16:42
> To: ghc-devs <ghc-devs@haskell.org>
> Subject: Constant functions and selectors make for interesting arguments
> 
> GHC's inliner has a notion of "interesting argument" it uses to encourage
> inlining of functions called with (I think) dictionary arguments. I think
> another class of argument is very interesting, by being very boring. Any
> argument that looks like either
 
> \ _ ... (Con _ ... x ... _ ) ... _ -> coerce x
> 
> or
> 
> \ _ ... _ -> k
> 
> Has a pretty good chance of doing a lot of good when inlined, perhaps
> plugging a space leak. Would it make sense to try to identify such
> functions and consider them interesting for inlining?


___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Lazy ST vs concurrency

2017-01-30 Thread David Feuer
On Monday, January 30, 2017 1:50:29 PM EST Reid Barton wrote:
> I wrote a lazy ST microbenchmark (http://lpaste.net/351799) that uses
> nothing but lazy ST monad operations in the inner loop.

This benchmark doesn't really look like code I'd expect people to use in 
practice. Normally, they're using lazy ST because they actually need to use 
STRefs or STArrays in the loop! I suspect your test case is likely about the 
worst possible slowdown for the fix. And a 3x slowdown in lazy ST doesn't
necessarily translate to a 3x slowdown in an application using it.

David
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Lazy ST vs concurrency

2017-01-30 Thread David Feuer
I forgot to CC ghc-devs the first time, so here's another copy.

I was working on #11760 this weekend, which has to do with concurrency 
breaking lazy ST. I came up with what I thought was a pretty decent solution ( 
https://phabricator.haskell.org/D3038 ). Simon Peyton Jones, however, is quite 
unhappy about the idea of sticking this weird unsafePerformIO-like code 
(noDup, which I originally implemented as (unsafePerformIO . evaluate), but 
which he finds ugly regardless of the details) into fmap and (>>=).  He's also 
concerned that the noDuplicate# applications will kill performance in the 
multi-threaded case, and suggests he would rather leave lazy ST broken, or 
even remove it altogether, than use a fix that will make it slow sometimes, 
particularly since there haven't been a lot of reports of problems in the 
wild.

My view is that leaving it broken, even if it only causes trouble 
occasionally, is simply not an option. If users can't rely on it to always 
give correct answers, then it's effectively useless. And for the sake of 
backwards compatibility, I think it's a lot better to keep it around, even if 
it runs slowly multithreaded, than to remove it altogether.

Note to Simon PJ: Yes, it's ugly to stick that noDup in there. But lazy ST has 
always been a bit of deep magic. You can't *really* carry a moment of time 
around in your pocket and make its history happen only if necessary. We can 
make it work in GHC because its execution model is entirely based around graph 
reduction, so evaluation is capable of driving execution. Whereas lazy IO is 
extremely tricky because it causes effects observable in the real world, lazy 
ST is only *moderately* tricky, causing effects that we have to make sure 
don't lead to weird interactions between threads. I don't think it's terribly 
surprising that it needs to do a few more weird things to work properly.

David
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Constant functions and selectors make for interesting arguments

2017-01-27 Thread David Feuer
GHC's inliner has a notion of "interesting argument" it uses to encourage
inlining of functions called with (I think) dictionary arguments. I think
another class of argument is very interesting, by being very boring. Any
argument that looks like either

\ _ ... (Con _ ... x ... _ ) ... _ -> coerce x

or

\ _ ... _ -> k

Has a pretty good chance of doing a lot of good when inlined, perhaps
plugging a space leak. Would it make sense to try to identify such
functions and consider them interesting for inlining?
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


RE: Floating lazy primops

2017-01-24 Thread David Feuer
I've opened #13182 to explore one possible approach to dataToTag# that strikes 
me as likely to be simpler and to have fewer potential gotchas. But there could 
be critical points I'm missing about why we do it as we do now.
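
For background, a small usage example of the primop in question; dataToTag#
reads the constructor tag out of an evaluated closure, which is why someone
has to guarantee the eval in the first place:

{-# LANGUAGE MagicHash #-}
import GHC.Exts (Int (I#), dataToTag#)

data Color = Red | Green | Blue

-- dataToTag# requires an evaluated argument; today that is arranged by
-- the can_fail marking plus the CorePrep special case discussed here.
tagOf :: Color -> Int
tagOf c = I# (dataToTag# c)

main :: IO ()
main = print (map tagOf [Red, Green, Blue])  -- [0,1,2]
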
 Original message 
From: Simon Peyton Jones <simo...@microsoft.com>
Date: 1/23/17 4:47 PM (GMT-05:00)
To: David Feuer <da...@well-typed.com>
Cc: ghc-devs@haskell.org
Subject: RE: Floating lazy primops
We should have this conversation on a ticket, perhaps #13027.

| good at the time). Are there actually any primops with lifted arguments
| where we *want* speculation? Perhaps the most important primop to
| consider is seq#, which is ...

I don't understand this question.  comment:23 of #13027 specifically says to
skip the ok-for-spec test for lifted args. So you ask "are there any" whereas
comment:23 says "all".

| 2. If dataToTag# is marked can_fail (an aspect of https://
| phabricator.haskell.org/rGHC5a9a1738023a), is it still possible for it to
| end up being applied to an unevaluated argument? If not, perhaps the
| CorePrep specials can be removed altogether.

That may be true, but it's not easy to GUARANTEE in the way that Lint
guarantees types.  So I'm happier leaving in the CorePrep stuff... but please
do add a comment there that points to the Note in primops.txt.pp and says that 
it seems unlikely this will ever occur.

| One more question: do you think it's *better* to let dataToTag# float and
| then fix it up later, or better to mark it can_fail? Unlike
| reallyUnsafePtrEquality#, dataToTag# is used all over the place, so it is
| important that it interacts as well as possible with the optimizer,
| whatever that entails.

I think better to do as now. That way the simplifier has the opportunity to 
common-up multiple evals into one.  If we add them later that's harder.  

By all means add Notes to explain this.

Thanks!

Simon


| -----Original Message-
| From: David Feuer [mailto:da...@well-typed.com]
| Sent: 18 January 2017 19:45
| To: Simon Peyton Jones <simo...@microsoft.com>
| Cc: ghc-devs@haskell.org
| Subject: Floating lazy primops
| 
| I opened up https://phabricator.haskell.org/D2987 to mark
| reallyUnsafePtrEquality# can_fail, but in the process I realized a couple
| things.
| 
| 1. Part of https://phabricator.haskell.org/rGHC5a9a1738023a may actually
| not have been such a hot idea after all (although it certainly sounded
| good at the time). Are there actually any primops with lifted arguments
| where we *want* speculation? Perhaps the most important primop to
| consider is seq#, which is
| (mysteriously?) marked neither can_fail nor has_side_effects, but another
| to look at is unpackClosure#, which seems likely to give different
| results before and after forcing. Most other primops with lifted
| arguments are marked has_side_effects, and therefore won't be floated out
| anyway.
| 
| 2. If dataToTag# is marked can_fail (an aspect of https://
| phabricator.haskell.org/rGHC5a9a1738023a), is it still possible for it to
| end up being applied to an unevaluated argument? If not, perhaps the
| CorePrep specials can be removed altogether.
| 
| David Feuer

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Magical function to support reflection

2017-01-18 Thread David Feuer
I've updated https://ghc.haskell.org/trac/ghc/wiki/MagicalReflectionSupport to 
reflect both Simon's thoughts on the matter and my own reactions to them. I 
hope you'll give it a peek.

David Feuer
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Magical function to support reflection

2017-01-17 Thread David Feuer
Simon has an idea for making it safer. I suspect it's only properly safe
with the "forall s", but there may be a way to at least make it
specialization-safe (if not conditionally coherence-safe) without that.

On Jan 17, 2017 2:42 PM, "Edward Kmett" <ekm...@gmail.com> wrote:

> That is the paper the reflection library API is based on.
>
> However, doing it the way mentioned in that paper (after modifying it to
> work around changes with the inliner for modern GHC) is about 3 orders of
> magnitude slower. We keep it around in reflection as the 'slow' path for
> portability to non-GHC compilers, and because that variant can make a form
> of Typeable reflection which is needed for some Exception gimmicks folks
> use.
>
> The current approach, and the sort of variant that David is pushing above,
> is basically free, as it costs a single unsafeCoerce. To make the
> reflection library work in a fully type-safe manner would take 1-3
> additional wired-ins that would consist of well-typed Core. The stuff David
> is proposing above would be more general but less safe.
>
> -Edward
>
> On Tue, Jan 17, 2017 at 10:45 AM, Simon Peyton Jones <
> simo...@microsoft.com> wrote:
>
>> David says that this paper is relevant
>>
>> http://okmij.org/ftp/Haskell/tr-15-04.pdf
>>
>>
>>
>> Simon
>>
>>
>>
>> *From:* David Feuer [mailto:david.fe...@gmail.com]
>> *Sent:* 14 January 2017 00:50
>> *To:* Simon Peyton Jones <simo...@microsoft.com>
>> *Cc:* ghc-devs <ghc-devs@haskell.org>; Edward Kmett <ekm...@gmail.com>
>> *Subject:* RE: Magical function to support reflection
>>
>>
>>
>> I need to look through a bit more of this, but explicit type application
>> certainly can be avoided using Tagged. Once we get the necessary magic,
>> libraries will be able to come up with whatever interfaces they like. My
>> main concern about the generality of
>>
>>
>>
>> reify# :: forall r. (RC a => r) -> a -> r
>>
>>
>>
>> (as with the primop type Edward came up with) is that it lacks the
>> `forall s` safety mechanism of the reflection library. Along with its key
>> role in ensuring class coherence[*], that mechanism also makes it clear
>> what specialization is and is not allowed to do with reified values. Again,
>> I'm not sure it can mess up the simpler/more general form you and Edward
>> propose, but it makes me nervous.
>>
>>
>>
>> [*] Coherence: as long as an instance of Reifies S A exists for some
>> concrete S::K, users can't incoherently write a polymorphic Reifies
>> instance for s::K.
>>
>>
>>
>> On Jan 13, 2017 7:33 PM, "Simon Peyton Jones" <simo...@microsoft.com>
>> wrote:
>>
>> David, Edward
>>
>> Here’s my take on this thread about reflection.   I’ll ignore Tagged and
>> the ‘s’ parameter, and the proxy arguments, since they are incidental.
>>
>> I can finally see a reasonable path; I think there’s a potential GHC
>> proposal here.
>>
>> Simon
>>
>>
>>
>> *First thing*: PLEASE let's give a Core rendering of whatever is
>> proposed. If it's expressible in Core that's reassuring.  If it requires an
>> extension to Core, that's a whole different thing.
>>
>>
>>
>> *Second*.  For any *particular* class, I think it's easy to express
>> reify in Core.  Example (in Core):
>>
>> reifyTypeable :: (Typeable a => b) -> TypeRep a -> b
>>
>> reifyTypable k = k |> co
>>
>> where co is a coercion that witnesses
>>
>>   co :: (forall a b. Typeable a => b) ~ forall a b. (TypeRep a -> b)
>>
>>
>>
>> *Third.  *This does not depend, and should not depend, on the fact that
>> single-method classes are represented with a newtype.  E.g. if we changed
>> Typeable to be represented with a data type thus (in Core)
>>
>> data Typeable a = MkTypeable (TypeRep a)
>>
>> using data rather than newtype, then we could still write reifyTypable.
>>
>> reifyTypeable :: (Typeable a => b) -> TypeRep a -> b
>>
>> reifyTypable = /\ab. \(f :: Typeable a => b). \(r :: TypeRep a).
>>
>>f (MkTypeable r)
>>
>> The efficiency of newtype is nice, but it’s not essential.
>>
>>
>>
>> *Fourth*.   As you point out, reify# is far too polymorphic. *Clearly
>> you need reify# to be a class method!*  Something like this
>>
>> class Reifiable a where
>>
>>   type RC a :: Constraint  -- Short for Reified Constraint
>>
>>   reify# :: forall r. (RC a => r) -> a -> r

RE: Magical function to support reflection

2017-01-13 Thread David Feuer
I need to look through a bit more of this, but explicit type application
certainly can be avoided using Tagged. Once we get the necessary magic,
libraries will be able to come up with whatever interfaces they like. My
main concern about the generality of

reify# :: forall r. (RC a => r) -> a -> r

(as with the primop type Edward came up with) is that it lacks the `forall
s` safety mechanism of the reflection library. Along with its key role in
ensuring class coherence[*], that mechanism also makes it clear what
specialization is and is not allowed to do with reified values. Again, I'm
not sure it can mess up the simpler/more general form you and Edward
propose, but it makes me nervous.

[*] Coherence: as long as an instance of Reifies S A exists for some
concrete S::K, users can't incoherently write a polymorphic Reifies
instance for s::K.
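
For reference, the `forall s` mechanism in question, as it appears in the
reflection library's public API (signatures only):

class Reifies s a | s -> a where
  reflect :: proxy s -> a

reify :: a -> (forall s. Reifies s a => Proxy s -> r) -> r

-- The rank-2 forall keeps the fresh s from escaping, which is what
-- rules out writing an incoherent polymorphic Reifies instance and
-- pins down what specialization may assume about s.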

On Jan 13, 2017 7:33 PM, "Simon Peyton Jones" <simo...@microsoft.com> wrote:

David, Edward

Here’s my take on this thread about reflection.   I’ll ignore Tagged and
the ‘s’ parameter, and the proxy arguments, since they are incidental.

I can finally see a reasonable path; I think there’s a potential GHC
proposal here.

Simon



*First thing*: PLEASE let's give a Core rendering of whatever is proposed.
If it's expressible in Core that's reassuring.  If it requires an extension
to Core, that's a whole different thing.



*Second*.  For any *particular* class, I think it's easy to express reify
in Core.  Example (in Core):

reifyTypeable :: (Typeable a => b) -> TypeRep a -> b

reifyTypable k = k |> co

where co is a coercion that witnesses

  co :: (forall a b. Typeable a => b) ~ forall a b. (TypeRep a -> b)



*Third.  *This does not depend, and should not depend, on the fact that
single-method classes are represented with a newtype.  E.g. if we changed
Typeable to be represented with a data type thus (in Core)

data Typeable a = MkTypeable (TypeRep a)

using data rather than newtype, then we could still write reifyTypable.

reifyTypeable :: (Typeable a => b) -> TypeRep a -> b

reifyTypable = /\ab. \(f :: Typeable a => b). \(r :: TypeRep a).

   f (MkTypeable r)

The efficiency of newtype is nice, but it’s not essential.



*Fourth*.   As you point out, reify# is far too polymorphic. *Clearly you
need reify# to be a class method!*  Something like this

class Reifiable a where

  type RC a :: Constraint  -- Short for Reified Constraint

  reify# :: forall r. (RC a => r) -> a -> r

Now (in Core at least) we can make instances

instance Reifiable (TypeRep a) where

  type RC (TypeRep a) = Typeable a

  reify# k = k |> co  -- For a suitable co

Now, we can’t write those instances in Haskell, but we could make the
‘deriving’ mechanism deal with it, thus:

deriving instance Reifiable (Typeable a)

You can supply a ‘where’ part if you like, but if you don’t GHC will fill
in the implementation for you.  It’ll check that Typeable is a
single-method class; produce a suitable implementation (in Core, as above)
for reify#, and a suitable instance for RC. Pretty simple.   Now the solver
can use those instances.
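
For example, a caller could then recover a Typeable computation from an explicit TypeRep (a sketch, assuming the derived instance above; compare withTypeable from the Typeable rework):

withTypeRep :: TypeRep a -> (Typeable a => r) -> r

withTypeRep tr k = reify# k tr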

There are lots of details

· I’ve used a single parameter class and a type function, because
the call site of reify# will provide no information about the ‘c’ in (c =>
r) argument.

· What if some other class has the same method type?  E.g. if
someone wrote

class MyTR a where op :: TypeRep a

would that mess up the use of reify# for Typeable?   Well it would if they
also did

deriving instance Reifiable (MyTR a)

And there really is an ambiguity: what should (reify# k (tr :: TypeRep
Int)) do?  Apply k to a TypeRep or to a MyTR?  So a complaint here would be
entirely legitimate.

· I suppose that another formulation might be to abstract over the
constraint, rather than the method type, and use explicit type application
at calls of reify#.  So

class Reifiable c where

  type RL c :: *

  reify# :: (c => r) -> RL c -> r

Now all calls of reify# would have to look like

reify# @(Typeable Int) k tr

Maybe that’s acceptable.  But it doesn’t seem as nice to me.

· One could use functional dependencies and a 2-parameter type
class, but I don’t think it would change anything much.  If type functions
work, they are more robust than fundeps.

· One could abstract over the type constructor rather than the
type.  I see no advantage and some disadvantages

class Reifiable t where

  type RC t :: * -> Constraint  -- Short for Reified Constraint

  reify# :: forall r. (RC t a => r) -> t a -> r





|  -Original Message-

|  From: ghc-devs [mailto:ghc-devs-boun...@haskell.org
<ghc-devs-boun...@haskell.org>] On Behalf Of David

|  Feuer

|  Sent: 11 December 2016 05:01

|  To: ghc-devs <ghc-devs@haskell.org>; Edward Kmett <ekm...@gmail.com>

|  Subject: Magical function to support reflection

|

|  The following proposal (with

Pattern checker status

2017-01-10 Thread David Feuer
Could you possibly give us an update about the current status of your
pattern checker work? I know the number of tickets may seem a bit
overwhelming; please reach out to the ghc-devs list, or individual
developers, to get whatever help you need. Knocking out some of these
tickets is a priority for 8.2, and the freeze is coming up fast. In my
mind, the top priorities for GHC 8.2 should probably be #10746 and
#12949. We really want to get those squashed. I think #11195 is also a
fairly high priority (if it's still an issue!).

#10746 is a serious correctness issue, and Simon's suggested fix sounds
straightforward. Have you run into trouble?

#12949 is also a serious correctness issue. You're right that desugaring
doesn't happen till after type checking, but I believe the type checker
should have already worked out what it needs to be able to desugar the
overloading. It would be worth asking how you might access that
information.

#11195 looks like a practically important and serious performance
problem. I know you spent some time investigating it months ago; do you
have any more recent progress to report? Do you know if the problem is
still there?

David Feuer
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Can the definition of alwaysSucceeds be streamlined?

2016-12-27 Thread David Feuer
Currently, `GHC.Conc` has

alwaysSucceeds :: STM a -> STM ()
alwaysSucceeds i = do ( i >> retry ) `orElse` ( return () )
                      checkInv i

If I understand what's going on here (which I may not), I think this
should be equivalent to

alwaysSucceeds i = (i >> retry) `orElse` checkInv i

David Feuer
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Magical function to support reflection

2016-12-22 Thread David Feuer
I meant to define reify for the Tagged representation in terms of reify':

reify :: forall a r . a -> (forall (s :: *) . Reifies s a => Proxy s -> r) -> r
reify a f = reify' (unproxy f) a

Further, I figured I'd look into reifyNat, and I came up with this:

reifyNat' :: forall a r . (forall (n :: Nat) . KnownNat n => Tagged n r)
          -> Integer -> r
reifyNat' f = reify# (Constrain (unTagged (f :: Tagged n r))
                        :: forall (n :: Nat) . Constrain (KnownNat n) r)


On Thu, Dec 22, 2016 at 6:55 PM, David Feuer <david.fe...@gmail.com> wrote:
> On Thu, Dec 22, 2016 at 4:58 PM, Edward Kmett <ekm...@gmail.com> wrote:
>> On Mon, Dec 12, 2016 at 1:31 PM, David Feuer <david.fe...@gmail.com> wrote:
>>>
>>> On Dec 12, 2016 1:15 PM, "Edward Kmett" <ekm...@gmail.com> wrote:
>>>
>>> A few thoughts in no particular order:
>>>
>>> Unlike this proposal, the existing 'reify' itself as core can actually be
>>> made well typed.
>>>
>>>
>>> Can you explain this?
>>
>> I mean just that. If you look at the core generated by the existing 'reify'
>> combinator, nothing it does is 'evil'. We're allowing it to construct a
>> dictionary. That isn't unsound where core is concerned.
>
> So what *is* evil about my Tagged approach? Or do you just mean that
> the excessive polymorphism is evil? There's no doubt that it is, but
> the only ways I see to avoid it are to bake in a particular Reifies
> class, which is a different kind of evil, or to come up with a way to
> express the constraint that the class has exactly one method, which is
> Extreme Overkill.
>
>> Where the surface language is concerned the uniqueness of that dictionary is
>> preserved by the quantifier introducing a new type generatively in the local
>> context, so the usual problems with dictionary construction are defused.
>
>>>  On the other other hand, if you're going to be magic, you might as well
>>> go all the way to something like:
>>>
>>> reify# :: (p => r) -> a -> r
>>>
>>>
>>> How would we implement reify in terms of this variant?
>>
>> That I don't have the answer to. It seems like it should work though.
>
> I think it does. I've changed the reify# type a bit to avoid an
> ambiguity I couldn't resolve.
>
> newtype Constrain p r = Constrain (p => r)
>
> reify# :: Constrain p r -> a -> r
>
> Using my Tagged definition of Reifies, we get
>
> reify' :: forall a r . (forall s . Reifies s a => Tagged s r) -> a -> r
> reify' f = reify# (Constrain (unTagged (f :: Tagged s r)) :: forall s
> . Constrain (Reifies s a) r)
>
> reify :: forall a r . a -> (forall s . Reifies s a => Proxy s -> r) -> r
> reify a f = reify# (Constrain (f (Proxy :: Proxy s)) :: forall s .
> Constrain (Reifies s a) r) a
>
> Using your proxy version, things are trickier, but I think it's
>
> reify :: forall a r . a -> (forall s . Reifies s a => Proxy s -> r) -> r
> reify a f = (reify# (Constrain (f (Proxy :: Proxy s)) :: forall s .
> Constrain (Reifies s a) r)) (const a :: forall proxy s . proxy s -> a)
>
> David
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Magical function to support reflection

2016-12-22 Thread David Feuer
On Thu, Dec 22, 2016 at 4:58 PM, Edward Kmett <ekm...@gmail.com> wrote:
> On Mon, Dec 12, 2016 at 1:31 PM, David Feuer <david.fe...@gmail.com> wrote:
>>
>> On Dec 12, 2016 1:15 PM, "Edward Kmett" <ekm...@gmail.com> wrote:
>>
>> A few thoughts in no particular order:
>>
>> Unlike this proposal, the existing 'reify' itself as core can actually be
>> made well typed.
>>
>>
>> Can you explain this?
>
> I mean just that. If you look at the core generated by the existing 'reify'
> combinator, nothing it does is 'evil'. We're allowing it to construct a
> dictionary. That isn't unsound where core is concerned.

So what *is* evil about my Tagged approach? Or do you just mean that
the excessive polymorphism is evil? There's no doubt that it is, but
the only ways I see to avoid it are to bake in a particular Reifies
class, which is a different kind of evil, or to come up with a way to
express the constraint that the class has exactly one method, which is
Extreme Overkill.

> Where the surface language is concerned the uniqueness of that dictionary is
> preserved by the quantifier introducing a new type generatively in the local
> context, so the usual problems with dictionary construction are defused.

>>  On the other other hand, if you're going to be magic, you might as well
>> go all the way to something like:
>>
>> reify# :: (p => r) -> a -> r
>>
>>
>> How would we implement reify in terms of this variant?
>
> That I don't have the answer to. It seems like it should work though.

I think it does. I've changed the reify# type a bit to avoid an
ambiguity I couldn't resolve.

newtype Constrain p r = Constrain (p => r)

reify# :: Constrain p r -> a -> r

Using my Tagged definition of Reifies, we get

reify' :: forall a r . (forall s . Reifies s a => Tagged s r) -> a -> r
reify' f = reify# (Constrain (unTagged (f :: Tagged s r))
                     :: forall s . Constrain (Reifies s a) r)

reify :: forall a r . a -> (forall s . Reifies s a => Proxy s -> r) -> r
reify a f = reify# (Constrain (f (Proxy :: Proxy s))
                      :: forall s . Constrain (Reifies s a) r) a

Using your proxy version, things are trickier, but I think it's

reify :: forall a r . a -> (forall s . Reifies s a => Proxy s -> r) -> r
reify a f = (reify# (Constrain (f (Proxy :: Proxy s))
                       :: forall s . Constrain (Reifies s a) r))
            (const a :: forall proxy s . proxy s -> a)

David
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Retro-Haskell: can we get seq somewhat under control?

2016-12-21 Thread David Feuer
I don't want to actually put the dictionary there. I want to *pretend* to
put the dictionary there. In testing mode, I want to be able to "take one
out" by making it out of whole cloth; in production mode I want to just
assume there are no bottoms in the constraints and never ever make the
dictionaries. But this all is probably better discussed on the existing
proposal, now that I know it exists. There are some considerable
complications raised there.

On Dec 21, 2016 11:55 PM, "Edward Kmett" <ekm...@gmail.com> wrote:

> Actually, if you go back to the original form of Seq it would translate to
>
> data Seq a => Foo a = Foo !Int !a
>
> which requires resurrecting DatatypeContexts, and not
>
> data Foo a = Seq a => Foo !Int !a
>
> The former requires Seq to call the constructor, but doesn't pack the
> dictionary into the constructor. The latter lets you get the dictionary out
> when you pattern match on it. meaning it has to carry the dictionary around!
>
> Unfortunately, non-trivial functionality is lost. With the old
> DatatypeContext translation you can't always unpack and repack a
> constructor. Whereas with a change to an existential encoding you're
> carrying around a lot of dictionaries in precisely the structures that
> least want to carry extra weight.
>
> Both of these options suck relative to the status quo for different
> reasons.
>
> -Edward
>
> On Wed, Dec 21, 2016 at 2:14 PM, Index Int <vlad.z.4...@gmail.com> wrote:
>
>> There's a related GHC Proposal:
>> https://github.com/ghc-proposals/ghc-proposals/pull/27
>>
>> On Wed, Dec 21, 2016 at 10:04 PM, David Feuer <david.fe...@gmail.com>
>> wrote:
>> > In the Old Days (some time before Haskell 98), `seq` wasn't fully
>> > polymorphic. It could only be applied to instances of a certain class.
>> > I don't know the name that class had, but let's say Seq. Apparently,
>> > some people didn't like that, and now it's gone. I'd love to be able
>> > to turn on a language extension, use an alternate Prelude, and get it
>> > back. I'm not ready to put up a full-scale proposal yet; I'm hoping
>> > some people may have suggestions for details. Some thoughts:
>> >
>> > 1. Why do you want that crazy thing, David?
>> >
>> > When implementing general-purpose lazy data structures, a *lot* of
>> > things need to be done strictly for efficiency. Often, the easiest way
>> > to do this is using either bang patterns or strict data constructors.
>> > Care is necessary to only ever force pieces of the data structure, and
>> > not the polymorphic data a user has stored in it.
>> >
>> > 2. Why does it need GHC support?
>> >
>> > It would certainly be possible to write alternative versions of `seq`,
>> > `$!`, and `evaluate` to use a user-supplied Seq class. It should even
>> > be possible to deal with strict data constructors by hand or
>> > (probably) using Template Haskell. For instance,
>> >
>> > data Foo a = Foo !Int !a
>> >
>> > would translate to normal GHC Haskell as
>> >
>> > data Foo a = Seq a => Foo !Int !a
>> >
>> > But only GHC can extend this to bang patterns, deal with the
>> > interactions with coercions, and optimize it thoroughly.
>> >
>> > 3. How does Seq interact with coercions and roles?
>> >
>> > I believe we'd probably want a special rule that
>> >
>> > (Seq a, Coercible a b) => Seq b
>> >
>> > Thanks to this rule, a Seq constraint on a type variable shouldn't
>> > prevent it from having a representational role.
>> >
>> > The downside of this rule is that if something *can* be forced, but we
>> > don't *want* it to be, then we have to hide it a little more carefully
>> > than we might like. This shouldn't be too hard, however, using a
>> > newtype defined in a separate module that exports a pattern synonym
>> > instead of a constructor, to hide the coercibility.
>> >
>> > 4. Optimize? What?
>> >
>> > Nobody wants Seq constraints blocking up specialization. Today, a
>> function
>> >
>> > foo :: (Seq a, Foldable f) => f a -> ()
>> >
>> > won't specialize to the Foldable instance if the Seq instance is
>> > unknown. This is lousy. Furthermore, all Seq instances are the same.
>> > The RTS doesn't actually need a dictionary to force something to WHNF.
>> > The situation is somewhat similar to that of Coercible, *but more so*.
>> > Coercible sometimes needs to pass evi

Improving DeriveTraversable

2016-12-21 Thread David Feuer
The role system is not currently able to use GND to derive Traversable
instances. While we wait for future research to solve that problem, I
think it would be nice to address a problem that can arise with
DeriveTraversable: when newtypes stack up, fmaps also stack up. I've
come up with a trick that I think could help solve the problem in at
least some important cases. There may be a nicer solution (perhaps
using associated types?), but I haven't found it yet. What I don't
know is whether this arrangement works for all important "shapes" of
newtypes, or what might be involved in automating it.

-- Represents a traversal that may come up with a type that's
-- a bit off, but not too far off. If you think about Coyoneda, this type
-- might make more sense. Whereas Coyoneda builds up larger and
-- larger *function compositions*, we just keep changing the coercion
-- types.
data Trav t b where
  Trav :: Coercible x (t b)
   => (forall f a . Applicative f => (a -> f b) -> t a -> f x)
   -> Trav t b

class (Foldable t, Functor t) => Traversable t where
  traverse :: Applicative f
   => (a -> f b) -> t a -> f (t b)

  -- This new method is not intended to be exported by Data.Traversable,
  -- but only by some ghc-special module.
  trav :: Trav t b
  trav = Trav traverse
  {-# INLINE trav #-}


Here are some sample newtype instances.

-- Convenience function from Data.Profunctor.Unsafe

(#.) :: Coercible b c => (b -> c) -> (a -> b) -> a -> c
_ #. g = coerce g
{-# INLINE (#.) #-}

-- Convenience function for changing a Trav type
retrav :: Coercible u t => (forall a . u a -> t a) -> Trav t b -> Trav u b
retrav extr (Trav t) = Trav ((. extr) #. t)

-- Function for defining traverse proper. Note that this should
-- *only* be used to define traverse for newtype wrappers;
-- for other types, it will add an unnecessary fmap.

travTraverse :: forall f t a b . (Traversable t, Applicative f)
 => (a -> f b) -> t a -> f (t b)
travTraverse = case trav :: Trav t b of
  Trav t -> \f xs -> fmap coerce (t f xs)
{-# INLINE travTraverse #-}

-- Sample types

newtype F t x = F {getF :: t x} deriving (Functor, Foldable)
newtype G t x = G {getG :: t x} deriving (Functor, Foldable)
newtype H f x = H {getH :: F (G f) x} deriving (Functor, Foldable)

instance Traversable t => Traversable (F t) where
  traverse = travTraverse
  trav = retrav getF trav

instance Traversable t => Traversable (G t) where
  traverse = travTraverse
  trav = retrav getG trav

instance Traversable t => Traversable (H t) where
  traverse = travTraverse
  trav = retrav getH trav

With these instances, traversing H t a will perform one fmap instead of three.
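
For example (a usage sketch, assuming a base-case instance of this
Traversable class for Maybe):

wrapped :: H Maybe Int
wrapped = H (F (G (Just 3)))

-- The single coerce inside travTraverse replaces the three fmaps that
-- the stacked newtype instances would otherwise pay for.
demo :: IO (H Maybe Int)
demo = traverse (\x -> print x >> pure (x + 1)) wrapped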


David Feuer
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Retro-Haskell: can we get seq somewhat under control?

2016-12-21 Thread David Feuer
In the Old Days (some time before Haskell 98), `seq` wasn't fully
polymorphic. It could only be applied to instances of a certain class.
I don't know the name that class had, but let's say Seq. Apparently,
some people didn't like that, and now it's gone. I'd love to be able
to turn on a language extension, use an alternate Prelude, and get it
back. I'm not ready to put up a full-scale proposal yet; I'm hoping
some people may have suggestions for details. Some thoughts:

1. Why do you want that crazy thing, David?

When implementing general-purpose lazy data structures, a *lot* of
things need to be done strictly for efficiency. Often, the easiest way
to do this is using either bang patterns or strict data constructors.
Care is necessary to only ever force pieces of the data structure, and
not the polymorphic data a user has stored in it.

2. Why does it need GHC support?

It would certainly be possible to write alternative versions of `seq`,
`$!`, and `evaluate` to use a user-supplied Seq class. It should even
be possible to deal with strict data constructors by hand or
(probably) using Template Haskell. For instance,

data Foo a = Foo !Int !a

would translate to normal GHC Haskell as

data Foo a = Seq a => Foo !Int !a

But only GHC can extend this to bang patterns, deal with the
interactions with coercions, and optimize it thoroughly.

3. How does Seq interact with coercions and roles?

I believe we'd probably want a special rule that

(Seq a, Coercible a b) => Seq b

Thanks to this rule, a Seq constraint on a type variable shouldn't
prevent it from having a representational role.

The downside of this rule is that if something *can* be forced, but we
don't *want* it to be, then we have to hide it a little more carefully
than we might like. This shouldn't be too hard, however, using a
newtype defined in a separate module that exports a pattern synonym
instead of a constructor, to hide the coercibility.
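
Concretely, the hiding trick might look like this (a sketch, assuming
the Seq machinery above):

{-# LANGUAGE PatternSynonyms #-}
module Lazy (Lazy, pattern Lazy) where

newtype Lazy a = MkLazy a

-- Construction and matching go through the synonym, but the real
-- constructor stays private, so Coercible (Lazy a) a is unavailable
-- outside this module, and the special (Seq a, Coercible a b) => Seq b
-- rule cannot see through the wrapper.
pattern Lazy :: a -> Lazy a
pattern Lazy a = MkLazy a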

4. Optimize? What?

Nobody wants Seq constraints blocking up specialization. Today, a function

foo :: (Seq a, Foldable f) => f a -> ()

won't specialize to the Foldable instance if the Seq instance is
unknown. This is lousy. Furthermore, all Seq instances are the same.
The RTS doesn't actually need a dictionary to force something to WHNF.
The situation is somewhat similar to that of Coercible, *but more so*.
Coercible sometimes needs to pass evidence at runtime to maintain type
safety. But Seq carries no type safety hazard whatsoever--when
compiling in "production mode", we can just *assume* that Seq evidence
is valid, and erase it immediately after type checking; the worst
thing that could possibly happen is that someone will force a function
and get weird semantics. Further, we should *unconditionally* erase
Seq evidence from datatypes; this is necessary to maintain
compatibility with the usual data representations. I don't know if
this unconditional erasure could cause "laziness safety" issues, but
the system would be essentially unusable without it.

4. What would the language extension do, exactly?

a. Automatically satisfy Seq for data types and families.
b. Propagate Seq constraints using the usual rules and the special
Coercible rule.
c. Modify the translation of strict fields to add Seq constraints as required.

David Feuer
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Explicit inequality evidence

2016-12-12 Thread David Feuer
According to Ben Gamari's wiki page[1], the new Typeable is expected to
offer

eqTypeRep :: forall k (a :: k) (b :: k).
             TypeRep a -> TypeRep b -> Maybe (a :~: b)

Ideally, we'd prefer to get either evidence of equality or evidence of
inequality. The traditional approach is to use Dec (a :~: b), where data
Dec a = Yes a | No (a -> Void). But (a :~: b) -> Void isn't strong enough
for all purposes. In particular, if we want to use inequality to drive
type family reduction, we could be in trouble.

I'm wondering if we could expose inequality much as we expose equality.
Under an a # b constraint, GHC would recognize a and b as unequal. Some
rules:

Base rules
1. f x # a -> b
2. If C is a constructor, then C # f x and C # a -> b
3. If C and D are distinct constructors, then C # D

Propagation rules
1. x # y <=> (x -> z) # (y -> z) <=> (z -> x) # (z -> y)
2. x # y <=> (x z) # (y z) <=> (z x) # (z y)
3. If x # y then y # x

Irreflexivity
1. x # x is unsatisfiable (this rule would be used for checking patterns).

With this hypothetical machinery in place, we could get something like

data a :#: b where
  Unequal :: a # b => Unequal (a :#: b)

eqTypeRep' :: forall k (a :: k) (b :: k).
              TypeRep a -> TypeRep b -> Either (a :#: b) (a :~: b)

Pattern matching on an Unequal constructor would reveal the inequality,
allowing closed type families relying on it to reduce.
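
To make that concrete, here is a sketch of what the machinery would
enable; the SFalse case is precisely what cannot typecheck today,
because nothing ever brings a # b into scope:

data SBool (b :: Bool) where
  STrue  :: SBool 'True
  SFalse :: SBool 'False

type family Same (a :: k) (b :: k) :: Bool where
  Same a a = 'True
  Same a b = 'False

same :: TypeRep a -> TypeRep b -> SBool (Same a b)
same ta tb = case eqTypeRep' ta tb of
  Right Refl   -> STrue   -- a ~ b in scope, so Same a a reduces
  Left Unequal -> SFalse  -- a # b in scope, so the second equation fires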

Evidence structure:

Whereas (:~:) has just one value, Refl, it would be possible to imagine
richer evidence of inequality. If two types are unequal, then they must be
unequal in some particular fashion. I conjecture that we don't actually
gain much value by using rich evidence here. If the types are Typeable,
then we can explore them ourselves, using eqTypeRep' recursively to locate
one or more differences. If they're not, I don't think we can track the
source(s) of inequality in a coherent fashion. The information we got would
only be suitable for use in an error message. So one option would be to
bundle up some strings describing the known mismatch, and warn the user
very sternly that they shouldn't try to do anything too terribly fancy with
them.

[1] https://ghc.haskell.org/trac/ghc/wiki/Typeable/BenGamari
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Magical function to support reflection

2016-12-12 Thread David Feuer
On Dec 12, 2016 1:15 PM, "Edward Kmett"  wrote:

A few thoughts in no particular order:

Unlike this proposal, the existing 'reify' itself as core can actually be
made well typed.


Can you explain this?


Tagged in the example could be replaced with explicit type application if
backwards compatibility isn't a concern. OTOH, it is.


Would that help Core typing?

 On the other other hand, if you're going to be magic, you might as well go
all the way to something like:

reify# :: (p => r) -> a -> r


How would we implement reify in terms of this variant?


and admit both fundep and TF forms. I mean, if you're going to lie you
might as well lie big.


Definitely.



There are a very large number of instances out there scattered across
dozens of packages that would be broken by switching from Proxy to Tagged
or explicit type application internally. (I realize that this is a lesser
concern that can be resolved by a major version bump and some community
friction, but it does mean pragmatically that migrating to something like
this would need a plan.)


I just want to make sure that we do what we need to get Really Good Code,
if we're going to the trouble of adding compiler support.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Magical function to support reflection

2016-12-10 Thread David Feuer
The following proposal (with fancier formatting and some improved
wording) can be viewed at
https://ghc.haskell.org/trac/ghc/wiki/MagicalReflectionSupport

Using the Data.Reflection has some runtime costs. Notably, there can
be no inlining or unboxing of reified values. I think it would be nice
to add a GHC special to support it. I'll get right to the point of
what I want, and then give a bit of background about why.

=== What I want

I propose the following absurdly over-general lie:

reify# :: (forall s . c s a => t s r) -> a -> r

`c` is assumed to be a single-method class with no superclasses whose
dictionary representation is exactly the same as the representation of
`a`, and `t s r` is assumed to be a newtype wrapper around `r`. In
desugaring, reify# f would be compiled to f@S, where S is a fresh
type. I believe it's necessary to use a fresh type to prevent
specialization from mixing up different reified values.

=== Background

Let me set up a few pieces. These pieces are slightly modified from
what the package actually does to make things cleaner under the hood,
but the differences are fairly shallow.

newtype Tagged s a = Tagged { unTagged :: a }

unproxy :: (Proxy s -> a) -> Tagged s a
unproxy f = Tagged (f Proxy)

class Reifies s a | s -> a where
  reflect' :: Tagged s a

-- For convenience
reflect :: forall s a proxy . Reifies s a => proxy s -> a
reflect _ = unTagged (reflect' :: Tagged s a)

-- The key function--see below regarding implementation
reify' :: (forall s . Reifies s a => Tagged s r) -> a -> r

-- For convenience
reify :: a -> (forall s . Reifies s a => Proxy s -> r) -> r
reify a f = reify' (unproxy f) a

The key idea of reify' is that something of type

forall s . Reifies s a => Tagged s r

is represented in memory exactly the same as a function of type

a -> r

So we can currently use unsafeCoerce to interpret one as the other.
Following the general approach of the library, we can do this as such:

newtype Magic a r = Magic (forall s . Reifies s a => Tagged s r)
reify' :: (forall s . Reifies s a => Tagged s r) -> a -> r
reify' f = unsafeCoerce (Magic f)

This certainly works. The trouble is that any knowledge about what is
reflected is totally lost. For instance, if I write

reify 12 $ \p -> reflect p + 3

then GHC will not see, at compile time, that the result is 15. If I write

reify (+1) $ \p -> reflect p x

then GHC will never inline the application of (+1). Etc.

I'd like to replace reify' with reify# to avoid this problem.
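
Concretely, the replacement is a one-liner (a sketch, assuming reify# as
specified above, instantiated with c = Reifies and t = Tagged):

reify' :: (forall s . Reifies s a => Tagged s r) -> a -> r
reify' = reify#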

Thanks,
David Feuer
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Reading floating point

2016-10-10 Thread David Feuer
It may currently be true for floats, but it's never been true in general,
particularly with regard to records. Read is not actually designed to parse
Haskell; it's for parsing "Haskell-like" things. Because it, unlike a true
Haskell parser, is type-directed, there are somewhat different trade-offs.

On Oct 11, 2016 1:50 AM, "Carter Schonwald" <carter.schonw...@gmail.com>
wrote:

> How is that not a bug? We should be able to read back floats
>
> On Monday, October 10, 2016, David Feuer <david.fe...@gmail.com> wrote:
>
>> It doesn't, and it never has.
>>
>> On Oct 10, 2016 6:08 PM, "Carter Schonwald" <carter.schonw...@gmail.com>
>> wrote:
>>
>>> Read should accept exactly the valid source literals for a type.
>>>
>>> On Monday, October 10, 2016, David Feuer <david.fe...@gmail.com> wrote:
>>>
>>>> What does any of that have to do with the Read instances?
>>>>
>>>> On Oct 10, 2016 1:56 PM, "Carter Schonwald" <carter.schonw...@gmail.com>
>>>> wrote:
>>>>
>>>>> The right solution is to fix things so we have scientific notation
>>>>> literal rep available.  Any other contortions run into challenges in
>>>>> representability of things.  That's of course ignoring denormalized floats,
>>>>> infinities, negative zero and perhaps nans.
>>>>>
>>>>> At the very least we need to efficiently and safely support everything
>>>>> but nan. And I have some ideas for that I hope to share soon.
>>>>>
>>>>> On Monday, October 10, 2016, David Feuer <david.fe...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> I fully expect this to be somewhat tricky, yes. But some aspects of
>>>>>> the current implementation strike me as pretty clearly non-optimal. What 
>>>>>> I
>>>>>> meant about going through Rational is that given "625e-5", say, it
>>>>>> calculates 625%10, producing a fraction in lowest terms, before 
>>>>>> calling
>>>>>> fromRational, which itself invokes fromRat'', a division function 
>>>>>> optimized
>>>>>> for a special case that doesn't seem too relevant in this context. I 
>>>>>> could
>>>>>> be mistaken, but I imagine even reducing to lowest terms is useless here.
>>>>>> The separate treatment of the digits preceding and following the decimal
>>>>>> point doesn't do anything obviously useful either. If we (effectively)
>>>>>> normalize in decimal to an integral mantissa, for example, then we can
>>>>>> convert the whole mantissa to an Integer at once; this will balance the
>>>>>> merge tree better than converting the two pieces separately and 
>>>>>> combining.
>>>>>>
>>>>>> On Oct 10, 2016 6:00 AM, "Yitzchak Gale" <g...@sefer.org> wrote:
>>>>>>
>>>>>> The way I understood it, it's because the type of "floating point"
>>>>>> literals is
>>>>>>
>>>>>> Fractional a => a
>>>>>>
>>>>>> so the literal parser has no choice but to go via Rational. Once you
>>>>>> have that, you use the same parser for those Read instances to ensure
>>>>>> that the result is identical to what you would get if you parse it as
>>>>>> a literal in every case.
>>>>>>
>>>>>> You could replace the Read parsers for Float and Double with much more
>>>>>> efficient ones. But you would need to provide some other guarantee of
>>>>>> consistency with literals. That would be more difficult to achieve
>>>>>> than one might think - floating point is deceivingly tricky. There are
>>>>>> already several good parsers in the libraries, but I believe all of
>>>>>> them can provide different results than literals in some cases.
>>>>>>
>>>>>> YItz
>>>>>>
>>>>>> On Sat, Oct 8, 2016 at 10:27 PM, David Feuer <david.fe...@gmail.com>
>>>>>> wrote:
>>>>>> > The current Read instances for Float and Double look pretty iffy
>>>>>> from an
>>>>>> > efficiency standpoint. Going through Rational is exceedingly weird:
>>>>>> we have
>>>>>> > absolutely nothing to gain by dividing out the GCD, as far as I can
>>>>>> tell.
>>>>>> > Then, in doing so, we read the digits of the integral part to form
>>>>>> an
>>>>>> > Integer. This looks like a detour, and particularly bad when it has
>>>>>> many
>>>>>> > digits. Wouldn't it be better to normalize the decimal
>>>>>> representation first
>>>>>> > in some fashion (e.g., to 0.xxexxx) and go from there? Probably
>>>>>> less
>>>>>> > importantly, is there some way to avoid converting the mantissa to
>>>>>> an
>>>>>> > Integer at all? The low digits may not end up making any difference
>>>>>> > whatsoever.
>>>>>> >
>>>>>> >
>>>>>> > ___
>>>>>> > ghc-devs mailing list
>>>>>> > ghc-devs@haskell.org
>>>>>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>>>>>> >
>>>>>>
>>>>>>
>>>>>>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Reading floating point

2016-10-10 Thread David Feuer
I fully expect this to be somewhat tricky, yes. But some aspects of the
current implementation strike me as pretty clearly non-optimal. What I
meant about going through Rational is that given "625e-5", say, it
calculates 625%10, producing a fraction in lowest terms, before calling
fromRational, which itself invokes fromRat'', a division function optimized
for a special case that doesn't seem too relevant in this context. I could
be mistaken, but I imagine even reducing to lowest terms is useless here.
The separate treatment of the digits preceding and following the decimal
point doesn't do anything obviously useful either. If we (effectively)
normalize in decimal to an integral mantissa, for example, then we can
convert the whole mantissa to an Integer at once; this will balance the
merge tree better than converting the two pieces separately and combining.
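
Something like this rough sketch is what I have in mind (illustration
only; it ignores the correct-rounding concerns Yitzchak raises below,
which a real implementation would have to address):

import Data.List (foldl')

-- Fuse the integral and fractional digit strings into one integral
-- mantissa, adjusting the exponent, so a single Integer is built in
-- one pass over all the digits.
valueOf :: [Int] -> [Int] -> Int -> Double
valueOf intPart fracPart ex = fromInteger m * 10 ^^ e
  where
    m = foldl' (\acc d -> acc * 10 + fromIntegral d) 0 (intPart ++ fracPart)
    e = ex - length fracPart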

On Oct 10, 2016 6:00 AM, "Yitzchak Gale" <g...@sefer.org> wrote:

The way I understood it, it's because the type of "floating point" literals
is

Fractional a => a

so the literal parser has no choice but to go via Rational. Once you
have that, you use the same parser for those Read instances to ensure
that the result is identical to what you would get if you parse it as
a literal in every case.

You could replace the Read parsers for Float and Double with much more
efficient ones. But you would need to provide some other guarantee of
consistency with literals. That would be more difficult to achieve
than one might think - floating point is deceivingly tricky. There are
already several good parsers in the libraries, but I believe all of
them can provide different results than literals in some cases.

YItz

On Sat, Oct 8, 2016 at 10:27 PM, David Feuer <david.fe...@gmail.com> wrote:
> The current Read instances for Float and Double look pretty iffy from an
> efficiency standpoint. Going through Rational is exceedingly weird: we
have
> absolutely nothing to gain by dividing out the GCD, as far as I can tell.
> Then, in doing so, we read the digits of the integral part to form an
> Integer. This looks like a detour, and particularly bad when it has many
> digits. Wouldn't it be better to normalize the decimal representation
first
> in some fashion (e.g., to 0.xxexxx) and go from there? Probably less
> importantly, is there some way to avoid converting the mantissa to an
> Integer at all? The low digits may not end up making any difference
> whatsoever.
>
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Reading floating point

2016-10-08 Thread David Feuer
The current Read instances for Float and Double look pretty iffy from an
efficiency standpoint. Going through Rational is exceedingly weird: we have
absolutely nothing to gain by dividing out the GCD, as far as I can tell.
Then, in doing so, we read the digits of the integral part to form an
Integer. This looks like a detour, and particularly bad when it has many
digits. Wouldn't it be better to normalize the decimal representation first
in some fashion (e.g., to 0.xxexxx) and go from there? Probably less
importantly, is there some way to avoid converting the mantissa to an
Integer at all? The low digits may not end up making any difference
whatsoever.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Custom warning suppression

2016-09-07 Thread David Feuer
Currently, the only way to suppress custom warnings and deprecations is
with -fno-warn-warnings-deprecations, which is a rather large hammer. I see
two ways we can improve this, and I propose that we should do both.

1. Per-binding suppression

Add -fno-warn-binding, -fwarn-binding, -fno-deprecate-binding, and
-fdeprecate-binding options. These would take the (optionally qualified)
name of a binding and control warnings tied to it. So if you invoked
-fno-warn-binding "sillyFunction", then GHC would not warn you about the
silliness of anything named sillyFunction. -fno-warn-binding
"Data.Silly.sillyFunction" would limit the suppression to the silly
function in Data.Silly. -fno-deprecate-binding would refrain from emitting
deprecation warnings for the binding in question, and would presumably
imply -fno-warn-binding, since someone who doesn't care that a function is
going to be removed probably also doesn't care what else is wrong with it.

2. Named warning classes

I'd like to add an optional "warning class" to the WARNING pragma,
preceding the warning description. This would be a short string indicating
what sort of warning is involved. This would be totally free-form, but the
documentation would suggest a few conventional options such as "partial"
and "slow". Then whole warning classes could be controlled with
-fno-warn-class and -fwarn-class.
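
For example (illustrative syntax only):

-- The class string "partial" precedes the usual description.
{-# WARNING myHead "partial" "myHead: errors on the empty list" #-}
myHead :: [a] -> a
myHead (x:_) = x

A user could then compile with, say, -fno-warn-class "partial" to
silence every warning in that class at once.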
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Unprincipled defaults

2016-09-02 Thread David Feuer
Whoa. I was far too confident that wouldn't work!

On Sep 2, 2016 12:39 PM, "Richard Eisenberg" <r...@cs.brynmawr.edu> wrote:

> I feel like I must be missing something, but what's wrong with
>
> class Semigroup1 f where
>   op :: f a -> f a -> f a
>
>   default op :: Monoid (f a) => f a -> f a -> f a
>   op = (<>)
>
>
> ? Does that do what you like?
>
> Richard
>
>
> On Sep 1, 2016, at 11:15 PM, David Feuer <david.fe...@gmail.com> wrote:
>
> On occasion, it can be useful to have default definitions that don't
> typecheck even with DefaultSignatures. It would be nice to be able to use
> them anyway. For example, if we have
>
> class Semigroup1 f where
>   op :: f a -> f a -> f a
>
> then we could, hypothetically, give a default definition for (<>):
>
>   default (<>) :: Semigroup1 f => f a -> f a -> f a
>   (<>) = op
>
> But we can't give a default definition
>
>   op = (<>)
>
> because there's no way to write its signature. However, for any F with
>
> instance Semigroup (F a) where ...
>
> the definition op = (<>) is perfectly fine.
>
> Would it be possible to offer a completely wild defaulting mechanism
> allowing *any expression* as a default, and only checking its type and
> compiling it when it's actually used?
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Unprincipled defaults

2016-09-01 Thread David Feuer
On occasion, it can be useful to have default definitions that don't
typecheck even with DefaultSignatures. It would be nice to be able to use
them anyway. For example, if we have

class Semigroup1 f where
  op :: f a -> f a -> f a

then we could, hypothetically, give a default definition for (<>):

  default (<>) :: Semigroup1 f => f a -> f a -> f a
  (<>) = op

But we can't give a default definition

  op = (<>)

because there's no way to write its signature. However, for any F with

instance Semigroup (F a) where ...

the definition op = (<>) is perfectly fine.

Would it be possible to offer a completely wild defaulting mechanism
allowing *any expression* as a default, and only checking its type and
compiling it when it's actually used?
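
Concretely, under such a mechanism this would be accepted, with the
default checked only at the instances that actually use it (a sketch of
the hypothetical behavior):

class Semigroup1 f where
  op :: f a -> f a -> f a
  op = (<>)  -- not checked here; checked per instance

newtype L a = L [a]

instance Semigroup (L a) where
  L xs <> L ys = L (xs ++ ys)

-- op = (<>) typechecks here because Semigroup (L a) holds.
instance Semigroup1 L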
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Rewrapping with simple bidirectional pattern synonyms

2016-08-24 Thread David Feuer
I'm glad to see I'm not the only one who wants this!

On Wed, Aug 24, 2016 at 6:13 PM, Edward Z. Yang  wrote:
> I filed a ticket for precisely this:
>
> https://ghc.haskell.org/trac/ghc/ticket/12203
>
> Edward
>
> Excerpts from David Feuer's message of 2016-08-24 18:11:16 -0400:
>> I'm trying to write a bidirectional pattern synonym which is morally
>> simple, but as far as I can tell there's no way to write it as a
>> simple bidirectional pattern synonym.
>>
>> What I want to say is
>>
>> pattern TSnoc (TSnocList xs) x = TSnocList (CL.TCons (Dual x) xs)
>>
>> (where TSnocList and Dual are both newtype constructors)
>>
>> But I don't see a way to do it without being explicitly bidirectional:
>>
>> pattern TSnoc xs x <- (tsViewR -> ViewR xs x) where
>>   TSnoc (TSnocList xs) x = TSnocList (CL.TCons (Dual x) xs)
>>
>> tsViewR :: TSnocList c x y -> ViewR c (TSnocList c) x y
>> tsViewR (TSnocList CL.TNil) = EmptyR
>> tsViewR (TSnocList (CL.TCons (Dual x) xs)) = ViewR (TSnocList xs) x
>>
>> Would it be possible to make this simple thing simple?
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Rewrapping with simple bidirectional pattern synonyms

2016-08-24 Thread David Feuer
I'm trying to write a bidirectional pattern synonym which is morally
simple, but as far as I can tell there's no way to write it as a
simple bidirectional pattern synonym.

What I want to say is

pattern TSnoc (TSnocList xs) x = TSnocList (CL.TCons (Dual x) xs)

(where TSnocList and Dual are both newtype constructors)

But I don't see a way to do it without being explicitly bidirectional:

pattern TSnoc xs x <- (tsViewR -> ViewR xs x) where
  TSnoc (TSnocList xs) x = TSnocList (CL.TCons (Dual x) xs)

tsViewR :: TSnocList c x y -> ViewR c (TSnocList c) x y
tsViewR (TSnocList CL.TNil) = EmptyR
tsViewR (TSnocList (CL.TCons (Dual x) xs)) = ViewR (TSnocList xs) x

Would it be possible to make this simple thing simple?
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Deriving tweaking

2016-08-05 Thread David Feuer
Excellent!

On Aug 5, 2016 2:32 PM, "Edward Kmett" <ekm...@gmail.com> wrote:

> Done and done! Retroactively. How is that for service? =)
>
> -Edward
>
> On Fri, Aug 5, 2016 at 2:08 PM, David Feuer <david.fe...@gmail.com> wrote:
>
>> I know there's been some discussion about letting users select the
>> deriving mechanism they want, but I'd like to propose a separate tweak to
>> the defaults. Specifically, it's annoying to have to use three pragmas to
>> let me write
>>
>> newtype Foo f a = Foo (f a) deriving (Functor, Foldable, Traversable)
>> data Bar f a = Bar (f a) deriving (Functor, Foldable, Traversable)
>>
>> and more annoying still that I'll end up with Foldable and Functor
>> instances for Foo that may be much worse than GND-derived ones.
>>
>> The tweaks I'm after:
>>
>> 1. Prefer GND to the built-in derivations for Functor and Foldable, and
>> probably also Eq and Ord.
>> 2. Make DeriveTraversable imply DeriveFunctor and DeriveFoldable.
>>
>> ___
>> ghc-devs mailing list
>> ghc-devs@haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>>
>>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Deriving tweaking

2016-08-05 Thread David Feuer
I know there's been some discussion about letting users select the deriving
mechanism they want, but I'd like to propose a separate tweak to the
defaults. Specifically, it's annoying to have to use three pragmas to let
me write

newtype Foo f a = Foo (f a) deriving (Functor, Foldable, Traversable)
data Bar f a = Bar (f a) deriving (Functor, Foldable, Traversable)

and more annoying still that I'll end up with Foldable and Functor
instances for Foo that may be much worse than GND-derived ones.

The tweaks I'm after:

1. Prefer GND to the built-in derivations for Functor and Foldable, and
probably also Eq and Ord.
2. Make DeriveTraversable imply DeriveFunctor and DeriveFoldable.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Can we offer ~ without GADTs or type families?

2016-08-05 Thread David Feuer
It seems to me that equality constraints could potentially be supported by
an implementation with neither GADTs nor type families. Type families don't
really seem to have much to do with it, and GADTs are strictly heavier
(GADTs ~= ExistentialQuantification + TypeEquality).
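
Spelling out the equivalence in parentheses (a standard encoding,
sketched here; note that writing the ~ constraints below is exactly
what currently forces on GADTs or TypeFamilies):

-- A GADT ...
data Expr a where
  IntLit :: Int -> Expr Int
  Pair   :: Expr b -> Expr c -> Expr (b, c)

-- ... is expressible with existential quantification plus equality:
data Expr' a
  = (a ~ Int) => IntLit' Int
  | forall b c. (a ~ (b, c)) => Pair' (Expr' b) (Expr' c)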

Could we get a separate LANGUAGE pragma just for equality constraints?
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


How bad would it be for containers to depend on transformers?

2016-06-22 Thread David Feuer
Currently, containers does not depend on transformers, so it has to
duplicate its functionality or just do without. Since transformers is also
a GHC boot package, I believe it should be feasible to make containers
depend on it. To what extent would that reduce
parallelizability of GHC builds or otherwise make people mad?

David
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Optimizing "counting" GADTs

2016-06-19 Thread David Feuer
I meant reflection in the sense of the reflection package. Sorry for the
confusion.
On Jun 19, 2016 4:28 AM, "Ben Gamari"  wrote:

> "Edward Z. Yang"  writes:
>
> snip
>
> >> Dictionaries are harder to come by,
> >> but reflection might be an option.
> >
> > If I understand correctly, even if you have a Typeable dictionary you
> > don't necessarily have a way of constructing the other dictionaries
> > that are available at that type.  Maybe that is something worth fixing.
> >
> Right; a Typeable dictionary gives you nothing more than the identity of
> the type. You cannot get any further dictionaries from it. Honestly
> fixing this seems quite non-trivial (essentially requiring that you
> construct a symbol name for the desired dictionary and do a symbol table
> lookup to find it, hoping that the linker didn't decide to drop it due
> to being unused).
>
> Moreover, it seems possible that providing this ability may have
> consequences on parametricity. Reflection already comes dangerously
> close to compromising this property; we are saved only by the fact that
> a Typeable constraint is needed to request a representation. I'd imagine
> that allowing the user to produce arbitrary dictionaries from a
> representation may pose similar issues.
>
> Cheers,
>
> - Ben
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Optimizing "counting" GADTs

2016-06-18 Thread David Feuer
I would think the provided equalities could be constructed in a view
pattern, possibly using unsafeCoerce. Dictionaries are harder to come by,
but reflection might be an option. My two biggest gripes about pattern
synonyms are really

1. The constraints for "constructor" application are forced to be much
tighter than necessary. This is very sad because there doesn't seem to be
anything inherently difficult about fixing it--just allow the user to
specify the desired type signature for the synonym used as a constructor.
2. The exhaustivity check doesn't work yet.
On Jun 18, 2016 10:07 PM, "Matthew Pickering" <matthewtpicker...@gmail.com>
wrote:

> David, Carter,
>
> It would be nice to use pattern synonyms for this task but they do not
> work quite as expected as they don't cause type refinement.
>
> I quickly assembled this note to explain why.
>
> http://mpickering.github.io/posts/2016-06-18-why-no-refinement.html
>
> Matt
>
> On Fri, May 27, 2016 at 4:50 AM, David Feuer <david.fe...@gmail.com>
> wrote:
> > Scratch that. I think you might be right.
> >
> > On May 25, 2016 8:40 PM, "David Feuer" <david.fe...@gmail.com> wrote:
> >>
> >> Partially. Unfortunately, bidirectional pattern synonyms tie the types
> of
> >> the pattern synonyms to the types of the smart constructors for no good
> >> reason, making them (currently) inappropriate. But fixing that problem
> would
> >> offer one way to this optimization, I think.
> >>
> >> On May 25, 2016 8:37 PM, "Carter Schonwald" <carter.schonw...@gmail.com
> >
> >> wrote:
> >>
> >> could this be simulated/modeled with pattern synonyms?
> >>
> >> On Wed, May 25, 2016 at 7:51 PM, David Feuer <david.fe...@gmail.com>
> >> wrote:
> >>>
> >>> I've started a wiki page,
> >>> https://ghc.haskell.org/trac/ghc/wiki/OptimizeCountingGADTs , to
> consider
> >>> optimizing GADTs that look like natural numbers but that possibly have
> >>> "heavy zeros". Please take a look.
> >>>
> >>>
> >>> ___
> >>> ghc-devs mailing list
> >>> ghc-devs@haskell.org
> >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> >>>
> >>
> >
> > ___
> > ghc-devs mailing list
> > ghc-devs@haskell.org
> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> >
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Strictness/laziness warnings

2016-05-29 Thread David Feuer
Mostly I'm looking for a rough estimate. Some false positives and
false negatives are tolerable. If I have something like

f :: Int -> Maybe String -> String
f _ Nothing = "Hi there!"
f n (Just b) = if n > 0 then show b else "whatever"

then I'd likely be interested in a warning about the fact that the
first case is not strict in the Int and the second is. I'd also likely
be interested in a warning about the first case because I'm taking a
small primitive value (Int) and doing so lazily.

On Sun, May 29, 2016 at 11:04 PM, wren romano  wrote:
> On Sat, May 28, 2016 at 10:14 PM, Edward Kmett  wrote:
>> How would you detect the argument only being forced some of the time? Sounds
>> like a lot of long-term cross-module book-keeping.
>
> Sounds to me like what the strictness analyzer is already doing, ne? I
> missed the beginning of the thread, so might be off base. If it's more
> about noticing when the use sites of a given function have some
> pattern not captured by the strictness determined from the function's
> definition, then it seems like we shouldn't need /cross/-module
> bookkeeping: we should be able to just tabulate how each use site's
> strictness does/doesn't match the interface's spec.
>
> --
> Live well,
> ~wren
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Strictness/laziness warnings

2016-05-28 Thread David Feuer
I'm not suggesting these things are *wrong*, and I wouldn't want the
warnings to be included in -Wall. They're just possible areas of concern.
By "conditionally strict" I mean that the argument in question is only
forced sometimes.
On May 28, 2016 9:16 PM, "Edward Kmett" <ekm...@gmail.com> wrote:

> I have code that'd trip at least 2&3 in use in production. #2 arises for
> some tricks that Wren first introduced me to for loop invariant code
> motion. #3 arises when you want to memoize a result but only produce it
> lazily in case it isn't used. I don't quite understand what you mean by
> "conditionally strict" in an argument though.
>
> -Edward
>
> On Sat, May 28, 2016 at 8:00 PM, David Feuer <david.fe...@gmail.com>
> wrote:
>
>> There are certain patterns of strictness or laziness that signal the need
>> for extra caution. I'm wondering whether it might be possible to offer
>> warnings for different varieties of them, and pragmas suppressing the
>> warnings at the relevant sites. Some function behaviors that suggest extra
>> care:
>>
>> 1. Conditionally strict in an argument. In many cases, making it
>> unconditionally strict will improve performance.
>> 2. Strict in an argument that is or could be a function or a newtype
>> wrapper around a function. This can be caused by  adding too much
>> strictness defensively or to plug a leak.
>> 3. Lazy in a primitive argument like an Int. This could lead to
>> unnecessary boxing.
>>
>> Any of these could occur in correct, efficient code. But I'd love to be
>> presented a list of warnings to check over, and a way to check items off
>> the list with pragmas.
>>
>> ___
>> ghc-devs mailing list
>> ghc-devs@haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>>
>>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Strictness/laziness warnings

2016-05-28 Thread David Feuer
There are certain patterns of strictness or laziness that signal the need
for extra caution. I'm wondering whether it might be possible to offer
warnings for different varieties of them, and pragmas suppressing the
warnings at the relevant sites. Some function behaviors that suggest extra
care:

1. Conditionally strict in an argument. In many cases, making it
unconditionally strict will improve performance.
2. Strict in an argument that is or could be a function or a newtype
wrapper around a function. This can be caused by adding too much
strictness defensively or to plug a leak.
3. Lazy in a primitive argument like an Int. This could lead to unnecessary
boxing.
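
For instance (made-up illustrations of 1 and 3):

-- Conditionally strict in n (1): n is forced only in the True branch.
-- In the False branch we are also lazy in a primitive Int (3).
g :: Bool -> Int -> Int
g True  n = n + 1
g False _ = 0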

Any of these could occur in correct, efficient code. But I'd love to be
presented a list of warnings to check over, and a way to check items off
the list with pragmas.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Optimizing "counting" GADTs

2016-05-27 Thread David Feuer
Scratch that. I think you might be right.
On May 25, 2016 8:40 PM, "David Feuer" <david.fe...@gmail.com> wrote:

> Partially. Unfortunately, bidirectional pattern synonyms tie the types of
> the pattern synonyms to the types of the smart constructors for no good
> reason, making them (currently) inappropriate. But fixing that problem
> would offer one way to this optimization, I think.
> On May 25, 2016 8:37 PM, "Carter Schonwald" <carter.schonw...@gmail.com>
> wrote:
>
> could this be simulated/modeled with pattern synonyms?
>
> On Wed, May 25, 2016 at 7:51 PM, David Feuer <david.fe...@gmail.com>
> wrote:
>
>> I've started a wiki page,
>> https://ghc.haskell.org/trac/ghc/wiki/OptimizeCountingGADTs , to
>> consider optimizing GADTs that look like natural numbers but that possibly
>> have "heavy zeros". Please take a look.
>>
>> ___
>> ghc-devs mailing list
>> ghc-devs@haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>>
>>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Optimizing "counting" GADTs

2016-05-25 Thread David Feuer
Partially. Unfortunately, bidirectional pattern synonyms tie the types of
the pattern synonyms to the types of the smart constructors for no good
reason, making them (currently) inappropriate. But fixing that problem
would offer one way to this optimization, I think.
On May 25, 2016 8:37 PM, "Carter Schonwald" <carter.schonw...@gmail.com>
wrote:

could this be simulated/modeled with pattern synonyms?

On Wed, May 25, 2016 at 7:51 PM, David Feuer <david.fe...@gmail.com> wrote:

> I've started a wiki page,
> https://ghc.haskell.org/trac/ghc/wiki/OptimizeCountingGADTs , to consider
> optimizing GADTs that look like natural numbers but that possibly have
> "heavy zeros". Please take a look.
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Optimizing "counting" GADTs

2016-05-25 Thread David Feuer
I've started a wiki page,
https://ghc.haskell.org/trac/ghc/wiki/OptimizeCountingGADTs , to consider
optimizing GADTs that look like natural numbers but that possibly have
"heavy zeros". Please take a look.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Unpacking single-field, single-strict-constructor GADTs and existentials

2016-05-25 Thread David Feuer
I've started a wiki page at
https://ghc.haskell.org/trac/ghc/wiki/NewtypeOptimizationForGADTS

On Wed, May 25, 2016 at 3:27 AM, Simon Peyton Jones
<simo...@microsoft.com> wrote:
> I'm not following the details of this discussion.  Would it be possible to 
> write a compact summary, with the key examples, in the appropriate ticket?
>
> I think that https://ghc.haskell.org/trac/ghc/ticket/10016 is one such 
> ticket, but I think there may be more than one issue at stake here.  For 
> example, #10016 can be done in a strongly typed way in Core; but #1965 cannot 
> (so far as I know).
>
> It could also help to have a wiki page to summarise the cluster of issues, 
> pointing to the appropriate tickets for individual cases.
>
> An articulate summary will make it much more likely that progress is made! 
> Thanks.
>
> Simon
>
> | -Original Message-
> | From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of David 
> Feuer
> | Sent: 24 May 2016 23:14
> | To: Carter Schonwald <carter.schonw...@gmail.com>
> | Cc: ghc-devs <ghc-devs@haskell.org>
> | Subject: Re: Unpacking single-field, single-strict-constructor GADTs and
> | existentials
> |
> | Unboxing, per se, is not required; only newtype optimization. I
> | believe Ed would probably have mentioned something when I discussed
> | the issue with him if he'd already had an idea for hacking around it.
> | Instead, he said he wanted the feature too.
> |
> | On Tue, May 24, 2016 at 6:03 PM, Carter Schonwald
> | <carter.schonw...@gmail.com> wrote:
> | > Phrased differently: there's a subclass of existential data types which
> | have
> | > a well behaved unboxed memory layout?
> | >
> | > @ David : have you tried simulating this in userland using eds structs /
> | > structures lib?
> | >
> | > On Tuesday, May 24, 2016, David Feuer <david.fe...@gmail.com> wrote:
> | >>
> | >> I should mention that while this does not require UNPACKing sum types (or
> | >> any of the difficult trade-offs that involves), it lets programmers
> | >> accomplish such unpacking by hand under sufficiently general conditions 
> to
> | >> be quite useful in practice. As long as the set of types involved is
> | closed,
> | >> it'll do.
> | >>
> | >> David Feuer <david.fe...@gmail.com> writes:
> | >>
> | >> > Given
> | >> >
> | >> > data Big a = B1 !(Small1 a) | B2 !(Small2 a) | B3 !(Small3 a), where 
> the
> | >> > Small types are (possibly recursive) sums, it's generally possible to
> | >> > express that as something like
> | >> >
> | >> > data Selector = One | Two | Three
> | >> > data Big a = forall (x :: Selector) .
> | >> >Big !(BigG x a)
> | >> > data BigG x a where
> | >> >   GB1a :: some -> fields -> BigG 'One a
> | >> >   GB1b :: fields -> BigG 'One a
> | >> >   GB2a :: whatever -> BigG 'Two a
> | >> >   GB3a :: yeah -> BigG 'Three a
> | >> >
> | >> > Making one big GADT from all the constructors of the "small" types, and
> | >> > then wrapping it up in an existential. That's what I meant about
> | >> > "unpacking". But for efficiency purposes, that wrapper needs the 
> newtype
> | >> > optimization.
> | >>
> | >> Yes, but you'd need to unbox a sum in this case, no? I think this is the
> | >> first issue that you need to solve before you can talk about dealing
> | >> with the polymorphism issue (although hopefully Ömer will make progress
> | >> on this for 8.2).
> | >>
> | >> Cheers,
> | >>
> | >> - Ben
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Unpacking single-field, single-strict-constructor GADTs and existentials

2016-05-25 Thread David Feuer
#1965 *as modified by comments #7 and #9* is pretty much what I'm hoping for.

On Wed, May 25, 2016 at 3:27 AM, Simon Peyton Jones
<simo...@microsoft.com> wrote:
> I'm not following the details of this discussion.  Would it be possible to 
> write a compact summary, with the key examples, in the appropriate ticket?
>
> I think that https://ghc.haskell.org/trac/ghc/ticket/10016 is one such 
> ticket, but I think there may be more than one issue at stake here.  For 
> example, #10016 can be done in a strongly typed way in Core; but #1965 cannot 
> (so far as I know).
>
> It could also help to have a wiki page to summarise the cluster of issues, 
> pointing to the appropriate tickets for individual cases.
>
> An articulate summary will make it much more likely that progress is made! 
> Thanks.
>
> Simon
>
> | -----Original Message-----
> | From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of David 
> Feuer
> | Sent: 24 May 2016 23:14
> | To: Carter Schonwald <carter.schonw...@gmail.com>
> | Cc: ghc-devs <ghc-devs@haskell.org>
> | Subject: Re: Unpacking single-field, single-strict-constructor GADTs and
> | existentials
> |
> | Unboxing, per se, is not required; only newtype optimization. I
> | believe Ed would probably have mentioned something when I discussed
> | the issue with him if he'd already had an idea for hacking around it.
> | Instead, he said he wanted the feature too.
> |
> | On Tue, May 24, 2016 at 6:03 PM, Carter Schonwald
> | <carter.schonw...@gmail.com> wrote:
> | > Phrased differently: there's a subclass of existential data types which
> | have
> | > a well behaved unboxed memory layout?
> | >
> | > @ David : have you tried simulating this in userland using eds structs /
> | > structures lib?
> | >
> | > On Tuesday, May 24, 2016, David Feuer <david.fe...@gmail.com> wrote:
> | >>
> | >> I should mention that while this does not require UNPACKing sum types (or
> | >> any of the difficult trade-offs that involves), it lets programmers
> | >> accomplish such unpacking by hand under sufficiently general conditions 
> to
> | >> be quite useful in practice. As long as the set of types involved is
> | closed,
> | >> it'll do.
> | >>
> | >> David Feuer <david.fe...@gmail.com> writes:
> | >>
> | >> > Given
> | >> >
> | >> > data Big a = B1 !(Small1 a) | B2 !(Small2 a) | B3 !(Small3 a), where 
> the
> | >> > Small types are (possibly recursive) sums, it's generally possible to
> | >> > express that as something like
> | >> >
> | >> > data Selector = One | Two | Three
> | >> > data Big a = forall (x :: Selector) .
> | >> >Big !(BigG x a)
> | >> > data BigG x a where
> | >> >   GB1a :: some -> fields -> BigG 'One a
> | >> >   GB1b :: fields -> BigG 'One a
> | >> >   GB2a :: whatever -> BigG 'Two a
> | >> >   GB3a :: yeah -> BigG 'Three a
> | >> >
> | >> > Making one big GADT from all the constructors of the "small" types, and
> | >> > then wrapping it up in an existential. That's what I meant about
> | >> > "unpacking". But for efficiency purposes, that wrapper needs the 
> newtype
> | >> > optimization.
> | >>
> | >> Yes, but you'd need to unbox a sum in this case, no? I think this is the
> | >> first issue that you need to solve before you can talk about dealing
> | >> with the polymorphism issue (although hopefully Ömer will make progress
> | >> on this for 8.2).
> | >>
> | >> Cheers,
> | >>
> | >> - Ben
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Unpacking single-field, single-strict-constructor GADTs and existentials

2016-05-24 Thread David Feuer
Unboxing, per se, is not required; only newtype optimization. I
believe Ed would probably have mentioned something when I discussed
the issue with him if he'd already had an idea for hacking around it.
Instead, he said he wanted the feature too.

On Tue, May 24, 2016 at 6:03 PM, Carter Schonwald
<carter.schonw...@gmail.com> wrote:
> Phrased differently: there's a subclass of existential data types which have
> a well behaved unboxed memory layout?
>
> @ David : have you tried simulating this in userland using eds structs /
> structures lib?
>
> On Tuesday, May 24, 2016, David Feuer <david.fe...@gmail.com> wrote:
>>
>> I should mention that while this does not require UNPACKing sum types (or
>> any of the difficult trade-offs that involves), it lets programmers
>> accomplish such unpacking by hand under sufficiently general conditions to
>> be quite useful in practice. As long as the set of types involved is closed,
>> it'll do.
>>
>> David Feuer <david.fe...@gmail.com> writes:
>>
>> > Given
>> >
>> > data Big a = B1 !(Small1 a) | B2 !(Small2 a) | B3 !(Small3 a), where the
>> > Small types are (possibly recursive) sums, it's generally possible to
>> > express that as something like
>> >
>> > data Selector = One | Two | Three
>> > data Big a = forall (x :: Selector) .
>> >Big !(BigG x a)
>> > data BigG x a where
>> >   GB1a :: some -> fields -> BigG 'One a
>> >   GB1b :: fields -> BigG 'One a
>> >   GB2a :: whatever -> BigG 'Two a
>> >   GB3a :: yeah -> BigG 'Three a
>> >
>> > Making one big GADT from all the constructors of the "small" types, and
>> > then wrapping it up in an existential. That's what I meant about
>> > "unpacking". But for efficiency purposes, that wrapper needs the newtype
>> > optimization.
>>
>> Yes, but you'd need to unbox a sum in this case, no? I think this is the
>> first issue that you need to solve before you can talk about dealing
>> with the polymorphism issue (although hopefully Ömer will make progress
>> on this for 8.2).
>>
>> Cheers,
>>
>> - Ben
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Unpacking single-field, single-strict-constructor GADTs and existentials

2016-05-24 Thread David Feuer
I should mention that while this does not require UNPACKing sum types (or
any of the difficult trade-offs that involves), it lets programmers
accomplish such unpacking by hand under sufficiently general conditions to
be quite useful in practice. As long as the set of types involved is
closed, it'll do.
David Feuer <david.fe...@gmail.com> writes:

> Given
>
> data Big a = B1 !(Small1 a) | B2 !(Small2 a) | B3 !(Small3 a), where the
> Small types are (possibly recursive) sums, it's generally possible to
> express that as something like
>
> data Selector = One | Two | Three
> data Big a = forall (x :: Selector) .
>Big !(BigG x a)
> data BigG x a where
>   GB1a :: some -> fields -> BigG 'One a
>   GB1b :: fields -> BigG 'One a
>   GB2a :: whatever -> BigG 'Two a
>   GB3a :: yeah -> BigG 'Three a
>
> Making one big GADT from all the constructors of the "small" types, and
> then wrapping it up in an existential. That's what I meant about
> "unpacking". But for efficiency purposes, that wrapper needs the newtype
> optimization.

Yes, but you'd need to unbox a sum in this case, no? I think this is the
first issue that you need to solve before you can talk about dealing
with the polymorphism issue (although hopefully Ömer will make progress
on this for 8.2).

Cheers,

- Ben
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Unpacking single-field, single-strict-constructor GADTs and existentials

2016-05-24 Thread David Feuer
No, because the pattern matching semantics are different. Matching on
the constructor *must* force the contents to maintain type safety.
It's really strict data with the newtype optimization, rather than a
bona fide newtype.
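A minimal illustration of the difference (example mine, not from the
thread): matching a newtype constructor never forces, while the wrapper
proposed here must force, just like a strict field does.

newtype N = N Bool
data    S = S !Bool

fN :: N -> ()
fN (N _) = ()   -- newtype patterns don't force: fN undefined is ()

fS :: S -> ()
fS (S _) = ()   -- data-constructor patterns force: fS undefined diverges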

On Tue, May 24, 2016 at 4:18 PM, Ben Gamari <b...@well-typed.com> wrote:
> David Feuer <david.fe...@gmail.com> writes:
>
>> Not really. It's really just the newtype optimization, although it's not a
>> newtype.
>
> Ahh, I see. Yes, you are right. I was being silly.
>
> However, in this case wouldn't it make more sense to just call it a newtype?
>
> Cheers,
>
> - Ben
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Unpacking single-field, single-strict-constructor GADTs and existentials

2016-05-24 Thread David Feuer
Not really. It's really just the newtype optimization, although it's not a
newtype.
On May 24, 2016 12:43 PM, "Ben Gamari" <b...@well-typed.com> wrote:

> David Feuer <david.fe...@gmail.com> writes:
>
> > Given
> >
> > data Big a = B1 !(Small1 a) | B2 !(Small2 a) | B3 !(Small3 a), where the
> > Small types are (possibly recursive) sums, it's generally possible to
> > express that as something like
> >
> > data Selector = One | Two | Three
> > data Big a = forall (x :: Selector) .
> >Big !(BigG x a)
> > data BigG x a where
> >   GB1a :: some -> fields -> BigG 'One a
> >   GB1b :: fields -> BigG 'One a
> >   GB2a :: whatever -> BigG 'Two a
> >   GB3a :: yeah -> BigG 'Three a
> >
> > Making one big GADT from all the constructors of the "small" types, and
> > then wrapping it up in an existential. That's what I meant about
> > "unpacking". But for efficiency purposes, that wrapper needs the newtype
> > optimization.
>
> Yes, but you'd need to unbox a sum in this case, no? I think this is the
> first issue that you need to solve before you can talk about dealing
> with the polymorphism issue (although hopefully Ömer will make progress
> on this for 8.2).
>
> Cheers,
>
> - Ben
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Unpacking single-field, single-strict-constructor GADTs and existentials

2016-05-24 Thread David Feuer
Given

data Big a = B1 !(Small1 a) | B2 !(Small2 a) | B3 !(Small3 a), where the
Small types are (possibly recursive) sums, it's generally possible to
express that as something like

data Selector = One | Two | Three
data Big a = forall (x :: Selector) .
   Big !(BigG x a)
data BigG x a where
  GB1a :: some -> fields -> BigG 'One a
  GB1b :: fields -> BigG 'One a
  GB2a :: whatever -> BigG 'Two a
  GB3a :: yeah -> BigG 'Three a

Making one big GADT from all the constructors of the "small" types, and
then wrapping it up in an existential. That's what I meant about
"unpacking". But for efficiency purposes, that wrapper needs the newtype
optimization.
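Filling in the placeholder fields with invented types gives a
self-contained version of the sketch (it compiles today, just without
the representation guarantee being asked for):

{-# LANGUAGE DataKinds, ExistentialQuantification, GADTs, KindSignatures #-}

data Selector = One | Two | Three

data BigG (x :: Selector) a where
  GB1a :: Int  -> a -> BigG 'One a
  GB1b :: a         -> BigG 'One a
  GB2a :: Char      -> BigG 'Two a
  GB3a :: [a]       -> BigG 'Three a

-- Today this wrapper costs an indirection; the request is to give it
-- the newtype representation while keeping strict matching.
data Big a = forall (x :: Selector). Big !(BigG x a)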
On May 24, 2016 4:16 AM, "Ben Gamari" <b...@well-typed.com> wrote:

> David Feuer <david.fe...@gmail.com> writes:
>
> > Data.IntMap could be cleaned up some if single-field, single strict
> > constructor GADTs/existentials could be unpacked even when wrapping a sum
> > type. We could then have
> >
> > data Status = E | NE
> > data IntMap' (s :: Status) a where
> >   Bin :: ... -> ... -> !(IntMap' NE a) -> !(IntMap' NE a) -> IntMap' NE a
> >   Tip :: ... -> a -> IntMap' NE a
> >   Nil :: IntMap' E a
> > data IntMap a =
> >   forall s . IM {-# UNPACK #-} !(IntMap' s a)
> >
> I'm not sure I understand how the existential helps you unpack this sum.
> Surely I'm missing something.
>
> > The representation would be the same as that of a newtype, but the
> pattern
> > matching semantics would be strict. In the GADT case, this would
> > essentially allow any fixed concrete datatype to serve directly as a
> > witness for an arbitrary set of type equalities demanded on construction.
> >
> > Is there any hope something like this could happen?
>
> Ignoring the sum issue for a moment:
>
> My understanding is that it ought to be possible to unpack at least
> single-constructor types in an existentially quantified datacon,
> although someone needs to step up to do it. A closely related issue
> (existentials in newtypes) was discussed by dons in a Stack Overflow
> question [1] quite some time ago.
>
> As far as I understand as long as the existentially-quantified argument
> is unconstrained (therefore there is no need to carry a dictionary) and
> of kind * (therefore has a uniform representation) there is no reason
> why unpacking shouldn't be possible.
>
> The case that you cite looks to be even easier since the existential is
> a phantom so there is no need to represent it at all. It seems to me
> like it might not be so difficult to treat this case in particular.
> It's possible all that is necessary would be to adjust the unpackability
> criteria in MkId.
>
> It actually looks like there's a rather closely related ticket already
> open, #10016.
>
> Cheers,
>
> - Ben
>
>
> [1]
> http://stackoverflow.com/questions/5890094/is-there-a-way-to-define-an-existentially-quantified-newtype-in-ghc-haskell
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Unpacking single-field, single-strict-constructor GADTs and existentials

2016-05-19 Thread David Feuer
Data.IntMap could be cleaned up some if single-field, single strict
constructor GADTs/existentials could be unpacked even when wrapping a sum
type. We could then have

data Status = E | NE
data IntMap' (s :: Status) a where
  Bin :: ... -> ... -> !(IntMap' NE a) -> !(IntMap' NE a) -> IntMap' NE a
  Tip :: ... -> a -> IntMap' NE a
  Nil :: IntMap' E a
data IntMap a =
  forall s . IM {-# UNPACK #-} !(IntMap' s a)

The representation would be the same as that of a newtype, but the pattern
matching semantics would be strict. In the GADT case, this would
essentially allow any fixed concrete datatype to serve directly as a
witness for an arbitrary set of type equalities demanded on construction.

Is there any hope something like this could happen?
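For reference, a self-contained rendering of the sketch above, with the
elided fields filled in by stand-ins (Prefix, Mask and Key are plain
Ints here, not the real Data.IntMap internals):

{-# LANGUAGE DataKinds, ExistentialQuantification, GADTs, KindSignatures #-}

type Prefix = Int
type Mask   = Int
type Key    = Int

data Status = E | NE

data IntMap' (s :: Status) a where
  Bin :: !Prefix -> !Mask
      -> !(IntMap' 'NE a) -> !(IntMap' 'NE a) -> IntMap' 'NE a
  Tip :: !Key -> a -> IntMap' 'NE a
  Nil :: IntMap' 'E a

-- The UNPACK pragma is omitted: making it effective on this wrapper is
-- exactly what is being requested.
data IntMap a = forall s. IM !(IntMap' s a)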
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: suboptimal ghc code generation in IO vs equivalent pure code case

2016-05-14 Thread David Feuer
Well, a few weeks ago Bertram Felgenhauer came up with a version of IO that
acts more like lazy ST. That could be just the thing. He placed it in the
public domain/CC0 and told me I could put it up on Hackage if I want. I'll
try to do that this week, but no promises. I could forward his email if you
just want to try it out.
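The general flavor, though not Bertram's actual code, can be sketched
with unsafeInterleaveIO: deferring the recursive call lets the list be
produced on demand instead of being built in full before returning.

import System.IO.Unsafe (unsafeInterleaveIO)

f :: [a] -> IO [a]
f []       = return []
f (x : xs) = do
  rest <- unsafeInterleaveIO (f xs)  -- defer the recursive call
  return (x : rest)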
On May 14, 2016 2:31 PM, "Harendra Kumar"  wrote:

> The difference seems to be entirely due to memory pressure. At list size
> 1000 both pure version and IO version perform equally. But as the size of
> the list increases the pure version scales linearly while the IO version
> degrades exponentially. Here are the execution times per list element in ns
> as the list size increases:
>
> Size of list   Pure   IO
> 1000           8.7    8.3
> 10000          8.7    18
> 100000         8.8    63
> 1000000        9.3    786
>
> This seems to be due to increased GC activity in the IO case. The GC stats
> for list size 1 million are:
>
> IO case:   %GC time  66.1%  (61.1% elapsed)
> Pure case:   %GC time   2.6%  (3.3% elapsed)
>
> Not sure if there is a way to write this code in IO monad which can reduce
> this overhead.
>
> -harendra
>
>
> On 14 May 2016 at 17:10, Harendra Kumar  wrote:
> >
> > You are right about the way IO code is generated because of the ordering
> semantics. The IO version builds the list entirely in a recursive fashion
> before returning it while the pure code builds it lazily. I wrote very
> simple versions to keep things simpler:
> >
> > Pure version:
> >
> > f [] = []
> > f (x : xs) = x : f xs
> >
> >
> > time 11.08 ms   (10.86 ms .. 11.34 ms)
> > Measured for a million elements in the list
> >
> >  104,041,264 bytes allocated in the heap
> >   16,728 bytes copied during GC
> >   35,992 bytes maximum residency (1 sample(s))
> >   21,352 bytes maximum slop
> >1 MB total memory in use (0 MB lost due to fragmentation)
> >
> >
> > IO version:
> > f [] = return []
> > f (x : xs) = do
> > rest <- f xs
> > return $ x : rest
> >
> > time 79.66 ms   (75.49 ms .. 82.55 ms)
> >
> >  208,654,560 bytes allocated in the heap
> >  121,045,336 bytes copied during GC
> >   27,679,344 bytes maximum residency (8 sample(s))
> >  383,376 bytes maximum slop
> >   66 MB total memory in use (0 MB lost due to fragmentation)
> >
> > Even though this is a small program not doing much and therefore
> enhancing even small differences to a great degree, I am not sure if I can
> completely explain the difference in slowness of the order of 7.5x by just
> the recursive vs lazy building of the list. I am wondering if there is
> anything that is worth further investigating and improving here.
> >
> > -harendra
> >
> > On 12 May 2016 at 05:41, Dan Doel  wrote:
> > >
> > > On Tue, May 10, 2016 at 4:45 AM, Harendra Kumar
> > >  wrote:
> > > > Thanks Dan, that helped. I did notice and suspect the update frame
> and the
> > > > unboxed tuple but given my limited knowledge about ghc/core/stg/cmm
> I was
> > > > not sure what is going on. In fact I thought that the intermediate
> tuple
> > > > should get optimized out since it is required only because of the
> realworld
> > > > token which is not real. But it might be difficult to see that at
> this
> > > > level?
> > >
> > > The token exists as far as the STG level is concerned, I think,
> > > because that is the only thing ensuring that things happen in the
> > > right order. And the closure must be built to have properly formed
> > > STG. So optimizing away the closure building would have to happen at a
> > > level lower than STG, and I guess there is no such optimization. I'm
> > > not sure how easy it would be to do.
> > >
> > > > What you are saying may be true for the current implementation but
> in theory
> > > > can we eliminate the intermediate closure?
> > >
> > > I don't know. I'm a bit skeptical that building this one closure is
> > > the only thing causing your code to be a factor of 5 slower. For
> > > instance, another difference in the core is that the recursive call
> > > corresponding to the result s2 happens before allocating the
> > > additional closure. That is the case statement that unpacks the
> > > unboxed tuple. And the whole loop happens this way, so it is actually
> > > building a structure corresponding to the entire output list in memory
> > > rather eagerly.
> > >
> > > By contrast, your pure function is able to act in a streaming fashion,
> > > if consumed properly, where only enough of the result is built to keep
> > > driving the rest of the program. It probably runs in constant space,
> > > while your IO-based loop has a footprint linear in the size of the
> > > input list (in addition to having slightly more overhead per character
> > > because of the one extra thunk), 

Can we do something slightly nicer about (^)?

2016-04-26 Thread David Feuer
Every time someone writes, say, x^20, the literal exponent defaults to
Integer. This is the wrong default whenever the literal is in the Word
range. Fixing this goes beyond the capabilities of RULES pragmas, but I
imagine it would be a fairly simple thing to accomplish in the internal
rule language.
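For illustration, here is the defaulting in question and the annotation
that currently works around it (example mine):

x :: Double
x = 1.0001

slow, fast :: Double
slow = x ^ (20 :: Integer)  -- what x ^ 20 means after defaulting
fast = x ^ (20 :: Word)     -- avoids Integer arithmetic for the exponent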

David
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Pattern synonym type flexibility

2016-04-20 Thread David Feuer
Think about it this way: the matching side of a pattern synonym is
essentially a function with a somewhat unusual type, except of course
optimized and perhaps a bit more general.

{-# LANGUAGE ConstraintKinds, DataKinds, GADTs, KindSignatures,
             RankNTypes, TypeOperators #-}

import Data.Kind (Constraint)  -- GHC 8.0; GHC.Exts on older compilers

data HList :: [*] -> * where
  HNil :: HList '[]
  HCons :: a -> HList as -> HList (a ': as)

data PatternResult :: ([*] -> Constraint) -> * where
  PatternResult :: provides ts => HList ts -> PatternResult provides

type Matcher requires provides =
  forall x . requires x => x -> PatternResult provides

The smart constructor side of a pattern synonym is much, much simpler! It's
just a regular old Haskell value! The only special bit is that it's
treated, syntactically, as a constructor. There's simply nothing else worth
saying about it, so the less said the better.
On Apr 20, 2016 1:48 PM, "David Feuer" <david.fe...@gmail.com> wrote:

> I don't know what that means. There's no way to enforce duality at the
> term level. Enforcing it at the type level prevents me from doing what I
> want and serves no apparent purpose. Remember that pattern synonyms are all
> about providing nice syntax, not adding essential expressiveness.
> On Apr 20, 2016 1:41 PM, "Carter Schonwald" <carter.schonw...@gmail.com>
> wrote:
>
> Shouldn't the design simply be both directions are the dual of the other,
> and pure in some sense ?
>
>
> On Wednesday, April 20, 2016, David Feuer <david.fe...@gmail.com> wrote:
>
>> To some degree, it probably could be. But I believe that imposing any
>> substantial relationship between the smart constructor and the pattern
>> synonym is likely to fall squarely into the category of things that are
>> subtle, hard, and almost completely useless. In the arrangement I
>> suggested, people would be free to do some things that "don't make sense",
>> and that doesn't bother me in the least.
>> On Apr 20, 2016 1:27 PM, "Carter Schonwald" <carter.schonw...@gmail.com>
>> wrote:
>>
>>> Would that duality be related to the given vs wanted constraints ?
>>>
>>> On Wednesday, April 20, 2016, David Feuer <david.fe...@gmail.com> wrote:
>>>
>>>> As far as I can tell from the 7.10 documentation, it's impossible to
>>>> make a bidirectional pattern synonym used as a constructor have a
>>>> different type signature than when used as a pattern. Has this been
>>>> improved in 8.0? I really want something like
>>>>
>>>> class FastCons x xs | xs -> x where
>>>>   fcons :: x -> xs -> xs
>>>> class FastViewL x xs | xs -> x where
>>>>   fviewl :: xs -> ViewL x xs
>>>>
>>>> pattern x :<| xs <- (fviewl -> ConsL x xs) where
>>>>   x :<| xs = fcons x xs
>>>>
>>>> This would allow users to learn just *one* name, :<|, that they can
>>>> use for sequences that are consable or viewable even if they may not
>>>> be the other.
>>>>
>>>> If this is not yet possible, then I think the most intuitive approach
>>>> is to sever the notions of "pattern synonym" and "smart constructor".
>>>> So I'd write
>>>>
>>>> pattern x :<| xs <- (fviewl -> ConsL x xs)
>>>> constructor (:<|) = fcons
>>>>
>>>> The current syntax could easily be desugared to produce *both* a
>>>> pattern synonym and a smart constructor in the bidirectional case.
>>>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Pattern synonym type flexibility

2016-04-20 Thread David Feuer
I don't know what that means. There's no way to enforce duality at the term
level. Enforcing it at the type level prevents me from doing what I want
and serves no apparent purpose. Remember that pattern synonyms are all
about providing nice syntax, not adding essential expressiveness.
On Apr 20, 2016 1:41 PM, "Carter Schonwald" <carter.schonw...@gmail.com>
wrote:

Shouldn't the design simply be both directions are the dual of the other,
and pure in some sense ?


On Wednesday, April 20, 2016, David Feuer <david.fe...@gmail.com> wrote:

> To some degree, it probably could be. But I believe that imposing any
> substantial relationship between the smart constructor and the pattern
> synonym is likely to fall squarely into the category of things that are
> subtle, hard, and almost completely useless. In the arrangement I
> suggested, people would be free to do some things that "don't make sense",
> and that doesn't bother me in the least.
> On Apr 20, 2016 1:27 PM, "Carter Schonwald" <carter.schonw...@gmail.com>
> wrote:
>
>> Would that duality be related to the given vs wanted constraints ?
>>
>> On Wednesday, April 20, 2016, David Feuer <david.fe...@gmail.com> wrote:
>>
>>> As far as I can tell from the 7.10 documentation, it's impossible to
>>> make a bidirectional pattern synonym used as a constructor have a
>>> different type signature than when used as a pattern. Has this been
>>> improved in 8.0? I really want something like
>>>
>>> class FastCons x xs | xs -> x where
>>>   fcons :: x -> xs -> xs
>>> class FastViewL x xs | xs -> x where
>>>   fviewl :: xs -> ViewL x xs
>>>
>>> pattern x :<| xs <- (fviewl -> ConsL x xs) where
>>>   x :<| xs = fcons x xs
>>>
>>> This would allow users to learn just *one* name, :<|, that they can
>>> use for sequences that are consable or viewable even if they may not
>>> be the other.
>>>
>>> If this is not yet possible, then I think the most intuitive approach
>>> is to sever the notions of "pattern synonym" and "smart constructor".
>>> So I'd write
>>>
>>> pattern x :<| xs <- (fviewl -> ConsL x xs)
>>> constructor (:<|) = fcons
>>>
>>> The current syntax could easily be desugared to produce *both* a
>>> pattern synonym and a smart constructor in the bidirectional case.
>>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Pattern synonym type flexibility

2016-04-20 Thread David Feuer
To some degree, it probably could be. But I believe that imposing any
substantial relationship between the smart constructor and the pattern
synonym is likely to fall squarely into the category of things that are
subtle, hard, and almost completely useless. In the arrangement I
suggested, people would be free to do some things that "don't make sense",
and that doesn't bother me in the least.
On Apr 20, 2016 1:27 PM, "Carter Schonwald" <carter.schonw...@gmail.com>
wrote:

> Would that duality be related to the given vs wanted constraints ?
>
> On Wednesday, April 20, 2016, David Feuer <david.fe...@gmail.com> wrote:
>
>> As far as I can tell from the 7.10 documentation, it's impossible to
>> make a bidirectional pattern synonym used as a constructor have a
>> different type signature than when used as a pattern. Has this been
>> improved in 8.0? I really want something like
>>
>> class FastCons x xs | xs -> x where
>>   fcons :: x -> xs -> xs
>> class FastViewL x xs | xs -> x where
>>   fviewl :: xs -> ViewL x xs
>>
>> pattern x :<| xs <- (fviewl -> ConsL x xs) where
>>   x :<| xs = fcons x xs
>>
>> This would allow users to learn just *one* name, :<|, that they can
>> use for sequences that are consable or viewable even if they may not
>> be the other.
>>
>> If this is not yet possible, then I think the most intuitive approach
>> is to sever the notions of "pattern synonym" and "smart constructor".
>> So I'd write
>>
>> pattern x :<| xs <- (fviewl -> ConsL x xs)
>> constructor (:<|) = fcons
>>
>> The current syntax could easily be desugared to produce *both* a
>> pattern synonym and a smart constructor in the bidirectional case.
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Pattern synonym type flexibility

2016-04-20 Thread David Feuer
As far as I can tell from the 7.10 documentation, it's impossible to
make a bidirectional pattern synonym used as a constructor have a
different type signature than when used as a pattern. Has this been
improved in 8.0? I really want something like

class FastCons x xs | xs -> x where
  fcons :: x -> xs -> xs
class FastViewL x xs | xs -> x where
  fviewl :: xs -> ViewL x xs

pattern x :<| xs <- (fviewl -> ConsL x xs) where
  x :<| xs = fcons x xs

This would allow users to learn just *one* name, :<|, that they can
use for sequences that are consable or viewable even if they may not
be the other.

If this is not yet possible, then I think the most intuitive approach
is to sever the notions of "pattern synonym" and "smart constructor".
So I'd write

pattern x :<| xs <- (fviewl -> ConsL x xs)
constructor (:<|) = fcons

The current syntax could easily be desugared to produce *both* a
pattern synonym and a smart constructor in the bidirectional case.
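What does work today, for comparison, is keeping the two names separate
by hand; a self-contained sketch (the list instance is invented for
illustration):

{-# LANGUAGE PatternSynonyms, ViewPatterns, MultiParamTypeClasses,
             FunctionalDependencies, FlexibleInstances #-}

data ViewL x xs = ConsL x xs | EmptyL

class FastCons x xs | xs -> x where
  fcons :: x -> xs -> xs
class FastViewL x xs | xs -> x where
  fviewl :: xs -> ViewL x xs

instance FastCons a [a] where
  fcons = (:)
instance FastViewL a [a] where
  fviewl (x : xs) = ConsL x xs
  fviewl []       = EmptyL

-- The pattern gets one name...
pattern (:<|) :: FastViewL x xs => x -> xs -> xs
pattern x :<| xs <- (fviewl -> ConsL x xs)
-- ...and construction goes through fcons under another.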
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Could we promote unlifted tuples?

2016-03-19 Thread David Feuer
At present, currying and uncurrying at the type level doesn't seem to
work terribly well. In particular, the kinds

(a, b) -> c

and

a -> b -> c

aren't really isomorphic, because (a, b) can be stuck. This makes some
things (like expressing Atkey-style indexed functors in terms of
McBride-style ones) rather awkward, and difficult or impossible to
really get quite right. The thought came to me that maybe we could
allow unlifted tuples to be promoted. Then something of the kind (# a,
b #) would unconditionally unify with '(# a, b #). The restrictions on
how values of unlifted tuple types can be used would presumably
translate directly to restrictions on how types of unlifted tuple kind
can be used.
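A small illustration of the stuckness (names invented): a projection
family reduces on a literal pair but not on an abstract pair-kinded
variable, and the eta law that would repair the isomorphism is not
provable.

{-# LANGUAGE DataKinds, PolyKinds, TypeFamilies #-}

import Data.Proxy

type family Fst (p :: (a, b)) :: a where
  Fst '(x, y) = x
type family Snd (p :: (a, b)) :: b where
  Snd '(x, y) = y

ok :: Proxy (Fst '(Int, Bool))  -- reduces: this is Proxy Int
ok = Proxy

-- Rejected: for an abstract p, GHC cannot prove p ~ '(Fst p, Snd p),
-- so Fst p and Snd p stay stuck.
-- eta :: Proxy p -> Proxy '(Fst p, Snd p)
-- eta = id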

David Feuer
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

