Re: Breaking Changes and Long Term Support Haskell

2015-10-21 Thread Dan Doel
For proposal 3, I don't see what difference it makes whether a
refreshed Haskell committee or a new libraries committee makes
decisions that affect backwards compatibility. A name doesn't ensure
good decision making. The only difference I can see is that the
Haskell committee might only publish final decisions every couple
years. But the Haskell report also isn't designed to describe
migration plans between feature revisions; unless the plan is to start
incorporating library deprecation and whatnot into the report (which
would be odd to me). But that would just be doing the same thing
slower, so it'd be little different than making library changes over 6
to 9 GHC versions instead of 3.

For proposal 2, I don't know how effective it will be in practice. I
believe it is already the job of a proposal submitter to summarize the
arguments made about it, according to the library proposal guidelines.
We could post those summaries to another list. But unless more people
promise they will be diligent about reading that list, I'm not sure
that one factor in these dust-ups (surprise) will actually be any
different.

Also, if the amount of discussion is at issue, I'm not sure I agree. For
AMP, I was waiting a decade, more or less. I thought we should do it,
other people thought we shouldn't because it would break things. I
don't know what more there was to discuss, except there was more stuff
to break the longer we waited.

As for FTP, some aspects only became known as the proposal was
implemented, and I don't know that they would have been realized
regardless of how long the proposal was discussed. And then we still
had a month or so of discussion after the implementation was
finalized, on the cusp of GHC 7.10 being released. So how much more
_was_ needed, that people are now discussing it again?

If it's just about documenting more things, there's certainly no harm in that.

For 1, I don't have a very strong opinion. If pressed, I would
probably express some similar sentiments to Henrik. I certainly don't
think Haskell would be nearly as good as it is if it were a simple
majority vote by all users (and I probably wouldn't use it if that's
how things were decided). Would a community-elected libraries
committee be better than appointment by people who previously held the
power (but have more to do than any human can accomplish)? I don't
know.

I should say, though, that things are not now run by simple majority
vote. What we conducted a year ago was a survey, where people
submitted their thoughts. I didn't get to read them, because they were
private, and it wasn't my decision to make. But it was not just +80
-20.

With regard to your last paragraph, unless I've missed something (and
I confess that I haven't read every comment in these threads), the
recent resignations didn't express disagreement with the decision
making process. They expressed disagreement with the (technical)
decisions (and their effects). I don't see how a different process
could have solved that unless it is expected that it would have made
different decisions.

-- Dan

On Wed, Oct 21, 2015 at 6:18 PM, Geoffrey Mainland  wrote:
> Hi Dan,
>
> Thank you for the historical perspective.
>
> I was careful not to criticize the committee. Instead, I made three
> concrete proposals with the hope that they would help orient a conversation.
>
> It sounds like you are not for proposal 3. How about the other two?
>
> My original email stated my underlying concern: we are losing valuable
> members of the community not because of the technical decisions that are
> being made, but because of the process by which they are being made.
> That concern is what drove my proposals. It is perfectly valid to think
> that that loss was the inevitable price of progress, but that is not my
> view.
>
> Cheers,
> Geoff
>
> On 10/21/15 5:22 PM, Dan Doel wrote:
>> Hello,
>>
>> I'm Dan Doel. I'm on the core libraries committee (though I'm speaking
>> only for myself). As I recall, one of the reasons I got tapped for it
>> was due to my having some historical knowledge about Haskell; not
>> because I was there, but because I've gone back and looked at some old
>> reports and whatnot (and sometimes think they're better than what we
>> have now).
>>
>> But, I was around (of course) when the core libraries committee
>> started up, so perhaps I can play the role of historian for this as
>> well.
>>
>> The reason the committee exists is because a couple years ago, people
>> brought up the ideas that were finally realized in the
>> Applicative-Monad proposal and the Foldable-Traversable proposal. A
>> lot of people weighed in saying they thought they were a good idea,
>> and significantly fewer people weighed in saying they thought that it
>> shouldn't happen for various reasons---roughly the same things that
>> people are still bringing up about these proposals.

Re: Breaking Changes and Long Term Support Haskell

2015-10-21 Thread Dan Doel
Hello,

I'm Dan Doel. I'm on the core libraries committee (though I'm speaking
only for myself). As I recall, one of the reasons I got tapped for it
was due to my having some historical knowledge about Haskell; not
because I was there, but because I've gone back and looked at some old
reports and whatnot (and sometimes think they're better than what we
have now).

But, I was around (of course) when the core libraries committee
started up, so perhaps I can play the role of historian for this as
well.

The reason the committee exists is because a couple years ago, people
brought up the ideas that were finally realized in the
Applicative-Monad proposal and the Foldable-Traversable proposal. A
lot of people weighed in saying they thought they were a good idea,
and significantly fewer people weighed in saying they thought that it
shouldn't happen for various reasons---roughly the same things that
people are still bringing up about these proposals.

This wasn't the first time that happened, either. I think it was
widely agreed among most users that Functor should be a superclass of
Monad ever since I started learning Haskell, around 10 years ago. And once
Applicative was introduced, it was agreed that that should go in the
middle of the two. But it appeared that it would never happen, despite
a significant majority thinking it should, because no one wanted to do
anything without pretty much unanimous consent.

So, one question that got raised is: why should this majority of
people even use Haskell/GHC anymore? Why shouldn't they start using
some other language that will let them change 15-year-old mistakes, or
adapt to ideas that weren't even available at that time (but are still
fairly old and established, all things considered)? And the answer was
that there should be some body empowered to decide to move forward
with these ideas, even if there is some dissent. And frankly, it
wasn't going to be the prime committee, because it hadn't shown any
activity in something like 3 years at the time, and even when it was
active, it didn't make anywhere near the sort of changes that were
being discussed.

And the kicker to me is, many things that people are complaining about
again (e.g. the FTP) were the very things that the committee was
established to execute. I don't think we had a formal vote on that
proposal, because we didn't need to. Our existence was in part to
execute that proposal (and AMP). And then a year ago, when it was
finally time to release the changes, there was another ruckus. And we
still didn't have a CLC vote on the matter. What we did was conduct a
community poll, and then SPJ reviewed the submissions. But I don't
mean to pass the buck to him, because I'm pretty sure he was worried
that we were crazy, and overstepping our bounds. Just, the results of
the survey were sufficient for him to not overrule us.

So my point is this: there seems to be some sentiment that the core
libraries committee is unsound, and making bad decisions. But the
complaints are mostly not even about actual decisions we made (aside
from maybe Lennart Augustsson's, where he is unhappy with details of
the FTP that you can blame on us, but that were designed to break the
least code instead of being the most elegant; if we had pleased him more,
we would have pleased others less). They are about the reasons for
founding the committee in the first place. You can blame us, if you
like, because I think it's certain that we would have approved them if
we had formally voted. We just didn't even need to do so.

Forgive me if I'm wrong, but suggestions that these decisions should
have been deferred to a Haskell Prime committee mean, to me, that the
hope is that they would have been rejected. That the Haskell Prime
committee should have just vetoed these proposals that something like
80% or more of practicing Haskell users (as far as we can tell) wanted
for years before they finally happened. That the Haskell Prime
committee should be responsible for enforcing the very status quo that
led to the CLC in the first place, where proposals with broad support
but minority dissent never pass for various core modules.

If this is the case, then one could simply re-pose the earlier
question: why should most of these people stick around to abide by the
Haskell Prime committee's pronouncements, instead of getting to work
on a language that incorporates their input?

And if it isn't, then I don't ultimately understand what the
complaints are. We try to accomplish the (large) changes in a manner
that allows transition via refactoring over multiple versions (and as
I mentioned earlier, some complaints are that we compromised _too
much_ for this). And in light of the more recent complaints, it's even
been decided that our time frames should be longer. Rolling up changes
into a report just seems like it makes transitions less smooth. Unle

Re: Proposal: Non-recursive let

2013-07-10 Thread Dan Doel
On Wed, Jul 10, 2013 at 3:08 AM, Andreas Abel wrote:

> Another instance (cut-down) are let-guards like
>
>   let Just x | x > 0 = e in x
>
> The "x > 0" is understood as an assertion here, documenting an invariant.
>  However, Haskell reads this as
>
>   let Just x = case () of { () | x > 0 -> e } in x
>
> leading to non-termination.  A non-recursive version
>
>   let' Just x | x > 0 = e in x
>
> could be interpreted as
>
>   case e of { Just x | x > 0 -> x }
>
> instead, which is meaningful (in contrast to the current interpretation).
>

I still don't understand how this is supposed to work. It is a completely
different interpretation of guarded definitions that can only apply to
certain special cases.

Specifically, you have a let with one definition with one guard. This lets
you permute the pieces into a case, because there is one of everything.
But, if we instead have two guards, then we have two possible bodies of the
definition, and only one body of the let. But, the case has only one
discriminee (which comes from the definition body), and two branches (which
are supposed to come from the let body). We have the wrong number of pieces
to shuffle everything around.
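
For concreteness, consider a hypothetical two-guard variant (e1 and e2
standing in for arbitrary expressions):

    let' Just x | x > 0     = e1
                | otherwise = e2
    in x

There are two candidate bodies for the definition (e1 and e2), but only
one let body (x) to serve as a case branch, so no single case expression
of the proposed shape can be assembled from the available parts.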

The only general desugaring is the one in the report, but keeping the
non-recursive let. This means that the x in the guard refers to an outer
scope. But it is still equivalent to:

let' Just x = case x > 0 of True -> e in x

which in turn desugars to:

let' x = case (case x > 0 of True -> e) of ~(Just y) -> y in x

not:

case e of Just x | x > 0 -> x

which elaborates to:

case e of Just x -> case x > 0 of True -> x


Re: Status of Haskell'?

2012-12-02 Thread Dan Doel
As far as I know, MPTCs alone do not have this issue. But functional
dependencies do, as there are at least two ways they can behave. One is the
way they traditionally behave in GHC, and another is the way they would
behave if they were sugar for type families.

I can't think of anything about MPTCs alone that would be a problem, though.
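
For illustration (a made-up class, but representative), a vanilla MPTC
is just a relation on types, and nothing requires the relation to be
functional in either direction:

    {-# LANGUAGE MultiParamTypeClasses #-}

    class Convert a b where
      convert :: a -> b

    -- Int relates to both Integer and Double; with no dependency
    -- declared, neither instance constrains the other.
    instance Convert Int Integer where
      convert = toInteger

    instance Convert Int Double where
      convert = fromIntegral

There is only one obvious reading of code like this, which is part of
why MPTCs alone seem unproblematic.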


On Sun, Dec 2, 2012 at 2:56 PM, Gábor Lehel  wrote:

> On Sun, Dec 2, 2012 at 7:06 PM, Dan Doel  wrote:
> > This is a significant problem for even some of the more ubiquitous
> > extensions. For instance, there are multiple compilers that implement
> > RankNTypes, but I would not be surprised at all if programs using that
> > extension were not portable across implementations (they're not really
> > even portable across GHC versions).
> >
> > The problem is that RankNTypes is not just about the fact that you are
> > allowed to use such types; every compiler must decide on an inference
> > algorithm that incorporates such types while defaulting to
> > Hindley-Milner. But, there are several such algorithms, and they have
> > different trade offs as far as where annotations must be placed, or
> > even whether certain conceivably well-typed terms are type checkable
> > (for instance, GHC used to do some level of impredicative
> > instantiation; forall a. a -> a could be instantiated to
> > (forall a. a) -> (forall a. a); but this no longer works).
> >
> > So, even if we have ubiquitous agreement on the fact that RankNTypes
> > are useful, and implementable, we don't have ubiquitous agreement on
> > the algorithms for implementing them, and which set of trade offs to
> > make. But any standard would have to nail all that down, or else
> > programs won't be portable.
> >
> > And this is not the only extension for which this kind of thing is an
> > issue.
> >
>
> Out of curiosity, to what degree does MultiParamTypeClasses have this
> issue? It seems to me like one of the few extensions which is
> straightforward, widely implemented, uncontroversial, and very useful.
> For some reason it's been held up by the FDs vs TFs debate, but I
> don't see why it has to be. Vanilla MPTCs on the one hand, and MPTCs
> together with either FDs or TFs on the other hand, serve different use
> cases. If you want a plain type-level relation on types, you use
> MPTCs. If you want some types to be determined by others, then you use
> either FDs or TFs. If we standardize support for the former, that's
> useful, even if we continue to procrastinate on the FDs vs TFs
> question. So if the idea is to do yearly incremental updates to the
> standard, MPTCs looks like the biggest low-hanging fruit to me.
> (Assuming they aren't similarly problematic as RankNTypes.)
>
> --
> Your ship was destroyed in a monadic eruption.
>


Re: Status of Haskell'?

2012-12-02 Thread Dan Doel
This is a significant problem for even some of the more ubiquitous
extensions. For instance, there are multiple compilers that implement
RankNTypes, but I would not be surprised at all if programs using that
extension were not portable across implementations (they're not really even
portable across GHC versions).

The problem is that RankNTypes is not just about the fact that you are
allowed to use such types; every compiler must decide on an inference
algorithm that incorporates such types while defaulting to Hindley-Milner.
But, there are several such algorithms, and they have different trade offs
as far as where annotations must be placed, or even whether certain
conceivably well-typed terms are type checkable (for instance, GHC used to
do some level of impredicative instantiation; forall a. a -> a could be
instantiated to (forall a. a) -> (forall a. a); but this no longer works).
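
A small example of the annotation-placement issue, as GHC currently
behaves:

    {-# LANGUAGE RankNTypes #-}

    -- Accepted: the signature announces where the rank-2 type goes.
    both :: (forall a. a -> a) -> (Int, Bool)
    both f = (f 0, f True)

    -- Rejected without the signature: inference would have to invent
    -- the higher-rank type for f on its own.
    -- both' f = (f 0, f True)

A different algorithm could make different choices about which of these
needs the annotation, and such programs would not port between the two.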

So, even if we have ubiquitous agreement on the fact that RankNTypes are
useful, and implementable, we don't have ubiquitous agreement on the
algorithms for implementing them, and which set of trade offs to make. But
any standard would have to nail all that down, or else programs won't be
portable.

And this is not the only extension for which this kind of thing is an issue.

-- Dan


On Sun, Dec 2, 2012 at 6:42 AM, Ross Paterson  wrote:

> On Fri, Nov 30, 2012 at 11:05:41PM +0000, Gábor Lehel wrote:
> > Well, I'm not so sure it's a great idea to just bake "what GHC does at
> > this moment" (for any particular extension) into the standard without
> > really thinking about it. Even then, you have to figure out, in great
> > detail, what GHC does, and write it all down! That's not negligible
> > effort, either.
>
> And that is the core of the problem.  The standard isn't just a list
> of approved features.  It needs to describe them in such detail that a
> programmer can tell, from the Report alone, whether a particular program
> is legal, and if so what it's supposed to do.  We don't have that level
> of description for these extensions, and creating it will be a lot of
> hard work.
>
> Relying on "what GHC does at the moment" has obvious risks for
> programmers, it also puts an unfair responsibility on GHC itself.  How can
> they improve a feature if its current implementation is the "standard"?


Re: TypeFamilies vs. FunctionalDependencies & type-level recursion

2011-06-15 Thread Dan Doel
On Tue, Jun 14, 2011 at 9:56 PM,  wrote:
> Maybe "functional dependencies" is a poor name.  I'm not going to
> argue it's a good one, just that the extension is quite useful.
> Despite what the name suggests, it is most useful to think of
> "| a b -> c d" as meaning "any method that uses types a and b does not
> need to use types c and d for the compiler to determine a unique
> instance".  Once you start thinking of fundeps that way (and always
> keep in mind that contexts play no role in instance selection), then I
> think it gets a bit easier to reason about fundeps.

I wasn't really posting in this thread because I'm confused about how
fundeps actually behave in GHC. Rather, I care about what functional
dependencies mean, and what they should do, not just what they
actually do in one implementation or another.

> I don't think so.  Because fundeps (despite the name) are mostly about
> instance selection, incoherent fundeps won't violate safety.  Your
> program may incoherently choose between methods in multiple instances
> that should be the same instance, but at least each individual method
> is type safe.  With type families, you can actually undermine type
> safety, and there's no cheating way to fix this, which is why I think
> TypeFamilies could by very dangerous when combined with dynamic
> loading.

I know that the actual, current implementation won't violate type
safety. But there are reasonable expectations for how *functional
dependencies* might work that would cause problems. Here's an example.

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
    FlexibleInstances, UndecidableInstances, TypeFamilies,
    ExplicitForAll #-}
-- (pragmas added; GHC accepts the declarations, then rejects coerce
-- with the complaint described below)

class C a b | a -> b

instance C Int b

foo :: forall a b c d. (C a c, C b d, a ~ b) => a -> b -> c -> d
foo _ _ x = x

coerce :: forall a b. a -> b
coerce = foo (5 :: Int) (5 :: Int)

This currently gets complaints about not being able to deduce c ~ d.
However, actually reading things as genuine functional dependencies,
there's nothing wrong with the logic:

a ~ b
c ~ F a  -- for some F
d ~ F b  -- for the same F
c ~ d

The only thing stopping this is that fundeps in GHC don't actually
introduce the information that they in theory represent. But this in
turn means that they don't fulfill certain usages of type families.

This is not inherent to fundeps. Fundeps could interact with local
constraints more like type families. And in fact, if fundeps ever
become sugar for type families (which is at least possible), then they
will (I think) work exactly this way, and the above instance would
need to be rejected to avoid unsoundness.
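
For contrast, a type-family rendering (my sketch) in which the
congruence reasoning above does go through:

    {-# LANGUAGE TypeFamilies #-}

    type family F a

    -- Accepted: from a ~ b, the checker derives F a ~ F b by
    -- congruence, which is exactly the c ~ d step above.
    foo :: (a ~ b) => a -> b -> F a -> F b
    foo _ _ x = x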

> Though I realize you are unlikely ever to like lifting the coverage
> condition, let me at least leave you with a better example of why
> Undecidable instances are useful.  Suppose you want to define an
> instance of MonadState for another monad like MaybeT.  You would need
> to write code like this:
>
>        instance (MonadState s m) => MonadState s (MaybeT m) where
>            get = lift get
>            put = lift . put
>
> This code fails the coverage condition, because class argument
> (MaybeT m) does not contain the type variable s.  Yet, unlike the
> compiler, we humans can look at the type context when reasoning about
> instance selection.

Yes, I'm familiar with the mtl examples. I had forgotten that they
actually need the coverage condition to be lifted, but they are
similar to the type cast case in that they're (possibly) uncheckably
all right.

I consider the fact that you need undecidable instances to be an
advertisement for type families, though.
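
Concretely, here is a sketch in the style of the monads-tf package
(class and family names are mine):

    {-# LANGUAGE TypeFamilies #-}

    import Control.Monad.Trans.Class (lift)
    import Control.Monad.Trans.Maybe (MaybeT)

    class Monad m => MonadStateTF m where
      type StateOf m
      get :: m (StateOf m)
      put :: StateOf m -> m ()

    -- No coverage condition to lift: the state type is computed by
    -- the associated family rather than being a class parameter.
    instance MonadStateTF m => MonadStateTF (MaybeT m) where
      type StateOf (MaybeT m) = StateOf m
      get = lift get
      put = lift . put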

> We know that even though our get method is
> returning an s--meaning really "forall s. s", since s is mentioned
> nowhere else--there is really only one s that will satisfy the
> MonadState s m requirement in the context.  Perhaps more importantly,
> we know that in any event, if the code compiles, the s we get is the
> one that the surrounding code (calling get or put) expects.

No, the s in the instance does not mean (forall s. s). What it means
is that forall s there is an instance for s, where the quantification
is outside the instance. The difference is significant. If we move to
explicit dictionary construction and arbitrary rank types, it'd look
like:

instance (MonadState s m) => MonadState s (T m)
   ==>
forall s m. (MSDict s m -> MSDict s (T m))

And having access to

forall s m. MSDict s (T m)

is not at all the same as having access to

forall m. MSDict (forall s. s) (T m)

Similarly, (forall a b. CDict a b) is not the same as (forall a. CDict
a (forall b. b)). And:

instance C a b

corresponds to the former, not the latter. Viewing C as a relation,
the former expresses that C relates every particular type to every
other type. The latter says that C relates every type to the single
type (forall b. b). And in the absence of any other instances, the
latter is a functional relation, and the former is not.

> Now if, in addition to lifting the coverage condition, you add
> OverlappingInstances, you can do something even better--you can wri

Re: TypeFamilies vs. FunctionalDependencies & type-level recursion

2011-06-15 Thread Dan Doel
On Wed, Jun 15, 2011 at 3:25 AM, Simon Peyton-Jones wrote:
> Wait.  What about
>        instance C [a] [b]
> ?  Should that be accepted?  The Coverage Condition says "no", and indeed it 
> is rejected. But if you add -XUndecidableInstances it is accepted.

This 'clearly' violates the functional dependency as well.

However, I must admit, it surprises me that GHC or Hugs ever detected
this, and I imagine there's no general way to detect 'acceptable'
instances.

> Do you think the two are different?  Do you argue for unconditional rejection 
> of everything not satisfying the Coverage Condition, regardless of flags?

One obvious difference from the instances that appear bad (depending
on how smart you're pretending to be as a compiler) but are
nevertheless okay is that these have no contexts. If you can detect
that, then:

instance C a b
instance C [a] [b]

clearly have multiple independent instantiations on both sides, and so
the relation is clearly non-functional. A simple heuristic might be to
reject those, but allow:

instance (..., D .. b .., ...) => C a b

trusting that the context determines b in the right way. Is this
possibly what GHC used to do? Of course, that allows 'Show b => C a b'
so it's pretty weak.

A slightly more intelligent heuristic might be to see if the fundeps
in the context determine b, but that sounds like it might be leaving
the realm of what's checkable.

-- Dan



Re: TypeFamilies vs. FunctionalDependencies & type-level recursion

2011-06-14 Thread Dan Doel
On Tue, Jun 14, 2011 at 5:38 PM,  wrote:
> Undecidable instances are orthogonal to both FunctionalDependencies
> and OverlappingInstances.  They concern whether or not the compiler
> can guarantee that the typechecker will terminate.  You can have
> undecidable instances without either of the other two extensions

I'm aware of what UndecidableInstances does.

But in this case, it's actually ensuring the soundness of the
computational aspect of functional dependencies, at least in the case
where said computation were fixed to incorporate the things type
families can do. Which would be useful, because fundeps don't interact
with GADTs and such correctly.

> The coverage condition is part of a pair of sufficient (but
> not necessary) conditions that GHC applies to know that typechecking
> is decidable.  It just says that if you have a dependency "a -> b",
> and you have type variables in b, then they should mention all the
> type variables in a.  Thus, the following code is legal without
> UndecidableInstances:
>
>        {-# LANGUAGE MultiParamTypeClasses #-}
>        {-# LANGUAGE FlexibleInstances #-}
>        {-# LANGUAGE FunctionalDependencies #-}
>
>        class A a b | a -> b
>        instance A [a] (Either String a)

In this instance, the second argument can be given as a function of the first:

f [a] = Either String a
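
In type-family notation, that function can be written out directly:

    type family F a
    type instance F [a] = Either String a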

> But the following program is not:
>
>        class A a b | a -> b
>        instance A a (Either String b)

In this instance, it cannot.

> No, that's not right.  Even with UndecidableInstances, you cannot
> define conflicting instances for functional dependencies.  Moreover,
> just because the GHC's particular typechecker heuristics don't
> guarantee the above code is decidable doesn't mean it isn't decidable.

You certainly can define conflicting instances if 'a -> b' is supposed
to denote a functional dependency between a and b, and not just a
clever set of rules for selecting instances.

> I think you are thinking of UndecidableInstances in the wrong way.
> For instance, the following code does not require
> UndecidableInstances, but has polymorphic type variables on the
> right-hand side of a functional dependency, which seems to be what you
> object to:
>
>        {-# LANGUAGE MultiParamTypeClasses #-}
>        {-# LANGUAGE FunctionalDependencies #-}
>
>        class C a b | a -> b where
>            foo :: a -> b
>
>        instance C (Either a b) (Maybe b) where
>            foo _ = Nothing
>
>        bar :: Maybe Int
>        bar = foo (Left "x")
>
>        baz :: Maybe Char
>        baz = foo (Left "x")

This is another case where the second argument is a function of the first:

f (Either a b) = Maybe b

bar uses the induced instance C (Either String Int) (Maybe Int), and
baz uses the induced instance C (Either String Char) (Maybe Char).

> Remember, FunctionalDependencies are all about determining which
> instance you select.  The functional dependency "class C a b | a -> b"
> says that for a given type a, you can decide which instance of C
> (i.e., which particular function foo) to invoke without regard to the
> type b.

Why are they called "functional dependencies" with "a -> b" presumably
being read "b functionally depends on a" if there is no actual
requirement for b to be a function of a?

> It specifically does *not* say whether or not type b has to
> be grounded, or whether it may include free type variables, including
> just being a type variable if you enable FlexibleInstances:

I don't personally count free type metavariables as Haskell types.
They are artifacts of the inference algorithm. Int is a type, and
(forall a. a) is a type. But a metavariable 'b' is just standing in
for a type until we can figure out what actual Haskell type it should
be. Haskell muddies this distinction by only allowing prenex
quantification, and writing 'a -> b' instead of 'forall a b. a -> b',
but the distinction is actually important, if you ask me.

>        {-# LANGUAGE MultiParamTypeClasses #-}
>        {-# LANGUAGE FunctionalDependencies #-}
>        {-# LANGUAGE FlexibleInstances #-}
>
>        class C a b | a -> b where
>            foo :: a -> b
>
>        instance C (Either a b) b where  -- no UndecidableInstances needed
>            foo _ = undefined

This is yet another example where the second parameter is actually a
function of the first.

> Again, unique but not grounded.  Either Maybe c or "forall c" is a
> perfectly valid unique type, though it's not grounded.  People do
> weird type-level computations with fundeps using types that aren't
> forall c.  (The compiler will fail if you additionally have an
> instance with forall c.)

(forall c. Either Maybe c) is a valid type. "forall c" on its own is
not a type at all, it's a syntax error. And a metavariable 'c' is
something that exists during inference but not in the type system
proper.

> I think the reason you have the right-hand side is so you can
> say things like "| a -> b, b -> a".

The reason they h

Re: TypeFamilies vs. FunctionalDependencies & type-level recursion

2011-06-14 Thread Dan Doel
On Tue, Jun 14, 2011 at 1:19 PM,  wrote:
> No, these are not equivalent.  The first one "TypeEq a b c" is just
> declaring an instance that works "forall c".  The second is declaring
> multiple instances, which, if there were class methods, could have
> different code.  The second one is illegal, because given just the
> first two types, a and b, you cannot tell which instance to pick.

Then why am I not allowed to write:

class C a b | a -> b
instance C T [a]

without undecidable instances? GHC actually complains in that case
that the coverage condition is violated. But it is a single well
specified instance that works for all a.

The answer is that such an instance actually violates the functional
dependency, but UndecidableInstances just turns off the checks to make
sure that fundeps are actually functional. It's a, "trust me," switch
in this case (not just a, "my types might loop," switch).

So I guess HList will still work fine, and UndecidableInstances are
actually more evil than I'd previously thought (thanks for the
correction, Andrea :)).

> A functional dependency such as "| a b -> c d" just guarantees that a
> and b uniquely determine the instance.  Hence, it is okay to have
> class methods that do not mention type variables c and d, because the
> compiler will still know which instance to pick.

It specifies that a and b uniquely determine c and d, so the choice of
instances is unambiguous based only on a and b. This is the basis for
type level computation that people do with fundeps, because a fundep
'a b -> c' allows one to compute a unique c for each pair of types.

If it were just about variable sets determining the instance, then we
could just list those. But I can write:

class C a b c d | a b -> c

And it will actually be a, b and d that determine the particular
instance, but just listing 'a b d', or using the fundep 'a b d -> c'
is wrong, because the fundep above specifies that there is a single
choice of c for each a and b. So we could have:

C Int Int Char Int
C Int Int Char Double
C Int Int Char Float

but trying to add:

C Int Int Int Char

to the first three would be an error, because the first two parameters
determine the third, rather than the first, second and fourth.

Being allowed to elide variables from the types of methods is one of
the uses of fundeps, and probably why they were introduced, but it is
not what fundeps mean.

> On the other hand, though the compiler won't accept it, you could at
> least theoretically allow code such as the following:
>
>        instance C [Char] where
>            type Foo [Char] = forall b. () => b

The fundep equivalent of this is not 'instance C [Char] b'. It is:

instance C [Char] (forall b. b)

except types like that aren't allowed in instances (for good reason in general).

The closest you could come to 'instance C T b' would be something like:

type instance F T = b

except that is probably interpreted by GHC as being (forall b. b).

-- Dan



Re: TypeFamilies vs. FunctionalDependencies & type-level recursion

2011-06-14 Thread Dan Doel
Sorry about the double send, David. I forgot to switch to reply-all in
the gmail interface.

On Tue, Jun 14, 2011 at 11:49 AM,  wrote:
> You absolutely still can use FunctionalDependencies to determine type
> equality in GHC 7.  For example, I just verified the code below with
> GHC 7.02:
>
> *Main> typeEq True False
> HTrue
> *Main> typeEq (1 :: Int) (2 :: Int)
> HTrue
> *Main> typeEq (1 :: Int) False
> HFalse
>
> As always, you have to make sure one of the overlapping instances is
> more specific than the other, which you can do by substituting a
> parameter c for HFalse in the false case and fixing c to HFalse using
> another class like TypeCast in the context.  (As contexts play no role
> in instance selection, they don't make the instance any more
> specific.)
>
> While I don't have convenient access to GHC 6 right this second, I'm
> pretty sure there has been no change for a while, as the HList paper
> discussed this topic in 2004.

Okay. I don't really write a lot of code like this, so maybe I missed
the quirks.

In that case, HList has been relying on broken behavior of fundeps for
7 years. :) Because the instance:

   instance TypeEq a b c

violates the functional dependency, by declaring:

   instance TypeEq Int Int Int
   instance TypeEq Int Int Char
   instance TypeEq Int Int Double
   ...
   instance TypeEq Int Char Int
   instance TypeEq Int Char Char
   ...

and adding the constraint doesn't actually affect which instances are
being declared, it just adds a constraint requirement for when any of
the instances are used. It appears I was wrong, though. GHC doesn't
actually fix the instance for such fundeps, and the following compiles
and runs fine:

    {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
        FlexibleInstances, UndecidableInstances #-}
    -- (pragmas added; with them, this compiled and ran as described)

    class C a b | a -> b where
      foo :: a -> b
      foo = error "Yo dawg."

    instance C a b where

    bar :: Int
    bar = foo "x"

    baz :: Char
    baz = foo "x"

so we're using an instance C String Int and an instance C String Char
despite the fact that there's a functional dependency from the first
argument to the second.

-- Dan



Re: TypeFamilies vs. FunctionalDependencies & type-level recursion

2011-06-14 Thread Dan Doel
On Tue, Jun 14, 2011 at 11:27 AM, Andrea Vezzosi  wrote:
>>    class C a b | a -> b
>>    instance C a R
>>    instance C T U
>
> Are you sure that worked before?

80%

> The following still does anyhow:
>
>    data R
>    data T
>    data U
>    class C a b | a -> b
>    instance TypeCast R b => C a b
>    instance TypeCast U b => C T b
>
> In fact this is how IsFunction was implemented in 2005[1], and the
> same transformation appears to work for the Eq class too.
> If we allow TypeFamilies we can use (~) instead of the TypeCast hack,
> fortunately.

So it does.

instance (b ~ R) => C a b
instance (b ~ U) => C T b

Which is odd. I don't think there's a way to justify this working.
Either the preconditions are being taken into account, in which case
there is no reason for this to behave differently from:

instance C a R
instance C T U

or the preconditions are not being taken into account (which is the
type class way), in which case both of the qualified instances are
illegal, because they declare instances C T b for all b (plus a
constraint on the use), which violates the fundep. I've seen examples
like this before, and I think what GHC ends up doing (now) is fixing
the 'b' to whatever instance it needs first. But I don't think that's
very good behavior.

In this case it happens that it's impossible to use at more than one
instance, but if you accept the instances, you're back to the
questions of type soundness that are only evaded because fundeps don't
actually use all the information they allegedly express.

-- Dan



Re: TypeFamilies vs. FunctionalDependencies & type-level recursion

2011-06-14 Thread Dan Doel
On Tue, Jun 14, 2011 at 5:36 AM, Simon Peyton-Jones wrote:
> There was an interesting thread on haskell-prime [1], about the relationship 
> between functional dependencies and type families.  This message is my 
> attempt to summarise the conclusions of that thread.  I'm copying other 
> interested parties (eg Oleg, Dimitrios)
>  [1] http://www.haskell.org/pipermail/haskell-prime/2011-May/003416.html
>
> 1. As things stand in GHC you can do some things with functional dependencies 
> that you can't do with type families. The archetypical example is type 
> equality.  We cannot write
>        type family Eq a b :: *
>        type instance Eq k k = True
>        type instance Eq j k = False
> because the two instances overlap.  But you can with fundeps
>        class Eq a b c | a b -> c
>        instance Eq k k True
>        instance Eq j k False
> As Alexey mentioned, fundeps have other disadvantages, so it's reasonable to 
> ask whether type families could extend to handle cases like this.

Fundeps no longer allow this as of GHC 7, it seems. It causes the same
fundep violation as the case:

class C a b | a -> b
instance C a R
instance C T U

for R /~ U. Which I find somewhat sensible, because the definitions
together actually declare two relations for any type:

Eq T T False
Eq T T True

and we are supposed to accept that because one is in scope, and has a
particular form, the other doesn't exist. But they needn't always be
in scope at the same time.

Essentially, GHC 7 seems to have separated the issue of type-function
overlapping, which is unsound unless you have specific conditions that
make it safe---conditions which fundeps don't actually ensure (fundeps
made it 'safe' in the past by not actually doing all the computation
that type families do)---from the issue of instance overlapping, which
is always sound at least. But if I'm not mistaken, type families can
handle these cases.

I'd personally say it's a step in the right direction, but it probably
breaks a lot of HList-related stuff.

-- Dan



Re: TypeFamilies vs. FunctionalDependencies & type-level recursion

2011-05-29 Thread Dan Doel
On Sun, May 29, 2011 at 6:45 PM, Ben Millwood  wrote:
> It would seem very strange to me if haskell-prime made the choice of
> fundeps/type families based on the behaviour with
> OverlappingInstances. I'm under the impression that Overlapping is
> generally considered one of the more controversial extensions, and on
> the LanguageQualities wiki page [1] it's explicitly given as an
> example of something which violates them. So not only is Overlapping
> not in the language, but I imagine there are many people (myself
> included) who would like to ensure it stays out.
>
> My personal opinion is that if Haskell wants a more complete facility
> for type-level programming, that should be addressed directly, instead
> of via creative abuse of the class system and related machinery.

It should also be noted: the fact that functional dependencies work
with overlapping instances, while type families don't, is not really
inherent in functional dependencies, but is instead related to choices
about how functional dependencies work that differ from how type
families do. And mainly, this is because functional dependencies fail
to incorporate local information, meaning they fail to work
appropriately in various situations (for instance, matching on a GADT
may refine a type, but that new information may not propagate through
a fundep).

In my experience, you can construct examples that should lead to type
soundness issues with fundeps, and only fail because of peculiarities
in fundep handling. But fundeps could (and arguably should, to
interact with GADTs and the like) be reworked to behave 'properly'.
It's just that type families already do.

I can't really recall what example I used in the past, but here's one
off the cuff:

  module A where

    class C a b | a -> b where

    instance C a a where

    data T a where
      CT :: C a b => b -> T a

    -- (definition added for completeness) a T Int built where
    -- C Int Int is the instance in scope, so it wraps an Int:
    t :: T Int
    t = CT (5 :: Int)

  module B where

    import A

    instance C Int Char where

    c :: Char
    c = case t of { CT x -> x }

So, the question is: what should happen here?

We've created a T Int in a context in which C Int Int, so it wraps an
Int. Then we match in a context in which C Int Char. But the fundep
tells us that there can only be one S such that C Int S. So we have
some choices:

1) Disallow the overlapping instance C Int Char, because it is
incompatible with the C Int Int from the other module. This is what
GHC 7 seems to do.

2) Pretend that there may in fact be more than one instance C Int a,
and so we can't infer what a is in the body of c. I think this is what
GHC used to do, but it means that whether a fundep "a -> b" actually
allows us to determine what b is from knowing a is kind of ad-hoc and
inconsistent.

3) Allow c to type check. This is unsound.

1 is, I think, in line with type families. But it rules out the
computation that fundeps + overlapping can do and type families
cannot.

Also, in an unrelated direction: there are conditions on type families
that can allow some overlapping to be permitted. For instance, if you
simply want a closed type function, like, taking the above as an
example:

type family F a :: * where
  instance F Int = Char
  instance F a   = a

Then this is a sufficient condition for overlapping to be permissible.
And it covers a lot of the use cases for overlapping instances, I
think.
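
Under that ordered reading, for example:

    F Int  ~  Char    -- the first equation wins
    F Bool ~  Bool    -- falls through to the general equation

and for an as-yet-unknown type a, F a does not reduce at all, since it
is not yet known which equation applies. The overlap is harmless
because selection is deterministic.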

-- Dan



Re: In opposition of Functor as super-class of Monad

2011-01-04 Thread Dan Doel
On Tuesday 04 January 2011 5:24:21 am o...@okmij.org wrote:
> Method A: just define bind as usual
> 
> > instance (Functor (Iteratee el m), Monad m) => Monad (Iteratee el m) where
> >   return = IE_done
> >
> >   IE_done a   >>= f = f a
> >   IE_cont e k >>= f = IE_cont e (\s -> k s >>= docase)
> >    where
> >     docase (IE_done a, stream) = case f a of
> >       IE_cont Nothing k -> k stream
> >       i                 -> return (i, stream)
> >     docase (i, s) = return (i >>= f, s)
> 
> Although we must state the constraint (Functor (Iteratee el m)) to
> satisfy the super-class constraint, we have not made any use of the
> constraint.

This, at least, is false. If Functor is a superclass of Monad, then Monad m 
implies Functor m, which implies Functor (Iteratee el m). So Monad m is a 
sufficient constraint for the instance.

As for the other concerns, I think the closest fix I've seen is to allow 
subclasses to specify defaults for superclasses, and allow instances for 
subclasses to include methods for superclasses. So:

  class Functor m => Monad m where
return :: a -> m a
(>>=)  :: m a -> (a -> m b) -> m b

fmap f x = x >>= return . f

This has its own caveats of course. And in this case, it seems to 
overconstrain the functor instance, since presumably we'd end up with:

  instance Monad m => Monad (Iteratee el m) where ...
==>
  instance Monad m => Functor (Iteratee el m) where ...

I'm not sure what to do about that.

-- Dan



Re: Negation

2010-02-08 Thread Dan Doel
On Monday 08 February 2010 11:18:07 am Simon Peyton-Jones wrote:
> I think that Hugs is right here.  After all, there is no ambiguity in any
>  of these expressions.  And an application-domain user found this behaviour
>  very surprising.

I think it's clear what one would expect the result of these expressions to 
be, but there is some ambiguity considering how GHC parses other similar 
expressions (I don't actually know if it's following the report in doing so or 
not). For instance:

  -4 `mod` 5

parses as:

  -(4 `mod` 5)

which I've seen confuse many a person (and it confused me the first time I saw 
it; it makes divMod and quotRem appear identical if one is testing them by 
hand as above, and doesn't know about the weird parsing). Knowing the above, I 
wasn't entirely sure what the results of x2 and x4 would be.
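
For instance (a GHCi session illustrating the trap):

    ghci> -4 `mod` 5      -- parses as -(4 `mod` 5)
    -4
    ghci> -4 `rem` 5      -- parses as -(4 `rem` 5)
    -4
    ghci> (-4) `mod` 5    -- what was probably meant
    1
    ghci> (-4) `rem` 5
    -4

Testing the first two by hand, mod and rem look interchangeable, even
though they disagree on negative operands.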

Of course, I think the `mod` parsing is the weird bit, and it'd be good if it 
could be amended such that

  -a `mod` -b = (-a) `mod` (-b)

like the rest of the proposal.

-- Dan


Re: Suggestion regarding (.) and map

2008-04-24 Thread Dan Doel
On Thursday 24 April 2008, Wolfgang Jeltsch wrote:
> I don’t think that this is reasonable.  (.) corresponds to the little
> circle in math which is a composition.  So (.) = (<<<) would be far better.

Were I building a library, this might be the direction I'd take things. 
They're two incompatible generalizations, and you have to decide which is 
more important to you.

For instance, you can generalize from arrows into categories (with objects in 
*):

class Category (~>) where
  id  :: a ~> a
  (.) :: (b ~> c) -> (a ~> b) -> (a ~> c)

And, of course, from this, you get the usual meanings for (->):

instance Category (->) where
  id x = x
  (f . g) x = f (g x)

An example of a Category that isn't an Arrow (I think) is:

newtype Op (~>) a b = Op { unOp :: b ~> a }

instance Category (~>) => Category (Op (~>)) where
  id = Op id
  -- (.) :: (b <~ c) -> (a <~ b) -> (a <~ c)
  (Op f) . (Op g) = Op (g . f)

type HaskOp = Op (->)

(Why is this even potentially useful? Well, if you define functors with 
reference to what two categories they relate, you get (pardon the illegal 
syntax):

map :: (a ~1> b) -> (f a ~2> f b)

Which gives you current covariant endofunctors if (~1>) = (~2>) = (->), but it 
also gives you contravariant endofunctors if (~1>) = (->) and (~2>) = Op 
(->). Is this a useful way of structuring things in practice? I don't know.)
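
As a small example of the contravariant case (my illustration):

    newtype Pred a = Pred { runPred :: a -> Bool }

    -- Instantiating map at (~1>) = (->) and (~2>) = Op (->) gives
    --   map :: (a -> b) -> Op (->) (Pred a) (Pred b)
    -- which, unwrapped, is the familiar contravariant map:
    mapPred :: (a -> b) -> Pred b -> Pred a
    mapPred f (Pred p) = Pred (p . f)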

Now, going the (.) = map route, one should note the following Functor 
instance:

instance (Arrow (~>)) => Functor ((~>) e) where
  -- fmap :: (a -> b) -> (e ~> a) -> (e ~> b)
  fmap f g = arr f <<< g

So, in this case (.) is composition of a pure function with an arrow, but it 
does not recover full arrow composition. It certainly doesn't recover 
composition in the general Category class above, because there's no operation 
for lifting functions into an arbitrary Category (think Op: given a function 
(a -> b), I can't get a (b -> a) in general).

(At a glance, if you have the generalized Functors that reference their 
associated Categories, you have:

map :: (a ~1> b) -> (e ~3> a) ~2> (e ~3> b)

so for (~1>) = (~3>), and (~2>) = (->), you've recovered (.) for arbitrary 
categories:

instance Category (~>) => Functor ((~>) e) (~>) (->) where
  map f g = f . g

so, perhaps with a generalized Functor, you can have (.) = map *and* have (.) 
be a generalized composition.)

Now, the above Category stuff isn't even in any library that I know of, would 
break tons of stuff (with the generalized Functor, which is also kind of 
messy), and I haven't even seriously explored it, so it'd be ridiculous to 
request going in that direction for H'. But, restricted to the current 
libraries, if you do want to generalize (.), you have to decide whether you 
want to generalize it as composition of arrows, or as functor application. 
The former isn't a special case of the latter (with the current Functor, at 
least).

Generalizing (.) to Arrow composition seems more natural to me, but 
generalizing to map may well have more uses.

-- Dan


Re: Meta-point: backward compatibility

2008-04-23 Thread Dan Doel
On Wednesday 23 April 2008, Chris Smith wrote:
> I don't think I agree that fail in the Monad typeclass is a good example
> here, or necessarily that there is a good example.
>
> We should remember that there is a cohesive community of Haskell
> programmers; not a bunch of unrelated individuals who never talk to each
> other.  It's entirely possible to spend some time recommending against
> using fail (as many people have been doing), and then eventually remove
> it.  It doesn't need to break very much.  This is working in our favor,
> so we may as well use it.  IMO, this argues strongly in favor of making
> any backward compatible changes incrementally, instead of adopting a
> "Python 3000" model of postponing them and then breaking everything at
> once.
>
> (This is ignoring technical issues in this particular example; like what
> happens when a pattern match fails if there is no fail in Monad.  It's
> entirely possible that discouraging use is the right answer in this case,
> and that removal need not happen.  That's beside the point.)

The thing is, fail isn't evil because it's intrinsically evil; it's evil 
because it's in the wrong place. Many monads don't have a proper notion of 
failure, so error just gets called, which is undesirable. (Even if they do 
have a proper notion of failure, it's also easy to forget to define fail 
appropriately, if you're defining your own monad.)

Discouraging people from ever using fail, or even monadic pattern matching is 
also undesirable, because there are plenty of places where it is appropriate. 
As an example, I saw a blog post that included code like so (I don't remember 
the exact code, but this is comparable):

f c l = length [ () | (c':'h':_) <- tails l, c == c' ]

In the comments of that blog, someone had written that the code was evil, 
because it was an abuse of fail. But that isn't an abuse of fail (it doesn't 
even use fail at all, technically, since list comprehensions have their own 
desugaring in the report), it's appropriate use of fail. So clearly, people 
are already getting the wrong idea from the "don't use fail" push, and may be 
avoiding appropriate use of it, when it would make for clearer code.

So, if you ask me, the solution isn't "discourage all use of fail." There's 
nothing wrong with failing in an arbitrary monad-with-failure. And we already 
have a class for that, called MonadPlus (or MonadZero if you split them up as 
in 1.4). If you move fail there, then you can encourage use of fail when 
appropriate. Currently, it seems difficult to do that (witness the readM 
thread on the libraries list, where it was proposed that fail be used 
(because there are multiple ways to fail), but with type restricted to 
MonadPlus; most people still seemed to dislike that because 'fail is evil').
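
A sketch of the relocation (names primed to avoid clashing with the
existing classes):

    class Monad m => MonadZero' m where
      mzero' :: m a
      fail'  :: String -> m a
      fail' _ = mzero'   -- monadic pattern-match failure would call this

    instance MonadZero' Maybe where
      mzero' = Nothing

    instance MonadZero' [] where
      mzero' = []

With fail living there, using it in an arbitrary monad-with-failure is
unobjectionable, and monads with no notion of failure simply have no
instance.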

-- Dan


Re: patch applied (haskell-prime-status): add "Make $ left associative, like application"

2008-04-23 Thread Dan Doel
On Wednesday 23 April 2008, Bulat Ziganshin wrote:
> it's not refactoring! it's just adding more features - exception
> handler, progress indicator, memory pool and so on. actually, code
> blocks are used as a sort of RAII for Haskell. do you want to change all
> those ';' when you add a new variable to your C++ code?
>
>   bracketCtrlBreak (archiveReadFooter command arcname) (archiveClose.fst) $
>     \(archive,footer) -> do
>       bad_crcs <- withList $ \bad_crcs -> do
>         doChunks arcsize sector_size $ \bytes -> do
>           uiWithProgressIndicator command arcsize $ do
> or
>   handleCtrlBreak (ignoreErrors $ fileRemove arcname_fixed) $ do
>     bracketCtrlBreak (archiveCreateRW arcname_fixed) (archiveClose) $
>       \new_archive -> do
>         withJIT (fileOpen =<< originalURL originalName arcname) fileClose $
>           \original' -> do
>
> is just two examples from my code

For what it's worth, both of these examples require no change.

However, with left-associative ($), you're free to change them to (sorry for 
the additional lines, but my mail client breaks at 80 characters. I think 
they're still valid code):


bracketCtrlBreak $ archiveReadFooter command arcname
                 $ archiveClose.fst
                 $ \(archive,footer) -> do
    bad_crcs <- withList $ \bad_crcs -> do
      doChunks arcsize sector_size $ \bytes -> do
        uiWithProgressIndicator command arcsize $ do

handleCtrlBreak $ ignoreErrors (fileRemove arcname_fixed) $ do
bracketCtrlBreak $ archiveCreateRW arcname_fixed $ archiveClose
                 $ \new_archive -> do
withJIT $ fileOpen =<< originalURL originalName arcname $ fileClose
        $ \original' -> do

Or, for a simpler example I discovered earlier, you can write:

catchError $ do return 1
                throwError $ strError "foo"
           $ \e -> return 2

Although I'm not sure how much better that is than the alternative:

do return 1
   throwError $ strError "foo"
`catchError`
\e -> return 2

-- Dan


Re: patch applied (haskell-prime-status): add "Make $ left associative, like application"

2008-04-23 Thread Dan Doel
On Wednesday 23 April 2008, Bulat Ziganshin wrote:
> Hello Dan,
>
> Wednesday, April 23, 2008, 1:42:20 PM, you wrote:
> > This wouldn't work, you'd have to rewrite it as:
> >
> > withSomeResource foo .
> >   withSomeOtherThing bar .
> > yetAnotherBlockStructured thing $ ...
>
> it is very inconvenient - we should use either . or $ depending on
> that it's last block or not. imagine all the changes when editing the
> code

Well, that may be inconvenient, but I don't know how much such block 
structured code is actually a composition of such functions. Off the top of 
my head, I'd expect most to involve lambdas or dos:

forM l $ \e -> ...
State $ \s -> ... (see also Reader, Cont, ...)
runST $ do ... (or 'flip runM arg' for most monads, I suppose)
callCC $ \k -> ... (shift)
forever $ do ...
withResource $ \r -> ...
local f $ do ...

Which still work with the flipped associativity. The one oddity I noticed 
looking at random code of mine is reset in Cont(T), which might currently be 
used like:

reset $ forM l $ \e -> ...

for inverting loops. It would have to become:

reset . forM l $ \e -> ...

Which might be a little weird (of course, in CC-delcont, that's

reset $ \p -> forM l $ \e -> ...

so it's moot there, but I suppose this affects all the functions that rely 
on 'do' to use repeated ($)s).

My code may well be abnormal, though (certainly, my infatuation with 
continuations probably is :)).

When I do have composition pipelines that don't fit on one line (like the 
initial example), I think I tend to write them like this:

pipe = foo
     . bar
     . baz
     . quux
     $ quuux

Which lets you slip new things in pretty naturally, I think.

There may be a lot of code out there for which all that doesn't work,
though. I don't know.
-- Dan


Re: patch applied (haskell-prime-status): add "Make $ left associative, like application"

2008-04-23 Thread Dan Doel
On Wednesday 23 April 2008, Duncan Coutts wrote:
> withSomeResource foo $
>   withSomeOtherThing bar $
>     yetAnotherBlockStructured thing $ ...
>
> Does that work?

This wouldn't work, you'd have to rewrite it as:

withSomeResource foo .
  withSomeOtherThing bar .
    yetAnotherBlockStructured thing $ ...

But it seems like your second example is more likely to come up for 'with' 
type blocks:

> Or
>
> withSomeResource foo $ \x ->
>   withSomeOtherThing bar $ \y ->
>     yetAnotherBlockStructured thing $ \z ->

That should work, because '\x ->' extends to the end of the expression. 
Similarly:

iWantMonadicValues $ do foo
                        bar
                        iAlsoWantMonadicStuff $ do ...

still works.

-- Dan


Re: patch applied (haskell-prime-status): add "Make $ left associative, like application"

2008-04-22 Thread Dan Doel
On Tuesday 22 April 2008, Simon Marlow wrote:
> I'm hoping someone will supply some.  There seemed to be strong opinion
> on #haskell that this change should be made, but it might just have been
> a very vocal minority.

These are the arguments off the top of my head:

1) Anything of the form:

f $ g $ h $ x

with right associative ($) can instead be written:

f . g . h $ x

where the associativity of ($) doesn't matter. It's not uncommon to want to 
peel off the end of such a pipeline to eliminate a point. For the second 
form, such a translation is:

\x -> f . g . h $ x ==> f . g . h

However:

\x -> f $ g $ h $ x ==> f $ g $ h

Is invalid, so one might argue that writing such pipelines with composition is 
a better habit to get into, as it allows easier cleanup of code in this way 
(if you like somewhat point-free code, that is).

2) Left associative ($) allows you to eliminate more parentheses. Per #1, any 
parentheses eliminated by right associative ($) can be eliminated by (.) and 
a single ($). However, left associative ($) allows, for instance:

f (g x) (h y) ==> f $ g x $ h y

3) Left associative ($) is consistent with left associative ($!). The right 
associative version of the latter is inconvenient, because it only allows 
things to be (easily) strictly applied to the last argument of a function. 
Needing to strictly apply to other arguments gives rise to things like:

   (f $! x) y z
   ((f $! x) $! y) $! z

Left associative, these are:

   f $! x $ y $ z
   f $! x $! y $! z

There may be more arguments, but those are the ones I've heard that I can 
think of at the moment. #3 strikes me as the most likely to bite people (the 
other two are more stylistic issues), but I suppose I don't know the relative 
frequency of strict pipelines (f $! g $! x) versus strict applications at 
non-final arguments.

And I suppose one has to weigh these arguments against breaking lots of code.

Cheers,
-- Dan