Re: Removal candidates in patterns

2006-01-26 Thread Cale Gibbard
On 26/01/06, Simon Peyton-Jones <[EMAIL PROTECTED]> wrote:
> I agree that if (n+k) patterns go, then so should k patterns.  Both are
> overloaded, and that's the root of their complexity.

I'm not so sure about that. I don't use (n+k) patterns at all, but I
do get fairly regular use out of k patterns. (n+k) patterns can seem
like an odd special case in everything but Nat, but k patterns are
often the nicest way to handle base cases, and save you from writing
awkward-looking guards. Sure, they're not usually appropriate for
floating-point computations, but for integral and rational types they
work very well (even if you can't pattern match against fractions).

One possibility is that k patterns could be generalised to arbitrary
members of Eq, rather than just being used for numbers. We could even
make variables bound in the parameter list available. So for a
simplistic example, one could write:

f :: (Eq a, Num a) => a -> a -> a
f x x = x -- if the two parameters match, give their common value
f _ _ = 0 -- otherwise give 0.

The first 'x' would pattern match as usual, and the second would
compare for equality with the first.
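
A minimal sketch of how such a definition could desugar into current
Haskell, using a fresh variable plus an Eq guard (the translation scheme
is my illustration, not part of the proposal):

    f :: (Eq a, Num a) => a -> a -> a
    f x x' | x == x' = x  -- the repeated 'x' becomes a fresh name and a guard
    f _ _  = 0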

We'd have to work out the exact syntax for them -- essentially, it
would just involve detecting an arbitrary expression that is not a
pattern. There is some context sensitivity there, though, given the
option of referring to previously bound variables.

I'm not sure how often this would be used, and perhaps it's more
trouble than it's worth, but at least it leaves no further room for
generalisation, which makes the feature seem somewhat natural.

Even without previous-variable-binding, it subsumes all pattern
matching on literals, so it would take some thought to determine if it
really makes things more complicated or not.

> Personally I think ~ patterns are great, and we are now talking about !
> patterns, a kind of dual to ~ patterns.  So at least I think we should
> un-couple the two discussions.

I think so too. Removing ~ patterns seems like a fairly poor idea to
me. Sure, they're not used explicitly very often (though everyone uses
them implicitly in pattern bindings), but when you want them, they can
be fairly important. I think perhaps we just need better coverage of ~
in the tutorials. Now that I think about it, I rather like the idea of
! patterns as well. They make ~ patterns seem more natural by
contrast. Strictness can on occasion be just as important as laziness,
and this notation makes it more convenient to obtain in simple cases.
How to get a similarly pretty notation for more structured strictness
annotations is a bit of a concern. I wonder whether some of the
Control.Parallel.Strategies library should be more strategically
located? :)

 - Cale


Re: Removal candidates in patterns

2006-01-26 Thread Claus Reinke

Olaf Chitil wrote:

> I'd like to add one pattern to this list of removal candidates: k
> patterns, that is, numeric literals.


I was rather shocked when I first read this. And I certainly don't
like the argument from implementation difficulties in a certain tool!-)

I don't mind losing (n+k): not that it wasn't neat, but it looks like
a special case needing a more general solution, beyond Haskell''s scope.

I don't want to lose numeric literals in patterns! But, having recovered
from the first shock, and ignoring other people's hats, there may be a
few things that need cleaning up in that area (why do some patterns
work without Eq and some with? Should there be a Match class or
something to pin down what can and can't be matched, and how?..).



> Let's remove higher order functions too, they are tricky to
> implement. :)


it seems so, at least for pattern matching "numeric literals"; what is the
result of (f 1) and (g A) in the following code?

... -- some code omitted here

f 1 = True
f n = False

g A = True
g n = False

and should it depend on the context (types, instances, ..), or not?

run the following through ghci with and without the signature for f,
and with either version of (==) for functions; and what happens if
we uncomment the Eq instance for D? is that all as expected?

cheers,
claus

---
module K where

-- NB: in GHC the two instances below need FlexibleInstances
-- (historically enabled via -fglasgow-exts)

import Text.Show.Functions

instance Eq (a->b) where
  f == g = False
  -- f == g = True

instance Num b => Num (a->b) where
  fromInteger n = const $ fromInteger n

-- f :: Num b => (a->b) -> Bool
f 1 = True
f n = False

main = print $ (f 1, g A)

-----

data D = A | B -- no Eq, but matching allowed

-- instance Eq D where a == b = False

g A = True
g n = False




Re: The dreaded M-R

2006-01-26 Thread Ganesh Sittampalam

On Thu, 26 Jan 2006, John Hughes wrote:

> (Some object that := "means" assignment--but come on,
> we're not reserving := for future use as assignment in Haskell, are we?
> Why should we give up a perfectly good symbol because it's used
> elsewhere to mean something else?).

Programmers unfamiliar with Haskell but familiar with general
programming ideas would be confused by it. I think this is a good reason
to avoid (mis)use of this symbol.


Quite a lot has been mentioned in various threads including this one about 
making sure that Haskell stays/becomes an easy/easier language to teach to 
undergraduates. However, there is a large and growing community of 
experienced programmers coming to Haskell and liking it, and we must keep 
them in mind too. A lot of them use the #haskell IRC channel as a 
resource, and as a regular there I have the impression that the numbers 
are on their way up quite rapidly.


Cheers,

Ganesh


Re: Removal candidates in patterns

2006-01-26 Thread Ganesh Sittampalam

On Thu, 26 Jan 2006, Ian Lynagh wrote:

> A Natural class would also make some sense. Then we could have, e.g.,
>    (^) :: (Num a, Natural b) => a -> b -> a
> although that does cause problems with Haskell's libraries being
> strongly biased towards Int (and changing that probably breaks an awful
> lot of code).

You could make "instance Natural Int" (etc) available in some module that
legacy programs could import.

> So in conclusion, I'm in favour of keeping both n+k and k patterns, and
> restricting n+k patterns to Natural types and k patterns to Integral
> types.

I like this idea.

Cheers,

Ganesh


Re: Removal candidates in patterns

2006-01-26 Thread Lennart Augustsson

Olaf Chitil wrote:

> I'd like to add one pattern to this list of removal candidates: k
> patterns, that is, numeric literals.

Wow!  That's a mighty big thing to remove.  For me personally that
would cause endless trouble.  I use k patterns all the time.
(I'm happy to see 'n+k' gone, because I never use them.)

I don't even know how I'd try to motivate why they were removed
to a casual Haskell user.  "Some implementor was having trouble
with k patterns in some tool so they are gone now"?  Huh?
Let's remove higher order functions too, they are tricky to
implement. :)

-- Lennart


Re: Removal candidates in patterns

2006-01-26 Thread Ian Lynagh
On Thu, Jan 26, 2006 at 07:35:42PM +, Olaf Chitil wrote:
> 
> As response to both Aaron and Duncan,
> 
> >foo 0 = ...
> >foo n = ...
> > 
> >
> And what about the negative numbers?

(I agree with Duncan re this).

> If Haskell had a type for natural numbers I'd be in favour of n+k and k 
> patterns (working only for this type, not any other numerical type).

Haskell (FSVO "Haskell") has several types for natural numbers: Word8,
Word16, Word32, Word64. I'd also like to see a Natural type (analogous
to Integer) (you might also argue for Word, analogous to Int), and I'd
like to use k patterns with all of them.

A Natural class would also make some sense. Then we could have, e.g.,
(^) :: (Num a, Natural b) => a -> b -> a
although that does cause problems with Haskell's libraries being
strongly biased towards Int (and changing that probably breaks an awful
lot of code).
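
A sketch of what such a class might look like (the method and instances
here are my invention, purely for illustration):

    import Data.Word (Word8, Word16, Word32, Word64)

    class Natural n where
      natToInteger :: n -> Integer  -- hypothetical conversion method

    instance Natural Word8  where natToInteger = fromIntegral
    instance Natural Word16 where natToInteger = fromIntegral
    instance Natural Word32 where natToInteger = fromIntegral
    instance Natural Word64 where natToInteger = fromIntegral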

However, it would seem odd to me, as a new user, that I could say

foo 1 = 0
foo n = n

but not

foo (-1) = 0
foo n = n

On n+k patterns, I think they make code a lot more concise and easier to
read, as well as allowing code to match specifications much more
closely. In fact, every reason why in a mathematical definition you
would say
    f (x+1) = g x
rather than
    f x | x >= 1 = g x'
        where x' = x - 1
applies equally to code IMO.
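
For reference, sketches of the Report-style translations of both pattern
forms (my paraphrase of the Haskell 98 semantics):

    -- n+k pattern:  f (n+1) = e   matches arguments >= 1:
    f x | x >= 1 = let n = x - 1 in e

    -- k pattern:    f 0 = e       becomes an equality test:
    f x | x == 0 = e

Both translations call class methods ((>=), (-), (==)), which is exactly
what Hat has to record in its trace.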

I think there is something to be said for making n+k patterns have a
Natural type rather than an Integral type, though, as we are requiring
that n be at least 0.

k patterns are less clear cut due to Rational, but on balance I'd be
happy with k patterns being Integral only, as people writing
    f 1.1 = 0
probably don't really mean that.

So in conclusion, I'm in favour of keeping both n+k and k patterns, and
restricting n+k patterns to Natural types and k patterns to Integral
types.

> With respect to tools of which Hat is one example: If it is hard to
> build tools, then fewer tools will be built. Compare the number of tools
> for Scheme with those for Haskell. Most tools grow out of student
> projects or research projects; these have rather limited resources.

I don't think this makes it significantly harder to build tools: there
is a simple source transformation to eliminate these constructs (I
didn't fully understand your reasons for disliking it). If tools like
Hat think of these constructs as, and show them to the user as, their
expanded versions, then we are no worse off than if they weren't in
the language.


Thanks
Ian



Re: Removal candidates in patterns

2006-01-26 Thread Duncan Coutts
On Thu, 2006-01-26 at 17:01 +, Olaf Chitil wrote:

> Why are these patterns so hard to implement for Hat? Surely the Haskell 
> report gives a translation into simple core Haskell. Well, Hat does not 
> use this translation because it does not want to be an inefficient 
> pattern matcher (leave that job to the compiler) but produce a trace of 
> the Haskell program as it is written. However, both n+k and k patterns 
> cause calls of functions ( (-), (==) etc) that Hat has to record in its 
> trace.

Does it not have to do that for character and string patterns too?

I suppose that the proposals to create a string class and have
string/character constants overloaded by that class would cause similar
problems for Hat.
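
A class of this shape exists in GHC today as Data.String.IsString (used
by the OverloadedStrings extension); a minimal sketch of the idea:

    class IsString a where
      fromString :: String -> a

    -- an overloaded literal "abc" then means fromString "abc", so matching
    -- a string literal in a pattern involves an equality test, just as
    -- with overloaded numeric literals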

Duncan



Re: Removal candidates in patterns

2006-01-26 Thread Olaf Chitil

Duncan Coutts wrote:

> I think it's a perfectly reasonable mental model for people to believe
> that:
> data Char = 'a' | 'b' | 'c' | ...
> data Int  = ... -2 | -1 | 0 | 1 | 2 | ...
>
> I don't see why we should remove one and not the other. Students will
> ask why they can pattern match on strings, characters and booleans but
> not numbers.

Numbers are special because they are overloaded. A numeric literal is an
element of many types. That clearly distinguishes them from other literals.



> Perhaps primitive recursion on integers is misleading, but people will
> still write
>
> foo n | n == 0    = ...
>       | otherwise = ...
>
> where they previously wrote
>
> foo 0 = ...
> foo n = ...
>
> so what have we gained except less notational convenience?

Discourage anyone from teaching primitive recursion on integers.
Recursion on integers then has to be taught as a separate topic, giving
the opportunity to point out the pitfalls. Sure, it doesn't prevent
anyone from writing anything.



> Not all pattern matching on numeric literals is involved with recursion
> on integers, whereas virtually all n+k patterns are used for that
> purpose.

I think there are very few situations where you would use k patterns
without recursion.



> So there is some distinction between the two forms. n+k
> patterns are a quirk of the numeric types. k patterns are regular with
> other types in the language.

As I said above, they are not regular, because of overloading.


> It's partly the complexity of the language and partly because our latest
> language spec (H98) is not the language that we all use (H98 + various
> extensions). I'm sure Haskell-prime will help in this area.

I hope as well that Haskell' will be the language that most people will
use, and some extensions are certainly required for practical use. I just
want to get rid of superfluous features.


Ciao,
Olaf


Re: Removal candidates in patterns

2006-01-26 Thread Duncan Coutts
On Thu, 2006-01-26 at 19:35 +, Olaf Chitil wrote:
> As response to both Aaron and Duncan,
> 
> >foo 0 = ...
> >foo n = ...
> >  
> >
> And what about the negative numbers? For negative numbers the second
> equation matches, which in 90% of all cases in practice has never been
> written for them. Aaron's Ackermann function disappears in infinite
> recursion... Besides, what is ack 0.5 0.5?

Isn't the same true for:

foo n | n == 0    = ...
      | otherwise = ...

It's still going to fail for negative numbers.

> The use of n+k patterns, but also the definition pattern above, wrongly
> leads programmers to believe that they are dealing with natural numbers.
> There is no nice primitive recursion for integers. Even worse, without a
> type signature restricting its type, foo will be defined for all numeric
> types. For Float or Rational it makes hardly any sense.

The above example is still defined for all numeric types.

Eliminating that syntax form doesn't remove those problems.

> If Haskell had a type for natural numbers I'd be in favour of n+k and k 
> patterns (working only for this type, not any other numerical type).

I'm in favour of removing n+k patterns too.

> Using primitive recursion on integers or even arbitrary numbers is 
> misleading. You can teach primitive recursion nicely for algebraic data 
> types, because the recursive pattern of the function definition follows 
> the recursive pattern of the type definition.

Char is a type that is not constructed recursively and yet no one seems
to have problems with character literals as constructors and thus as
patterns. Each character literal is a Char constructor. Why can't each
numeric literal be a constructor for the numeric types?

I think it's a perfectly reasonable mental model for people to believe
that:
data Char = 'a' | 'b' | 'c' | ...
data Int  = ... -2 | -1 | 0 | 1 | 2 | ...

I don't see why we should remove one and not the other. Students will
ask why they can pattern match on strings, characters and booleans but
not numbers.

Perhaps primitive recursion on integers is misleading, but people will
still write

foo n | n == 0    = ...
      | otherwise = ...

where they previously wrote

foo 0 = ...
foo n = ...

so what have we gained except less notational convenience?

Not all pattern matching on numeric literals is involved with recursion
on integers, whereas virtually all n+k patterns are used for that
purpose. So there is some distinction between the two forms. n+k
patterns are a quirk of the numeric types. k patterns are regular with
other types in the language.

> With respect to tools of which Hat is one example: If it is hard to
> build tools, then fewer tools will be built. Compare the number of tools
> for Scheme with those for Haskell. Most tools grow out of student
> projects or research projects; these have rather limited resources.

It's partly the complexity of the language and partly because our latest
language spec (H98) is not the language that we all use (H98 + various
extensions). I'm sure Haskell-prime will help in this area.

I don't mean to belittle the difficulty of building tools. I know how
hard it is, I'm trying to build one too.

Duncan



Re: Removal candidates in patterns

2006-01-26 Thread Olaf Chitil


As response to both Aaron and Duncan,


> foo 0 = ...
> foo n = ...

And what about the negative numbers? For negative numbers the second
equation matches, which in 90% of all cases in practice has never been
written for them. Aaron's Ackermann function disappears in infinite
recursion... Besides, what is ack 0.5 0.5?


The use of n+k patterns, but also the definition pattern above, wrongly
leads programmers to believe that they are dealing with natural numbers.
There is no nice primitive recursion for integers. Even worse, without a 
type signature restricting its type, foo will be defined for all numeric 
types. For Float or Rational it makes hardly any sense.


If Haskell had a type for natural numbers I'd be in favour of n+k and k 
patterns (working only for this type, not any other numerical type).


Using primitive recursion on integers or even arbitrary numbers is 
misleading. You can teach primitive recursion nicely for algebraic data 
types, because the recursive pattern of the function definition follows 
the recursive pattern of the type definition.
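
A minimal illustration of that point, assuming a textbook Nat type (not
a standard library type):

    data Nat = Zero | Succ Nat

    -- the function's recursion mirrors the type's recursion
    plus :: Nat -> Nat -> Nat
    plus Zero     m = m
    plus (Succ n) m = Succ (plus n m)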



With respect to tools of which Hat is one example: If it is hard to
build tools, then fewer tools will be built. Compare the number of tools
for Scheme with those for Haskell. Most tools grow out of student
projects or research projects; these have rather limited resources.


Ciao,
Olaf


Re: Removal candidates in patterns

2006-01-26 Thread Olaf Chitil

Simon Peyton-Jones wrote:

> I agree that if (n+k) patterns go, then so should k patterns.  Both are
> overloaded, and that's the root of their complexity.
>
> Personally I think ~ patterns are great, and we are now talking about !
> patterns, a kind of dual to ~ patterns.  So at least I think we should
> un-couple the two discussions.

I agree that it is sensible to decouple the two discussions, so just add
k patterns to the n+k discussion.


However, ~ patterns are really currently the most complicated patterns,
and ! patterns match them in their complexity. Personally I believe that
programmers should strive for more laziness, rarely for more strictness.
I do not like that you can add ! in lots of places where it doesn't make
any difference, e.g.

    f []     !x = rhs1
    f (y:ys) !x = rhs2

is the same as

    f []     !x = rhs1
    f (y:ys)  x = rhs2


Your motivating example

    f2 !x !y | g x       = rhs1
             | otherwise = rhs2

I would express as

    f2 x y = x `seq` y `seq` if g x then rhs1 else rhs2

Now you will probably counter with a definition where you can fall 
through the guard to the next equation. In my opinion that just shows 
how horrible guards are (and I would propose their removal if I saw any 
chance of success).
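
The anticipated counter would look something like this (my sketch,
reusing the placeholder names from above): the guard of the first
equation can fail, falling through to the second, yet both arguments
must already have been forced before the guard is tried:

    f2 !x !y | g x = rhs1
    f2  x  y       = rhs2

    -- roughly:  f2 x y = x `seq` y `seq` (if g x then rhs1 else rhs2),
    -- except that in general the fall-through goes to the next equation,
    -- which can match different patterns, not just an 'else' branch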


Ciao,
Olaf


RE: Removal candidates in patterns

2006-01-26 Thread Duncan Coutts
On Thu, 2006-01-26 at 17:31 +, Simon Peyton-Jones wrote:
> I agree that if (n+k) patterns go, then so should k patterns.  Both are
> overloaded, and that's the root of their complexity.

I have to say that we use 'k' patterns in teaching all the time, though
we do not teach n+k patterns. There are lots of cases where it's
convenient to say:

foo 0 = ...
foo n = ...

Intuitively it seems reasonable to me that 1 is a constructor for the
Int type just as 'c' is a constructor for type Char, and since it's a
constructor we can pattern match on it.

To be honest, the difficulty of the internal translation needed for
tools seems less important to me than the convenience for users. I don't
think the difference that character constants are not overloaded whereas
numeric constants are overloaded causes any difficulty in
understanding for users.

Duncan



Re: Removal candidates in patterns

2006-01-26 Thread Aaron Denney
On 2006-01-26, Olaf Chitil <[EMAIL PROTECTED]> wrote:
>
> I am very please to see on the Wiki also a list of removal candidates 
> and that these include n+k patterns and ~ patterns.
>
> I'd like to add one pattern to this list of removal candiates: k 
> patterns, that is, numeric literals.

I don't see that much use for the first two but I really want to
argue for being able to pattern-match on numeric literals.  I think
numeric literals should be treated as much as possible as if there were
declarations like "data Int = 0 | 1 | (-1) | 2 | (-2) | ..."

Or am I misunderstanding the suggestion here?

> Iff n+k patterns are removed, there is little good use for k patterns 
> either.

Say what?  n+k could perhaps serve some pedagogical purpose in
presenting the Peano numbers.  Plain old literals are not so tied to
a particular representation (that is, you can imagine 4 being expanded
to match Int# 4, or BooleanSequence [T,F,F] internally, or whatever and
still looking exactly the same in the code), and have the same utility
as being able to pattern-match any data.

> So get rid of these three and pattern matching becomes so much more simple.

From the point of view of Hat, yes.  Despite how useful Hat is, I'd
rather have the ability to do

ack 0 n = n+1
ack m 0 = ack (m-1) 1
ack m n = ack (m-1) (ack m (n-1))

which looks far nicer than

ack m n | m == 0    = n + 1
        | n == 0    = ack (m-1) 1
        | otherwise = ack (m-1) (ack m (n-1))

I admit this is 99% aesthetics, but aesthetics do matter, as do
consistency and regularity.  And there are some cases where the guards
can get quite complex, especially when rewriting something that already
combines pattern-matching with guards.

-- 
Aaron Denney
-><-



Re: The dreaded M-R

2006-01-26 Thread Aaron Denney
On 2006-01-26, John Hughes <[EMAIL PROTECTED]> wrote:
> I don't think it's hard. I would just teach students to define
> functions with =, and "variables" with :=. I tell my students to write
> type signatures at the beginning anyway, so they don't risk being
> bitten by the M-R anyway. Beginning students just do what you tell
> them, and they already think of function and variable definitions as
> different. Learning a different syntax for one of them would not be a
> problem.
>
> Once they've mastered basic programming and start getting interested
> in things like overloading, then you have to explain how the M-R
> works. I'd much rather explain =/:= than try to teach them how you
> know whether a definition is shared or not right now.

And this gets back to "what the target audience for Haskell' is"
question.  Since I'm not a CS student, and I'm not teaching CS students,
this whole argument is rather unconvincing to me.

-- 
Aaron Denney
-><-



Re: Existential types: want better syntactic support (autoboxing?)

2006-01-26 Thread S.J.Thompson

Johannes - thanks for the pointer to this posting; would you have a
concrete proposal to make on the basis of this for Haskell'?

Regards

Simon Thompson

On Wed, 25 Jan 2006, Johannes Waldmann wrote:

> It is standard practice to hide implementation details,
> in particular, not publishing the type of an object,
> but just the interfaces that its type implements. We can do this
> with existential types but the notation feels clumsy. See my message
> http://www.haskell.org//pipermail/haskell-cafe/2005-June/010516.html
>


RE: Removal candidates in patterns

2006-01-26 Thread Simon Peyton-Jones
I agree that if (n+k) patterns go, then so should k patterns.  Both are
overloaded, and that's the root of their complexity.

Personally I think ~ patterns are great, and we are now talking about !
patterns, a kind of dual to ~ patterns.  So at least I think we should
un-couple the two discussions.

Simon

| -Original Message-
| From: [EMAIL PROTECTED]
| [mailto:[EMAIL PROTECTED] On Behalf Of Olaf Chitil
| Sent: 26 January 2006 17:01
| To: haskell-prime@haskell.org
| Subject: Removal candidates in patterns
| 
| I am very pleased to see on the Wiki a list of removal candidates
| and that these include n+k patterns and ~ patterns.
| 
| I'd like to add one pattern to this list of removal candidates: k
| patterns, that is, numeric literals.
| 
| Why do I want to get rid of these three patterns? Because all three
| caused me no end of trouble when implementing the program transformation
| of the Haskell tracer Hat. Hat actually still doesn't handle nested ~
| patterns.
| 
| Why are these patterns so hard to implement for Hat? Surely the Haskell
| report gives a translation into simple core Haskell. Well, Hat does not
| use this translation because it does not want to be an inefficient
| pattern matcher (leave that job to the compiler) but produce a trace of
| the Haskell program as it is written. However, both n+k and k patterns
| cause calls of functions ( (-), (==) etc) that Hat has to record in its
| trace. Also ~ patterns do not fit the simple rewriting semantics of the
| Hat trace and hence have to be recorded specially. While in simple cases
| that occur in practice it is pretty straightforward to remove n+k, k and
| ~ patterns from a larger pattern while keeping the rest of the larger
| pattern intact, in the general case this is incredibly hard.
| 
| Iff n+k patterns are removed, there is little good use for k patterns
| either. Since the introduction of monadic IO the ~ pattern is hardly
| used in practice either. In all the simple cases that these three are
| currently used in practice, it is easy for the programmer to define
| their function in an alternative way.
| 
| So get rid of these three and pattern matching becomes so much more
| simple.
| 
| Ciao,
| Olaf


Re: more flexible partial application

2006-01-26 Thread Aaron Denney
On 2006-01-26, Dinko Tenev <[EMAIL PROTECTED]> wrote:
> On 1/26/06, Conor McBride <[EMAIL PROTECTED]> wrote:
> [...]
>> We'd do daft stuff like
>>
>>   (200 * _ ^ 2) unitsquare
>
> Yes, I played with a concept like that at one point, and came to the
> conclusion that it was better done with lambdas.  I am all
> specifically about function application, not arbitrary expressions.

Arbitrary expressions are just function application.

>> If you do want to pull a stunt like this, you need some other funny
>> brackets which specifically indicate this binding power, and then you
>> can do grouping inside them, to create larger linear abstractions. You
>> could have something like
>>
>>   (| f (_ * 3) _ |)
>
> We already have lambdas for this, and they're shorter, clearer, and
> more powerful.

The same holds (except for shorter) for this whole extension, and I don't
know that "shorter" holds here.

-- 
Aaron Denney
-><-



Removal candidates in patterns

2006-01-26 Thread Olaf Chitil


I am very pleased to see on the Wiki a list of removal candidates
and that these include n+k patterns and ~ patterns.


I'd like to add one pattern to this list of removal candidates: k
patterns, that is, numeric literals.


Why do I want to get rid of these three patterns? Because all three 
caused me no end of trouble when implementing the program transformation 
of the Haskell tracer Hat. Hat actually still doesn't handle nested ~ 
patterns.


Why are these patterns so hard to implement for Hat? Surely the Haskell 
report gives a translation into simple core Haskell. Well, Hat does not 
use this translation because it does not want to be an inefficient 
pattern matcher (leave that job to the compiler) but produce a trace of 
the Haskell program as it is written. However, both n+k and k patterns 
cause calls of functions ( (-), (==) etc) that Hat has to record in its 
trace. Also ~ patterns do not fit the simple rewriting semantics of the 
Hat trace and hence have to be recorded specially. While in simple cases 
that occur in practice it is pretty straightforward to remove n+k, k and 
~ patterns from a larger pattern while keeping the rest of the larger 
pattern intact, in the general case this is incredibly hard.
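
For context, the Report's treatment of an irrefutable match, in sketch
form (my paraphrase, for a pair):

    -- f ~(x, y) = e    behaves like:
    f p = let x = fst p
              y = snd p
          in e

    -- the match always succeeds; p is evaluated only if x or y is demanded

Nesting ~ inside larger patterns multiplies such let-bindings, which is
what makes the general case hard to untangle in a trace.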


Iff n+k patterns are removed, there is little good use for k patterns 
either. Since the introduction of monadic IO the ~ pattern is hardly 
used in practice either. In all the simple cases that these three are 
currently used in practice, it is easy for the programmer to define 
their function in an alternative way.


So get rid of these three and pattern matching becomes so much more simple.

Ciao,
Olaf


Re: The dreaded M-R

2006-01-26 Thread Ross Paterson
On Thu, Jan 26, 2006 at 03:01:32PM +0100, John Hughes wrote:
> (I wonder what happens today, if you write mutually recursive
> definitions where the M-R applies to some, but not others?)

Under the Haskell 98 rules (4.5.5), the MR applies to whole dependency
groups.  H98 also requires (4.5.2) that the types of all variables in
a dependency group have the same context (whether MR applies or not).



Re: The dreaded M-R

2006-01-26 Thread John Hughes

Simon Marlow wrote:

> On 26 January 2006 09:59, John Hughes wrote:
>
>> The solution I favour is simply to use *different syntax* for the two
>> forms of binding, so that a definition is monomorphic, and computed
>> at most once, if it uses the monomorphic binding operator, and
>> polymorphic/overloaded, computed at each use, if it uses the other.
>> Whether it's a function definition or not is irrelevant, as is whether
>> or not it carries a type signature.
>>
>> The trick is finding good syntax. I suggest = for bind-by-name, and
>> := for bind-by-need.
>
> The reasoning for the proposal makes complete sense to me, but I don't
> feel the proposed solution strikes the right balance.  The MR is a
> subtle point that we don't want to have to burden newcomers to the
> language with, but having two forms of binding is a fundamental part of
> the language design that would surely crop up early on the Haskell
> learning curve.  John - how do you envisage teaching this?

I don't think it's hard. I would just teach students to define functions
with =, and "variables" with :=. I tell my students to write type
signatures at the beginning anyway, so they don't risk being bitten by
the M-R anyway. Beginning students just do what you tell them, and they
already think of function and variable definitions as different.
Learning a different syntax for one of them would not be a problem.

Once they've mastered basic programming and start getting interested in
things like overloading, then you have to explain how the M-R works. I'd
much rather explain =/:= than try to teach them how you know whether a
definition is shared or not right now.


> I wonder if there's an alternative solution along these lines:
>
>  - We use PartialTypeSignatures to make bindings monomorphic:
>
>    http://haskell.galois.com/cgi-bin/haskell-prime/trac.cgi/wiki/PartialTypeSigs
>
>    eg.
>
>      x :: _
>      x = (+1)
>
>  - we make it a static error for a variable bound by a simple pattern
>    binding ("x = e") to be overloaded, unless a type signature is given.
>    The error message would explain the problem, and how to fix it.
>    Alternatively, we make it a strong warning.
>
> It seems to me that the partial type signatures extension provides a lot
> of bang for the buck - it gives us a way out of the MR in addition to
> partial type signatures.

I don't like this. Once students start dropping type signatures (which
they do pretty soon for local variables in where-clauses), they would
sometimes--unpredictably as far as they're concerned--get an error
message telling them they must put one back in again, but that it's
enough to write x :: _. Can you imagine explaining to an average student
in the first year why they MUST put in a type signature, but it doesn't
need to include a type???

Don't underestimate the difficulties many students already face. At this
stage, they're not even completely sure what the difference is between a
type and a value, let alone a type and a class! Understanding the effect
of the presence or absence of a type signature is beyond most students
until much, much later.


If we replace or revise the M-R, the replacement should be very, very
simple. The M-R in its present form is a clever, and not terribly
complicated, solution--but complicated enough to have caused no end of
trouble over the years. Let's not be clever, let's be straightforward
and explicit: two binding forms, two notations.

John



scoped type variables in class instances

2006-01-26 Thread S.M.Kahrs
The Haskell Prime Wiki mentions the scoping of type variables
in class instances, but only as an aside, and it is not even clear
whether proposal 1 would support that feature or not.

For me this once occurred as a matter of language expressiveness,
i.e. I had once to switch from hugs to GHC, because I could not
find a way of expressing in Hugs what I needed.

The problematic piece of code was the following:

instance (LUB a b c, Full c d) => Run (a->b) where
  interpret e a = (p1 . p2) (evalExp [(e2 . e1) a] (expand e))
    where e1 = embed   :: a -> c
          e2 = embed   :: c -> d
          p2 = project :: d -> c
          p1 = project :: c -> b

As you can see, this is using multi-parameter classes (and functional
dependencies), and whether it is a matter of language expressiveness or
not is probably connected to whether these features are around or not.

Explanation:

An instance "Run t" is meant to provide an evaluation of expressions
(some fixed type) that returns type t.  This essentially worked by
picking one of the approximants of the D_infty model that was "big
enough" to do the evaluation in, embedding inputs into that type,
evaluating over there, and then projecting results out of it.

The class instance above is the case for function types.
To do this for a->b, I first need to find an upper bound
into which I can safely embed/project types a and b - and that is c;
then I find the next type into which I can embed c,
at which I can do D_infty-style evaluation, and that is d.
I get these types from deterministic multi-parameter classes.

The embed/project functions come from the class instances LUB a b c and
Full c d.
There are instances giving me versions of embed with the same argument type
but different result types, thus I need to be able to tell my program
which ones to use when they are applied to a value of type a.
Above, I do this with type annotations, but I need that the type
variables I use here correspond to those of the class instance definition.
In hugs, I was stumped.
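
For comparison, GHC's ScopedTypeVariables extension lets the variables of
an instance head scope over the method bodies; a tiny self-contained
illustration (my example, unrelated to the classes above):

    {-# LANGUAGE ScopedTypeVariables #-}

    class Embed a where
      embed :: a -> [a]

    -- 'a' from the instance head is in scope in the body, so the
    -- annotation below refers to that same 'a'
    instance Embed (Maybe a) where
      embed x = [x :: Maybe a]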

Stefan Kahrs



Re: The dreaded M-R

2006-01-26 Thread Malcolm Wallace
"Simon Marlow" <[EMAIL PROTECTED]> writes:

> On 26 January 2006 09:59, John Hughes wrote:
> > The solution I favour is simply to use *different syntax* for the two
> > forms of binding,
> 
> I wonder if there's an alternative solution along these lines:
>   - We use PartialTypeSignatures to make bindings monomorphic:
> eg.
> 
>   x :: _
>   x = (+1)

I agree with Simon that two forms of binding feels like a heavyweight
solution.  Variable-binding is just such a fundamental thing, that
introducing a second form would need exceptional justification IMO.
However partial type signatures seem like a very nice alternative.
Just as currently, the decision on monomorphising a binding is based
on the type signature (its presence, absence, or form).

Regards,
Malcolm


Re: The dreaded M-R

2006-01-26 Thread Philippa Cowderoy
On Thu, 26 Jan 2006, Johannes Waldmann wrote:

> If this seems impossible, then the function itself probably *is*
> complex, and its type would give valuable information,
> and I don't see what a programmer (or a reader) benefits
> from a language that allows to omit this information.
> 

For one, because that makes it possible to load it into an interpreter and 
be told the type. 

-- 
[EMAIL PROTECTED]

A problem that's all in your head is still a problem.
Brain damage is but one form of mind damage.


RE: The dreaded M-R

2006-01-26 Thread Simon Marlow
On 26 January 2006 09:59, John Hughes wrote:

> The solution I favour is simply to use *different syntax* for the two
> forms of binding, so that a definition is monomorphic, and computed
> at most once, if it uses the monomorphic binding operator, and
> polymorphic/overloaded, computed at each use, if it uses the other.
> Whether it's a function definition or not is irrelevant, as is whether
> or not it carries a type signature.
> 
> The trick is finding good syntax. I suggest = for bind-by-name, and
> := for bind-by-need.

The reasoning for the proposal makes complete sense to me, but I don't
feel the proposed solution strikes the right balance.  The MR is a
subtle point that we don't want to have to burden newcomers to the
language with, but having two forms of binding is a fundamental part of
the language design that would surely crop up early on the Haskell
learning curve.  John - how do you envisage teaching this?

I wonder if there's an alternative solution along these lines:

  - We use PartialTypeSignatures to make bindings monomorphic:

    http://haskell.galois.com/cgi-bin/haskell-prime/trac.cgi/wiki/PartialTypeSigs

    eg.

      x :: _
      x = (+1)

    (incidentally, until recently it was possible to do this
    in GHC using scoped type variables, but the change in the semantics
    of scoped type variables has removed that possibility).

  - we make it a static error for a variable bound by a simple pattern
binding ("x = e") to be overloaded, unless a type signature is
given.
The error message would explain the problem, and how to fix it.
Alternatively, we make it a strong warning.

It seems to me that the partial type signatures extension provides a lot
of bang for the buck - it gives us a way out of the MR in addition to
partial type signatures.

I'm not sure what to do about non-simple pattern bindings, though.

Cheers,
Simon


Re: Haskell-prime Digest, Vol 1, Issue 4

2006-01-26 Thread John Hughes

Ross Paterson wrote:

>> I suggest = for bind-by-name, and
>> := for bind-by-need.
>> ...
>
> You're proposing that the =/:= distinction both decides whether
> constrained type variables are monomorphic and whether the binding
> should be implemented using sharing.  If it only did the former (and the
> expectation was that all pattern bindings with unconstrained types used
> sharing), then existing legal programs would still be legal, and the
> examples that currently trip over the MR would be legal but inefficient.
> (Also, what does the shared/unshared distinction mean for functions?)

Not just constrained type variables. All type variables. Because changes
in the program elsewhere can easily change the status of a type variable
from unconstrained to constrained, thus triggering monomorphism
unexpectedly--that was the point of my comment about introducing an
equality test in a function called from the definition. Changing the
status of a type variable should not change the way it is treated by any
replacement for the M-R.

I don't quite follow what you're suggesting above. The main point of a
=/:= distinction is to distinguish between sharing and non-sharing,
isn't it? And sharing means you have to be monomorphic, at least for
constrained type variables, and (by the argument above) thus for
unconstrained ones too. How can sharing/unsharing and
monomorphic/overloaded be separated?

I'd really like to avoid ANOTHER rule that "guesses" what method to use,
based on the form of the definition (your reference to pattern bindings
above). That leads to surprises for the programmer, at least the
less-than-expert one, when a definition is replaced by something that
LOOKS equivalent, but type-checking or sharing suddenly behaves
differently. Much preferable is a simple and obvious rule: = means
unshared and polymorphic, := means shared and monomorphic.

Shared/unshared doesn't matter for function definitions, but
monomorphic/polymorphic can still be important. There is an interaction
with implicit parameters here--which I suppose might make it into
Haskell'. A := definition says: resolve all overloading here. Thus, if
there is an implicit parameter in the RHS of such a definition, then it
refers to the instance of that parameter in scope at the point of
definition. With a = definition, it refers to the instance in scope at
the point of use. This is an important distinction whether you're
defining a function or anything else. This is discussed in my paper on
Global Variables in Haskell, which suggested using implicit parameters
to refer to global variables, rather than an unsafe unsafePerformIO
applied to a newIORef.
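
A hypothetical illustration using GHC's ImplicitParams syntax (the :=
line shows the proposed notation and is not valid Haskell):

    {-# LANGUAGE ImplicitParams #-}

    greeting :: (?name :: String) => String
    greeting = "hello " ++ ?name
    -- '='  : ?name is resolved at each use site of greeting

    -- greeting := "hello " ++ ?name
    -- ':=' : ?name would be fixed to whatever binding is in scope at
    --        this definition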


> What if one has mutually recursive bindings, some using = and some := ?
> Does monomorphism kick in if some of the variables in a binding group
> use :=, or would we just require that all bindings in the same group
> use the same binder?  (At first I couldn't see why one would ever use :=
> with function bindings, but perhaps that's the reason.)

 

I don't think there's really a problem in allowing a mixture of = and :=
in the same mutually recursive group, even if it could be quite confusing
to do so! = just means that type variables and dictionaries should be
abstracted, and that the binding should be by-name... let's assume that
we're translating to System F, and we always insert at least a \()-> on
such bindings in the translation. := means, on the other hand, that type
variables and dictionaries are not abstracted, and so must be inherited
from an enclosing scope. So in a group of the form

    f = ...g...
    g := ...f...

then any type variables in the definition of g must refer to the
enclosing scope, which means that they cannot be generalised in the
definition of f either (since they are "free in the context"). But if
there are type variables in the definition of f which do NOT occur in
the type of g, then they can be generalised as usual. Meanwhile f can be
bound by name, and g by need--there's no difficulty with that. This would
be an odd thing to do, but I think it makes perfect sense. (I wonder what
happens today, if you write mutually recursive definitions where the M-R
applies to some, but not others?)

Of course, polymorphic recursion would REQUIRE an = binding, but that
shouldn't surprise anybody.

John



Re: The dreaded M-R

2006-01-26 Thread John Hughes

Johannes Waldmann wrote:

> (entering ironic mode, but not quite:)
>
> So, what about making type signatures mandatory,
> as the rest of the civilized world does happily for decades ...



If that's a serious proposal, then I'll argue against it--but do we
really want to raise that question? One of the strengths of
Haskell is that it supports both implicit and explicit typing
well.

John


Re: more flexible partial application

2006-01-26 Thread Dinko Tenev
On 1/26/06, Conor McBride <[EMAIL PROTECTED]> wrote:
[...]
> We'd do daft stuff like
>
>   (200 * _ ^ 2) unitsquare

Yes, I played with a concept like that at one point, and came to the
conclusion that it was better done with lambdas.  I am talking
specifically about function application, not arbitrary expressions.

[...]
> Giving parentheses this murky binding power interferes with their innocence.

The parentheses won't bind, they'll only delimit the expression that
will be subject to re-interpretation, and then simply in a by-the-way
manner, very much like in the operator sections case.  They'll still
be innocent in the absence of relevant syntax :)

> If you do want to pull a stunt like this, you need some other funny
> brackets which specifically indicate this binding power, and then you
> can do grouping inside them, to create larger linear abstractions. You
> could have something like
>
>   (| f (_ * 3) _ |)

We already have lambdas for this, and they're shorter, clearer, and
more powerful.

> But in my wild and foolish adulthood, I'm not sure it's worth spending a
> kind of bracket on.

Definitely not.  But an underscore can still be spent on the much
simpler case :)

>
> All the best
>
> Conor


Cheers,

Dinko


Re: The dreaded M-R

2006-01-26 Thread Henrik Nilsson

Dear all,

Johannes Waldmann wrote:

> So, what about making type signatures mandatory,
> as the rest of the civilized world does happily for decades ...

Given that explicit type signatures increasingly are required
for dealing with other aspects (polymorphic recursion,
rank 2-or-higher polymorphism, GADTs ...) that would
seem reasonable.

Personally, though, I have to admit that I've never had
all that many problems with the M-R in the
first place. Probably because I do write top-level
type signatures as soon as I get into serious programming.

That said, I do find it convenient that type signatures
can be omitted.

And I wonder if this is a sufficiently significant problem
to warrant breaking backwards compatibility in this respect.

All the best,

/Henrik

--
Henrik Nilsson
School of Computer Science and Information Technology
The University of Nottingham
[EMAIL PROTECTED]




Re: The dreaded M-R

2006-01-26 Thread Ross Paterson
On Thu, Jan 26, 2006 at 10:59:22AM +0100, John Hughes wrote:
> The fact is, Haskell has two different binding mechanisms--bind-by-name
> (used for overloaded definitions), and bind-by-need (monomorphic). Both
> are useful: bind-by-name lets us name overloaded expressions, while
> bind-by-need gives performance guarantees. The trouble is just the way
> we distinguish them--where the compiler is basically guessing from the
> form of a definition which one to use.
> [...]
> The solution I favour is simply to use *different syntax* for the two
> forms of binding, so that a definition is monomorphic, and computed
> at most once, if it uses the monomorphic binding operator, and
> polymorphic/overloaded, computed at each use, if it uses the other.
> Whether it's a function definition or not is irrelevant, as is whether
> or not it carries a type signature.
> 
> The trick is finding good syntax. I suggest = for bind-by-name, and
> := for bind-by-need. (Some object that := "means" assignment--but come
> on, we're not reserving := for future use as assignment in Haskell, are we?
> Why should we give up a perfectly good symbol because it's used elsewhere
> to mean something else?). With this notation, = would be appropriate
> for function definitions, and := for most non-function definitions. It
> would be instantly clear where there was a possibility of repeated
> evaluation, and where not.
> 
> The problem with making such a syntactic distinction is that, however
> it's done, many changes must be made to existing programs. Just because
> existing programs contain many bindings of each sort, there's no
> getting away from the fact that a syntactic distinction will force
> changes. In principle this could be automated, of course--not hard
> but somebody would have to do it. But perhaps it would be worth it,
> to eliminate probably the number one wart, and solve the problems
> above.

You're proposing that the =/:= distinction both decides whether
constrained type variables are monomorphic and whether the binding
should be implemented using sharing.  If it only did the former (and the
expectation was that all pattern bindings with unconstrained types used
sharing), then existing legal programs would still be legal, and the
examples that currently trip over the MR would be legal but inefficient.
(Also, what does the shared/unshared distinction mean for functions?)

What if one has mutually recursive bindings, some using = and some := ?
Does monomorphism kick in if some of the variables in a binding group
use :=, or would we just require that all bindings in the same group
use the same binder?  (At first I couldn't see why one would ever use :=
with function bindings, but perhaps that's the reason.)



Re: The dreaded M-R

2006-01-26 Thread Johannes Waldmann
John Hughes wrote:

> * You can't eta-convert definitions freely, if there is no type signature.
...
> * Definitions without a type-signature can change ...

(entering ironic mode, but not quite:)

So, what about making type signatures mandatory,
as the rest of the civilized world does happily for decades ...

Yeah I know this would "break" some programs,
but aren't these "broken" from the start
because they are missing the easiest and safest
and most effective way of documentation?

If you say "writing out all type signatures is awkward",
then exactly why? Because the type system is too complex?
Then it should be fixed. I think it's not. Then perhaps
because the types of the functions are too complex?
Then these functions should be fixed (by refactoring,
introducing helper type names, etc.).

If this seems impossible, then the function itself probably *is*
complex, and its type would give valuable information,
and I don't see what a programmer (or a reader) benefits
from a language that allows to omit this information.

Respectfully submitted,
-- 
-- Johannes Waldmann -- Tel/Fax (0341) 3076 6479/80 --
 http://www.imn.htwk-leipzig.de/~waldmann/ ---



Re: more flexible partial application

2006-01-26 Thread Conor McBride

Hi folks

John Hughes wrote:

> On 1/23/06, Sebastian Sylvan <[EMAIL PROTECTED]> wrote:
>
>> Are there any subtle reasons for why something like the following
>> couldn't be allowed?
>>
>> foo x y z w = ...
>> bar x w = foo x _ _ w
>
> Or would (f _ x) y and f _ x y maybe be different? That would fix the
> problem above, while introducing another. Please, no!



For what it's worth, I agree with John. In my wild and foolish youth 
(c1990), I implemented a programming language with this very feature. It 
was a kind of higher-order LOGO on the complex plane, where a function 
applied to a drawing transformed its coordinates. We'd do daft stuff like


 (200 * _ ^ 2) unitsquare

What fun we had, but it was a source of top quality mystery as far as 
the semantics was concerned. Figuring out how to bracket stuff was total 
guesswork.


As things stand in Haskell, parentheses do grouping, and they do 
sections for infix operators. These are cleanly separable, because 
what's in a section bracket is plainly not an expression. Extra explicit 
grouping of expressions is harmless. (f a) b is f a b. Giving 
parentheses this murky binding power interferes with their innocence.


If you do want to pull a stunt like this, you need some other funny 
brackets which specifically indicate this binding power, and then you 
can do grouping inside them, to create larger linear abstractions. You 
could have something like


 (| f (_ * 3) _ |)

This makes some kind of sense, provided you don't expect to be able to 
transform the contents of these brackets naively


 (| flip f _ _ |)  ain't  (| f _ _ |)

But in my wild and foolish adulthood, I'm not sure it's worth spending a 
kind of bracket on.


All the best

Conor


Re: more flexible partial application

2006-01-26 Thread Dinko Tenev
On 1/26/06, John Hughes <[EMAIL PROTECTED]> wrote:
> I'd be against this--its semantics isn't clear enough to me. For example,
> I usually assume id e = e, for any e, but
>
> id (f _ x) y  =  id (\y->f y x) y = f y x
> /=
> f _ x y = \z -> f z x y
>
> Or would (f _ x) y and f _ x y maybe be different? That would fix the
> problem above, while introducing another. Please, no!

They should be different for this to work.

The reasonable thing to do would be to rewrite every

    (e _ a1 a2 ... an)

as

    (\x -> (e x a1 a2 ... an))

and the parentheses should be mandatory.

Note that this can be done recursively, so that e.g.

    (f _ y _ t)  ==>  (\x1 -> (f x1 y _ t))  ==>  (\x1 -> (\x2 -> (f x1 y x2 t)))

I see this as no worse than operator sections: we already have (- x)
and (-) x meaning different things.  Having in mind that (e _ ...) is
just syntax, it should be easy to keep it separate from application,
so f x y z will still be the same as ((f x) y) z.
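
In today's Haskell the same effect needs an explicit lambda; a sketch of
what the proposed sugar would expand to (foo and bar are hypothetical):

    foo :: Int -> Int -> Int -> Int -> Int
    foo x y z w = x + 2*y + 3*z + 4*w

    -- proposed:   bar x w = (foo x _ _ w)
    -- expansion:
    bar :: Int -> Int -> Int -> Int -> Int
    bar x w = \y z -> foo x y z w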

>
> John


Cheers,

Dinko


Re: more flexible partial application

2006-01-26 Thread Johannes Waldmann

> Or would (f _ x) y and f _ x y maybe be different? That would fix the
> problem above, while introducing another. Please, no!

seconded.

I think the original problem (want to omit lambda notation
in a few cases) does not need to be fixed.


Functions with too many parameters are bad style anyway, in most cases
there should actually be a record type for them. See Code Smell: long
parameter list, Refactoring: introduce parameter object.
e. g. http://wiki.java.net/bin/view/People/SmellsToRefactorings

On the other hand, standard OO languages do not have partial application,
so perhaps this changes the idea a bit. The question is, how much.

best regards,
-- 
-- Johannes Waldmann -- Tel/Fax (0341) 3076 6479/80 --
 http://www.imn.htwk-leipzig.de/~waldmann/ ---



The dreaded M-R

2006-01-26 Thread John Hughes
I had promised myself not to propose anything that was not already
implemented, tried, and tested, but I just can't resist bringing up a
proposal I've made in the past to fix the monomorphism restriction.
Maybe now is the time to do so.

I know many will proclaim "just get rid of it", but consider this:
without the M-R, some programs can run exponentially slower than you
expect. This actually happened to me, which is how we discovered
something of the sort was needed in the first place. But the current
design is surely a wart.
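
The classic illustration (my example, using GHC's switch for turning the
rule off): an overloaded binding is sugar for a function of a dictionary,
so every differently-typed use recomputes it:

    {-# LANGUAGE NoMonomorphismRestriction #-}
    import Data.List (genericLength)

    -- len :: Num a => a  -- overloaded: each use below traverses the list
    len = genericLength [1 :: Int .. 100000]

    main :: IO ()
    main = print (len :: Int, len :: Integer)

With the M-R, len would instead be monomorphic (defaulted to Integer) and
computed at most once (and the use at Int would then be rejected).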

The fact is, Haskell has two different binding mechanisms--bind-by-name
(used for overloaded definitions), and bind-by-need (monomorphic). Both
are useful: bind-by-name lets us name overloaded expressions, while
bind-by-need gives performance guarantees. The trouble is just the way
we distinguish them--where the compiler is basically guessing from the
form of a definition which one to use. Two problems that leads to:

* You can't eta-convert definitions freely, if there is no type signature.
  We've all hit this one, where you write something like sum = foldr (+) 0
  and you can't export it, because it's monomorphic.

* Definitions without a type signature can change from polymorphic
  to monomorphic as a result of changes elsewhere in the program.
  Because the M-R applies only to overloaded definitions, introducing,
  for example, an equality test in a function the definition calls can
  change its type, and make the M-R suddenly apply where it did not before.
  That can lead to unexpected errors.

The solution I favour is simply to use *different syntax* for the two
forms of binding, so that a definition is monomorphic, and computed
at most once, if it uses the monomorphic binding operator, and
polymorphic/overloaded, computed at each use, if it uses the other.
Whether it's a function definition or not is irrelevant, as is whether
or not it carries a type signature.

The trick is finding good syntax. I suggest = for bind-by-name, and
:= for bind-by-need. (Some object that := "means" assignment--but come
on, we're not reserving := for future use as assignment in Haskell, are we?
Why should we give up a perfectly good symbol because it's used elsewhere
to mean something else?). With this notation, = would be appropriate
for function definitions, and := for most non-function definitions. It
would be instantly clear where there was a possibility of repeated
evaluation, and where not.
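
To make the intent concrete, a pseudocode sketch of the two operators
(':=' is the proposed syntax, not current Haskell; the names are mine):

    sum = foldr (+) 0
      -- '='  : bind-by-name -- overloaded, re-computed at each use

    table := buildExpensiveTable input
      -- ':=' : bind-by-need -- monomorphic, computed at most once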

The problem with making such a syntactic distinction is that, however
it's done, many changes must be made to existing programs. Just because
existing programs contain many bindings of each sort, there's no
getting away from the fact that a syntactic distinction will force
changes. In principle this could be automated, of course--not hard
but somebody would have to do it. But perhaps it would be worth it,
to eliminate probably the number one wart, and solve the problems
above.

I put it on the table, anyway.

John



Re: more flexible partial application

2006-01-26 Thread John Hughes



On 1/23/06, Sebastian Sylvan <[EMAIL PROTECTED]> wrote:

> Are there any subtle reasons for why something like the following
> couldn't be allowed?
>
> foo x y z w = ...
> bar x w = foo x _ _ w
>
> I.e. a more flexible version of partial application. This would be
> translated to
>
> bar x w = \y z -> foo x y z w
>
> I.e. a function which takes the "_" parameters in the same order they
> were encountered in the function application.

I'd be against this--its semantics isn't clear enough to me. For example,
I usually assume id e = e, for any e, but

    id (f _ x) y  =  id (\y -> f y x) y  =  f y x
    /=
    f _ x y  =  \z -> f z x y

Or would (f _ x) y and f _ x y maybe be different? That would fix the
problem above, while introducing another. Please, no!

John