RE: Bang patterns, ~ patterns, and lazy let

2006-02-08 Thread Simon Peyton-Jones
I've updated the Wiki to add your strict proposal, but rather briefly.
If you want to add stuff, send it to me and I'll add it.

Meanwhile:

| And as a consequence, it is no longer possible to transform a pair of
| bindings into a binding of a pair. In Haskell 98,
| 
| p1 = e1
| p2 = e2
| 
| is always equivalent to
| 
| (~p1, ~p2) = (e1,e2)

In your strict proposal, I'm sure you hope that the above pair would be
equivalent to
(p1,p2) = (e1,e2)
which would be even nicer.

But sadly I don't think it is, because that'd change the strongly
connected component structure.  Somehow that smells wrong.
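
As an invented illustration of that worry (the names are arbitrary), tupling two
bindings can merge what were separate binding groups and so change what gets
generalised during type inference:

    -- With separate bindings, 'ident' forms its own binding group, is
    -- generalised before 'n' is checked, and can be used at two types.
    ok :: (Bool, Int)
    ok = (ident True, n)
      where
        ident x = x                 -- its own SCC: forall a. a -> a
        n       = ident (0 :: Int)

    -- Tupled into one pattern binding, 'ident' and 'n' become a single
    -- (recursive) binding group, 'ident' stays monomorphic, and the use
    -- at Bool is no longer accepted:
    --
    --   bad = (ident True, n)
    --     where (ident, n) = (\x -> x, ident (0 :: Int))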

Simon
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: Bang patterns, ~ patterns, and lazy let

2006-02-08 Thread John Hughes

Simon Peyton-Jones wrote:


I've updated the Wiki to add your strict proposal, but rather briefly.
If you want to add stuff, send it to me and I'll add it.

Meanwhile:

| And as a consequence, it is no longer possible to transform a pair of
| bindings into a binding of a pair. In Haskell 98,
| 
| p1 = e1

| p2 = e2
| 
| is always equivalent to
| 
| (~p1, ~p2) = (e1,e2)


In your strict proposal, I'm sure you hope that the above pair would be
equivalent to
(p1,p2) = (e1,e2)
which would be even nicer.

But sadly I don't think it is, because that'd change the strongly
connected component structure.  Somehow that smells wrong.

Simon
 

What have you got in mind? ANY tupling of bindings may change the SCC 
structure, and hence the results of type inference--I'm taking that as 
read. But that still leaves the question of whether the dynamic 
semantics of the program is changed. Let's assume for the time being 
that all bindings carry a type signature--then the SCC structure is 
irrelevant, isn't it? Or am I missing something here? I'm under the 
impression that the *dynamic* semantics of


   p1 = e1
   p2 = e2

*would* be the same as (p1,p2) = (e1,e2) under my strict matching 
proposal. I don't see how the SCC structure can affect that.
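
As an invented Haskell 98 illustration of the lazy behaviour that the ~s in the
quoted equivalence paper over (and, presumably, of the failure mode that strict
matching would make the two forms share):

    -- In Haskell 98, separate pattern bindings are individually lazy, so
    -- the unused failing match never fires:
    separate :: Int
    separate = let Just a = (Nothing :: Maybe Int)
                   b      = 0 :: Int
               in b                      -- evaluates to 0

    -- The plain tupled form is NOT equivalent: demanding b forces the
    -- whole pattern, and the Just/Nothing mismatch makes it _|_:
    tupled :: Int
    tupled = let (Just a, b) = (Nothing :: Maybe Int, 0 :: Int)
             in b                        -- irrefutable-pattern failure

    -- Restoring the ~s recovers the equivalence, as in the quoted rule:
    twiddled :: Int
    twiddled = let (~(Just a), ~b) = (Nothing :: Maybe Int, 0 :: Int)
               in b                      -- evaluates to 0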


John

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: Bang patterns, ~ patterns, and lazy let

2006-02-07 Thread John Hughes


   From: Ross Paterson [EMAIL PROTECTED]
   John Hughes wrote:

I would urge that either we stick with the present design, or, if bang 
patterns are added (which a lot speaks for), that the language be 
simplified at the same time so that patterns are matched in the same way 
everywhere, and BOTH warts above are excised. Some existing code would 
break, but in return the language would become simpler and more expressive.
   


   Would top-level bindings of constructor patterns and !x be evaluated
   when the module was loaded (or at compile time)?


Yes. Nothing else makes sense, does it? If that's problematic (although 
I can't see why it would be), just forbid strict patterns at the top 
level of modules.


Load time rather than compile-time, I think--otherwise the compiled code 
for a module could depend on the *code* of modules it imports, not just 
on their interfaces, which would be harmful for separate compilation.
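
As an invented illustration of the status quo being contrasted with here, an
ordinary top-level binding is a lazy CAF, forced on first demand rather than at
load time ('expensive' is an arbitrary name):

    import Debug.Trace (trace)

    expensive :: Integer
    expensive = trace "evaluating expensive" (sum [1 .. 1000000])

    main :: IO ()
    main = do
      putStrLn "program started; nothing forced yet"
      print expensive       -- the trace message only appears here
    -- Under the proposal, a top-level  !expensive = ...  would instead be
    -- forced when the module is loaded; current GHC does not accept a
    -- bang on a top-level binding, so that variant is not shown running.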


John

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: Bang patterns, ~ patterns, and lazy let

2006-02-07 Thread John Hughes

From: Ben Rudiak-Gould [EMAIL PROTECTED]
Subject: Re: Bang patterns, ~ patterns, and lazy let


  It's also not the case that !x has the same
meaning in both proposals, e.g.

 let { !x = y ; !y = const 'a' x } in x

means 'a' in the current proposal but _|_ in yours.
  

Aargh, you're right, it does mean _|_ in mine! That's not very nice.
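
Spelling the first claim out (using the same seq-insertion reading of the
current proposal as in the desugaring quoted just below), the example becomes
ordinary lazy bindings followed by seqs, which indeed give 'a':

    firstExample :: Char
    firstExample = let x = y
                       y = const 'a' x
                   in x `seq` y `seq` x     -- 'a', as claimed above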

But wait, I'm not sure about this:

   let { !x = const undefined y ; !y = const 'a' x } in y

   desugars in the current proposal to

   let { x = const undefined y ; y = const 'a' x } in x `seq` y `seq` y

   which is _|_, but absent implicit ~,

   let { x = const undefined y ; y = const 'a' x } in y

   had better (and does) mean 'a'.

Applying the rules on the wiki, the first step is to translate the
first expression into a tuple binding, omitting the implicit ~:

 let (x,y) = (const undefined y, const 'a' x) in y

This desugars to

 let (x,y) = fix (\ ~(x,y) -> (const undefined y, const 'a' x)) in y

which evaluates to 'a'. In other words, despite the ! on x, the current
proposal is not strict in x.
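
(The desugared term can be transcribed directly and run; the primed names only
avoid shadowing, and the Char annotation only pins the otherwise unconstrained
type of x. It does evaluate to 'a'.)

    import Data.Function (fix)

    notStrictInX :: Char
    notStrictInX =
      let (x, y) = fix (\ ~(x', y') -> (const undefined y' :: Char, const 'a' x'))
      in y                                 -- evaluates to 'a'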

Maybe the intention was that !s on the lhs of let bindings should be
transferred to the corresponding patterns when a tuple pattern is
introduced? Let's try it: then the example desugars by pattern tupling
to

 let (!x, !y) = (const undefined y, const 'a' x) in y

Now we can introduce fix:

 let (!x, !y) = fix (\ ~(!x, !y) -> (const undefined y, const 'a' x)) in y

and finally case:

 case fix (\ ~(!x,!y) -> (const undefined y, const 'a' x)) of
   ~(!x, !y) -> y

and this is consistent with what you said above.

But if I return to your first example, and do the same thing, I get

 let !x = y; !y = const 'a' x in x

desugars by tupling to

 let (!x, !y) = (y, const 'a' x) in x

which desugars by fix and case introduction to

 case fix (\ ~(!x, !y) -> (y, const 'a' x)) of ~(!x, !y) -> x

The first approximation to the fixed point is _|_, so the second is
(_|_, 'a'). Now, when ~(!x,!y) is matched against (_|_,'a') then *both*
variables are bound to _|_ --- the effect of the ~ is just to delay
matching (!x,!y) until one of the variables is used, but as soon as y,
say, *is* used, then the match is performed and, of course, it loops.
Thus (_|_, 'a') is the fixed point. For the same reason, x and y are
both bound to _|_ in the body of the case, and so the entire expression
evaluates to _|_, not 'a' as you claimed.

Bottom line: I can't find a way to interpret the translation rules in
the Haskell report, modified as the Wiki page suggests, to produce the
results you expect in both cases.

But maybe the fault is in the translation rules in the Haskell report.
It was always rather tricky to explain a group of recursive bindings in
Haskell in terms of a single tuple binding, because Haskell tuples are
lifted. I see that you have a more direct understanding of what ! is
supposed to mean. Is it possible, I wonder, to give a direct
denotational semantics to a declaration group--say mapping environments
to environments--in which there is only one case for ! (its natural
semantics in patterns)? Such a semantics should have the property that

 let x1 = e1; x2 = e2 in e0 === let x1 = e1 in let x2 = e2 in e0

provided x1 does not occur in e2. Finding a simple and compositional
denotational semantics with these properties, and proving the law
above, would be a good way to show that ! patterns do NOT introduce
semantic warts---and would probably also suggest that the
semantics-by-translation used in the report is fundamentally flawed. We
did construct denotational semantics of fragments of Haskell as part of
the original design, and it had quite an impact on the result--I
recommend it as a way of debugging ideas!
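
(A tiny, invented instance of the law with strict bindings, written here with
GHC-style bang patterns standing in for the strict semantics:)

    {-# LANGUAGE BangPatterns #-}

    flatForm, nestedForm :: Int
    flatForm   = let { !x1 = 1; !x2 = undefined :: Int } in x1
    nestedForm = let !x1 = 1 in let !x2 = undefined :: Int in x1
    -- x1 does not occur in e2, and the two forms agree: both diverge,
    -- because the strict binding of x2 is forced either way; dropping
    -- the bang on x2 makes both evaluate to 1, again in agreement.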

John


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime