Re: Bang patterns, ~ patterns, and lazy let

2006-02-08 Thread John Hughes

Simon Peyton-Jones wrote:


| What have you got in mind? ANY tupling of bindings may change the SCC
| structure, and hence the results of type inference--I'm taking that as
| read. But that still leaves the question of whether the dynamic
| semantics of the program is changed. Let's assume for the time being
| that all bindings carry a type signature--then the SCC structure is
| irrelevant, isn't it? Or am I missing something here? I'm under the
| impression that the *dynamic* semantics of
| 
| p1 = e1
| p2 = e2
| 
| *would* be the same as (p1,p2) = (e1,e2) under my strict matching
| proposal. I don't see how the SCC structure can affect that.

Well I put the example that you sent me on the Wiki, right at the
bottom.  Did I get it wrong?

 let { (y:ys) = xs; (z:zs) = ys } in body
means
 case xs of (y:ys) -> case ys of (z:zs) -> body

whereas
 let (y:ys, z:zs) = (xs,ys)  in body
means
 case (fix (\~(y:ys, z:zs). (xs,ys))) of (y:ys, z:zs) -> body

which isn't the same.

Simon
 


Oh yes, you're right of course.

In the denotational semantics I wrote last night, multiple bindings are 
combined using (+), which *is* the same as tupling them. But remember 
the thing I left unproven, because it was late at night?


E[[let defs1 in let defs2 in exp]]env = E[[let defs1; defs2 in exp]]env


It's not true, as this example shows. That'll teach me! In

let y:ys = xs; z:zs = ys in body

the result is _|_, because matching the entire *group* against (xs, _|_)
fails, but once the example is split into two nested lets, everything
works. Yuck.
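
To spell the failure out (my own trace through last night's definitions,
with env0 the first fixpoint approximation, in which the locally bound
variables y, ys, z, zs are all still _|_):

   D[[y:ys = xs ; z:zs = ys]]env0
     = P'[[y:ys]](E[[xs]]env0) (+) P'[[z:zs]](E[[ys]]env0)
     = P'[[y:ys]] xs (+) P'[[z:zs]] _|_
     = P'[[y:ys]] xs (+) _|_
     = _|_

Since (+) is strict, this is _|_ whatever xs is, and later approximations
are no better, so the whole group denotes _|_.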


John



RE: Bang patterns, ~ patterns, and lazy let

2006-02-08 Thread Simon Peyton-Jones
| What have you got in mind? ANY tupling of bindings may change the SCC
| structure, and hence the results of type inference--I'm taking that as
| read. But that still leaves the question of whether the dynamic
| semantics of the program is changed. Let's assume for the time being
| that all bindings carry a type signature--then the SCC structure is
| irrelevant, isn't it? Or am I missing something here? I'm under the
| impression that the *dynamic* semantics of
| 
| p1 = e1
| p2 = e2
| 
| *would* be the same as (p1,p2) = (e1,e2) under my strict matching
| proposal. I don't see how the SCC structure can affect that.

Well I put the example that you sent me on the Wiki, right at the
bottom.  Did I get it wrong?

  let { (y:ys) = xs; (z:zs) = ys } in body
means
  case xs of (y:ys) -> case ys of (z:zs) -> body

whereas
  let (y:ys, z:zs) = (xs,ys)  in body
means
  case (fix (\~(y:ys, z:zs). (xs,ys))) of (y:ys, z:zs) -> body

which isn't the same.
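
A concrete instance of the difference (an added illustration, with
xs = [1,2,3] and body = y chosen arbitrarily; the lambda is written with
-> rather than the dot above, so this is runnable Haskell):

  import Data.Function (fix)

  xs :: [Int]
  xs = [1,2,3]

  nested, tupled :: Int
  nested = case xs of (y:ys) -> case ys of (z:zs) -> y
  -- nested evaluates to 1

  tupled = case fix (\ ~(y:ys, z:zs) -> (xs, ys)) of (y:ys, z:zs) -> y
  -- tupled should diverge (GHC typically reports <<loop>>): the selector
  -- for ys in the lazy pattern has to match the *whole* pattern
  -- (y:ys, z:zs), and the z:zs part never becomes defined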

Simon


Re: Bang patterns, ~ patterns, and lazy let

2006-02-08 Thread John Hughes

Simon Peyton-Jones wrote:


I've updated the Wiki to add your strict proposal, but rather briefly.
If you want to add stuff, send it to me and I'll add it.

Meanwhile:

| And as a consequence, it is no longer possible to transform a pair of
| bindings into a binding of a pair. In Haskell 98,
| 
| p1 = e1
| p2 = e2
| 
| is always equivalent to
| 
| (~p1, ~p2) = (e1,e2)


In your strict proposal, I'm sure you hope that the above pair would be
equivalent to
(p1,p2) = (e1,e2)
which would be even nicer.

But sadly I don't think it is, because that'd change the strongly
connected component structure.  Somehow that smells wrong.

Simon
 

What have you got in mind? ANY tupling of bindings may change the SCC 
structure, and hence the results of type inference--I'm taking that as 
read. But that still leaves the question of whether the dynamic 
semantics of the program is changed. Let's assume for the time being 
that all bindings carry a type signature--then the SCC structure is 
irrelevant, isn't it? Or am I missing something here? I'm under the 
impression that the *dynamic* semantics of


   p1 = e1
   p2 = e2

*would* be the same as (p1,p2) = (e1,e2) under my strict matching 
proposal. I don't see how the SCC structure can affect that.


John



RE: Bang patterns, ~ patterns, and lazy let

2006-02-08 Thread Simon Peyton-Jones
I've updated the Wiki to add your strict proposal, but rather briefly.
If you want to add stuff, send it to me and I'll add it.

Meanwhile:

| And as a consequence, it is no longer possible to transform a pair of
| bindings into a binding of a pair. In Haskell 98,
| 
| p1 = e1
| p2 = e2
| 
| is always equivalent to
| 
| (~p1, ~p2) = (e1,e2)

In your strict proposal, I'm sure you hope that the above pair would be
equivalent to
(p1,p2) = (e1,e2)
which would be even nicer.

But sadly I don't think it is, because that'd change the strongly
connected component structure.  Somehow that smells wrong.

Simon


Re: Bang patterns, ~ patterns, and lazy let

2006-02-08 Thread John Hughes

Simon Peyton-Jones wrote:


| The trouble with those parts is that NOWHERE do they discuss how to
| translate a let or where containing more than one binding. If they're
| not to be translated via tupling, then how are they to be translated?

Sorry I wasn't clear.  Given 
	let { p1 = e1; ... ; pn = en } in e0


(P1)  For each pattern pi that is of form !qi = ei, transform it to
xi@qi = ei
and replace e0 by (xi `seq` e0)

(P2)  Now no pattern has a ! at the top.  Now apply the existing rules
in
3.12 of the Haskell report.


So step (P1) above adds some seqs, and after that it's all just standard
Haskell 98.


My summary so far:
 


Good summary.


1) Bang patterns by themselves are quite decent, well-behaved patterns.

2) Rule (P1) is simple to describe.   But the ! in a pattern binding is
treated as part of the *binding* rather than part of the *pattern* which
is wart-y.
 

And as a consequence, it is no longer possible to transform a pair of 
bindings into a binding of a pair. In Haskell 98,


   p1 = e1
   p2 = e2

is always equivalent to

   (~p1, ~p2) = (e1,e2)

and you can make this change *locally*, without consideration of the 
body of the let in which the bindings appear. With ! bindings (let's use 
a different name from ! patterns, because they are not the same thing), 
there's no way to rewrite


   !p1 = e1
   !p2 = e2

as a single tuple binding, because there's nowhere you can put the ! 
that will have the same effect. Thus we lose a law from the algebra of 
bindings, which is part of the reason why this is warty.


John


RE: Bang patterns, ~ patterns, and lazy let

2006-02-08 Thread Simon Peyton-Jones
| The trouble with those parts is that NOWHERE do they discuss how to
| translate a let or where containing more than one binding. If they're
| not to be translated via tupling, then how are they to be translated?

Sorry I wasn't clear.  Given 
let { p1 = e1; ... ; pn = en } in e0

(P1)  For each pattern pi that is of form !qi = ei, transform it to
xi@qi = ei
and replace e0 by (xi `seq` e0)

(P2)  Now no pattern has a ! at the top.  Now apply the existing rules
in
3.12 of the Haskell report.


So step (P1) above adds some seqs, and after that it's all just standard
Haskell 98.
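
A worked instance of (P1)/(P2) (the binding and the fresh name x1 below
are illustrative, not from the message):

    let { !(a:as) = f 3; b = g 4 } in a + b

(P1) turns the banged binding into an as-pattern binding on a fresh
variable and seqs that variable in the body:

    let { x1@(a:as) = f 3; b = g 4 } in x1 `seq` (a + b)

(P2) then translates these ordinary, bang-free bindings exactly as in
3.12 of the report; forcing x1 forces the match of (a:as) against f 3
as soon as the body is demanded, while b stays lazy as before.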


My summary so far:

1) Bang patterns by themselves are quite decent, well-behaved patterns.

2) Rule (P1) is simple to describe.   But the ! in a pattern binding is
treated as part of the *binding* rather than part of the *pattern* which
is wart-y.

3) There is a good argument to be made that pattern bindings should be
strict by default.  That is
let (x,y) = e in b
would evaluate e strictly.  However that is *not* the same as saying
that 'let' is strict.  
let x = e in b
remains a lazy binding of x (because, as usual, a variable pattern
matches without evaluation).

4)  John argues that it would be bad to adopt bang patterns without also
adopting (3).  I don't agree.  But I'm still attracted by (3).
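
A small illustration of (3), with an example of my own: under that
reading

    let (x,y) = undefined in True

would be _|_, because the pattern binding evaluates its right-hand side,
whereas

    let x = undefined in True

would still be True, since a variable binding stays lazy.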
 

I will add some of this to the Wiki.  Please do not treat it as "my"
page --- any committee member can edit it.

Simon


RE: Bang patterns, ~ patterns, and lazy let

2006-02-07 Thread John Hughes

From: Simon Peyton-Jones
To: John Hughes ; haskell-prime@haskell.org
Sent: Tuesday, February 07, 2006 11:37 PM
Subject: RE: Bang patterns, ~ patterns, and lazy let


Applying the rules on the wiki, the first step is to translate the first 
expression into a tuple binding, omitting the implicit ~:




Not so! I changed it a few days ago after talking to Ben, to a simpler 
form that works nicely for recursive bindings too. Darn, I forgot to 
change the rules at the bottom.




Anyway, read the section “Let and where bindings”. Sorry about the rules 
at the end.




Simon

The trouble with those parts is that NOWHERE do they discuss how to 
translate a let or where containing more than one binding. If they're 
not to be translated via tupling, then how are they to be translated? 
The only relevant thing I could find was in the "modifications to the 
report" section at the bottom, which just tells you to omit the implicit ~ 
when applying the tupling rules in the report.




So I don't understand how the semantics of multiple bindings is supposed 
to be defined (and I admit my proposal isn't so nice either). But more 
and more complex translations make me very nervous!




I have a feeling there could be a nice direct semantics, though, 
including both ! and ~ in a natural way. Let's see now.




Let environments be (unlifted) functions from identifiers to values, 
mapping unbound identifiers to _|_ for simplicity. The semantics of 
patterns is given by




P[[pat]] :: Value -> Maybe Env



The result is Just env if matching succeeds, Nothing if matching fails, 
and _|_ if matching loops.




Two important clauses:



P[[! pat]] v = _|_         if v = _|_
             = P[[pat]] v  otherwise

P[[~ pat]] v = Just _|_    if P[[pat]] v <= Nothing
             = P[[pat]] v  otherwise
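
A few worked instances of these clauses (my own, with {x |-> v, ...}
just ad hoc notation for the environment built by the match):

P[[! (z:zs)]] _|_   = _|_
P[[  (z:zs)]] []    = Nothing
P[[~ (z:zs)]] []    = Just _|_                 -- success, z and zs both _|_
P[[~ (z:zs)]] (1:t) = Just {z |-> 1, zs |-> t}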



In definitions, pattern matching failure is the same as looping, so we 
define




P'[[pat]] v = _|_         if P[[pat]] v = Nothing
            = P[[pat]] v  otherwise



We do need to distinguish, though, between _|_ (match failure or 
looping), and Just _|_ (success, binding all variables to _|_).




The semantics of a definition in an environment is



D[[pat = exp]]env = P'[[pat]] (E[[exp]]env) (*)



where E is the semantics of expressions. Note that this takes care of 
both ! and ~ on the top level of a pattern.




Multiple definitions are interpreted by



D[[d1 ; d2]]env = D[[d1]]env (+) D[[d2]]env



where (+) is defined by



_|_ (+) _ = _|_

Just env (+) _|_ = _|_

Just env (+) Just env' = Just (env |_| env')



Note that (+) is associative and commutative.



Let's introduce an explicit marker for recursive declarations:



D[[rec defs]]env = fix menv'. D[[defs]](env |_| fromJust menv')



Notice:



This ignores the possibility of local variables shadowing variables from 
outer scopes.




*Within defs* it makes no difference whether menv' is _|_ (matching 
fails or loops), or Just _|_ (succeeds with variables bound to _|_)




If defs are not actually recursive, then D[[rec defs]]env = D[[defs]]env.
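
As a concrete recursive instance (my own, assuming xs is not bound in
the enclosing env):

   D[[rec { xs = 0:xs }]]env
     = fix menv'. Just {xs |-> 0 : (fromJust menv') xs}
     = Just {xs |-> 0:0:0:...}

(and with the let rule below, E[[let xs = 0:xs in head xs]]env = 0).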



Now let expressions are defined by



E[[let defs in exp]]env = E[[exp]](env |_| D[[rec defs]]env)



(this also ignores the possibility of local definitions shadowing 
variables from an outer scope).




Too late at night to do it now, but I have the feeling that it should 
not be hard now to prove that




E[[let defs1 in let defs2 in exp]]env = E[[let defs1; defs2 in exp]]env



under suitable assumptions on free variable occurrences. That implies, 
together with commutativity and associativity of (+), that the division 
of declaration groups into strongly connected components does not affect 
semantics.




I like this way of giving semantics--at least I know what it means! But 
it does demand, really, that matching in declarations is strict by 
default. Otherwise I suppose, if one doesn't care about 
compositionality, one could replace definition (*) above by




D[[!pat = exp]]env = P'[[pat]](E[[exp]]env)

D[[pat = exp]]env = P'[[~pat]](E[[exp]]env), otherwise



But this really sucks big-time, doesn't it?



John



RE: Bang patterns, ~ patterns, and lazy let

2006-02-07 Thread Simon Peyton-Jones

Applying the rules on the wiki, the first step is to
translate the first expression into a tuple binding, omitting the implicit ~:

 

Not so!  I changed it a few days ago
after talking to Ben, to a simpler form that works nicely for recursive
bindings too.  Darn – I forgot to change the rules at the bottom.

 

Anyway, read the section “Let and
where bindings”. Sorry about the rules at the end.

 

Simon


Re: Bang patterns, ~ patterns, and lazy let

2006-02-07 Thread John Hughes

From: Ben Rudiak-Gould <[EMAIL PROTECTED]>
Subject: Re: Bang patterns, ~ patterns, and lazy let


   It's also not the case that !x has the same meaning in both
   proposals, e.g.

      let { !x = y ; !y = const 'a' x } in x

   means 'a' in the current proposal but _|_ in yours.

Aargh, you're right, it does mean _|_ in mine! That's not very nice.

But wait, I'm not sure about

      let { !x = const undefined y ; !y = const 'a' x } in y

   desugars in the current proposal to

      let { x = const undefined y ; y = const 'a' x } in x `seq` y `seq` y

   which is _|_, but absent implicit ~,

      let { x = const undefined y ; y = const 'a' x } in y

   had better (and does) mean 'a'.

Applying the rules on the wiki, the first step is to translate the
first expression into a tuple binding, omitting the implicit ~:

    let (x,y) = (const undefined y, const 'a' x) in y

This desugars to

    let (x,y) = fix (\ ~(x,y)->(const undefined y, const 'a' x)) in y

which evaluates to 'a'. In other words, despite the ! on x, the current
proposal is not strict in x.

Maybe the intention was that !s on the lhs of let bindings should be
transferred to the corresponding patterns when a tuple pattern is
introduced? Let's try it: then the example desugars by pattern tupling
to

    let (!x, !y) = (const undefined y, const 'a' x) in y

Now we can introduce fix:

    let (!x, !y) = fix (\ ~(!x, !y) -> (const undefined y, const 'a' x)) in y

and finally case:

    case fix (\~(!x,!y) -> (const undefined y, const 'a' x)) of ~(!x, !y) -> y

and this is consistent with what you said above.

But if I return to your first example, and do the same thing, I get

    let !x = y; !y = const 'a' x in x

desugars by tupling to

    let (!x, !y) = (y, const 'a' x) in x

which desugars by fix and case introduction to

    case fix (\ ~(!x, !y) -> (y, const 'a' x)) of ~(!x, !y) -> x

The first approximation to the fixed point is _|_, so the second is
(_|_, 'a'). Now, when ~(!x,!y) is matched against (_|_,'a') then *both*
variables are bound to _|_ --- the effect of the ~ is just to delay
matching (!x,!y) until one of the variables is used, but as soon as y,
say, *is* used, then the match is performed and, of course, it loops.
Thus (_|_, 'a') is the fixed point. For the same reason, x and y are
both bound to _|_ in the body of the case, and so the entire expression
evaluates to _|_, not 'a' as you claimed.

Bottom line: I can't find a way to interpret the translation rules in
the Haskell report, modified as the Wiki page suggests, to produce the
results you expect in both cases.

But maybe the fault is in the translation rules in the Haskell report.
It was always rather tricky to explain a group of recursive bindings in
Haskell in terms of a single tuple binding, because Haskell tuples are
lifted. I see that you have a more direct understanding of what ! is
supposed to mean. Is it possible, I wonder, to give a direct
denotational semantics to a declaration group--say mapping environments
to environments--in which there is only one case for ! (its natural
semantics in patterns)? Such a semantics should have the property that

    let x1 = e1; x2 = e2 in e0   ===   let x1 = e1 in let x2 = e2 in e0

provided x2 does not occur in e1. Finding a simple and compositional
denotational semantics with these properties, and proving the law
above, would be a good way to show that ! patterns do NOT introduce
semantic warts---and would probably also suggest that the
semantics-by-translation used in the report is fundamentally flawed. We
did construct denotational semantics of fragments of Haskell as part of
the original design, and it had quite an impact on the result--I
recommend it as a way of debugging ideas!

John




Re: Bang patterns, ~ patterns, and lazy let

2006-02-07 Thread Ben Rudiak-Gould

John Hughes wrote:

* ! on the left hand side of a let or where *has a different meaning to !
in a pattern* -- it means that the ~ that would have been implicitly 
inserted by the previous rule, is not inserted after all!


I wish it were that simple, but I don't think it is.

let { !x = const undefined y ; !y = const 'a' x } in y

desugars in the current proposal to

let { x = const undefined y ; y = const 'a' x } in x `seq` y `seq` y

which is _|_, but absent implicit ~,

let { x = const undefined y ; y = const 'a' x } in y

had better (and does) mean 'a'. It's also not the case that !x has the same 
meaning in both proposals, e.g.


let { !x = y ; !y = const 'a' x } in x

means 'a' in the current proposal but _|_ in yours.
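
(To spell out both claims, as a sketch using the same desugaring style
as above: in the current proposal the example desugars to

   let { x = y ; y = const 'a' x } in x `seq` y `seq` x

and y evaluates to 'a' without touching x, so x is 'a' too and the whole
expression is 'a'.  Under John's strict-matching proposal, the match of
the banged x against y has to succeed inside the recursive knot, but the
first approximation has y = _|_, so the group, and with it the
expression, is _|_.)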

My experience is that "top-level" strictness information has a very 
different nature from "nested" strictness information, and it's not 
generally possible to find a single interpretation that covers both. The 
reason is that strictness is a relationship between a value and its 
continuation (i.e. context). Nested strictness annotations connect data to a 
datatype context; top-level strictness annotations in this case connect data 
to either a case context or a let context. Each of the three situations has 
to be considered separately.



This is not the same as banging the pattern with the implicit ~, because
as I remarked above, !~p is not the same as p.


Actually if ! patterns were handled consistently in let they would come out 
as ~!p = ~p, so the ! would have no effect. The current proposal effectively 
borrows the ! notation, which would otherwise be useless, for a different 
purpose in this case.


-- Ben



Re: Bang patterns, ~ patterns, and lazy let

2006-02-07 Thread John Hughes


   From: Ross Paterson <[EMAIL PROTECTED]>
   John Hughes wrote:

I would urge that either we stick with the present design, or, if bang 
patterns are added (which a lot speaks for), that the language be 
simplified at the same time so that patterns are matched in the same way 
everywhere, and BOTH warts above are excised. Some existing code would 
break, but in return the language would become simpler and more expressive.
   


   Would top-level bindings of constructor patterns and !x be evaluated
   when the module was loaded (or at compile time)?


Yes. Nothing else makes sense, does it? If that's problematic (although 
I can't see why it would be), just forbid strict patterns at the top 
level of modules.


Load time rather than compile-time, I think--otherwise the compiled code 
for a module could depend on the *code* of modules it imports, not just 
on their interfaces, which would be harmful for separate compilation.


John



Re: Bang patterns, ~ patterns, and lazy let

2006-02-07 Thread Ross Paterson
On Tue, Feb 07, 2006 at 09:02:36AM +0100, John Hughes wrote:
> I would urge that either we stick with the present design, or, if bang 
> patterns are added (which a lot speaks for), that the language be 
> simplified at the same time so that patterns are matched in the same way 
> everywhere, and BOTH warts above are excised. Some existing code would 
> break, but in return the language would become simpler and more expressive.

Would top-level bindings of constructor patterns and !x be evaluated when
the module was loaded (or at compile time)?
