Re: [Haskell-cafe] Compiler's bane

2008-09-04 Thread Ryan Ingram
On Thu, Sep 4, 2008 at 10:41 AM, Andrew Coppin
<[EMAIL PROTECTED]> wrote:
> I love the way other people have wildly different ideas of "simple" than me.
> I'm staring at this and completely failing to comprehend it. (But then,
> anything with "co" in the name generally makes little sense to me...) Why on
> earth would you need a reader monad? Surely if you want to add bound
> variables and then later query what variables are bound, you'd want a state
> monad? Hmm, I'm completely lost here.

Other people have already covered the reader vs. state issue; but to
sum up, this is a state monad, but the state can only be mutated in
sub-expressions, not in future expressions.

In my quick implementation, the branch of arbExp that makes a
lambda-expression looks something like this:

> do
>-- get a new variable name
>v <- lift arbitrary
>-- construct a new expression that may use v as a variable
>e <- local (v:) arbExp
>-- return the new expression
>return (Lambda v e)

As to all the crazy "co" stuff, it's just an implementation detail for
QuickCheck.  It took me a while to figure out what was actually going on,
but the implementation is basically just boilerplate at this point.  A
simpler implementation is

> coarbitrary _ = id

A full explanation:

> coarbitrary :: Arbitrary a => a -> Gen b -> Gen b

Gen is a simple state monad that holds the random number generator
state and some additional QuickCheck data.  What "coarbitrary" is
supposed to do is to modify the state of the random number generator
based on the input data.  This allows QuickCheck to create automatic
Arbitrary instances for functions that take your type a as an argument;
that is, if you had a property

> prop_compose_assoc h g f x = (f . (g . h)) x == ((f . g) . h) x

ghci> quickCheck (prop_compose_assoc ::
(Int -> Expression) ->
(Expression -> Expression) ->
(Expression -> Int) ->
Int -> Bool)
... OK, passed 100 tests.

To do this, it needs to be able to generate a function of type
(Expression -> Expression), but at the time that function is
constructed all it has is the random number generator state at that
point.  So it creates a function that uses coarbitrary on the input
Expression to perturb that fixed random number generator state, and the
resulting state is then used to generate the output expression.  With
the "simple" implementation of coarbitrary above, QuickCheck will only
generate constant functions; that is, functions which return the same
(random) expression no matter what argument they are applied to.
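
For the curious, the machinery behind this is roughly the following (a
sketch from memory of QuickCheck 1.x; the real library source may differ
in detail):

> promote :: (a -> Gen b) -> Gen (a -> b)
>
> instance (Arbitrary a, Arbitrary b) => Arbitrary (a -> b) where
>   arbitrary = promote (\a -> coarbitrary a arbitrary)

That is, the generated function, when applied to some argument a,
perturbs the fixed generator state with "coarbitrary a" and then draws
its result from the perturbed generator.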

The internals of doing so are kind of ugly, but thankfully you are
free to ignore that and build your coarbitrary out of a couple of
simple building blocks:

> variant :: Int -> Gen b -> Gen b
> coarbitrary :: Arbitrary a => a -> Gen b -> Gen b

But wait, aren't you supposed to be defining coarbitrary?  Well, as
long as you use coarbitrary on smaller data structures, you're fine.
The strategy for finite data structures is really simple:

- Each constructor uses "variant" with a number representing that constructor.
- Each data inside a constructor uses coarbitrary recursively.

The reason I included the 'coarbitrary' code in my post is that it is
the more complicated part of the instance to understand and write
before you "get it"; understanding how to write "arbitrary" is the
important part.

  -- ryan

> Ryan Ingram wrote:
>>
>> It's pretty simple, I think.
>>
>> type ExpGen = ReaderT [String] Gen
>>
>> arbExp :: ExpGen Expression
>> -- exercise for the reader
>>
>> instance Arbitrary Expression where
>>arbitrary = runReaderT arbExp []
>>coarbitrary = coarbExp
>>
>> coarbExp (Var s)  = variant 0 . coarbitrary s
>> coarbExp (Apply a b)  = variant 1 . coarbitrary a . coarbitrary b
>> coarbExp (Lambda s e) = variant 2 . coarbitrary s . coarbitrary e
>>
>> instance Arbitrary Char where
>>  arbitrary   = elements "abcdefghijklmnopqrstuvwxyz_"
>>  coarbitrary = coarbitrary . fromEnum
>>


Re: [Haskell-cafe] Compiler's bane

2008-09-04 Thread Brandon S. Allbery KF8NH

On Sep 4, 2008, at 13:41 , Andrew Coppin wrote:

Ryan Ingram wrote:

It's pretty simple, I think.

type ExpGen = ReaderT [String] Gen

arbExp :: ExpGen Expression
-- exercise for the reader

instance Arbitrary Expression where
   arbitrary = runReaderT arbExp []
   coarbitrary = coarbExp

coarbExp (Var s)  = variant 0 . coarbitrary s
coarbExp (Apply a b)  = variant 1 . coarbitrary a . coarbitrary b
coarbExp (Lambda s e) = variant 2 . coarbitrary s . coarbitrary e

instance Arbitrary Char where
 arbitrary   = elements "abcdefghijklmnopqrstuvwxyz_"
 coarbitrary = coarbitrary . fromEnum



o_O

I love the way other people have wildly different ideas of "simple"  
than me. I'm staring at this and completely failing to comprehend  
it. (But then, anything with "co" in the name generally makes little  
sense to me...) Why on earth would you need a reader monad? Surely  
if you want to add bound variables and then later query what  
variables are bound, you'd want a state monad? Hmm, I'm completely  
lost here.


Reader, in this case, plays the role of a State monad with the addition  
of scopes: to run a sub-computation in a new nested scope derived from  
the current scope r, you use "local f", where f builds the new scope  
from r.  The [String] is a list of variable names, if this is doing  
what I think it is.


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon university    KF8NH




Re: [Haskell-cafe] Compiler's bane

2008-09-04 Thread Jonathan Cast
On Thu, 2008-09-04 at 18:41 +0100, Andrew Coppin wrote:
> Ryan Ingram wrote:
> > It's pretty simple, I think.
> >
> > type ExpGen = ReaderT [String] Gen
> >
> > arbExp :: ExpGen Expression
> > -- exercise for the reader
> >
> > instance Arbitrary Expression where
> > arbitrary = runReaderT arbExp []
> > coarbitrary = coarbExp
> >
> > coarbExp (Var s)  = variant 0 . coarbitrary s
> > coarbExp (Apply a b)  = variant 1 . coarbitrary a . coarbitrary b
> > coarbExp (Lambda s e) = variant 2 . coarbitrary s . coarbitrary e
> >
> > instance Arbitrary Char where
> >   arbitrary   = elements "abcdefghijklmnopqrstuvwxyz_"
> >   coarbitrary = coarbitrary . fromEnum
> >   
> 
> o_O
> 
> I love the way other people have wildly different ideas of "simple" than 
> me. I'm staring at this and completely failing to comprehend it. (But 
> then, anything with "co" in the name generally makes little sense to 
> me...) Why on earth would you need a reader monad? Surely if you want to 
> add bound variables and then later query what variables are bound, you'd 
> want a state monad? Hmm, I'm completely lost here.

Motto (off-the-cuff): State monads are for APIs with function names like
`set'; reader monads are for APIs with function names like `with'.  In
this case, you definitely do not want to bring names into scope with

  bringNameIntoScope :: Name -> ExpGen ()

because then you'd just have to implement

  bringNameOutofScope :: Name -> ExpGen ()

and remember to call it after you've generated the body of the lambda,
except that you have to check before bringing the name into scope
whether it's already in scope and if so make sure it's still in scope
afterwards.

A simpler alternative is to pull out the entire scope, using get, save
it off, modify the state, and then put it back later:

  withNameInScope :: Name -> ExpGen alpha -> ExpGen alpha
  withNameInScope name a = do
    scope <- get
    modify (name:)
    x <- a
    put scope
    return x

But by adopting that API, you're suggesting the use of a reader monad to
implement scoping, since withNameInScope would then be just

  withNameInScope name = local (name:)

jcc

http://haskell.org/ghc/docs/latest/html/libraries/mtl/Control-Monad-Reader-Class.html#v%3Alocal




Re: [Haskell-cafe] Compiler's bane

2008-09-04 Thread Andrew Coppin

Ryan Ingram wrote:

It's pretty simple, I think.

type ExpGen = ReaderT [String] Gen

arbExp :: ExpGen Expression
-- exercise for the reader

instance Arbitrary Expression where
arbitrary = runReaderT arbExp []
coarbitrary = coarbExp

coarbExp (Var s)  = variant 0 . coarbitrary s
coarbExp (Apply a b)  = variant 1 . coarbitrary a . coarbitrary b
coarbExp (Lambda s e) = variant 2 . coarbitrary s . coarbitrary e

instance Arbitrary Char where
  arbitrary   = elements "abcdefghijklmnopqrstuvwxyz_"
  coarbitrary = coarbitrary . fromEnum
  


o_O

I love the way other people have wildly different ideas of "simple" than 
me. I'm staring at this and completely failing to comprehend it. (But 
then, anything with "co" in the name generally makes little sense to 
me...) Why on earth would you need a reader monad? Surely if you want to 
add bound variables and then later query what variables are bound, you'd 
want a state monad? Hmm, I'm completely lost here.





Re: [Haskell-cafe] Compiler's bane

2008-09-04 Thread Andrew Coppin

Brandon S. Allbery KF8NH wrote:

On 2008 Sep 3, at 14:34, Andrew Coppin wrote:
The amusing (?) part is that the interpreter itself is essentially 
quite simple. I've implemented it several times before now. But what 
I'm trying to do is make it print out elaborately formatted execution 
traces so that a human user can observe how execution proceeds. This 
transforms an essentially simple algorithm into something quite 
nauseatingly complex, with many subtle bugs and issues.


This seems odd to me:  I would expect to wrap a WriterT around it, log 
unformatted actions there, and dump it to a file at the end to be read 
by an analyzer tool which can optionally reformat the log to be 
human-readable.


Well actually, the interpreter itself takes an expression and returns a 
large, complex data structure representing the reduction sequence. Then 
a second function takes the reduction sequence and transforms it into a 
target-neutral markup structure. Finally, one of several rendering 
backends transforms this into something displayable - text, HTML, LaTeX, 
whatever. There are no monads involved. Anywhere.


Actually, that's not completely true. The interpreter takes a set of 
transformation rules as an argument, and attempts to apply each rule in 
sequence, and then recursively over all subexpressions. The logic for 
this makes extensive use of the Maybe monad. (I spent ages looking for a 
function Maybe x -> Maybe x -> Maybe x that would keep Just and throw 
away Nothing, rather than (>>=) which does the opposite. Eventually I 
discovered that this is actually mplus. It wasn't easy...)
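
(For reference, the Maybe instance behaves like this:

  mplus (Just x) _ = Just x
  mplus Nothing  y = y

so "a `mplus` b" keeps the first Just it finds.)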


The basic interpreter is just one function that does a little pattern 
matching. It's one module of a few dozen lines. But as soon as you want 
to *record* the transformations applied, suddenly everything gets very 
much more complicated. And when you start wanting to have many possible 
rules to apply, and turning individual rules on and off, and applying 
rules only to certain parts of the expression... it all starts to get 
very complicated.


I've spent about 3 hours this afternoon trying to get my renamer to 
work. I *had* a working renamer, but then I discovered that 
locally-unique names are insufficient. You must have _globally_ unique 
names. This is much harder; you cannot now rename each subexpression 
independently. You must put the whole algorithm into a state monad. The 
algorithm is maddeningly subtle to get right. It's still not working 
properly. It *almost* works, but not quite. I think I know how to fix it 
- we'll see tomorrow - but isn't it just typical that an "uninteresting" 
part of the program turns out to be harder than the interesting bits?
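
For comparison, here is a minimal sketch of one way to do globally-unique
renaming in the State monad. This is not Andrew's code: it uses the small
Expression type from the QuickCheck discussion elsewhere in the thread
(his real type with let-binds would need more cases), and a plain counter
as the fresh-name supply.

import Control.Monad.State
import Data.Maybe (fromMaybe)

data Expression = Var String | Apply Expression Expression
                | Lambda String Expression
  deriving Show

type Rename a = State Int a

-- draw a fresh, globally-unique name based on the old one
fresh :: String -> Rename String
fresh s = do
    n <- get
    put (n + 1)
    return (s ++ "_" ++ show n)

-- env maps old binder names to their new, unique names
rename :: [(String, String)] -> Expression -> Rename Expression
rename env (Var s)      = return (Var (fromMaybe s (lookup s env)))
rename env (Apply a b)  = do
    a' <- rename env a
    b' <- rename env b
    return (Apply a' b')
rename env (Lambda s e) = do
    s' <- fresh s
    e' <- rename ((s, s') : env) e
    return (Lambda s' e')

-- rename a whole expression, starting from an empty environment
renameExpression :: Expression -> Expression
renameExpression e = evalState (rename [] e) 0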


My head hurts...


Re: [Haskell-cafe] Compiler's bane

2008-09-03 Thread Brandon S. Allbery KF8NH

On 2008 Sep 3, at 14:34, Andrew Coppin wrote:

Brandon S. Allbery KF8NH wrote:
You can define a set of valid transformations, have the interpreter  
log each transformation, and verify that all are correct (that is,  
that both the transformation and the logged result are correct).  


This assumes the interpreter can be resolved down to a sufficiently  
simple set of transformations; if not, you're right back at having  
the tester being the interpreter itself.  Note that you don't check  
if the transformation plan for the program matches a specified  
list, just that all transformations are correct.  (Just remember  
that "logic is an organized way of going wrong with confidence.")


The amusing (?) part is that the interpreter itself is essentially  
quite simple. I've implemented it several times before now. But what  
I'm trying to do is make it print out elaborately formatted  
execution traces so that a human user can observe how execution  
proceeds. This transforms an essentially simple algorithm into  
something quite nauseatingly complex, with many subtle bugs and  
issues.


This seems odd to me:  I would expect to wrap a WriterT around it, log  
unformatted actions there, and dump it to a file at the end to be read  
by an analyzer tool which can optionally reformat the log to be  
human-readable.
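
A minimal sketch of that shape (the names Step, logStep and Eval are
invented for illustration, not taken from any real code in this thread):

import Control.Monad.Writer

data Expression = Var String | Apply Expression Expression
                | Lambda String Expression
  deriving Show

-- one raw, unformatted log entry per reduction step
data Step = Step { ruleName :: String
                 , beforeE  :: Expression
                 , afterE   :: Expression }
  deriving Show

type Eval a = Writer [Step] a

logStep :: String -> Expression -> Expression -> Eval ()
logStep r b a = tell [Step r b a]

The interpreter proper only ever calls logStep; a separate analyzer tool
can pretty-print the [Step] log however it likes (text, HTML, LaTeX, ...).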


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon university    KF8NH




Re: [Haskell-cafe] Compiler's bane

2008-09-03 Thread Ryan Ingram
On Wed, Sep 3, 2008 at 11:34 AM, Andrew Coppin
<[EMAIL PROTECTED]> wrote:
>  data Expression = Var String | Apply Expression Expression | Lambda String
> Expression
>
> How would you go about building random expression trees?

It's pretty simple, I think.  Keep an environment of variable names
that are in enclosing lambdas, then randomly choose Apply/Lambda/Var,
ignoring Var if the environment is empty, and with Lambda adding to
the environment of its subexpression.

I suggest

type ExpGen = ReaderT [String] Gen

arbExp :: ExpGen Expression
-- exercise for the reader

instance Arbitrary Expression where
arbitrary = runReaderT arbExp []
coarbitrary = coarbExp

coarbExp (Var s)  = variant 0 . coarbitrary s
coarbExp (Apply a b)  = variant 1 . coarbitrary a . coarbitrary b
coarbExp (Lambda s e) = variant 2 . coarbitrary s . coarbitrary e

-- Also, apparently there is no default Arbitrary instance for strings, so...
instance Arbitrary Char where
  arbitrary   = elements "abcdefghijklmnopqrstuvwxyz_"
  coarbitrary = coarbitrary . fromEnum
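
For concreteness, one possible shape for arbExp is sketched below. This is
not the "quick and dirty" version whose output follows; it reuses the
ExpGen, Expression and Arbitrary Char definitions above, the constructor
weights are made up, and it ignores the "sized" refinement mentioned at
the end of this message.

import Control.Monad (join)
import Control.Monad.Reader
import Test.QuickCheck

arbExp :: ExpGen Expression
arbExp = do
    env <- ask
    let choices
          | null env  = [lam, lam, app]            -- nothing in scope yet
          | otherwise = [var, var, lam, lam, app]  -- Var needs a bound name
    join (lift (elements choices))
  where
    var = do
        scope <- ask
        v     <- lift (elements scope)   -- reuse a name that is in scope
        return (Var v)
    lam = do
        v <- lift arbitrary              -- a random name for the binder
        e <- local (v:) arbExp           -- v is in scope inside the body
        return (Lambda v e)
    app = do
        a <- arbExp
        b <- arbExp
        return (Apply a b)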


Here's some examples I got with a quick and dirty implementation for arbExp:
Lambda "" (Lambda "" (Var ""))
Lambda "jae" (Lambda "iq" (Var "jae"))
Lambda "sj" (Lambda "" (Var ""))
Lambda "n" (Var "n")
Lambda "lxy" (Lambda "md_fy" (Lambda "b" (Var "b")))
Lambda "" (Apply (Lambda "" (Var "")) (Lambda "" (Lambda "" (Var ""
Lambda "vve" (Lambda "mvy" (Var "vve"))
Lambda "" (Apply (Apply (Var "") (Lambda "km" (Var "km"))) (Var ""))
Lambda "" (Lambda "_" (Var ""))
Lambda "aeq" (Var "aeq")
Lambda "l_" (Apply (Var "l_") (Lambda "" (Var "l_")))

My implementation doesn't choose Apply nearly enough, but I was
worried about exponential blowup; the solution is to use the "sized"
combinator from QuickCheck and have the size determine the number of
Apply you are allowed to have.
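
Concretely, that refinement would look something like this (arbExpSized
being a hypothetical size-indexed variant of arbExp that only picks Apply
while the size is still positive, splitting the remaining size between
the two operands):

instance Arbitrary Expression where
  arbitrary   = sized (\n -> runReaderT (arbExpSized n) [])
  coarbitrary = coarbExp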

It's also probably a good idea to not allow empty strings for variable names :)

  -- ryan


Re: [Haskell-cafe] Compiler's bane

2008-09-03 Thread Andrew Coppin

Brandon S. Allbery KF8NH wrote:

On 2008 Sep 1, at 14:47, Andrew Coppin wrote:
I wonder - how do the GHC developers check that GHC works properly? 
(I guess by compiling stuff and running it... It's a bit harder to 
check that a lambda interpreter is working right.)


GHC has a comprehensive test suite (not included in the source tarball 
or the default checkout but easily checked out atop GHC).


I'm sure a large, complex product like GHC would have a large test 
suite. I was asking how it works. ;-)


Since GHC actually transforms Haskell to machine code in several stages, 
I presume each one can be checked independently, which probably makes 
things easier. But I bet the GHC developers don't have any way to just 
automatically build 1,000 random test cases and check that the compiler 
"works" for those...




Re: [Haskell-cafe] Compiler's bane

2008-09-03 Thread Andrew Coppin

Brandon S. Allbery KF8NH wrote:
You can define a set of valid transformations, have the interpreter 
log each transformation, and verify that all are correct (that is, 
that both the transformation and the logged result are correct).


This assumes the interpreter can be resolved down to a sufficiently 
simple set of transformations; if not, you're right back at having the 
tester being the interpreter itself.  Note that you don't check if the 
transformation plan for the program matches a specified list, just 
that all transformations are correct.  (Just remember that "logic is 
an organized way of going wrong with confidence.")


The amusing (?) part is that the interpreter itself is essentially 
quite simple. I've implemented it several times before now. But what I'm 
trying to do is make it print out elaborately formatted execution traces 
so that a human user can observe how execution proceeds. This transforms 
an essentially simple algorithm into something quite nauseatingly 
complex, with many subtle bugs and issues.


Still, I guess it's not news to anybody that proof-of-concept programs 
are much easier than real-world implementations.


One thing I could do is have QuickCheck build arbitrary expressions, run 
those through the pretty printer, and then run the result back through 
the parser and check that I get the same expression. Can anybody tell me 
how to get QuickCheck to build arbitrary expressions though? Let's say I had


 data Expression = Var String | Apply Expression Expression | Lambda 
String Expression


How would you go about building random expression trees?
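
The round-trip check itself would then be a one-line property along these
lines (prettyPrint and parseExpression are placeholder names for whatever
the real functions end up being called, and Expression is assumed to
derive Eq):

prop_roundTrip :: Expression -> Bool
prop_roundTrip e = parseExpression (prettyPrint e) == Just e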




Re: [Haskell-cafe] Compiler's bane

2008-09-01 Thread Brandon S. Allbery KF8NH

On 2008 Sep 1, at 20:06, Tillmann Rendel wrote:

Andrew Coppin wrote:

Any hints? Just how *do* you check something large like this?


You could write a lot of test cases, calculating the correct answers  
by hand.


You could check that during evaluation, you always have well-formed  
terms (e.g. evaluation does not introduce new free variables). You  
could even check that after evaluation stops, you have indeed a  
normal form.


You could try to check the components of your interpreter (e.g.  
environment lookup, term substitution, simplification) independently.



You can define a set of valid transformations, have the interpreter  
log each transformation, and verify that all are correct (that is,  
that both the transformation and the logged result are correct).  


This assumes the interpreter can be resolved down to a sufficiently  
simple set of transformations; if not, you're right back at having the  
tester being the interpreter itself.  Note that you don't check if the  
transformation plan for the program matches a specified list, just  
that all transformations are correct.  (Just remember that "logic is  
an organized way of going wrong with confidence.")


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon university    KF8NH




Re: [Haskell-cafe] Compiler's bane

2008-09-01 Thread Tillmann Rendel

Andrew Coppin wrote:
Here is a *much* bigger problem: How do you check that an interpreter 
is correct??


Before checking for correctness, you have to define correctness. What is 
the specification of your interpreter?


You can't very easily write a QuickCheck property that will generate 
every possible valid expression and then check that the output of the 
interpreter is formally equivalent. The QuickCheck property would be a 
second copy of the interpreter, which proves nothing!


Well, if you have two *different* interpreters, that would at least 
reassure you somewhat. Consider a naive substitution-based interpreter 
and a clever environment-based interpreter. The former is often easier 
to write and easier to get correct, and the latter is often what you 
want in the end.


Since you seem to be implementing a standard language, you could use an 
existing implementation to check against. Many lambda-calculus-style 
languages can easily be transformed into Scheme, for example.



Any hints? Just how *do* you check something large like this?


You could write a lot of test cases, calculating the correct answers by 
hand.


You could check that during evaluation, you always have well-formed terms 
(e.g. evaluation does not introduce new free variables). You could even 
check that after evaluation stops, you have indeed a normal form.


You could try to check the components of your interpreter (e.g. 
environment lookup, term substitution, simplification) independently.


  Tillmann


Re: [Haskell-cafe] Compiler's bane

2008-09-01 Thread Brandon S. Allbery KF8NH

On 2008 Sep 1, at 14:47, Andrew Coppin wrote:
I wonder - how do the GHC developers check that GHC works properly?  
(I guess by compiling stuff and running it... It's a bit harder to  
check that a lambda interpreter is working right.)



GHC has a comprehensive test suite (not included in the source tarball  
or the default checkout but easily checked out atop GHC).


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon university    KF8NH




Re: [Haskell-cafe] Compiler's bane

2008-09-01 Thread Andrew Coppin
Here is a *much* bigger problem: How do you check that an interpreter 
is correct??


You can't very easily write a QuickCheck property that will generate 
every possible valid expression and then check that the output of the 
interpreter is formally equivalent. The QuickCheck property would be a 
second copy of the interpreter, which proves nothing!


Any hints? Just how *do* you check something large like this?

I wonder - how do the GHC developers check that GHC works properly? (I 
guess by compiling stuff and running it... It's a bit harder to check 
that a lambda interpreter is working right.)




Re: [Haskell-cafe] Compiler's bane

2008-09-01 Thread Andrew Coppin

Jeremy Apthorp wrote:

2008/9/1 Andrew Coppin <[EMAIL PROTECTED]>:
  

Trouble is, as soon as you allow let-bindings, some clever person is going
to start writing recursive ones. And actually, that's a useful thing to be
able to do, but it makes figuring out the technical details... rather
nontrivial. (Seriously, I had no idea I was going to get into this much
trouble!)



I'm confused -- why is this different to having recursive top-level bindings?
  


It isn't different - and if my interpreter ever has top-level bindings, 
it'll introduce all the same problems. :-S




Re: [Haskell-cafe] Compiler's bane

2008-08-31 Thread Jeremy Apthorp
2008/9/1 Andrew Coppin <[EMAIL PROTECTED]>:
> Ryan Ingram wrote:
>>
>> What are you trying to get from the "let" binding?  Sharing?
>>
>
> Convenience.
>
>  let x = foo in bar
>
> is so much easier to write than
>
>  (\x -> bar) foo
>
> when foo and/or bar is large.
>
> Trouble is, as soon as you allow let-bindings, some clever person is going
> to start writing recursive ones. And actually, that's a useful thing to be
> able to do, but it makes figuring out the technical details... rather
> nontrivial. (Seriously, I had no idea I was going to get into this much
> trouble!)

I'm confused -- why is this different to having recursive top-level bindings?

Jeremy


Re: [Haskell-cafe] Compiler's bane

2008-08-31 Thread Andrew Coppin

Ryan Ingram wrote:

What are you trying to get from the "let" binding?  Sharing?
  


Convenience.

 let x = foo in bar

is so much easier to write than

 (\x -> bar) foo

when foo and/or bar is large.

Trouble is, as soon as you allow let-bindings, some clever person is 
going to start writing recursive ones. And actually, that's a useful 
thing to be able to do, but it makes figuring out the technical 
details... rather nontrivial. (Seriously, I had no idea I was going to 
get into this much trouble!)

The usual idea is that "let" represents heap allocation, and you
evaluate the leftmost-outermost redex as usual, doing let substitution
only when necessary to continue evaluation, and garbage-collecting
bindings that no longer refer to variables in the current computation


Right. So ignore the let-bindings unless the redex of interest is a 
let-bound variable? That sounds reasonably easy...




Re: [Haskell-cafe] Compiler's bane

2008-08-31 Thread Ryan Ingram
On Sun, Aug 31, 2008 at 9:29 AM, Andrew Coppin
<[EMAIL PROTECTED]> wrote:
> All of this strongly suggests that if you execute things in the correct
> order, you can arrange it so that expressions that have a normal form will
> be evaluated to it. But how to decide the correct evaluation order? For the
> plain lambda calculus, IIRC you just have to execute the left-most,
> outer-most redex every time and it'll work. For a language with let-binds,
> it is unclear to me how to proceed...

What are you trying to get from the "let" binding?  Sharing?

The usual idea is that "let" represents heap allocation, and you
evaluate the leftmost-outermost redex as usual, doing let substitution
only when necessary to continue evaluation, and garbage-collecting
bindings that no longer refer to variables in the current computation.
 You also make lambda-expressions create let-bindings during
substitution to maximize sharing.

Here's an example of how an interpreter could work:

let fact_acc = \n x. (== n 0) x (fact_acc (- n 1) (* n x))
in fact_acc 3 1

The "top-level" redex is a let bind that is already in WHNF; if it
wasn't, you continue evaluation inside of it.  So instead you
substitute and lose sharing because there are no other options.

I'm going to leave all previous "lets" as "..." in this evaluation to
save typing:

(substitute because leftmost-outermost redex is a let-bind)
=> let ... in (\n x. (== n 0) x (fact_acc (- n 1) (* n x))) 3 1
(beta, beta)
=> let ...; n0 = 3; x0 = 1 in (== n0 0) x0 (fact_acc (- n0 1) (* n0 x0))
(primitive == is strict, n0 is already in normal form, evaluate ==)
=> let ... in False x0 (fact_acc (- n0 1) (* n0 x0))
(False x y = y)
=> let ... in fact_acc (- n0 1) (* n0 x0)
(substitute fact_acc, beta, beta)
=> let ...; n1 = (- n0 1); x1 = (* n0 x0) in (== n1 0) x1 (fact_acc (-
n1 1) (* n1 x1))
(primitive == is strict, evaluate its argument.
 primitive - is strict, argument is in whnf, evaluate -)
(This is a very important step; note that I am now evaluating inside
of a let bind!)
=> let ...; n1 = 2; ... in (== n1 0) x1 (fact_acc (- n1 1) (* n1 x1))
(evaluate ==)
=> let ... in False x1 (fact_acc (- n1 1) (* n1 x1))
(False x y = y)
=> let ... in fact_acc (- n1 1) (* n1 x1)
=> let ...; n2 = (- n1 1); x2 = (* n1 x1) in (== n2 0) x2 (fact_acc (-
n2 1) (* n2 x2))
=> let ...; n2 = 1; ... in (== n2 0) x2 (fact_acc (- n2 1) (* n2 x2))
=> let ... in False x2 (fact_acc (- n2 1) (* n2 x2))
=> let ... in fact_acc (- n2 1) (* n2 x2))
=> let ...; n3 = (- n2 1); x3 = (* n2 x2) in (== n3 0) x3 (fact_acc (-
n3 1) (* n3 x3))
=> let ...; n3 = 0; ... in (== n3 0) x3 (fact_acc (- n3 1) (* n3 x3))
=> let ... in True x3 (fact_acc (- n3 1) (* n3 x3))
=> let ... in x3
(We can garbage collect fact_acc and n3 here)
=> let
  n0 = 3; x0 = 1;
  n1 = 2; x1 = (* n0 x0);
  n2 = 1; x2 = (* n1 x1);
  x3 = (* n2 x2) in x3
(primitive * is strict, so we get a big spine of evaluating x3 which
requires x2 which requires x1 which requires x0 which is finally
already in head normal form)
=> let ...; x1 = (* 3 1); x2 = (* 2 x1); x3 = (* 1 x2) in x3
(garbage collect n0, n1, n2, n3)
=> let x1 = 3; x2 = (* 2 x1); x3 = (* 1 x2) in x3
(substitute & garbage collect x1)
=> let x2 = (* 2 3); x3 = (* 1 x2) in x3
=> let x2 = 6; x3 = (* 1 x2) in x3
=> let x3 = (* 1 6) in x3
=> let x3 = 6 in x3
=> 6

Does this seem like something you could do?

  -- ryan


Re: [Haskell-cafe] Compiler's bane

2008-08-31 Thread Jonathan Cast
On Sun, 2008-08-31 at 17:29 +0100, Andrew Coppin wrote:
> Ryan Ingram wrote:
> > On Sun, Aug 31, 2008 at 12:46 AM, Andrew Coppin
> > <[EMAIL PROTECTED]> wrote:
> >   
> >> Right. So if I have, say, the factorial function defined using the Y
> >> combinator, there's no way to stop the interpretter expanding an infinite
> >> number of copies of Y, even though in theory the factorial function should
> >> terminate?
> >> 
> >
> > Well, this is exactly what ghci is doing; you just have to choose an
> > evaluation order that makes sense.
> >
> > Lets consider a simple translation for recursive lets:
> >   
> >> let x = exp1 in exp2
> >> 
> > (where x may be free in exp1 and/or exp2)
> >
> > Here's a simple translation into lambda-calculus:
> >   
> >> (\x. exp2) (y (\x. exp1))
> >> 
> >
> > (Of course, this representation for let can lose sharing; without
> > knowing what you have or your goals I don't know if you care).
> >
> > Now, you just have to choose a y-combinator that makes sense for your
> > evaluation order.  If you do normal-order evaluation like Haskell
> > ("non-strict"), this Y should work for you:
> >
> > y = \f. (\x. f (x x)) (\x. f (x x))
> >
> >   
> >> Oh well, I guess I'll just have to live with that one. Maybe I can make the
> >> binding to expand user-selectable or something...
> >> 
> >
> > Now, if you are eagerly-expanding the entire expression, trying to get
> > to head normal form, then any expression that ends up with "y" in it
> > won't terminate.  But if you use normal form evaluation order, then if
> > a normal form exists, you will find it and terminate.
> >
> > So evaluating "fact" to normal form won't terminate, but evaluating
> > "fact 5" should.
> >   
> 
> All of this strongly suggests that if you execute things in the correct 
> order, you can arrange it so that expressions that have a normal form 
> will be evaluated to it. But how to decide the correct evaluation order? 
> For the plain lambda calculus, IIRC you just have to execute the 
> left-most, outer-most redex every time and it'll work. For a language 
> with let-binds, it is unclear to me how to proceed...

It depends on whether you want to preserve sharing or not.  If not, you
can use fix to de-sugar the recursion, and then de-sugar lets using the rule

  let x = e in f
= (\ x -> f) e

and proceed as before.  Or (again, if you don't care about sharing), you
can descend recursively into the body of a let, remembering where the
bindings were defined, and expand variables when you see them.

Or keep an environment around, extend it on let, and if the left-most
outermost redex is a variable, look it up.

jcc




Re: [Haskell-cafe] Compiler's bane

2008-08-31 Thread Andrew Coppin

Ryan Ingram wrote:

On Sun, Aug 31, 2008 at 12:46 AM, Andrew Coppin
<[EMAIL PROTECTED]> wrote:
  

Right. So if I have, say, the factorial function defined using the Y
combinator, there's no way to stop the interpretter expanding an infinite
number of copies of Y, even though in theory the factorial function should
terminate?



Well, this is exactly what ghci is doing; you just have to choose an
evaluation order that makes sense.

Lets consider a simple translation for recursive lets:
  

let x = exp1 in exp2


(where x may be free in exp1 and/or exp2)

Here's a simple translation into lambda-calculus:
  

(\x. exp2) (y (\x. exp1))



(Of course, this representation for let can lose sharing; without
knowing what you have or your goals I don't know if you care).

Now, you just have to choose a y-combinator that makes sense for your
evaluation order.  If you do normal-order evaluation like Haskell
("non-strict"), this Y should work for you:

y = \f. (\x. f (x x)) (\x. f (x x))

  

Oh well, I guess I'll just have to live with that one. Maybe I can make the
binding to expand user-selectable or something...



Now, if you are eagerly-expanding the entire expression, trying to get
to head normal form, then any expression that ends up with "y" in it
won't terminate.  But if you use normal form evaluation order, then if
a normal form exists, you will find it and terminate.

So evaluating "fact" to normal form won't terminate, but evaluating
"fact 5" should.
  


All of this strongly suggests that if you execute things in the correct 
order, you can arrange it so that expressions that have a normal form 
will be evaluated to it. But how to decide the correct evaluation order? 
For the plain lambda calculus, IIRC you just have to execute the 
left-most, outer-most redex every time and it'll work. For a language 
with let-binds, it is unclear to me how to proceed...




Re: [Haskell-cafe] Compiler's bane

2008-08-31 Thread Ryan Ingram
On Sun, Aug 31, 2008 at 12:46 AM, Andrew Coppin
<[EMAIL PROTECTED]> wrote:
> Right. So if I have, say, the factorial function defined using the Y
> combinator, there's no way to stop the interpretter expanding an infinite
> number of copies of Y, even though in theory the factorial function should
> terminate?

Well, this is exactly what ghci is doing; you just have to choose an
evaluation order that makes sense.

Lets consider a simple translation for recursive lets:
> let x = exp1 in exp2
(where x may be free in exp1 and/or exp2)

Here's a simple translation into lambda-calculus:
> (\x. exp2) (y (\x. exp1))

(Of course, this representation for let can lose sharing; without
knowing what you have or your goals I don't know if you care).

Now, you just have to choose a y-combinator that makes sense for your
evaluation order.  If you do normal-order evaluation like Haskell
("non-strict"), this Y should work for you:

y = \f. (\x. f (x x)) (\x. f (x x))

> Oh well, I guess I'll just have to live with that one. Maybe I can make the
> binding to expand user-selectable or something...

Now, if you are eagerly-expanding the entire expression, trying to get
to head normal form, then any expression that ends up with "y" in it
won't terminate.  But if you use normal form evaluation order, then if
a normal form exists, you will find it and terminate.

So evaluating "fact" to normal form won't terminate, but evaluating
"fact 5" should.

  -- ryan


Re: [Haskell-cafe] Compiler's bane

2008-08-31 Thread Andrew Coppin

Jonathan Cast wrote:

On Fri, 2008-08-29 at 20:56 +0100, Andrew Coppin wrote:
  
I've been playing with this today, and no matter what rules I come up 
with, it seems to be ridiculously easy to put the interpreter into an 
infinite loop from which it will never escape. Is there any way to know 
which binds are "worth" expanding and which ones aren't? (I have a 
sinking feeling this might be equivalent to the Halting Problem...)


For example, if you have "let x = f y; y = g x in x" then since all the 
functions are free variables, there's really nothing much "useful" you 
can do to simplify this any further. However, my interpreter merrily 
goes into an endless loop building a huge expression like "f (g (f (g (f 
(g..." Is it possible to avoid this?



If you want to avoid infinite normal forms (or just plain lack of normal
forms, as in let x = x in x or (\x -> x x) (\ x -> x x)), you need to
follow three rules:

1) Static typing
2) No value recursion
3) If you allow data types, no recursive data types

Otherwise, your language will be Turing-complete, so yes, trying to
determine ahead of time if even a HNF exists will be the halting
problem.

If you really want to write a general-purpose evaluator, then infinite
normal forms may not be an issue, as long as they are generated lazily,
so your evaluator can at least print something out while it goes into an
infinite loop.  Otherwise, you can declare f x, where f is an unknown
function, a HNF, and stop at that point, or go on only under the
client's control.

The optimizers used by GHC and YHC pick one function out of each
recursive binding group, and just refuse to inline it at all.  If you
don't mind producing expressions that aren't really in any definable
HNF, that would be an alternative as well.
  


Right. So if I have, say, the factorial function defined using the Y 
combinator, there's no way to stop the interpreter expanding an 
infinite number of copies of Y, even though in theory the factorial 
function should terminate?


Oh well, I guess I'll just have to live with that one. Maybe I can make 
the binding to expand user-selectable or something...




Re: [Haskell-cafe] Compiler's bane

2008-08-29 Thread Jonathan Cast
On Fri, 2008-08-29 at 21:48 +0100, Conor McBride wrote:
> Hi
> 
> On 29 Aug 2008, at 21:12, Jonathan Cast wrote:
> 
> >
> > If you want to avoid infinite normal forms (or just plain lack of  
> > normal
> > forms, as in let x = x in x or (\x -> x x) (\ x -> x x)), you need to
> > follow three rules:
> >
> > 1) Static typing
> 
> With you there.
> 
> > 2) No value recursion
> 
> Mostly with you: might allow productive corecursion
> computing only on demand.
> 
> > 3) If you allow data types, no recursive data types
> 
> I can think of a few Turing incomplete languages with
> (co)recursive data types, so

> > Otherwise, your language will be Turing-complete,
> 
> seems suspect to me.

OK.  If you have any of

1) Dynamic typing, as implemented in say Scheme, or
2) Unlimited recursion, as implemented in say Haskell, or
3) Unlimited recursive data types, as implemented in say Haskell

your language is Turing-complete.

> It's quite possible to identify a total fragment of
> Haskell, eg, by seeing which of your Haskell programs
> compile in Agda.

> Things aren't as hopeless as you suggest.

OK.  There are intermediate cases that involve some suitably restricted
form of recursion.  I don't actually know anything about them, so I
won't comment on the suitability of such restrictions for Andrew's
interpreter.  But I should have remembered their existence.

jcc




Re: [Haskell-cafe] Compiler's bane

2008-08-29 Thread Conor McBride

Hi

On 29 Aug 2008, at 21:12, Jonathan Cast wrote:



If you want to avoid infinite normal forms (or just plain lack of  
normal

forms, as in let x = x in x or (\x -> x x) (\ x -> x x)), you need to
follow three rules:

1) Static typing


With you there.


2) No value recursion


Mostly with you: might allow productive corecursion
computing only on demand.


3) If you allow data types, no recursive data types


I can think of a few Turing incomplete languages with
(co)recursive data types, so


Otherwise, your language will be Turing-complete,


seems suspect to me.

It's quite possible to identify a total fragment of
Haskell, eg, by seeing which of your Haskell programs
compile in Agda.

Things aren't as hopeless as you suggest.

Cheers

Conor



Re: [Haskell-cafe] Compiler's bane

2008-08-29 Thread Jonathan Cast
On Fri, 2008-08-29 at 20:56 +0100, Andrew Coppin wrote:
> Neil Mitchell wrote:
> > Hi
> >
> >   
> >> I'm writing a simple interpreter for a small extended-lambda-calculus sort
> >> of language. And I'd just like to say... RECURSIVE LET-BINDS! GH!!! >_<
> >> 
> >
> > Agreed :-)
> >   
> 
> I'm glad it's not just me! ;-)
> 
> (Oleg cat sez: "see, yr type problum not so hard".)
> 
> >> To illustrate:
> >>
> >>  let x = f x; y = 5 in x y
> >>
> >> A simple-minded interpreter might try to replace every occurrence of "x"
> >> with "f x". This yields
> >>
> >>  let y = 5 in (f x) y
> >> 
> >
> > That's wrong, it's a two-step transformation. You just substitute all
> > occurrences of x for f x:
> >
> > let x = f (f x); y = 5 in (f x) y
> >
> > For the case let x = 5 in x, you do the same thing:
> >
> > let x = 5 in 5
> >
> > Now as a second step you hoover up unused let bindings and discard:
> >
> > 5
> >
> > You seem to be combining the substitution and the removing of dead let
> > bindings; if you separate them you should have more luck.
> >   
> 
> Right. So substitute in the RHS and then perform a let-cull afterwards. 
> Got it.
> 
> I've been playing with this today, and no matter what rules I come up 
> with, it seems to be ridiculously easy to put the interpreter into an 
> infinite loop from which it will never escape. Is there any way to know 
> which binds are "worth" expanding and which ones aren't? (I have a 
> sinking feeling this might be equivalent to the Halting Problem...)
> 
> For example, if you have "let x = f y; y = g x in x" then since all the 
> functions are free variables, there's really nothing much "useful" you 
> can do to simplify this any further. However, my interpreter merrily 
> goes into an endless loop building a huge expression like "f (g (f (g (f 
> (g..." Is it possible to avoid this?

If you want to avoid infinite normal forms (or just plain lack of normal
forms, as in let x = x in x or (\x -> x x) (\ x -> x x)), you need to
follow three rules:

1) Static typing
2) No value recursion
3) If you allow data types, no recursive data types

Otherwise, your language will be Turing-complete, so yes, trying to
determine ahead of time if even a HNF exists will be the halting
problem.

If you really want to write a general-purpose evaluator, then infinite
normal forms may not be an issue, as long as they are generated lazily,
so your evaluator can at least print something out while it goes into an
infinite loop.  Otherwise, you can declare f x, where f is an unknown
function, a HNF, and stop at that point, or go on only under the
client's control.

The optimizers used by GHC and YHC pick one function out of each
recursive binding group, and just refuse to inline it at all.  If you
don't mind producing expressions that aren't really in any definable
HNF, that would be an alternative as well.

jcc




Re: [Haskell-cafe] Compiler's bane

2008-08-29 Thread Andrew Coppin

Neil Mitchell wrote:

Hi

  

I'm writing a simple interpreter for a small extended-lambda-calculus sort
of language. And I'd just like to say... RECURSIVE LET-BINDS! GH!!! >_<



Agreed :-)
  


I'm glad it's not just me! ;-)

(Oleg cat sez: "see, yr type problum not so hard".)


To illustrate:

 let x = f x; y = 5 in x y

A simple-minded interpreter might try to replace every occurrence of "x"
with "f x". This yields

 let y = 5 in (f x) y



That's wrong, it's a two-step transformation. You just substitute all
occurrences of x for f x:

let x = f (f x); y = 5 in (f x) y

For the case let x = 5 in x, you do the same thing:

let x = 5 in 5

Now as a second step you hoover up unused let bindings and discard:

5

You seem to be combining the substitution and the removing of dead let
bindings; if you separate them you should have more luck.
  


Right. So substitute in the RHS and then perform a let-cull afterwards. 
Got it.


I've been playing with this today, and no matter what rules I come up 
with, it seems to be ridiculously easy to put the interpreter into an 
infinite loop from which it will never escape. Is there any way to know 
which binds are "worth" expanding and which ones aren't? (I have a 
sinking feeling this might be equivalent to the Halting Problem...)


For example, if you have "let x = f y; y = g x in x" then since all the 
functions are free variables, there's really nothing much "useful" you 
can do to simplify this any further. However, my interpreter merrily 
goes into an endless loop building a huge expression like "f (g (f (g (f 
(g..." Is it possible to avoid this?


(The problem isn't unique to recursive let-binds of course. I discovered 
today that (\x -> x x) (\x -> x x) reduces to itself, so that puts the 
interpreter into a loop too. However, NON-recursive let-binds always 
terminate eventually. It's only the recursive ones that cause problems.)


My word, this stuff is really surprisingly hard! At least this time I 
built a general-purpose renamer rather than trying to avoid name capture 
by construction! ;-)





Re: [Haskell-cafe] Compiler's bane

2008-08-27 Thread Jeremy Apthorp
2008/8/28 Lennart Augustsson <[EMAIL PROTECTED]>:
> You can get rid of all recursive bindings by transforming them into a
> use of a fixpoint combinator.
> And then you can use a non-recursive definition of the fixpoint
> combinator, and never worry about recursive bindings again.

This[1] might be a good explanation of why that works. It helped me
understand it, at least.

[1] - http://mvanier.livejournal.com/2897.html

Jeremy


Re: [Haskell-cafe] Compiler's bane

2008-08-27 Thread Lennart Augustsson
You can get rid of all recursive bindings by transforming them into a
use of a fixpoint combinator.
And then you can use a non-recursive definition of the fixpoint
combinator, and never worry about recursive bindings again.
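
Sketched in lambda-calculus notation (the same translation Ryan spells
out elsewhere in this thread):

  let x = e in b   ~>   (\x. b) (y (\x. e))
  y = \f. (\x. f (x x)) (\x. f (x x))

The y shown is the normal-order fixpoint combinator; a strict evaluation
order would need an eta-expanded variant instead.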

  -- Lennart

On Wed, Aug 27, 2008 at 8:59 PM, Andrew Coppin
<[EMAIL PROTECTED]> wrote:
> Hi guys.
>
> I'm writing a simple interpreter for a small extended-lambda-calculus sort
> of language. And I'd just like to say... RECURSIVE LET-BINDS! GH!!! >_<
>
> No other part of the program has consumed nearly as much brain power as me
> trying to figure out when it is and isn't safe to replace a variable with
> its RHS.
>
> To illustrate:
>
>  let x = f x; y = 5 in x y
>
> A simple-minded interpreter might try to replace every occurrence of "x"
> with "f x". This yields
>
>  let y = 5 in (f x) y
>
> ...and x is now a free variable. OOPS!
>
> Trying to tease out exactly under which conditions you can and cannot
> perform the substitution is utterly maddening. Since this is a Haskell
> mailing list and as such it is populated by vast numbers of people with PhDs
> and so forth... does anybody happen to know the *correct* solution to this
> conundrum? Before I become clinically insane...? o_O
>
>
>
> By the way... To all those people who work on projects like GHC and so on,
> who have to get this stuff right "for real": you have my infinite respect!
>


Re: [Haskell-cafe] Compiler's bane

2008-08-27 Thread John Meacham
On Wed, Aug 27, 2008 at 08:59:28PM +0100, Andrew Coppin wrote:
>  let y = 5 in (f x) y
>
> ...and x is now a free variable. OOPS!
>
> Trying to tease out exactly under which conditions you can and cannot  
> perform the substitution is utterly maddening. Since this is a Haskell  
> mailing list and as such it is populated by vast numbers of people with  
> PhDs and so forth... does anybody happen to know the *correct* solution  
> to this conundrum? Before I become clinically insane...? o_O

A simple solution is to separate your 'substitution' and
'simplification' passes. As in, perform any substitutions you like, but
_keep_ the original binding for x.  Let the simplifier/optimizer
recognize when bindings are dead and remove them. It can do so simply by
keeping track of free variables in the subterms, which is also useful
for things like let floating transformations.
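
A sketch of that bookkeeping, over the small Expression type from the
QuickCheck discussion elsewhere in the thread (Andrew's real type also
has let-binds, which would need an extra case):

import Data.List (nub, (\\))

data Expression = Var String | Apply Expression Expression
                | Lambda String Expression

freeVars :: Expression -> [String]
freeVars (Var s)      = [s]
freeVars (Apply a b)  = nub (freeVars a ++ freeVars b)
freeVars (Lambda s e) = freeVars e \\ [s]

-- a binding "x = rhs" is dead once x is no longer free in the term being
-- evaluated (nor in the right-hand sides of any live bindings)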

John


-- 
John Meacham - ⑆repetae.net⑆john⑈


Re: [Haskell-cafe] Compiler's bane

2008-08-27 Thread Neil Mitchell
Hi

> I'm writing a simple interpreter for a small extended-lambda-calculus sort
> of language. And I'd just like to say... RECURSIVE LET-BINDS! GH!!! >_<

Agreed :-)

> To illustrate:
>
>  let x = f x; y = 5 in x y
>
> A simple-minded interpreter might try to replace every occurrence of "x"
> with "f x". This yields
>
>  let y = 5 in (f x) y

That's wrong, it's a two-step transformation. You just substitute all
occurrences of x for f x:

let x = f (f x); y = 5 in (f x) y

For the case let x = 5 in x, you do the same thing:

let x = 5 in 5

Now as a second step you hoover up unused let bindings and discard:

5

You seem to be combining the substitution and the removing of dead let
bindings; if you separate them you should have more luck.

Thanks

Neil


[Haskell-cafe] Compiler's bane

2008-08-27 Thread Andrew Coppin

Hi guys.

I'm writing a simple interpreter for a small extended-lambda-calculus 
sort of language. And I'd just like to say... RECURSIVE LET-BINDS! 
GH!!! >_<


No other part of the program has consumed nearly as much brain power as 
me trying to figure out when it is and isn't safe to replace a variable 
with its RHS.


To illustrate:

 let x = f x; y = 5 in x y

A simple-minded interpreter might try to replace every occurrence of 
"x" with "f x". This yields


 let y = 5 in (f x) y

...and x is now a free variable. OOPS!

Trying to tease out exactly under which conditions you can and cannot 
perform the substitution is utterly maddening. Since this is a Haskell 
mailing list and as such it is populated by vast numbers of people with 
PhDs and so forth... does anybody happen to know the *correct* solution 
to this conundrum? Before I become clinically insane...? o_O




By the way... To all those people who work on projects like GHC and so 
on, who have to get this stuff right "for real": you have my infinite 
respect!

