Most enlightening, thanks. The same effect can be seen with Debug.Trace.trace around the two versions of f.
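For anyone following along, here is a minimal sketch of that Debug.Trace experiment (the names `fConst` and `fEta` are mine, not from the original program). The pattern-binding version is a constant applicative form, so its trace fires at most once; the eta-expanded version traces on every application. This matches GHCi's default unoptimized evaluation; with -O the compiler may float things around.

```haskell
import Debug.Trace (trace)

-- Pattern binding: no lambda-bound variables, so this is a constant.
-- GHCi evaluates it at most once; the trace message appears once,
-- the first time fConst is forced.
fConst :: String -> String
fConst = trace "fConst evaluated" ("f -- " ++)

-- Function binding via eta-expansion: the trace is under the lambda,
-- so it fires every time fEta is applied to an argument.
fEta :: String -> String
fEta x = trace "fEta applied" (("f -- " ++) x)

main :: IO ()
main = do
  putStrLn (fConst "hey!")
  putStrLn (fConst "heh!")  -- no second "fConst evaluated" message
  putStrLn (fEta "hey!")
  putStrLn (fEta "heh!")    -- "fEta applied" printed again
```

Both definitions denote the same function; only the trace (and breakpoint) behavior differs.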
I suppose this means that the points-free/pattern-binding-style version is a bit less work for GHC to execute (fewer reductions), whereas the version with lambda-bound variables is easier to debug. On balance, I think I'll frequently write my functions with lambda-bound variables, then. Getting better use out of the GHCi debugger seems worth the few extra cycles.

2009/4/26 Bernie Pope <florbit...@gmail.com>:
> 2009/4/25 Thomas Hartman <tphya...@gmail.com>
>>
>> In the program below, can someone explain the following debugger output
>> to me?
>>
>> After :continue, shouldn't I hit the f breakpoint two more times?
>> Why do I only hit the f breakpoint once?
>> Is this a problem in the debugger?
>>
>> thart...@ubuntu:~/haskell-learning/debugger>cat debugger.hs
>>
>> -- try this:
>> -- ghci debugger.hs
>> -- > :break f
>> -- > :trace t
>> -- > :history -- should show you that f was called from h
>> t = h . g . f $ "hey!"
>> t2 = h . g . f $ "heh!"
>> t3 = h . g . f $ "wey!"
>>
>> f = ("f -- " ++)
>> g = ("g -- " ++)
>> h = ("h -- " ++)
>>
>> ts = do
>>   putStrLn $ t
>>   putStrLn $ t2
>>   putStrLn $ t3
>
> What you are observing is really an artifact of the way breakpoints are
> attached to definitions in the debugger, and the way that GHCi evaluates
> code.
>
> f is clearly a function, but its definition style is a so-called "pattern
> binding". The body contains no free (lambda-bound) variables, so it is
> also a constant. GHCi arranges for f to be evaluated at most once. The
> breakpoint associated with the definition of f is fired if and when that
> evaluation takes place. Thus, in your case, it fires exactly once.
>
> You can rewrite f to use a so-called "function binding" instead, by
> eta-expansion (introduce a new fresh variable, and apply the function to
> it on both sides):
>
>   f x = ("f -- " ++) x
>
> This denotes the same function, but the breakpoint on f works
> differently. In this case, a breakpoint attached to f will fire whenever
> an application of f is reduced.
> If you write it this way, you will see that the program stops three
> times instead of once.
>
> You might ask: if both definitions denote the same function, why does
> the debugger behave differently? The short answer is that the debugger
> in GHCi is an operational debugger, so it exposes some of the
> operational details which may be invisible in a denotational semantics.
> In this case it reveals that GHCi treats the two definitions of f
> differently.
>
> Cheers,
> Bernie.
>
> _______________________________________________
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe