On 29 Nov 2007, at 06:32, PR Stanley wrote:
Hi
Thanks for the response.
JCC: In most languages, if you have some expression E, and when the
computer attempts to evaluate E it goes into an infinite loop, then
when the computer attempts to evaluate the expression f(E), it also
goes into an infinite loop, regardless of what f is. That's the
definition of a strict language.
PRS: Does that mean that a strict language is also imperative?
Nope, not at all. A strict language just has slightly fewer programs
it can evaluate successfully, since more of them loop infinitely: if
E loops, then both E and f(E) loop.
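A quick way to see this in GHCi (a sketch; undefined stands in for an
expression whose evaluation never finishes):

> fst (1, undefined)
1

fst never looks at the second component, so under non-strict
semantics the divergent argument is never evaluated; a strict
language would evaluate both components of the pair first, and loop
before fst ever ran.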
JCC: In Haskell, this isn't the case --- we can write functions f
such that the computation f(E) terminates, even when E does not. (:)
is one such function, as are some functions built from it, such as
(++); xn ++ ys terminates whenever xn does, even if ys is an infinite
loop. This is what makes it easy and convenient to build infinite
data structures in Haskell; in most strict languages, if you said
let fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
the language would insist on evaluating fibs before it actually
assigned anything to the memory cell for fibs, giving rise to an
infinite loop. (For this reason, most strict languages make such
definitions compile-time errors).
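As a quick sanity check, the definition works fine in GHCi, because
take only ever demands finitely many elements of fibs:

> let fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
> take 10 fibs
[0,1,1,2,3,5,8,13,21,34]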
Unfortunately, non-strictness turns out to be a pain in the ass to
implement, since it means when the code generator sees an
expression, it can't just generate code to evaluate it --- it has to
hide the code somewhere else, and then substitute a pointer to that
code for the value of the expression.
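One way to picture that (a toy model only --- real compilers like GHC
use mutable cells, so each thunk is evaluated at most once):

data Lazy a = Thunk (() -> a)  -- suspended code, behind a pointer
            | Value a          -- an already-evaluated result

force :: Lazy a -> a
force (Thunk f) = f ()  -- run the hidden code on demand
force (Value x) = x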
PRS: Is there a kind of strictness applied when the compiler/
interpreter sorts the various sub-expressions into little memory
compartments indexed with pointers for later evaluation? To put it
another way, does lazy evaluation begin with the outer-most
expression, the most abstract, and determine what should go where
in relation to the subsequent inner expressions? For example:
takeWhile (<20) [0..9] ++ [10..]
The compiler determines at the outset that the result of takeWhile
is a list followed by the calculation of the length of that list
based on the predicate (<20), and then calls ++ which is for all
intents and purposes on its own an infinite loop. Is this what
happens?
Not really. For lazy evaluation the compiler doesn't "decide" the
order statically -- it merely gives the program rules to follow for
what the next expression to be evaluated should be. Let's look at a
slightly simpler example:
takeWhile (< 2) (map (+1) [0..])
We will always attempt to evaluate the leftmost outermost
expression. We do this by matching against the rules given in the
program. To make this clearer, here are the rules for takeWhile and map:
takeWhile _ [] = []
takeWhile p (x:xs) | p x = x : takeWhile p xs
| otherwise = []
map _ [] = []
map f (x:xs) = f x : map f xs
takeWhile (< 2) (map (+1) [0..])
-- We start by evaluating the leftmost outermost expression. We
attempt to match on the first rule of takeWhile, and discover that we
can't because we don't know whether the result of (map (+1) [0..]) is
the empty list or not. Therefore we demand the evaluation of
(map (+1) [0..])
-> takeWhile (< 2) ((+1) 0 : map (+1) [1..])
-- We now know that we don't have the empty list, so we must use
the second rule of takeWhile. We must evaluate the guard first though:
-> (<2) ((+1) 0) |
-- To do this, we must evaluate ((+1) 0)
-> (<2) 1 |
-- This evaluates to True, so we may insert the right hand side --
note that x remains evaluated
-> True | 1 : takeWhile (<2) (map (+1) [1..])
-- We can drop the guard now, but let's carry on. We have already
evaluated the outermost expression, so let's evaluate the next one in.
Again, pattern matching on takeWhile demands the evaluation of map:
-> 1 : takeWhile (<2) ((+1) 1 : map (+1) [2..])
-- We can again pattern match on takeWhile, and must evaluate the
guard again:
-> 1 : ((<2) ((+1) 1) |)
-- Again, we must evaluate the result of the addition
-> 1 : ((<2) 2 |)
-- This time we get False, so we must evaluate the next guard
-> 1 : (otherwise |)
-- otherwise is a synonym for True, so we use this right hand side.
-> 1 : (True | [])
-- and we can get rid of the guard, and prettify the result,
giving us:
-> [1]
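Indeed, GHCi agrees with the hand evaluation:

> takeWhile (< 2) (map (+1) [0..])
[1]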
Note that we followed a set of rules that gave us non-strict
semantics; this particular set of rules is called lazy evaluation.
We could come up with several other sets of rules that give us
different evaluation orders but still non-strict semantics (e.g.
Optimistic Evaluation).
PRS: This is a very simple example; that's to say, I am aware that
the compiler may be faced with a much more complex job of applying
lazy evaluation. Nevertheless, I wonder if there is a set of
fundamental rules to which the compiler must always adhere in lazy
evaluation.
JCC: There are a number of clever optimizations you can use here
(indeed, most of the history of Haskell compilation techniques is a
list of clever techniques to get around the limitations of compiling
non-strict languages), but most of them rely on the compiler knowing
that, in this case, if a sub-expression is an infinite loop, the
entire expression is an infinite loop. This is actually pretty easy
to figure out (most of the time), but sometimes the compiler needs a
little help.
That's where $! (usually) comes in. When the compiler sees (f $ x),
it has to look at f to see whether, if x is an infinite loop, f $ x
is one as well. When the compiler sees (f $! x), it doesn't need to
look at f --- if x is an infinite loop, (f $! x) always is one as
well. So, where in (f $ x) the compiler sometimes needs to put the
code for x in a separate top-level block, to be called later when
it's needed, in (f $! x) the compiler can always generate code for x
inline, like a compiler for a normal language would. Since most CPU
architectures are optimized for normal languages that compile f(E)
by generating code for E inline, this is frequently a big speed-up.
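A common place this bites (a sketch, with made-up function names): a
left fold with a lazy accumulator quietly builds a huge chain of
thunks, while forcing the accumulator with $! lets the compiler keep
the running total as a plain evaluated number, just as a strict
language would:

sumLazy :: [Integer] -> Integer
sumLazy = go 0
  where go acc []     = acc                 -- acc is a giant thunk
        go acc (x:xs) = go (acc + x) xs     -- (((0+x1)+x2)+...)

sumStrict :: [Integer] -> Integer
sumStrict = go 0
  where go acc []     = acc
        go acc (x:xs) = (go $! acc + x) xs  -- force acc + x each step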
PRS: Your description of $! reminds me of the difference between
inline functions and "ordinary" functions in C++ with the former
being faster. Am I on the right track? In either case, (f $ x) and
(f $! x), lazy evaluation must be applied at a higher level
otherwise either instruction could result in an infinite loop.
Therefore, is efficiency the only consideration here?
If Haskell is a lazy language and $ merely implies lazy evaluation,
then what's the difference between (f $ x ⊕ y) and (f (x ⊕ y))?
$ does not mean "do lazy evaluation"; it means "apply". It's a
function, like any other:
f $ x = f x
All it does is take the function and the argument and apply one to
the other. It can be used for eliminating ugly bracketing, or for
creating sections, e.g.
> map ($ 5) [(1 +), (2 *), (3 ^)]
[6, 10, 243]
$! is the special case, which means "apply strictly". It evaluates
its argument first, *then* does the application. This may or may not
be faster (and usually isn't, due to evaluating more of the argument):
f $! x = seq x (f x)
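You can see the difference directly in GHCi (undefined again standing
in for a divergent expression):

> const 1 $ undefined
1
> const 1 $! undefined
*** Exception: Prelude.undefined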
seq is a special function that says "first evaluate my first argument
(to weak head normal form), then return my second argument"; it
breaks non-strict semantics. Personally, my only use for such
functions is a little bit of debugging work. seq, for example, can be
used to force something to be printed whenever an expression is
evaluated (unsafePerformIO comes from System.IO.Unsafe):
seq (unsafePerformIO $ putStrLn "At the nasty evaluation")
    (some problematic expression)
There is, however, a nicer version of this in the libraries (trace,
from Debug.Trace), which masks it nicely for me:
trace "At the nasty evaluation" (some problematic expression)
I hope this helped somewhat.
Tom Davie