Laziness is a great advantage in some cases.  When, at compile time, you do
not know what needs to be evaluated at run time, you can
  a) evaluate everything that may need to be evaluated (and waste a great
     deal of time),
  b) write your own system to determine at run time what needs to be
     evaluated, or
  c) use a lazy language.
IMHO, for complex problems,
      c) is often faster than a),
and   c) usually involves less development effort than b).
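
Here is a minimal Haskell sketch of c); the table contents and the names
(table, key) are invented for illustration.  Under lazy evaluation each
entry of the table stays an unevaluated thunk, and only the entry selected
by the run-time input is ever computed; that is exactly the work a) does
eagerly and b) needs hand-written machinery to avoid.

import Data.Maybe (fromMaybe)

-- A table of results we might need at run time.  With lazy evaluation,
-- each entry remains an unevaluated thunk until somebody looks it up.
table :: [(String, Integer)]
table = [ ("small",  product [1 .. 10])
        , ("medium", product [1 .. 1000])
        , ("huge",   product [1 .. 100000])
        ]

main :: IO ()
main = do
  key <- getLine                          -- not known at compile time
  print (fromMaybe 0 (lookup key table))  -- forces only the chosen entry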


-----Original Message-----
From: George Russell [mailto:[EMAIL PROTECTED]]
Sent: Monday, March 20, 2000 10:58 AM
To: Ch. A. Herrmann
Cc: [EMAIL PROTECTED]
Subject: Re: speed of compiled Haskell code.


"Ch. A. Herrmann" wrote:
>   I believe that if as much research were spent on Haskell compilation as
>   on C compilation, Haskell would outperform C.
I wish I could say I believed that.  The big thing you aren't going to be
able to optimise away is laziness, which means you are going to have
unevaluated thunks rather than values all over the place.  Of course you
can put strictness annotations in all over the place if you want to, but
then that rather spoils the point of Haskell.
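
What such annotations look like in practice, as a minimal sketch using only
standard Haskell 98 features (the names sumStrict and Pair are invented for
illustration):

-- 'seq' forces the running total at each step, so the accumulator is a
-- value rather than a growing chain of thunks.
sumStrict :: [Integer] -> Integer
sumStrict = go 0
  where
    go acc []     = acc
    go acc (x:xs) = let acc' = acc + x
                    in  acc' `seq` go acc' xs

-- The '!' marks are strictness annotations on constructor fields: both
-- components are forced when a Pair is built, not when it is inspected.
data Pair = Pair !Int !Int

main :: IO ()
main = print (sumStrict [1 .. 1000000])

Every seq and every '!' is a place where the programmer, rather than the
compiler, decides what gets evaluated, which is the sense in which it
spoils the point.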

On the other hand Haskell may start looking more favourable in a few years.
It seems to me that the Von Neumann architecture (one processor sitting in
the middle, sending out messages when it wants data) is really creaking at
the seams right now.  Multiple caches and pipelining are becoming ever more
important, but there's only so much that can be done with conventional
programming languages.  (Hence claims of processor speed, which usually
assume zero CPU wait time and the sort of scheduling only available with
hand-tuned assembler, bear less and less resemblance to reality every
year.)  However Haskell is much easier to reason about and should be much
easier to parallelise, so its time may come even where performance is
important.  I hope.
