On Wed, Nov 5, 2008 at 4:33 AM, Achim Schneider <[EMAIL PROTECTED]> wrote:
> I know that I, coming from a C/Scheme POV, in the beginning attributed
> much magic to Haskell's inner workings and assumed, because of the
> general wizardly air of the whole language, that avg would run in O(n)
> time and constant (resp. O(n)) space.
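
For concreteness (the exact avg from earlier in the thread isn't quoted
above, so the naive definition below is an assumption), here is a
minimal sketch: the two versions agree denotationally, but the first
can hold on to the whole list while the second runs in constant space.

    {-# LANGUAGE BangPatterns #-}

    -- Naive two-pass version: sum forces the list, but length still
    -- needs it, so the spine is retained in full -- O(n) time and
    -- O(n) space on a lazily generated list.
    avgNaive :: [Double] -> Double
    avgNaive xs = sum xs / fromIntegral (length xs)

    -- Single strict pass accumulating total and count -- O(n) time,
    -- O(1) space.
    avgStrict :: [Double] -> Double
    avgStrict = go 0 0
      where
        go :: Double -> Int -> [Double] -> Double
        go !total !count []       = total / fromIntegral count
        go !total !count (x : xs) = go (total + x) (count + 1) xs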

Haskell's great strength is its equational semantics.  I would like
Haskell programmers to think equationally, mathematically, rather than
operationally, when writing Haskell programs.  If I were to teach a
course in Haskell, I would like to launch off of denotational
semantics, hopefully without ever having to say "lazy evaluation".
(It's a pipe dream, of course, but you get the idea.)

However, this strength is also a weakness when we want to analyze the
efficiency of programs.  The denotational semantics of Haskell say
nothing about time or space complexity, and give us no tools to reason
about it.  A Haskell interpreter which randomly rewrites terms until
it happens upon a normal form is considered correct (or rather,
"correct with probability 1" :-).

How can we support analysis of time and space complexity without
descending into the complicated, dirty world of operational
thinking?

Luke