Andrew Lentvorski wrote:
Christopher Smith wrote:
On Jun 27, 2007, at 12:15 PM, Stewart Stremler wrote:
begin quoting Tracy R Reed as of Wed, Jun 27, 2007 at 01:16:51AM
-0700:
A neat example is to compare quicksort in haskell:
quicksort :: (Ord a) => [a] -> [a]
quicksort [] = []
quicksort (p:xs) = quicksort [ x | x <- xs, x <= p ] ++ [ p ]
                   ++ quicksort [ x | x <- xs, x > p ]
Unexpected truncation?
Huh?
Plus, it needs comments.
I disagree here. You need to know the language to interpret it, but
even without that, if you have a familiarity with the quicksort
algorithm, it is very easy to see what is happening here.
I'm not convinced. Specifically, why "x <= p" vs. "x > p"? That kind of
asymmetric comparison sets off all kinds of alarm bells.
'Cause that's how you do things with quicksort. That said, what you'd
normally do (and this includes what I've seen other Haskell quicksort
samples do) is define a predicate like "lessThan" and then filter on
"lessThan" and its negation, which makes it much more obvious what you
are doing and why you are doing it. Either way, comments aren't the
solution: better code is.
Is the sort stable? Unstable? What kind of ordering guarantee is
required?
Normally if you want a stable quicksort you use a function with
specifically that name ("stable_sort" or some such). You also generally
don't expect quicksort to be stable.
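To illustrate with my own example (not from the thread): GHC's
Data.List sorts are documented to be stable, so equal keys keep their
original relative order, which the quicksort above does not promise.

```haskell
import Data.List (sortOn)

main :: IO ()
main = do
  let pairs = [(2,'a'), (1,'b'), (2,'c'), (1,'d')]
  -- sortOn compares only the first component; stability means 'a' stays
  -- before 'c' and 'b' stays before 'd' among equal keys.
  print (sortOn fst pairs)
  -- prints [(1,'b'),(1,'d'),(2,'a'),(2,'c')]
```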
Are "<" and "=" of the correct O(n)?
Hopefully you know about the importance of these before you start using
functions named "quicksort". Like I said initially, knowing the
algorithm is a prerequisite for reading the code.
Are we constructing or reusing lists?
Haskell is a purely functional language. :-)
Since this has to recurse with pretty big lists, how does it handle
that? Since it allocates memory on the fly rather than at the
beginning, how does it signal out of memory?
See above.
I can't even *begin* to reason about the O(?) complexity without a
significant dive through the minutiae of Haskell.
It is O(n log n) in the average case and O(n^2) in the worst case. Big O
is a property of the algorithm, not the language. Of course, a given
runtime might do something stupid, but that certainly isn't tied to the
source code, so comments shouldn't be necessary until a problem is
encountered.
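To make the worst case concrete (a helper I'm introducing, not from the
thread): with the head as pivot, already-sorted input produces maximally
unbalanced partitions, so the recursion depth grows linearly.

```haskell
-- Recursion depth of the head-pivot quicksort (hypothetical helper).
depth :: Ord a => [a] -> Int
depth [] = 0
depth (p:xs) = 1 + max (depth [x | x <- xs, x <= p])
                       (depth [x | x <- xs, x > p])

-- depth [1..8]          == 8  (sorted input: one level per element)
-- depth [4,2,6,1,3,5,7] == 3  (balanced input: about log2 n levels)
```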
How easy is it to add such things?
Again, one can reasonably extrapolate from the code sample just how
difficult this is.
I don't agree. It is often hard to add extra cases and exceptions into
recursive code like this. If I happen to want to do something to the
left side only, I'm likely to have to create a pair of mutually
recursive functions that depend on one another, with quite a bit of
code duplication.
A good description of this pattern, quoting http://norvig.com/sudoku.html :
There is a design pattern at work here that I have not seen mentioned
elsewhere. The pattern is:
If you have two mutually-recursive functions that both alter the
state of an object, try to move almost all the functionality into
just one of the functions. Otherwise you will probably end up
duplicating code.
I would focus your attention on the "...that both alter the state of an
object..." and repeat again: Haskell is a purely functional language.
This is one of the great advantages of functional languages: when you
don't have to worry about mutable state, whole classes of problems
disappear.
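One hedged sketch of that advantage (the function and its name are mine,
not from the thread): instead of a pair of mutually recursive functions,
a higher-order parameter can apply a transformation to the left-hand
recursion only, with no duplicated recursion.

```haskell
-- Hypothetical: transform only the left-hand recursive result.
quicksortL :: Ord a => ([a] -> [a]) -> [a] -> [a]
quicksortL _ [] = []
quicksortL f (p:xs) =
  f (quicksortL f [x | x <- xs, x <= p]) ++ [p]
    ++ quicksortL f [x | x <- xs, x > p]

-- quicksortL id behaves as the plain quicksort.
```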
This kind of stuff isn't trivial, and even functional style doesn't
make it that much easier. The apocryphal story is that it took
seventeen years from when binary search was first published until a
*correct* version was published (attributed to Knuth; original
reference not found).
One of the nice things about Haskell is you can often *prove* that your
algorithm is correct. If in fact Haskell had been around back in the day
when binary search was first invented, I have little doubt that it would
have taken little more than a day for a correct implementation to have
been put together.
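"Prove" is strong; short of a real proof, one common approximation (my
sketch, not from the thread) is to state the specification as a property
and check it against a reference implementation:

```haskell
import Data.List (sort)

quicksort :: Ord a => [a] -> [a]
quicksort [] = []
quicksort (p:xs) = quicksort [x | x <- xs, x <= p] ++ [p]
                   ++ quicksort [x | x <- xs, x > p]

-- The specification: quicksort agrees with the library's reference sort.
-- With the QuickCheck package this property could be checked over random
-- inputs; here it is just a plain predicate.
prop_matchesReference :: [Int] -> Bool
prop_matchesReference xs = quicksort xs == sort xs
```

A property like this is also the natural starting point for an
equational-reasoning proof over the two defining equations.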
--Chris
--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg