"Matthias Walter" <xa...@xammy.homelinux.net> wrote in message news:mailman.1065.1292557052.21107.digitalmar...@puremagic.com...
On 12/16/2010 09:36 PM, Craig Black wrote:
It was brought to my attention that the quick sort has a very bad
worst case, so I implemented a simple fix for it.  Now the worst case
(completely ordered) is the best case, and it only slows down the
general case by a small percentage.  I thought to myself, "it can't be
this easy to fix quick sort".  Does anyone see a flaw in this simple
fix?  Performs much better than Phobos in completely random and
completely sorted data.  Perhaps there is another case where it
doesn't do as well?

Yes, there is a "flaw": there are still arrays where the pivot element
ends up being one of the largest or one of the smallest elements in
*every* call. This means that each step splits an array of length n not
into two parts of roughly n/2 and n/2, but into parts of size O(1) and
n - O(1). This implies a running time of O(n^2) (in contrast to O(n log
n)), which is obviously bad.

I don't know how std.algorithm.sort works, but the C++ STL uses an
introspective sort, which is a quick-sort variant like yours, except
that it monitors whether the above worst case is occurring (e.g. by
tracking the recursion depth) and switches to a heap-sort in that
case. [1]

Matthias

[1] http://en.wikipedia.org/wiki/Introsort

Thanks for the advice! I have been looking around the internet, and introsort does seem to be the best option, but I haven't found any free C/C++ code for it.

-Craig
