On Wednesday 15 December 2010 22:44:39 Sean Kelly wrote:
> Nick Voronin Wrote:
> > On Thu, 16 Dec 2010 04:52:53 +0300, Craig Black <craigbla...@cox.net> wrote:
> > 
> > And here is why. Quicksort is quite famous for its O(n^2) worst case
> > (on sorted data). Your straightforward, simple (and less generic, I
> > must say) implementation shines on random data, but performs terribly
> > on ordered data. Phobos' sort isn't really plain quicksort, so it is
> > slower for random data (yet still of the same complexity as your code
> > in the best case), but it is pretty fast for ordered data.
> 
> A tweaked version of the Tango sort routine is slower for random data but
> roughly 25% faster for ordered data.  The straightforward quicksort is
> about 30 times slower though.  All in all, the Phobos sort routine seems
> to do quite well.  I'd like to see a test with other types of contrived
> worst-case data though.
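
To make the quoted point concrete, here is a minimal sketch of the kind of
naive quicksort being discussed. The function name and the last-element pivot
choice are just illustrative, not Craig's actual code. On already sorted input
every partition ends up maximally unbalanced, which is where the O(n^2)
behavior comes from.

import std.algorithm : swap;

// Textbook Lomuto-partition quicksort that always picks the last element
// as the pivot.  On already sorted input the pivot is the maximum, so each
// partition puts all remaining elements on one side; the recursion depth
// becomes n and the total work O(n^2).
void naiveQuickSort(int[] a)
{
    if (a.length < 2) return;
    immutable pivot = a[$ - 1];
    size_t store = 0;
    foreach (i; 0 .. a.length - 1)
    {
        if (a[i] < pivot)
        {
            swap(a[i], a[store]);
            ++store;
        }
    }
    swap(a[store], a[$ - 1]);
    naiveQuickSort(a[0 .. store]);       // elements left of the pivot
    naiveQuickSort(a[store + 1 .. $]);   // elements right of the pivot
}

On a few tens of thousands of already sorted ints this should take orders of
magnitude longer than std.algorithm.sort, and since it recurses n levels deep
on that input, a large enough sorted array can even overflow the stack.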

It would be nice to get a fairly extensive list of types to sort, with a
variety of values and numbers of values for each type, and to set up an
extensive suite of benchmarking tests. Heck, such a set of types and
collections of those types could be a useful benchmarking tool for a variety
of algorithms. Then we'd have a good base to build from and compare against.
If we're going to tweak algorithms for efficiency, we're going to want
thorough tests so that we can be sure that any tweaks are actually overall
improvements. It's easy to find cases where a particular algorithm does
poorly. It can even be fairly easy to tweak an algorithm to improve that
particular case. But it's not so easy to be sure that the tweak is actually
an overall improvement.
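
As a very rough sketch of what such a harness could look like (the bench
helper, the input shapes, and the sizes below are just made up for
illustration, and the std.datetime.stopwatch module path is from a newer
Phobos than the one being discussed in this thread):

import std.algorithm : sort;
import std.array : array;
import std.datetime.stopwatch : AutoStart, StopWatch;
import std.random : uniform;
import std.range : iota, retro;
import std.stdio : writefln;

// Times std.algorithm.sort on one input array and reports the wall time.
void bench(T)(string name, T[] data)
{
    auto sw = StopWatch(AutoStart.yes);
    sort(data);
    sw.stop();
    writefln("%-22s %8d usecs", name, sw.peek.total!"usecs");
}

void main()
{
    enum n = 200_000;

    // A handful of contrived input shapes; a real suite would cover more
    // element types (strings, structs with custom opCmp, ...) and shapes.
    int[] random;
    foreach (i; 0 .. n) random ~= uniform(0, n);
    auto sorted   = iota(0, n).array();
    auto reversed = iota(0, n).retro.array();
    double[] fewDistinct;
    foreach (i; 0 .. n) fewDistinct ~= uniform(0.0, 4.0);  // many duplicate keys

    bench("random ints", random);
    bench("sorted ints", sorted);
    bench("reverse-sorted ints", reversed);
    bench("few distinct doubles", fewDistinct);
}

Swapping a candidate tweak into the sort call and rerunning the same shapes
would at least make it obvious when a fix for one pathological case costs
something elsewhere.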

- Jonathan M Davis
