On Monday, 23 June 2014 at 16:33:13 UTC, Chris Cain wrote:
It's not really about the time complexity but the absolute time it must take. But I showed an example demonstrating that any stable sort must do extra work:

[2,2,2,2,1]
[...]

I'm sorry if I'm about to say some stupid things, but I'm still not sure about this.

You said: "you can prove that there exists some unstable sort that is always faster than a stable sort"

First of all: a stable sort is also an unstable sort (stability is just an extra guarantee), so I think you meant: "you can prove that there exists some unstable sort - one that is not also a stable sort - that is always faster than a stable sort", didn't you?

With your example you're showing that it is faster in this specific case; you're not proving that it is *always* faster (btw, I think you meant equal or faster).

In my opinion you're just proving that, for *swap-based algorithms*, the number of swaps needed for unstable sorting is always *less than or equal to* the number of swaps needed for stable sorting.
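To make that concrete, here is a small sketch of my own (not code from the thread; the `tag` helper is just for illustration) using the quoted input [2,2,2,2,1]. Tagging each value with its original index makes it visible that one swap is enough to sort the values, but the result is not stable, while a stable result moves every element:

```python
def tag(xs):
    """Pair each value with its original index so stability is visible."""
    return [(v, i) for i, v in enumerate(xs)]

data = [2, 2, 2, 2, 1]

# Unstable: a single swap of positions 0 and 4 sorts the values,
# but the 2 that started at index 0 ends up last among the 2s.
unstable = tag(data)
unstable[0], unstable[4] = unstable[4], unstable[0]
print(unstable)  # [(1, 4), (2, 1), (2, 2), (2, 3), (2, 0)]

# Stable: the equal 2s must keep their original order, so all five
# elements change position (a rotation, not one swap).
stable = sorted(tag(data), key=lambda p: p[0])
print(stable)    # [(1, 4), (2, 0), (2, 1), (2, 2), (2, 3)]
```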

Anyway, some sorting algorithms don't use comparisons between elements, so counting swaps doesn't make any sense. Radix sort, for example, never compares two elements; it "directly" writes the output array, so I don't think the number of swaps is the right metric. Counting sort doesn't do any comparisons either, and it doesn't swap elements.
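A minimal counting-sort-style sketch (again my own illustration, under the assumption that the keys are small non-negative integers) shows what I mean: no element is ever compared with another element and nothing is swapped, the output array is simply written out, and written this way it is even stable for free:

```python
def counting_sort(xs, max_key):
    """Sort integer keys in [0, max_key] with no comparisons and no swaps."""
    buckets = [[] for _ in range(max_key + 1)]
    for x in xs:          # one pass: drop each element into its key's bucket
        buckets[x].append(x)
    out = []
    for b in buckets:     # one pass: write the output directly, in key order
        out.extend(b)
    return out

print(counting_sort([2, 2, 2, 2, 1], max_key=2))  # [1, 2, 2, 2, 2]
```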

Moreover, you're assuming it should work "in-place". Usually we need a sorted copy, so you can't just swap those elements; you need to copy the other elements too and then apply your swaps to the copy. If the elements are complex, you have to copy each element to the new array one by one.

Using your example, a "magical-stable-sort-without-swaps-planning-algorithm" could guess that we just need to prepend the last element before the first one: that is a stable sort. There's no swap at all, and we copy exactly n elements (which is faster than copying all n elements and then doing the swap).
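A sketch of that "magical planning" idea on the quoted input (my illustration; it only covers this one input, the planning step is the hand-waved part): if the planner already knows the answer is "move the last element to the front", a stable sorted *copy* is produced with exactly n element copies and zero swaps:

```python
data = [2, 2, 2, 2, 1]

# Copy n elements: the last one first, then the rest in their original order.
out = [data[-1]] + data[:-1]
print(out)  # [1, 2, 2, 2, 2] - stable: the equal 2s keep their relative order
```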
