On Monday, 23 June 2014 at 16:33:13 UTC, Chris Cain wrote:
It's not really about the time complexity but the absolute time it must take. But I showed an example demonstrating that any stable sort must do extra work:

[2,2,2,2,1]

An unstable sort may simply swap the first 2 and the 1, whereas a stable sort must, by definition, maintain the relative positioning of the 2s, which means it must move all of the 2s upwards. Think of a "magical" sort that knows exactly the perfect swaps to do in order to sort (that is, the "magical" sort's complexity is O(n)). In the case above, the "magical unstable sort" does 1 swap and the "magical stable sort" must do 4 moves (if it does fewer, it's not stable). This shows that a stable sort can at best match an unstable sort, and an unstable sort can sometimes do better than a stable one.
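For concreteness, here's a rough sketch of those element movements (Python purely for illustration; the (key, original index) pairs and the move counts are my own framing, not any particular library's algorithm):

# Elements are (key, original_index) pairs so the relative order of the
# equal keys stays visible after sorting.
data = [(2, 0), (2, 1), (2, 2), (2, 3), (1, 4)]

# "Magical" unstable sort: one swap yields key order, but the 2 that was
# first now sits last among the equal keys.
unstable = data[:]
unstable[0], unstable[4] = unstable[4], unstable[0]   # 1 swap
print(unstable)   # [(1, 4), (2, 1), (2, 2), (2, 3), (2, 0)] -- sorted, not stable

# Stable result: the 1 moves to the front and every 2 shifts one slot to
# the right, preserving their relative order -- 4 element moves.
stable = data[:]
stable.insert(0, stable.pop())
print(stable)     # [(1, 4), (2, 0), (2, 1), (2, 2), (2, 3)]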

Or, to look at it another way, every "stable" sort satisfies the requirements of an "unstable" sort, but not every "unstable" sort satisfies the requirements of a "stable" sort. The minimum number of swaps/operations needed to produce a sorted array won't necessarily preserve the relative order of equal elements.

Nobody measures sort algorithms by the number of swaps.

Swapping two references is an essentially zero-cost operation; *comparing* two elements of an array is what matters, because a comparison involves a comparison function call, field resolution/reads, the actual comparison arithmetic, etc. That's why sort algorithms are always measured by the number of comparisons.
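A quick way to see this in practice is to count calls to the comparison function; here is a rough sketch (using Python's built-in sorted(); the counter and comparator are mine, just for illustration):

from functools import cmp_to_key

calls = 0

def compare(a, b):
    # Each call stands in for the expensive part: the function call itself,
    # any field reads, and the actual comparison arithmetic.
    global calls
    calls += 1
    return (a > b) - (a < b)

sorted([2, 2, 2, 2, 1], key=cmp_to_key(compare))
print(calls)   # the number of comparisons the sort performed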

And for this particular example you demonstrated above, [2,2,2,2,1], both versions of sort, stable and unstable, should involve all five elements in comparisons. If your unstable sort only compares the first and last elements, swaps them, and returns, it's broken.
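To illustrate why such a shortcut is broken, here is a hypothetical "sort" that only looks at the first and last elements (Python, names are mine); it happens to handle [2,2,2,2,1] but fails on almost anything else:

def first_last_only(a):
    # Compares (and possibly swaps) only the first and last elements.
    if a[-1] < a[0]:
        a[0], a[-1] = a[-1], a[0]
    return a

print(first_last_only([2, 2, 2, 2, 1]))   # [1, 2, 2, 2, 2] -- looks fine
print(first_last_only([3, 1, 2]))         # [2, 1, 3] -- clearly not sorted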
