I've been playing with sorting algorithms a lot in recent months, and I want to implement a *working* stable sort for Phobos, whose current stable sort is broken. I have a working library and I'm still adding to it. It's much more complex than a simple merge sort, currently over 300 lines of code.

- It's a natural merge sort, which is faster on partially sorted lists, and adds little overhead for mostly random lists.
- It uses O((log n)²) additional space for merging.
- I wrote it to sort random-access ranges *without* slicing, but I think the exclusion of slicing makes it slower. I'm writing a separate implementation which uses slicing, and I'll keep it if it proves much faster.
- To avoid multiple allocations, the user can allocate their own temporary memory and pass it to the sort function.
- I decided against using pointers. While I could use them to write highly optimized code for arrays, pointers can't be used in @safe code and don't work very well in CTFE at the moment.
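To make the "natural" part concrete, here is a minimal sketch (in Python rather than D, with hypothetical names) of the idea: scan the input for pre-sorted runs, then stably merge adjacent runs until one remains. A caller-supplied `temp` buffer stands in for the user-allocated temporary memory mentioned above.

```python
# Sketch of a natural merge sort: detect non-descending runs, then
# merge adjacent runs pass by pass. Not the actual Phobos code.

def natural_merge_sort(a, temp=None):
    n = len(a)
    if n < 2:
        return a
    if temp is None:
        temp = [None] * n  # caller may pass a buffer to avoid reallocating

    # 1. Detect maximal non-descending runs.
    runs = []
    start = 0
    for i in range(1, n):
        if a[i] < a[i - 1]:
            runs.append((start, i))
            start = i
    runs.append((start, n))

    # 2. Repeatedly merge adjacent run pairs (stable: ties take the left).
    while len(runs) > 1:
        merged = []
        for j in range(0, len(runs) - 1, 2):
            lo, mid = runs[j]
            _, hi = runs[j + 1]
            temp[lo:hi] = a[lo:hi]
            i, k = lo, mid
            for out in range(lo, hi):
                if k >= hi or (i < mid and temp[i] <= temp[k]):
                    a[out] = temp[i]; i += 1
                else:
                    a[out] = temp[k]; k += 1
            merged.append((lo, hi))
        if len(runs) % 2:
            merged.append(runs[-1])
        runs = merged
    return a
```

An already-sorted input is detected as a single run, so the merge loop never executes; a mostly random input degrades gracefully to an ordinary bottom-up merge sort.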

Is it worth keeping the implementation *without* slicing? Many functions in Phobos do require slicing, including the unstable sort, and I think most random-access ranges do or could support slicing.

What would you prefer in terms of memory usage vs. performance? n/2 elements of extra space is optimal for performance, in-place (O(1) space) requires zero allocations but is slower, and O((log n)²) space provides a good balance.
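For context on why n/2 extra space is the performance sweet spot: a merge step only needs to copy out one half, because the other half can be merged in place over the gap it leaves behind. A sketch (Python, not D):

```python
# Classic half-space merge of a[lo:mid] and a[mid:hi]:
# only the left half is copied out, and the merge writes back
# into the original array. Illustrative, not the Phobos code.

def merge_half_space(a, lo, mid, hi):
    left = a[lo:mid]          # only ~n/2 elements copied
    i, k, out = 0, mid, lo
    while i < len(left):
        if k < hi and a[k] < left[i]:   # strict < keeps the sort stable
            a[out] = a[k]; k += 1
        else:
            a[out] = left[i]; i += 1
        out += 1
    # any remaining right-half elements are already in their final place
```

The write index can never catch up to the unread right-half elements, so no further buffering is needed; dropping below n/2 space is what forces the slower in-place rotation tricks.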

Should I implement concurrency? Merge sort is very easy to parallelize, and being in-place or natural doesn't change this fact.

Should we take a different path and go for a basic merge sort, or even Timsort? I've considered writing a Timsort, though that seems like a daunting task to me; for reference, here's an implementation written in C: https://github.com/swenson/sort
