On Tue, 11 May 2021 14:37:21 GMT, Jörn Horstmann 
<github.com+689138+jhorstm...@openjdk.org> wrote:

>> Sorting:
>> 
>> - adopt radix sort for sequential and parallel sorts on 
>> int/long/float/double arrays (almost random and length > 6K)
>> - fix tryMergeRuns() to better handle case when the last run is a single 
>> element
>> - minor javadoc and comment changes
>> 
>> Testing:
>> - add new data inputs in tests for sorting
>> - add min/max/infinity values to float/double testing
>> - add tests for radix sort
>
> src/java.base/share/classes/java/util/DualPivotQuicksort.java line 669:
> 
>> 667: 
>> 668:         for (int i = low; i < high; ++i) {
>> 669:             count1[ a[i]         & 0xFF]--;
> 
> Not a reviewer, but having recently implemented a radix sort myself, I'm 
> wondering: what is the logic or benefit of decrementing and counting 
> backwards here?
> 
> One thing I did differently, and I'm not fully sure it is an optimization, 
> is remembering the last bucket for each of the 4 counts. Checking whether 
> the data is already sorted by that digit can then be done by checking 
> `count[last_bucket] == size`, which avoids the first loop in `passLevel`. 
> Again, I'm not sure whether it is actually faster; maybe the two separate 
> simple loops, as here, are better.

@jhorstmann By counting backwards (decrementing) we end up comparing with 0 
in a loop, and comparing with 0 may be faster than comparing with another 
value. In general, whether you move backward or forward is a matter of 
preference.
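
For illustration only, here is a minimal sketch of one pass over the low 
byte with decremented counts (not the code from this patch; `buf` is an 
assumed scratch buffer at least as long as `a`):

```java
static void lowBytePass(int[] a, int[] buf, int low, int high) {
    int[] count = new int[256];

    // Counting pass: decrement, so the counts are stored as negative values.
    for (int i = low; i < high; ++i) {
        count[a[i] & 0xFF]--;
    }

    // Turn the negative counts into bucket start indices by accumulating
    // from the top of the table; the downward loop compares its index
    // with 0 instead of with 256.
    count[255] += high;
    for (int i = 255; i > 0; --i) {
        count[i - 1] += count[i];
    }

    // Placement pass: each element goes to the next free slot of its bucket.
    for (int i = low; i < high; ++i) {
        buf[count[a[i] & 0xFF]++] = a[i];
    }
}
```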

Keeping last_bucket and using the check `count[last_bucket] == size` sounds 
good, but we would need to run benchmarks and compare. I don't think we will 
see a big difference, because the count array is small (only 256 elements), 
whereas we scan the whole array, which has at least 6K elements. The corner 
case where last_bucket has the maximum effect is an array in which all 
elements are equal (each of the 4 count arrays would then have only one 
non-zero element), but such an array is already caught by the tryMergeRuns() 
method.
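
For reference, a hypothetical sketch of the suggested shortcut (illustrative 
names only, not code from this patch), using plain incremented counts as in 
the comment above:

```java
// Returns true if all elements in a[low..high) share the same low byte;
// count is filled in by the same loop either way.
static boolean lowByteIsConstant(int[] a, int[] count, int low, int high) {
    int lastBucket = 0;

    // Remember the last bucket touched while counting.
    for (int i = low; i < high; ++i) {
        lastBucket = a[i] & 0xFF;
        count[lastBucket]++;
    }

    // This single check replaces a separate scan over the 256-entry table.
    return count[lastBucket] == high - low;
}
```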

-------------

PR: https://git.openjdk.java.net/jdk/pull/3938
