Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20561#discussion_r167387192
  
    --- Diff: sql/core/src/main/java/org/apache/spark/sql/execution/UnsafeKVExternalSorter.java ---
    @@ -98,10 +99,20 @@ public UnsafeKVExternalSorter(
             numElementsForSpillThreshold,
             canUseRadixSort);
         } else {
    -      // The array will be used to do in-place sort, which requires half of the space to be empty.
    -      // Note: each record in the map takes two entries in the array, one is the record pointer,
    -      // another is the key prefix.
    -      assert(map.numKeys() * 2 <= map.getArray().size() / 2);
    +      // `BytesToBytesMap`'s pointer array is only guaranteed to hold all the distinct keys, but
    +      // `UnsafeInMemorySorter`'s pointer array needs to hold all the entries. Since
    +      // `BytesToBytesMap` can have duplicate keys, here we need a check to make sure the pointer
    +      // array can hold all the entries in `BytesToBytesMap`.
    +      final LongArray pointArray;
    +      // The pointer array will be used to do an in-place sort, which requires half of the space
    +      // to be empty. Note: each record in the map takes two entries in the pointer array, one is
    +      // the record pointer, the other is the key prefix.
    +      if (map.numValues() > map.getArray().size() / 4) {
    +        pointArray = map.allocateArray(map.numValues() * 4);
    --- End diff --
    
    `map.allocateArray` will trigger other consumers to spill if memory is not enough. If the allocation still fails, there is nothing we can do; just let the execution fail.


---
