[ 
https://issues.apache.org/jira/browse/SPARK-18800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15766039#comment-15766039
 ] 

Liang-Chi Hsieh commented on SPARK-18800:
-----------------------------------------

Note: this jira is motivated by the issue reported on the dev mailing list at 
http://apache-spark-developers-list.1001551.n3.nabble.com/java-lang-IllegalStateException-There-is-no-space-for-new-record-tc20108.html

As we don't have a repro, we can't pinpoint the exact root cause. 
But I fixed the assert in UnsafeKVExternalSorter, so in the future we can 
easily tell whether this array sizing is the problem.

> Correct the assert in UnsafeKVExternalSorter which ensures array size
> ---------------------------------------------------------------------
>
>                 Key: SPARK-18800
>                 URL: https://issues.apache.org/jira/browse/SPARK-18800
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>            Reporter: Liang-Chi Hsieh
>
> UnsafeKVExternalSorter uses UnsafeInMemorySorter to sort the records of a 
> BytesToBytesMap if it is given one.
> Currently we use the number of keys in the BytesToBytesMap to determine 
> whether the array used for sorting is large enough. We have an assert that 
> ensures the size of the array is enough: map.numKeys() <= map.getArray().size() / 2.
> However, each record in the map takes two entries in the array: one is the 
> record pointer, the other is the key prefix. So the correct assert should be 
> map.numKeys() * 2 <= map.getArray().size() / 2.
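A minimal sketch of the reasoning above (hypothetical class and method names, not the actual Spark code): each record consumes two entries in the pointer array, a record pointer and a key prefix, and only half of the array is usable for these pairs, so the capacity check must compare numKeys * 2 against size / 2 rather than numKeys against size / 2.

```java
// Hypothetical illustration of the SPARK-18800 assert fix; the real
// check lives in UnsafeKVExternalSorter / UnsafeInMemorySorter.
public class SortArrayCheck {
    // Original (incorrect) check: compares the record count against
    // half the array size, ignoring that each record needs TWO
    // entries (record pointer + key prefix).
    static boolean oldCheck(int numKeys, long arraySize) {
        return numKeys <= arraySize / 2;
    }

    // Corrected check from the issue description: account for two
    // array entries per record.
    static boolean newCheck(int numKeys, long arraySize) {
        return numKeys * 2L <= arraySize / 2;
    }

    public static void main(String[] args) {
        // 100 records in an array of 256 entries: 100 * 2 = 200 slots
        // are needed, but only 256 / 2 = 128 are usable. The old
        // check passes anyway; the corrected one rightly fails.
        System.out.println(oldCheck(100, 256));
        System.out.println(newCheck(100, 256));
    }
}
```

With the old assert, the sorter could proceed with an undersized array and only fail later when inserting records, which matches the "There is no space for new record" symptom from the mailing-list report.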



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
