[ 
https://issues.apache.org/jira/browse/SPARK-20801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jin xing updated SPARK-20801:
-----------------------------
    Issue Type: Sub-task  (was: Improvement)
        Parent: SPARK-19659

> Store accurate size of blocks in MapStatus when it's above threshold.
> ---------------------------------------------------------------------
>
>                 Key: SPARK-20801
>                 URL: https://issues.apache.org/jira/browse/SPARK-20801
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Spark Core
>    Affects Versions: 2.1.1
>            Reporter: jin xing
>
> Currently, when the number of reducers is above 2000, HighlyCompressedMapStatus is 
> used to store the sizes of blocks. In HighlyCompressedMapStatus, only the average size 
> is stored for non-empty blocks, which is not good for memory control when we 
> shuffle blocks. It makes sense to store the accurate size of a block when it's 
> above a threshold.
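As an illustration of the problem (a minimal sketch, not Spark's actual implementation: the block sizes and threshold here are hypothetical), averaging over non-empty blocks can drastically underreport a skewed block's size, so a reducer that budgets memory from the reported size under-reserves:

```python
# Hypothetical shuffle output: 2000 map output blocks, one heavily skewed.
block_sizes = [100] * 1999 + [1_000_000]

# HighlyCompressedMapStatus-style compression: keep only the average
# size over the non-empty blocks and report that for every block.
non_empty = [s for s in block_sizes if s > 0]
avg = sum(non_empty) // len(non_empty)

reported = avg                 # size reported for the skewed block
actual = max(block_sizes)      # its real size

# The skewed block is reported as ~600 bytes but is actually 1 MB,
# so memory reserved from the reported size is far too small.
print(f"reported={reported}, actual={actual}")
```

Storing the accurate size for blocks above a threshold (as this issue proposes) would let the reducer see the 1 MB figure for the skewed block while still compressing the many small ones.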



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
