Charles Reiss created SPARK-7214:
------------------------------------

             Summary: Unrolling never evicts blocks when MemoryStore is nearly full
                 Key: SPARK-7214
                 URL: https://issues.apache.org/jira/browse/SPARK-7214
             Project: Spark
          Issue Type: Bug
          Components: Block Manager
            Reporter: Charles Reiss
            Priority: Minor


When less than spark.storage.unrollMemoryThreshold (default 1 MB) of free memory is left in the MemoryStore, any new block computed through unrollSafely (e.g. any cached RDD partition) will always fail to unroll, even when old blocks could be dropped to make room for it.
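
For context, a minimal Scala sketch of how the behaviour could be observed with the RDD caching API. The object name, data sizes, and master setting below are illustrative assumptions only, and would need tuning so that caching the first RDD leaves less than spark.storage.unrollMemoryThreshold free in the MemoryStore.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object UnrollEvictionRepro {
  def main(args: Array[String]): Unit = {
    // Illustrative local setup; real sizing depends on executor memory.
    val conf = new SparkConf().setAppName("SPARK-7214-repro").setMaster("local[1]")
    val sc = new SparkContext(conf)

    // Fill the MemoryStore almost completely with cached blocks.
    val filler = sc.parallelize(1 to 1000000, 8).persist(StorageLevel.MEMORY_ONLY)
    filler.count()

    // With less than spark.storage.unrollMemoryThreshold (1 MB by default)
    // of free memory remaining, this new cached RDD fails to unroll even
    // though dropping some 'filler' blocks would make room for it.
    val victim = sc.parallelize(1 to 1000, 1).persist(StorageLevel.MEMORY_ONLY)
    victim.count()

    // Expected: 'victim' is stored after evicting 'filler' blocks;
    // observed: unrolling fails and 'victim' is not cached in memory.
    println(sc.getExecutorMemoryStatus)
    sc.stop()
  }
}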


