SuYan created SPARK-6157:
----------------------------

             Summary: MEMORY_AND_DISK block that fails to unroll should 
release its reserved unroll memory after a successful put to disk
                 Key: SPARK-6157
                 URL: https://issues.apache.org/jira/browse/SPARK-6157
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 1.2.1
            Reporter: SuYan


Current behavior when caching a block at the MEMORY_AND_DISK storage level:
1. Try to put the block in memory; unrolling fails, so the unroll memory 
stays reserved because we only got back an iterator over the partially 
unrolled array.
2. Put the block to disk instead.
3. Read the value back via get(blockId) and iterate over it; nothing more is 
done with the unrolled array. The reserved unroll memory should be released 
at this point, but instead it is held until the task ends (a sketch of the 
proposed fix follows the list).
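
Below is a minimal, self-contained sketch of where the release should happen. 
It uses a toy MemoryStore/DiskStore model; the class and method names 
(unrollSafely, releaseUnrollMemory, putIterator) are simplified stand-ins for 
the Spark 1.2 internals, not the real API:

{code:scala}
// Toy model of the MEMORY_AND_DISK put path; these classes are simplified
// stand-ins for the Spark 1.2 MemoryStore/DiskStore, not the real API.
object UnrollReleaseSketch {

  class MemoryStore(maxUnrollBytes: Long) {
    private var reservedUnrollBytes = 0L

    // Unroll (size, value) records into an array within the unroll budget.
    // On failure the reserved unroll memory stays allocated and the caller
    // gets back an iterator over the unrolled values plus the remainder.
    def unrollSafely(values: Iterator[(Long, Any)]): Either[Iterator[Any], Array[Any]] = {
      val unrolled = scala.collection.mutable.ArrayBuffer[Any]()
      var used = 0L
      while (values.hasNext && used <= maxUnrollBytes) {
        val (size, v) = values.next()
        used += size
        unrolled += v
      }
      reservedUnrollBytes = math.min(used, maxUnrollBytes)
      if (used <= maxUnrollBytes) Right(unrolled.toArray)
      else Left(unrolled.iterator ++ values.map(_._2))
    }

    def releaseUnrollMemory(): Unit = {
      println(s"released $reservedUnrollBytes bytes of unroll memory")
      reservedUnrollBytes = 0L
    }
  }

  class DiskStore {
    def putIterator(blockId: String, values: Iterator[Any]): Unit =
      println(s"$blockId: wrote ${values.size} records to disk")
  }

  def putMemoryAndDisk(blockId: String, values: Iterator[(Long, Any)],
                       mem: MemoryStore, disk: DiskStore): Unit = {
    mem.unrollSafely(values) match {
      case Right(array) =>
        println(s"$blockId: cached ${array.length} records in memory")
      case Left(leftover) =>
        // Unroll failed: fall back to the disk store.
        disk.putIterator(blockId, leftover)
        // Proposed fix: the block is now safely on disk, so release the
        // unroll memory reserved above instead of holding it to task end.
        mem.releaseUnrollMemory()
    }
  }

  def main(args: Array[String]): Unit = {
    val records = (1 to 10).iterator.map(i => (100L, s"record-$i"))
    putMemoryAndDisk("rdd_0_0", records, new MemoryStore(300L), new DiskStore)
  }
}
{code}

The only point being made is where the release happens: as soon as the disk 
put succeeds, not when the task finishes.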

Also, somebody has already opened a pull request for the read path of a 
MEMORY_AND_DISK level block: when caching the block back into memory from 
disk, we should use file.length to check whether the block fits in the 
memory store, instead of simply allocating a buffer of file.length bytes, 
which may lead to OOM.
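
A rough sketch of that check, assuming a hypothetical freeMemoryBytes value 
that stands in for whatever free-space accounting the memory store exposes:

{code:scala}
// Sketch of the proposed check when re-caching a disk-resident block into
// memory; `freeMemoryBytes` is a stand-in for the memory store's free space.
import java.io.{File, FileInputStream}
import java.nio.ByteBuffer

object DiskToMemorySketch {
  def maybeCacheFromDisk(file: File, freeMemoryBytes: Long): Option[ByteBuffer] = {
    val length = file.length()
    if (length > freeMemoryBytes || length > Int.MaxValue) {
      // Too big for the memory store: keep serving the block from disk
      // rather than allocating a file.length buffer that could OOM.
      None
    } else {
      val channel = new FileInputStream(file).getChannel
      try {
        val buf = ByteBuffer.allocate(length.toInt)
        while (buf.hasRemaining && channel.read(buf) >= 0) {}
        buf.flip()
        Some(buf)
      } finally {
        channel.close()
      }
    }
  }
}
{code}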



