GitHub user suyanNone opened a pull request:

    https://github.com/apache/spark/pull/4887

    Unroll unsuccessful memory_and_disk level block should release reserved ...

    Current behavior when caching a MEMORY_AND_DISK level block:
    1. We try to put the block in memory, but unrolling fails; the unroll
    memory stays reserved because we only get back an iterator over the
    partially unrolled array.
    2. The block is then put to disk.
    3. Later, get(blockId) returns the value and we take an iterator from
    it; that iterator no longer has anything to do with the unroll array.
    So the reserved unroll memory should be released once the disk put
    succeeds, instead of only being released when the task ends. A sketch
    of this pattern follows below.
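
    The following is a minimal standalone sketch of the pattern described in
    steps 1-3. It does not use Spark's actual MemoryStore API; all names here
    (reserveUnrollMemory, releaseUnrollMemory, unrollSafely, putToDisk) are
    hypothetical stand-ins for illustration only.

        // Standalone sketch, not Spark's real MemoryStore: unroll a block
        // within a memory budget; if it does not fit, hand back an iterator,
        // write it to disk, and release the reserved unroll memory right away.
        object UnrollSketch {
          // Hypothetical bookkeeping for unroll memory reserved by this task.
          var reservedUnrollBytes: Long = 0L
          val unrollBudgetBytes: Long = 1024L

          def reserveUnrollMemory(bytes: Long): Unit = { reservedUnrollBytes += bytes }
          def releaseUnrollMemory(): Unit = { reservedUnrollBytes = 0L }

          // Left = fully unrolled array (fits in memory),
          // Right = iterator over the values buffered so far plus the rest.
          def unrollSafely(values: Iterator[String]): Either[Array[String], Iterator[String]] = {
            val buffer = scala.collection.mutable.ArrayBuffer.empty[String]
            var used = 0L
            while (values.hasNext) {
              val v = values.next()
              used += v.length
              reserveUnrollMemory(v.length)
              buffer += v
              if (used > unrollBudgetBytes) {
                // Does not fit: return everything seen so far as an iterator.
                return Right(buffer.iterator ++ values)
              }
            }
            Left(buffer.toArray)
          }

          // Stand-in for writing the values out to the disk store.
          def putToDisk(values: Iterator[String]): Unit = values.foreach(_ => ())

          def cacheMemoryAndDisk(values: Iterator[String]): Unit = {
            unrollSafely(values) match {
              case Left(_)         => ()            // fits in memory, keep the array
              case Right(iterator) =>
                putToDisk(iterator)                 // step 2: fall back to disk
                releaseUnrollMemory()               // step 3: the proposed fix,
                                                    // release right after the disk put
            }
          }
        }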
    
    Also, someone has already opened a pull request for reading a
    MEMORY_AND_DISK level block back from disk: when re-caching it in
    memory, we should use file.length to check whether the block fits in
    the memory store, instead of simply allocating a buffer of file.length
    bytes, which may lead to an OOM. A sketch of that check follows below.
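
    For that second point, a hedged sketch (again with hypothetical names,
    not Spark's DiskStore/MemoryStore API) of comparing the file length
    against the available memory before allocating the buffer:

        object ReCacheSketch {
          import java.io.File
          import java.nio.ByteBuffer
          import java.nio.file.Files

          // Only materialize the on-disk block in memory if its length fits
          // in the space the memory store has free; otherwise keep serving
          // reads from disk instead of allocating a file-sized buffer.
          def maybeCacheInMemory(file: File, freeMemoryBytes: Long): Option[ByteBuffer] =
            if (file.length() <= freeMemoryBytes)
              Some(ByteBuffer.wrap(Files.readAllBytes(file.toPath)))
            else
              None
        }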

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/suyanNone/spark unroll-memory_and_disk

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/4887.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #4887
    
----
commit 20698e00cb1be2b50ef39afae7d9ff5d3a4962d3
Author: hushan[胡珊] <hus...@xiaomi.com>
Date:   2015-03-04T07:08:40Z

    Unroll unsuccessful memory_and_disk level block should release reserved 
unroll memory after put success in disk

----


