Github user suyanNone commented on the pull request:

    https://github.com/apache/spark/pull/4887#issuecomment-147957284
  
    @andrewor14 
    We have three ways to release unroll memory, covering three situations (sketched in the code after this list):
     1. Unroll succeeds. We expect to cache this block in `tryToPut`. We do not release and re-acquire memory from the MemoryManager, in order to avoid race conditions where another component steals the memory that we're trying to transfer (SPARK-4777).
     2. Unroll fails for a MEMORY_AND_DISK block. We should release the memory after putting this block into the diskStore, so that it can be re-acquired for other purposes (SPARK-6157).
     3. Unroll fails for a MEMORY_ONLY block. We do not release the memory until the task finishes.
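    
    Roughly, the dispatch over these three paths could look like the following self-contained sketch (all names here, e.g. `onUnrollFinished`, are hypothetical stand-ins, not the real MemoryStore internals):
    
    ```scala
    import scala.collection.mutable
    
    sealed trait UnrollResult
    case class UnrollSuccess(values: Array[Any]) extends UnrollResult
    case class UnrollFailure(remaining: Iterator[Any]) extends UnrollResult
    
    class UnrollReleaseSketch {
      // taskId -> bytes currently reserved for unrolling
      private val unrollMemoryMap = mutable.Map[Long, Long]().withDefaultValue(0L)
    
      def onUnrollFinished(taskId: Long, useDisk: Boolean,
                           result: UnrollResult, reservedBytes: Long): Unit = result match {
        case UnrollSuccess(values) =>
          // 1. Unroll succeeded: keep the reservation and transfer it to the
          //    cached block, so no other component can steal the memory in
          //    between (SPARK-4777).
          tryToPutSketch(values, reservedBytes)
        case UnrollFailure(rest) if useDisk =>
          // 2. MEMORY_AND_DISK: spill to disk first, then release the unroll
          //    memory so it can be re-acquired for other purposes (SPARK-6157).
          writeToDiskSketch(rest)
          unrollMemoryMap(taskId) -= reservedBytes
        case UnrollFailure(_) =>
          // 3. MEMORY_ONLY: keep holding the memory until the task finishes.
          ()
      }
    
      private def tryToPutSketch(values: Array[Any], bytes: Long): Unit = ()
      private def writeToDiskSketch(rest: Iterator[Any]): Unit = ()
    }
    ```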
    
    Accordingly, we may need to keep holding the reserved unroll memory after unrolling a block. `pendingUnrollMemory` was designed only for successfully unrolled blocks, but in situation 2 we also need to hold that memory. One option is to pass a parameter to `unrollSafely` telling it that the block's storage level includes disk, so we can keep holding the memory after a failed unroll of a MEMORY_AND_DISK block and release it once we have put the block into the diskStore (see the sketch below).
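    
    A hypothetical shape for that parameter (not the actual Spark signature):
    
    ```scala
    object UnrollSafelySketch {
      // keepMemoryOnFailure would be true for MEMORY_AND_DISK blocks, telling
      // the method to keep the reservation on a failed unroll so it can be
      // released later, after the disk write.
      def unrollSafely(
          blockId: String,
          values: Iterator[Any],
          keepMemoryOnFailure: Boolean
      ): Either[Array[Any], Iterator[Any]] = {
        // ... unroll loop elided; on failure, release the reservation only
        // when keepMemoryOnFailure is false ...
        Right(values)
      }
    }
    ```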
    
    The other solution is to not add a parameter to `unrollSafely`: for each task unrolling a block, first acquire memory into `pendingUnrollMemoryMap`, release it from there for situations 1 and 2, and move it into `unrollMemoryMap` for situation 3 (see the sketch below).
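    
    A minimal sketch of that bookkeeping, assuming per-task byte counts in the two maps (the helper names are mine):
    
    ```scala
    import scala.collection.mutable
    
    class PendingUnrollSketch {
      private val pendingUnrollMemoryMap = mutable.Map[Long, Long]().withDefaultValue(0L)
      private val unrollMemoryMap = mutable.Map[Long, Long]().withDefaultValue(0L)
    
      // Every task first acquires unroll memory into the pending map.
      def reserve(taskId: Long, bytes: Long): Unit =
        pendingUnrollMemoryMap(taskId) += bytes
    
      // Situations 1 and 2: the pending reservation goes away (transferred to
      // the cached block on success, or freed after the disk write).
      def releasePending(taskId: Long, bytes: Long): Unit =
        pendingUnrollMemoryMap(taskId) -= bytes
    
      // Situation 3: a failed MEMORY_ONLY unroll keeps its memory for the
      // rest of the task, so the amount moves into unrollMemoryMap.
      def keepUntilTaskEnd(taskId: Long, bytes: Long): Unit = {
        pendingUnrollMemoryMap(taskId) -= bytes
        unrollMemoryMap(taskId) += bytes
      }
    
      // At task end, everything still tracked for this task is released.
      def releaseAllForTask(taskId: Long): Unit = {
        pendingUnrollMemoryMap.remove(taskId)
        unrollMemoryMap.remove(taskId)
      }
    }
    ```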
    
    The committed version adopts the second solution. Is there a more suitable one? I look forward to your suggestions.
    