Github user jerryshao commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21440#discussion_r203581903
  
    --- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala ---
    @@ -659,6 +659,11 @@ private[spark] class BlockManager(
        * Get block from remote block managers as serialized bytes.
        */
       def getRemoteBytes(blockId: BlockId): Option[ChunkedByteBuffer] = {
    +    // TODO if we change this method to return the ManagedBuffer, then getRemoteValues
    +    // could just use the inputStream on the temp file, rather than memory-mapping the file.
    +    // Until then, replication can cause the process to use too much memory and get killed
    +    // by the OS / cluster manager (not a java OOM, since it's a memory-mapped file) even though
    +    // we've read the data to disk.
    --- End diff ---
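
For context, here is a minimal standalone sketch of the two read paths this TODO contrasts, written against plain java.nio rather than Spark's actual ChunkedByteBuffer / ManagedBuffer types; the object and method names are hypothetical, not the PR's code:

    import java.io.{BufferedInputStream, InputStream}
    import java.nio.MappedByteBuffer
    import java.nio.channels.FileChannel
    import java.nio.file.{Files, Path}
    import java.nio.file.StandardOpenOption.READ

    object RemoteBlockReadPaths {
      // Current path: memory-map the temp file the fetched block was written to.
      // The map call itself is cheap, but every page the reader later touches
      // counts against the process's resident set size (RSS).
      def readMapped(tempFile: Path): MappedByteBuffer = {
        val channel = FileChannel.open(tempFile, READ)
        try channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size())
        finally channel.close()
      }

      // Proposed path: stream the temp file, so only a small fixed-size buffer
      // is resident at any moment, regardless of the block size.
      def readStreamed(tempFile: Path): InputStream =
        new BufferedInputStream(Files.newInputStream(tempFile))
    }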
    
> not a java OOM, since it's a memory-mapped file
    
I'm not sure why a memory-mapped file would cause too much memory usage. AFAIK memory mapping is a lazy, page-wise loading mechanism: the system only loads the accessed file segments into memory pages, not the whole file. So from my understanding, even a process with very little physical memory could map a super large file.
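
Both effects show up in a small standalone demo (assuming a large pre-existing file at the hypothetical path /tmp/big-block.dat): the map call succeeds immediately even with a tiny -Xmx, matching the lazy page-wise loading described above, but sequentially touching every byte faults every page in, so resident memory can grow toward the file size even though no Java heap is used, which is the situation the TODO describes.

    import java.nio.channels.FileChannel
    import java.nio.file.Paths
    import java.nio.file.StandardOpenOption.READ

    object MmapResidencyDemo {
      def main(args: Array[String]): Unit = {
        // Hypothetical large input file; FileChannel.map is limited to
        // Int.MaxValue bytes per mapping, so cap the mapped length.
        val channel = FileChannel.open(Paths.get("/tmp/big-block.dat"), READ)
        val mapped =
          try {
            val len = math.min(channel.size(), Int.MaxValue.toLong)
            // Lazy: this returns instantly even if len far exceeds -Xmx,
            // because no pages are loaded yet.
            channel.map(FileChannel.MapMode.READ_ONLY, 0, len)
          } finally channel.close()

        // Touching bytes faults the corresponding pages into this process's
        // resident set. A full sequential read, as replication does, can push
        // RSS toward the file size; that is what the OS or a cluster manager's
        // memory monitor sees, while the Java heap stays small (no Java OOM).
        var sum = 0L
        while (mapped.hasRemaining) sum += mapped.get()
        println(s"read ${mapped.position()} bytes, checksum $sum")
      }
    }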

