Github user squito commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21440#discussion_r191476566
  
    --- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala ---
    @@ -659,6 +659,11 @@ private[spark] class BlockManager(
        * Get block from remote block managers as serialized bytes.
        */
       def getRemoteBytes(blockId: BlockId): Option[ChunkedByteBuffer] = {
    +    // TODO if we change this method to return the ManagedBuffer, then getRemoteValues
    +    // could just use the inputStream on the temp file, rather than memory-mapping the file.
    +    // Until then, replication can cause the process to use too much memory and get killed
    +    // by the OS / cluster manager (not a java OOM, since it's a memory-mapped file) even though
    +    // we've read the data to disk.
    --- End diff --
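    
    To spell out the failure mode that comment describes: reading the fetched block back through a memory-mapped file never throws a Java OOM, because the mapping lives outside the JVM heap, but every page you touch counts against the process's resident set, which is exactly what the OS / cluster manager enforces limits on. Here is a standalone sketch of just that effect (the path is made up and nothing here is Spark code):
    
    ```scala
    import java.nio.channels.FileChannel
    import java.nio.file.{Paths, StandardOpenOption}
    
    object MmapResidentSet {
      def main(args: Array[String]): Unit = {
        val ch = FileChannel.open(Paths.get("/tmp/fetched-block.tmp"), StandardOpenOption.READ)
        // map() allocates no Java heap, so there is no OutOfMemoryError to catch...
        val buf = ch.map(FileChannel.MapMode.READ_ONLY, 0L, ch.size())
        var i = 0
        var sum = 0L
        while (i < buf.limit()) {
          // ...but each page we touch is faulted in and counts toward the
          // process RSS, the number a container manager (e.g. YARN) kills on.
          sum += buf.get(i)
          i += 4096
        }
        println(sum)
        ch.close()
      }
    }
    ```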
    
    btw this fix is such low-hanging fruit that I would try to do it immediately afterwards.  (I haven't filed a JIRA yet just because there are already so many defunct JIRAs related to this; I was going to wait till my changes got some traction.)
    
    I think it's OK to get it in like this first, as this makes the behavior for 2.01 GB basically the same as it was for 1.99 GB.
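    
    Roughly the shape of the follow-up I mean, as a sketch only: the `fetch` and `deserialize` hooks below stand in for `getRemoteBytes` / `SerializerManager` (their signatures are made up for illustration), and buffer release handling is elided.
    
    ```scala
    import java.io.InputStream
    
    import org.apache.spark.network.buffer.ManagedBuffer
    import org.apache.spark.storage.BlockId
    
    object StreamingRemoteRead {
      // If the fetch path handed back the block's ManagedBuffer (for a large
      // block, one backed by the temp file on disk), the values path could
      // deserialize from a plain InputStream over that file instead of
      // memory-mapping it, so cold pages never inflate the resident set.
      def remoteValues[T](
          blockId: BlockId,
          fetch: BlockId => Option[ManagedBuffer],
          deserialize: (BlockId, InputStream) => Iterator[T]): Option[Iterator[T]] = {
        fetch(blockId).map { buf =>
          // createInputStream() on a file-backed ManagedBuffer reads the temp
          // file directly; the caller still has to release() the buffer once
          // the iterator is exhausted.
          deserialize(blockId, buf.createInputStream())
        }
      }
    }
    ```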


---
