Ngone51 commented on a change in pull request #32287:
URL: https://github.com/apache/spark/pull/32287#discussion_r618964745



##########
File path: core/src/main/scala/org/apache/spark/storage/ShuffleBlockFetcherIterator.scala
##########
@@ -683,7 +694,28 @@ final class ShuffleBlockFetcherIterator(
             }
           }
 
-        case FailureFetchResult(blockId, mapIndex, address, e) =>
+        // Catching OOM and do something based on it is only a workaround for handling the
+        // Netty OOM issue, which is not the best way towards memory management. We can
+        // get rid of it when we find a way to manage Netty's memory precisely.
+        case FailureFetchResult(blockId, mapIndex, address, size, isNetworkReqDone, e)
+            if e.isInstanceOf[OutOfDirectMemoryError] || e.isInstanceOf[NettyOutOfMemoryError] =>
+          assert(address != blockManager.blockManagerId &&
+            !hostLocalBlocks.contains(blockId -> mapIndex),
+            "Netty OOM error should only happen on remote fetch requests")
+          logWarning(s"Failed to fetch block $blockId due to Netty OOM, will retry", e)
+          NettyUtils.isNettyOOMOnShuffle = true
+          numBlocksInFlightPerAddress(address) = numBlocksInFlightPerAddress(address) - 1
+          bytesInFlight -= size
+          if (isNetworkReqDone) {
+            reqsInFlight -= 1
+            logDebug("Number of requests in flight " + reqsInFlight)
+          }
+          val defReqQueue =
+            deferredFetchRequests.getOrElseUpdate(address, new Queue[FetchRequest]())
+          defReqQueue.enqueue(FetchRequest(address, Array(FetchBlockInfo(blockId, size, mapIndex))))

Review comment:
       > When an executor is not assigned enough off-heap memory, is it 
possible that in this case, it keeps retrying forever? Let's say there is some 
skew and one of the blocks is large and whenever this block is fetched Netty 
OOMs. In this case will this keep retrying? Maybe I am missing something.
   
   To clarify first, Netty memory is not a part of executor memory but a (logically) separate memory region. Users can set it via `io.netty.maxDirectMemory`, and Netty falls back to [maxDirectMemory0](https://github.com/netty/netty/blob/df53de5b687cd25ba1079b556318da2b773cf4f1/common/src/main/java/io/netty/util/internal/PlatformDependent.java#L1140) by default when it is not configured manually.
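   
   For illustration only (not the actual Spark code), here is a tiny standalone probe of the cap Netty ends up enforcing; the object name is just a placeholder:
   
   ```
   import io.netty.util.internal.PlatformDependent
   
   object NettyDirectMemoryProbe {
     def main(args: Array[String]): Unit = {
       // Honors -Dio.netty.maxDirectMemory when it is set; otherwise reflects
       // Netty's own default resolution of the direct-memory limit.
       val cap = PlatformDependent.maxDirectMemory()
       println(s"Netty direct memory cap: $cap bytes")
     }
   }
   ```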
   
   In the case of skew or a large block, Spark has `spark.network.maxRemoteBlockSizeFetchToMem` (200M by default) to fetch the block to disk instead of holding the whole block in Netty memory.
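   
   For example (an illustrative setup only; the app name and threshold value are arbitrary), that knob is just a plain Spark config:
   
   ```
   import org.apache.spark.SparkConf
   
   val conf = new SparkConf()
     .setAppName("fetch-to-disk-demo")
     // Remote blocks larger than 100m are fetched to disk instead of being
     // buffered in Netty memory (the default threshold is 200m).
     .set("spark.network.maxRemoteBlockSizeFetchToMem", "100m")
   ```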
   
   And it's true that we could fall into an endless retry if the Netty memory is still not enough to serve even a small block.
   
   I can add a check like:
   
   ```
   if (NettyUtils.maxDirectMemory() < blockSize) {
     // throw fetch failure directly instead of retrying
   }
   ```
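   
   Put differently, the rule is just a fail-fast guard that would sit in the Netty-OOM branch of the diff, right before the block is re-enqueued into `deferredFetchRequests`. A sketch, using Netty's `PlatformDependent.maxDirectMemory()` as a stand-in for whatever `NettyUtils` ends up exposing (the helper name is hypothetical):
   
   ```
   import io.netty.util.internal.PlatformDependent
   
   // If even Netty's whole direct-memory budget can never hold this single
   // block, deferring and retrying cannot succeed, so fail the fetch instead.
   def shouldFailFastOnNettyOOM(blockSize: Long): Boolean =
     PlatformDependent.maxDirectMemory() < blockSize
   ```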
   
   Does it look good to you? @mridulm @otterc 



