Ngone51 commented on a change in pull request #32287:
URL: https://github.com/apache/spark/pull/32287#discussion_r623533776



##########
File path: core/src/main/scala/org/apache/spark/storage/ShuffleBlockFetcherIterator.scala
##########
@@ -708,6 +785,15 @@ final class ShuffleBlockFetcherIterator(
   }
 
   private def fetchUpToMaxBytes(): Unit = {
+    if (isNettyOOMOnShuffle.get()) {
+      if (reqsInFlight > 0) {
+        // Return immediately if Netty is still OOMed and there're ongoing fetch requests
+        return
+      } else {
+        ShuffleBlockFetcherIterator.resetNettyOOMFlagIfPossible(0)
+      }
+    }
+

Review comment:
       Yes, that's true. I also considered another way previously: adjust the threshold of in-flight requests dynamically. For example, when an OOM is thrown, the threshold would be reduced to the number of requests that were in flight before the OOM, and if OOM happens again, we keep reducing the threshold. That raises a question, though: when do we increase the threshold again? When too many requests are backlogged and no OOM has occurred for a while? If we go this way, we'd need to be very careful about the adjustment algorithm, since it directly affects performance.
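
   To make the idea concrete, here is a minimal sketch of such an adaptive limiter (roughly AIMD-style: shrink the cap on OOM, slowly grow it back after a quiet period). All names here (`AdaptiveInFlightLimiter`, `onNettyOOM`, `maybeRelax`, `quietPeriodMs`) are hypothetical and not part of `ShuffleBlockFetcherIterator`; this is only an illustration of the alternative, not a proposed implementation.

   ```scala
   import java.util.concurrent.atomic.{AtomicInteger, AtomicLong}

   // Hypothetical adaptive cap on in-flight shuffle fetch requests,
   // used instead of a global Netty-OOM flag.
   private class AdaptiveInFlightLimiter(initialLimit: Int, quietPeriodMs: Long) {
     private val limit = new AtomicInteger(initialLimit)
     private val lastOOMTime = new AtomicLong(0L)

     /** Whether another fetch request may be issued given the current in-flight count. */
     def canIssue(reqsInFlight: Int): Boolean = reqsInFlight < limit.get()

     /** Decrease: on a Netty OOM, shrink the cap to the number of requests
      *  that were in flight when the OOM happened (at least 1). */
     def onNettyOOM(reqsInFlightAtOOM: Int): Unit = {
       lastOOMTime.set(System.currentTimeMillis())
       limit.set(math.max(1, math.min(limit.get(), reqsInFlightAtOOM)))
     }

     /** Increase: if no OOM has been seen for `quietPeriodMs`, allow one more
      *  in-flight request, never exceeding the original configured limit. */
     def maybeRelax(): Unit = {
       val quiet = System.currentTimeMillis() - lastOOMTime.get() > quietPeriodMs
       if (quiet && limit.get() < initialLimit) {
         limit.incrementAndGet()
       }
     }
   }
   ```

   The tricky parts are exactly the ones mentioned above: choosing the quiet period and the increase step so that we recover throughput quickly without re-triggering the OOM.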
   
   Let me think more about it. Thanks!



