[ https://issues.apache.org/jira/browse/SPARK-21175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16058892#comment-16058892 ]
Apache Spark commented on SPARK-21175:
--------------------------------------

User 'jinxing64' has created a pull request for this issue:
https://github.com/apache/spark/pull/18388

> Slow down "open blocks" on shuffle service when memory shortage to avoid OOM.
> -----------------------------------------------------------------------------
>
>                 Key: SPARK-21175
>                 URL: https://issues.apache.org/jira/browse/SPARK-21175
>             Project: Spark
>          Issue Type: Improvement
>          Components: Shuffle
>    Affects Versions: 2.1.1
>            Reporter: jin xing
>
> A shuffle service can serve blocks from multiple apps/tasks, so it can
> suffer high memory usage when many {{shuffle-read}} requests arrive at the
> same time. In my cluster, OOM always happens on the shuffle service.
> Analyzing a heap dump shows that memory held by Netty (chunks) can reach
> 2-3 GB. It might make sense to reject "open blocks" requests when memory
> usage is high on the shuffle service.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
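The idea in the issue (rejecting or slowing down "open blocks" requests when the shuffle service is under memory pressure) can be sketched as a simple admission check. The class and method names below are hypothetical illustrations, not Spark's actual API; the threshold parameter and the notion of "in-flight bytes" are assumptions for the sketch.

```java
// Hypothetical sketch of memory-pressure admission control for a shuffle
// service. Not Spark's actual implementation: class names, the
// rejectFraction parameter, and the in-flight byte counter are invented
// here to illustrate the idea in SPARK-21175.
public class OpenBlocksThrottle {
    private final long maxBytes;        // memory budget for serving blocks
    private final double rejectFraction; // reject above this fraction of budget

    public OpenBlocksThrottle(long maxBytes, double rejectFraction) {
        this.maxBytes = maxBytes;
        this.rejectFraction = rejectFraction;
    }

    // Returns true if an incoming "open blocks" request should be rejected,
    // given the bytes currently held by in-flight block transfers
    // (e.g. Netty chunks waiting to be written out).
    public boolean shouldReject(long inFlightBytes) {
        return inFlightBytes >= (long) (maxBytes * rejectFraction);
    }
}
```

A rejected client would then retry after a backoff, which effectively slows down shuffle reads instead of letting Netty buffers accumulate until the service OOMs.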