[ https://issues.apache.org/jira/browse/SPARK-19755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369227#comment-16369227 ]
Igor Berman commented on SPARK-19755:
-------------------------------------

This Jira is very relevant when running with dynamic allocation turned on,
where starting and stopping executors is part of the natural lifecycle of the
driver. The chances of a failure when starting an executor increase (e.g. due
to transient port collisions), and the threshold of 2 seems too low and
artificial for this use case. I've observed a situation where at some point
almost 1/3 of the mesos-slave nodes were marked as blacklisted, even though
they were fine. This creates a situation where the cluster has free resources
but frameworks can't use them, since they actively decline offers from the
master.

> Blacklist is always active for MesosCoarseGrainedSchedulerBackend. As a
> result, the scheduler cannot create an executor after some time.
> -----------------------------------------------------------------------
>
>                 Key: SPARK-19755
>                 URL: https://issues.apache.org/jira/browse/SPARK-19755
>             Project: Spark
>          Issue Type: Bug
>          Components: Mesos, Scheduler
>    Affects Versions: 2.1.0
>         Environment: mesos, marathon, docker - driver and executors are
> dockerized.
>            Reporter: Timur Abakumov
>            Priority: Major
>
> When a task fails for some reason, MesosCoarseGrainedSchedulerBackend
> increases the failure counter for the slave where that task was running.
> When the counter is >= 2 (MAX_SLAVE_FAILURES), the mesos slave is excluded.
> Over time the scheduler cannot create a new executor: every slave is in the
> blacklist. Task failures are not necessarily related to host health,
> especially for long-running streaming apps.
> If accepted as a bug: a possible solution is to use spark.blacklist.enabled
> to make that functionality optional, and, if it makes sense,
> MAX_SLAVE_FAILURES could also be made configurable.
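For illustration, here is a minimal standalone sketch of the counting/decline
behavior described above, with the threshold made configurable as the
description proposes. The config key spark.mesos.maxSlaveFailures and the
recordFailure/canLaunchOn helpers are hypothetical names for this sketch, not
Spark's actual internals; in Spark 2.1 the threshold is the hard-coded
constant MAX_SLAVE_FAILURES = 2.

{code:scala}
import scala.collection.mutable

object SlaveBlacklistSketch {
  // Hypothetical configurable threshold replacing the hard-coded
  // MAX_SLAVE_FAILURES = 2; a non-positive value disables blacklisting.
  val maxSlaveFailures: Int =
    sys.props.getOrElse("spark.mesos.maxSlaveFailures", "2").toInt

  // slaveId -> number of failed tasks observed on that slave
  private val taskFailures = mutable.Map.empty[String, Int].withDefaultValue(0)

  def recordFailure(slaveId: String): Unit =
    taskFailures(slaveId) += 1

  // Offers from a slave are declined once it crosses the threshold,
  // which is how healthy nodes end up unusable after transient failures.
  def canLaunchOn(slaveId: String): Boolean =
    maxSlaveFailures <= 0 || taskFailures(slaveId) < maxSlaveFailures

  def main(args: Array[String]): Unit = {
    recordFailure("slave-1")
    recordFailure("slave-1") // e.g. two transient port collisions
    println(canLaunchOn("slave-1")) // false with the default threshold of 2
    println(canLaunchOn("slave-2")) // true: no failures recorded
  }
}
{code}

With a configurable (or disableable) threshold, a cluster where 1/3 of the
slaves have each seen a couple of transient failures would not permanently
lose those nodes to the blacklist.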