[ https://issues.apache.org/jira/browse/SPARK-33031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thomas Graves updated SPARK-33031:
----------------------------------
    Description: 
I was running a test with blacklisting in standalone mode and all the executors 
were initially blacklisted.  Then one of the executors died and a new one was 
allocated. The scheduler did not appear to pick up the new executor and try to 
schedule tasks on it, though.

You can reproduce this by starting a master and a worker on a single node, then 
launching a shell in a way that gives you multiple executors (in this case I got 3):

$SPARK_HOME/bin/spark-shell --master spark://yourhost:7077 --executor-cores 4 
--conf spark.blacklist.enabled=true
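
For reference, the single-node master and worker can be started with the standard 
standalone scripts (paths are the usual ones in a Spark 3.0 distribution; this is 
only a sketch, and the worker needs enough cores, for example via SPARK_WORKER_CORES, 
for the shell to get several executors):

$SPARK_HOME/sbin/start-master.sh
$SPARK_HOME/sbin/start-slave.sh spark://yourhost:7077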

From the shell, run the following; each task fails its first two attempts, which is 
what triggers the blacklisting:
{code:java}
import org.apache.spark.TaskContext

val rdd = sc.makeRDD(1 to 1000, 5).mapPartitions { it =>
  val context = TaskContext.get()
  // Fail the first two attempts of every task so the executors get blacklisted.
  if (context.attemptNumber() < 2) {
    throw new Exception("test attempt num")
  }
  it
}
rdd.collect(){code}
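
To see whether the scheduler ever places tasks on the replacement executor, a 
listener can be registered in the shell before running the snippet above. This is 
only an illustrative sketch built on the standard SparkListener callbacks; it is 
not part of the original repro:
{code:java}
import org.apache.spark.scheduler.{SparkListener, SparkListenerExecutorAdded, SparkListenerTaskStart}

// Log every executor that registers and the executor each task starts on,
// so it is easy to spot whether the newly allocated executor ever gets work.
sc.addSparkListener(new SparkListener {
  override def onExecutorAdded(added: SparkListenerExecutorAdded): Unit =
    println(s"executor added: ${added.executorId} on ${added.executorInfo.executorHost}")
  override def onTaskStart(start: SparkListenerTaskStart): Unit =
    println(s"task ${start.taskInfo.taskId} started on executor ${start.taskInfo.executorId}")
})
{code}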
 

Note that I tried both with and without dynamic allocation enabled.
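
For the dynamic allocation variant, a launch along these lines would enable it (a 
sketch only; the exact settings used in the test above are not recorded here, and 
standalone dynamic allocation also requires the external shuffle service to be 
running on the worker):

$SPARK_HOME/bin/spark-shell --master spark://yourhost:7077 --executor-cores 4 \
  --conf spark.blacklist.enabled=true \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true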

 

You can see a related screenshot on 
https://issues.apache.org/jira/browse/SPARK-33029

  was:
I was running a test with blacklisting on yarn (and standalone mode) and all 
the executors were initially blacklisted.  Then one of the executors died and 
a new one was allocated. The scheduler did not appear to pick up the new 
executor and try to schedule tasks on it, though.

You can reproduce this by starting a master and a worker on a single node, then 
launching a shell in a way that gives you multiple executors (in this case I got 3):

$SPARK_HOME/bin/spark-shell --master spark://yourhost:7077 --executor-cores 4 
--conf spark.blacklist.enabled=true

From the shell, run:
{code:java}
import org.apache.spark.TaskContext

val rdd = sc.makeRDD(1 to 1000, 5).mapPartitions { it =>
  val context = TaskContext.get()
  if (context.attemptNumber() < 2) {
    throw new Exception("test attempt num")
  }
  it
}
rdd.collect(){code}
 

Note that I tried both with and without dynamic allocation enabled.

 

You can see a related screenshot on 
https://issues.apache.org/jira/browse/SPARK-33029


> scheduler with blacklisting doesn't appear to pick up new executor added
> ------------------------------------------------------------------------
>
>                 Key: SPARK-33031
>                 URL: https://issues.apache.org/jira/browse/SPARK-33031
>             Project: Spark
>          Issue Type: Bug
>          Components: Scheduler
>    Affects Versions: 3.0.0, 3.1.0
>            Reporter: Thomas Graves
>            Priority: Critical
>
> I was running a test with blacklisting in standalone mode and all the executors 
> were initially blacklisted.  Then one of the executors died and a new one was 
> allocated. The scheduler did not appear to pick up the new executor and try to 
> schedule tasks on it, though.
> You can reproduce this by starting a master and a worker on a single node, then 
> launching a shell in a way that gives you multiple executors (in this case I got 
> 3):
> $SPARK_HOME/bin/spark-shell --master spark://yourhost:7077 --executor-cores 4 
> --conf spark.blacklist.enabled=true
> From the shell, run:
> {code:java}
> import org.apache.spark.TaskContext
>
> val rdd = sc.makeRDD(1 to 1000, 5).mapPartitions { it =>
>   val context = TaskContext.get()
>   if (context.attemptNumber() < 2) {
>     throw new Exception("test attempt num")
>   }
>   it
> }
> rdd.collect(){code}
>  
> Note that I tried both with and without dynamic allocation enabled.
>  
> You can see a related screenshot on 
> https://issues.apache.org/jira/browse/SPARK-33029



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
