[ https://issues.apache.org/jira/browse/SPARK-18820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
jin xing updated SPARK-18820:
-----------------------------
    Description: 
CoarseGrainedSchedulerBackend updates executorDataMap after receiving "RegisterExecutor", so the task scheduler may assign tasks to this executor. If LaunchTask arrives at CoarseGrainedExecutorBackend before RegisteredExecutor, it results in a NullPointerException and the executor backend exits.

Is this a bug? If so, can I make a PR? I think the driver should send "LaunchTask" only after "RegisteredExecutor" has been received.

  was:
CoarseGrainedSchedulerBackend updates executorDataMap after receiving "RegisterExecutor", so the task scheduler may assign tasks to this executor. If LaunchTask arrives at CoarseGrainedExecutorBackend before RegisteredExecutor, it results in a NullPointerException and the executor backend exits.

Is this a bug? I think the driver should send "LaunchTask" only after "RegisteredExecutor" has been received.


> Driver may send "LaunchTask" before executor receives "RegisteredExecutor"
> --------------------------------------------------------------------------
>
>                 Key: SPARK-18820
>                 URL: https://issues.apache.org/jira/browse/SPARK-18820
>             Project: Spark
>          Issue Type: Bug
>          Components: Scheduler
>    Affects Versions: 1.6.3
>        Environment: spark-1.6.3
>            Reporter: jin xing
>
> CoarseGrainedSchedulerBackend updates executorDataMap after receiving
> "RegisterExecutor", so the task scheduler may assign tasks to this executor.
> If LaunchTask arrives at CoarseGrainedExecutorBackend before
> RegisteredExecutor, it results in a NullPointerException and the executor
> backend exits.
> Is this a bug? If so, can I make a PR? I think the driver should send
> "LaunchTask" only after "RegisteredExecutor" has been received.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
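The race the report describes can be sketched with a small stand-in for the executor backend. The class and method names below are illustrative only (not Spark's actual API); the point is the pattern: the backend's executor reference stays null until the registration message is handled, so a task-launch message delivered first dereferences null.

```java
import java.util.function.IntConsumer;

public class RaceDemo {
    // Hypothetical stand-in for CoarseGrainedExecutorBackend's message handling.
    static class ExecutorBackendStub {
        // Null until the RegisteredExecutor message is processed,
        // mirroring the `executor` field the NPE report points at.
        private IntConsumer executor = null;

        void onRegisteredExecutor() {
            executor = taskId -> System.out.println("running task " + taskId);
        }

        void onLaunchTask(int taskId) {
            executor.accept(taskId); // NPE if registration hasn't completed yet
        }
    }

    public static void main(String[] args) {
        ExecutorBackendStub backend = new ExecutorBackendStub();

        // Out-of-order delivery: LaunchTask before RegisteredExecutor.
        try {
            backend.onLaunchTask(0);
        } catch (NullPointerException e) {
            System.out.println("NPE: LaunchTask arrived before RegisteredExecutor");
        }

        // The ordering the reporter proposes the driver enforce:
        backend.onRegisteredExecutor();
        backend.onLaunchTask(1);
    }
}
```

In the real system the two messages travel over independent RPC paths, so the driver waiting for the executor's registration acknowledgment before scheduling onto it (as the reporter suggests) removes the window in which the second handler can run first.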