Hi,

My setup: Tomcat (running a web app which initializes a SparkContext) and a
dedicated Spark cluster (1 master, 2 workers, one VM each).
I am able to start this setup properly: the SparkContext initializes its
connection with the master, I can execute tasks and perform the
required calculations... everything works fine.
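
For reference, the context is created inside the web app roughly along these
lines (the app name and master URL below are placeholders, not my actual values):

  import org.apache.spark.{SparkConf, SparkContext}

  // Created once at web app startup; placeholder settings for illustration.
  val conf = new SparkConf()
    .setAppName("webapp-spark-jobs")
    .setMaster("spark://spark-master:7077")
  val sc = new SparkContext(conf)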

The problem I'm facing is the situation where the Spark cluster goes down
after the proper startup described above (I'm trying to mimic a possible production
issue where the Spark cluster simply goes down for some reason and my web
application should keep working, apart from the Spark-related functionality).
What happens is that, even though the Spark cluster is no longer there, the DAGScheduler
still schedules tasks and creates JobWaiters which wait endlessly for
task completion, blocking the calling thread.
As a result, my application runs out of available threads (this happens
in the part where I handle JMS with a pool of 10 threads) and
cannot continue to work correctly. I do not see any error in the logs apart from
Akka endlessly trying to reconnect to MasterExecutor.
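
To make it concrete, each JMS handler thread ends up blocked in something like
the sketch below (the RDD and the action are only illustrative of what the
handler actually does):

  // Sketch of a JMS handler: count() is a blocking Spark action, so the
  // handler thread waits inside the JobWaiter and is never released
  // once the cluster is gone.
  def handleMessage(payload: Seq[Int]): Long = {
    val rdd = sc.parallelize(payload)
    rdd.count()  // blocks here indefinitely when the cluster is down
  }

With 10 such threads in the JMS pool, the pool is exhausted after 10 messages.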

Is this a known issue, or am I missing something obvious in the configuration?

Thanks a lot for any suggestion.



