Hi All,
 We are running Spark on Kubernetes. In one scenario, the Spark
driver (pod) was unable to communicate properly with the master and got
stuck reporting insufficient resources.

After manually restarting the Spark driver (pod), it ran properly.

Is there a way to automatically kill the driver if it hits insufficient
resources (or otherwise fails to start up)?

PS: This is needed because Kubernetes supervises the driver instance, and
we would like the driver to be killed immediately so that Kubernetes restarts it.
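
To illustrate what we have in mind, here is a rough, untested sketch of a
watchdog inside the driver that exits if no executors register within some
timeout, so that Kubernetes (with restartPolicy Always/OnFailure) restarts the
pod. The timeout value and the choice of getExecutorMemoryStatus as the
"has any executor registered" check are just assumptions on our side:

import org.apache.spark.sql.SparkSession

object DriverWatchdogExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("watchdog-example").getOrCreate()
    val sc = spark.sparkContext

    // Hypothetical timeout; tune for the cluster.
    val timeoutMs = 120000L

    // Watchdog thread: if no executors have registered after the timeout,
    // exit non-zero so Kubernetes restarts the driver pod.
    val watchdog = new Thread(() => {
      Thread.sleep(timeoutMs)
      // getExecutorMemoryStatus always includes the driver itself,
      // so a size of 1 means no executors have registered yet.
      if (sc.getExecutorMemoryStatus.size <= 1) {
        System.err.println(s"No executors after $timeoutMs ms, exiting so the pod restarts")
        System.exit(1)
      }
    })
    watchdog.setDaemon(true)
    watchdog.start()

    // ... normal job logic here ...

    spark.stop()
  }
}

If there is a built-in Spark or spark-on-k8s setting that achieves the same
thing, we would prefer that over rolling our own watchdog.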

Thanks
Vimal
