I have just restarted the job and it doesn't seem that the shutdown hook is
executed. I have attached the log from the driver to this email. It seems
that the slaves are not accepting the tasks... but we haven't changed
anything on our Mesos cluster; we have only upgraded one job to Spark 1.4;
is ther
Spark 1.4.0 added shutdown hooks in the driver to cleanly shut down the
SparkContext, which in turn shuts down the executors. I am not sure whether
this is related or not, but somehow the executor's shutdown hook is being
called.
Can you check the driver logs to see if driver's shutdown ho
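For reference, the driver-side cleanup described above relies on the standard JVM shutdown-hook mechanism. A minimal sketch of that mechanism (plain JVM, no Spark dependency; `stopContext` is a hypothetical stand-in for stopping the SparkContext):

```java
public class ShutdownHookDemo {
    // Hypothetical stand-in for the driver stopping its SparkContext,
    // which would in turn shut down the executors.
    static void stopContext() {
        System.out.println("stopping context");
    }

    public static void main(String[] args) {
        Thread hook = new Thread(ShutdownHookDemo::stopContext, "driver-shutdown-hook");

        // Registered hooks run on normal JVM exit or on SIGTERM, but not on SIGKILL.
        Runtime.getRuntime().addShutdownHook(hook);

        // A hook can also be deregistered while the JVM is still running;
        // removeShutdownHook returns true if the hook was previously registered.
        boolean removed = Runtime.getRuntime().removeShutdownHook(hook);
        System.out.println("removed=" + removed);
    }
}
```

This is only the generic JVM mechanism, not Spark's actual hook code; the point is that such a hook fires on any clean JVM termination, which is why an unexpected signal to the driver or an executor can look like a deliberate shutdown in the logs.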
I forgot to mention that this is a long-running job, actually a Spark
Streaming job, and it's using Mesos coarse-grained mode. I'm still using
the unreliable Kafka receiver.
2015-07-13 17:15 GMT+01:00 Luis Ángel Vicente Sánchez <
langel.gro...@gmail.com>:
I have just upgraded one of my Spark jobs from Spark 1.2.1 to Spark 1.4.0
and, after deploying it to Mesos, it's not working anymore.
The upgrade process was quite easy:
- Create a new Docker container for Spark 1.4.0.
- Upgrade the job to use Spark 1.4.0 as a dependency and build a new
fatjar.
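The dependency bump in the second step might look roughly like this, assuming an sbt build with sbt-assembly producing the fatjar (module names are the usual Spark artifacts; the actual build file may differ):

```scala
// build.sbt -- sketch of the version bump only, not the full build
val sparkVersion = "1.4.0"  // was "1.2.1"

libraryDependencies ++= Seq(
  // "provided" keeps Spark itself out of the fatjar, since the
  // cluster's Spark distribution supplies it at runtime
  "org.apache.spark" %% "spark-core"            % sparkVersion % "provided",
  "org.apache.spark" %% "spark-streaming"       % sparkVersion % "provided",
  // the receiver-based Kafka integration mentioned above
  "org.apache.spark" %% "spark-streaming-kafka" % sparkVersion
)
```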