I think that if your job is already running and you deploy a new jar that is
meant to replace the old one, Spark will treat the new jar as a separate job,
since jobs are distinguished by their Job ID. So if you want to replace the
jar, you have to kill the job every time.
--
Are you referring to having Spark pick up a newly built jar? If so, you can
probably script that in bash.
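For example, something along these lines (just a rough sketch, assuming YARN
as the cluster manager; the app name, jar path, and main class below are
placeholders you would replace with your own):

#!/usr/bin/env bash
# Placeholder values -- adjust for your job.
APP_NAME="my-streaming-app"                       # name passed to SparkConf/--name
NEW_JAR="/deploy/my-streaming-app-latest.jar"     # freshly built jar
MAIN_CLASS="com.example.Main"

# Kill the currently running YARN application with that name, if any.
APP_ID=$(yarn application -list -appStates RUNNING 2>/dev/null \
  | awk -v name="$APP_NAME" '$2 == name {print $1}')
if [ -n "$APP_ID" ]; then
  yarn application -kill "$APP_ID"
fi

# Resubmit with the new jar; Spark starts a fresh application (new ID).
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --name "$APP_NAME" \
  --class "$MAIN_CLASS" \
  "$NEW_JAR"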
Thank You,
Irving Duran
On Wed, Nov 28, 2018 at 12:44 PM Mina Aslani wrote:
> Hi,
>
> I have a question for you.
> Do we need to kill a spark job every time we change and deploy it to
>
Hi,
I have a question for you.
Do we need to kill a Spark job every time we change it and deploy it to the
cluster? Or is there a way for Spark to automatically pick up the most recent
jar version?
Best regards,
Mina