On Tue, Nov 15, 2016 at 5:57 PM, Elkhan Dadashov <elkhan8...@gmail.com> wrote:
> This is confusing in the sense that the client needs to stay alive for
> the Spark job to finish successfully.
>
> Actually, the client can die or finish (in yarn-cluster mode), and the
> Spark job will still finish successfully.

That's an internal class, and you're looking at an internal javadoc
that describes how the app handle works. For the app handle to keep
receiving updates, the "client" (i.e. the spark-submit subprocess
spawned by the launcher) needs to stay alive. So the javadoc is
correct; it has nothing to do with whether the application itself
succeeds or not.
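
To make that concrete, here's a minimal sketch of the launcher API
(Spark 2.x; the jar path and main class below are placeholders, not
anything from your setup):

    import org.apache.spark.launcher.SparkAppHandle;
    import org.apache.spark.launcher.SparkLauncher;

    public class LauncherExample {
      public static void main(String[] args) throws Exception {
        SparkAppHandle handle = new SparkLauncher()
            .setAppResource("/path/to/my-app.jar")  // placeholder jar
            .setMainClass("com.example.MyApp")      // placeholder class
            .setMaster("yarn")
            .setDeployMode("cluster")
            .startApplication(new SparkAppHandle.Listener() {
              @Override
              public void stateChanged(SparkAppHandle h) {
                System.out.println("state: " + h.getState());
              }
              @Override
              public void infoChanged(SparkAppHandle h) {
                System.out.println("app id: " + h.getAppId());
              }
            });

        // The handle only receives updates while this JVM and the
        // spark-submit subprocess it spawned stay alive. If this process
        // exits here, the YARN application keeps running (cluster mode),
        // but no handle will ever see its final state.
        while (!handle.getState().isFinal()) {
          Thread.sleep(1000);
        }
      }
    }

If the launcher process exits before the app reaches a final state, the
app itself is unaffected, but there's no way (as far as I know) to
re-attach a handle to it later.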


-- 
Marcelo
