Hi,

I launched a Spark Streaming job on YARN with the default Spark
configuration, submitting it via spark-submit with the master set to
yarn-cluster. It launched an ApplicationMaster and 2
CoarseGrainedExecutorBackend processes.
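
For reference, the submit command was along these lines (the class and jar
names here are just placeholders, not my exact ones):

  ./bin/spark-submit \
    --class com.example.MyStreamingApp \
    --master yarn-cluster \
    my-streaming-app.jar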

Everything ran fine. I then killed the application using yarn application
-kill <appid>.

When I did this, I noticed that only the shell processes that launch the
Spark AM and the other processes were killed; the Java processes themselves
were left running. They became orphaned and their PPID changed to 1.
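
To check, I ran something like the following after the kill and saw the
executor JVMs still listed, with PPID 1:

  ps -o pid,ppid,cmd -C java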

Is this a bug in Spark or YARN? I am using Spark 1.0.2 and Hadoop 2.4.1.
The cluster is a single-node setup in pseudo-distributed mode.

Thanks
hemanth
