Are these the right options for killing a running Spark job?

1. If it is a Spark script, just hit Ctrl-C in spark-shell and the job
will be killed properly.

2. For a Spark application, Ctrl-C will also kill the job properly on the
cluster.

Somehow the Ctrl-C option did not work for us...

A similar approach works fine for Scalding, for example, but we see a lot of
dead nodes if too many jobs are killed abruptly (a signal-handling sketch
follows after option 3 below).

3. Use the Client script...

/bin/spark-class org.apache.spark.deploy.Client kill spark://myspark.com:7077 app-20140316142129-0000

Runner: java
Classpath: :/home/debasish/sag_spark/conf:/home/debasish/sag_spark/assembly/target/scala-2.10/spark-assembly-1.0.0-incubating-SNAPSHOT-hadoop2.0.0-mr1-cdh4.5.0.jar
Java opts: -Djava.library.path= -Xms512m -Xmx512m
Options: -Dspark.cores.max=16
Sending kill command to spark://myspark.com:7077
Driver app-20140316142129-0000 has already finished or does not exist

This option also did not kill the job; I can still see it running in the
Spark web UI...
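
For reference, this is the invocation I would expect to work, under the
assumption (please correct me if wrong) that deploy.Client kill wants a
driver ID rather than an application ID; the standalone master seems to
assign IDs of the form driver-<timestamp>-<seq> to drivers launched through
the Client, while app-... names the application, which might explain the
"has already finished or does not exist" message. The driver ID below is
hypothetical:

# Run from the Spark home directory. The driver ID is hypothetical; adjust
# it to the ID the master reports for your driver. app-... IDs identify
# applications, not drivers.
./bin/spark-class org.apache.spark.deploy.Client kill \
  spark://myspark.com:7077 driver-20140316142129-0000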
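
On the dead-nodes issue from option 2: as far as I understand, the JVM only
runs its shutdown hooks (and hence any SparkContext.stop cleanup registered
there) on SIGINT/SIGTERM, not on SIGKILL. A rough sketch of what I have in
mind, assuming the driver was started through spark-shell so its main class
shows up in jps -l as org.apache.spark.repl.Main (for a standalone
application, grep for its own main class instead):

# Find the driver JVM; the class-name pattern is an assumption, adjust it
# to your driver's main class.
DRIVER_PID=$(jps -l | awk '/org.apache.spark.repl.Main/ {print $1}')

# Plain kill sends SIGTERM, which lets the JVM run its shutdown hooks, so
# any cleanup registered there (e.g. stopping the SparkContext) gets a
# chance to release executors before the process exits.
kill "$DRIVER_PID"

# kill -9 sends SIGKILL and skips shutdown hooks entirely; executors are
# left behind for the master/workers to clean up, which is presumably how
# we end up with nodes showing as dead after many abrupt kills.
# kill -9 "$DRIVER_PID"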

Thanks.
Deb
