Daniel,

The "hadoop job -list" command is a deprecated form of "mapred job -list",
which is only for Hadoop MapReduce jobs. For Spark jobs, which run on YARN,
you instead want "yarn application -list".
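
For example, from the master node (a minimal sketch; the application ID
below is a made-up placeholder, use the real one from the -list output):

    yarn application -list    # note the Application-Id of your Spark job
    yarn application -kill application_1453830620000_0001

Killing the YARN application should fail the running step while leaving
the cluster itself up.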

Hope this helps,
Jonathan (from the EMR team)

On Tue, Jan 26, 2016 at 10:05 AM Daniel Imberman <daniel.imber...@gmail.com>
wrote:

> Hi all,
>
> I want to set up a series of Spark steps on an EMR Spark cluster and
> terminate the current step if it's taking too long. However, when I SSH
> into the master node and run hadoop job -list, the master node reports
> that there are no jobs running. I don't want to terminate the cluster,
> because doing so would force me to pay for a whole new hour of whatever
> cluster I'm running. Does anyone have any suggestions for terminating a
> Spark step in EMR without terminating the entire cluster?
>
> Thank you,
>
> Daniel
