Hi,

You can check out http://spark.apache.org/docs/latest/monitoring.html;
through Spark's metrics system you can monitor HDFS as well as memory
usage per job, executor, and driver. I have connected it to Graphite
for storage and Grafana for visualization. I have also connected
collectd, which gives me all the server-node metrics such as disk,
memory, and CPU utilization.
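
For the Graphite part, the wiring is just a metrics.properties file for
Spark's metrics system. A minimal sketch of what mine looks like (the
host and the "spark" prefix are placeholders for your own setup):

    # conf/metrics.properties -- send all Spark metrics to Graphite
    *.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
    *.sink.graphite.host=your-graphite-host
    *.sink.graphite.port=2003
    *.sink.graphite.period=10
    *.sink.graphite.unit=seconds
    *.sink.graphite.prefix=spark
    # also report JVM metrics (heap, GC) from the driver and executors
    driver.source.jvm.class=org.apache.spark.metrics.source.JvmSource
    executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource

Point Spark at the file with --conf spark.metrics.conf=metrics.properties
(or drop it into $SPARK_HOME/conf/), and the per-executor and per-driver
metrics show up in Graphite, ready for Grafana dashboards.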

On Tue, Jan 12, 2016 at 10:50 AM, laxmanvemula <laxman8...@gmail.com> wrote:
> I observe that YARN job history logs (*.jhist files) are created in
> /user/history/done for all MapReduce jobs, such as Hive and Pig, but
> for Spark jobs submitted in yarn-cluster mode these logs are not being
> created.
>
> I would like to see the resource utilization (CPU, memory, etc.) of
> Spark jobs. Is there any other place where I can find it? Or is there
> a configuration to be set so that job history logs are created just as
> they are for other MapReduce jobs?

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
