For that you need SPARK-1537 and the patch to go with it.

It is still the Spark web UI; it just hands off storage and retrieval of the 
history to the underlying YARN timeline server, rather than going through the 
filesystem. You also get to see things as they go along.

If you do want to try it, please have a go, and provide any feedback on the 
JIRA/pull request. One warning: it needs Hadoop 2.6 (Apache, HDP 2.2, 
CDH 5.4), due to some API changes. While the patch is for 1.4+, I already have 
a local branch with it applied to Spark 1.3.1.
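In the meantime, for anyone who just wants the stock filesystem-backed history server, the usual settings are below. The hdfs:///spark-logs path and historyhost name are placeholders for your own cluster; the property keys themselves are standard Spark configuration.

```properties
# conf/spark-defaults.conf -- stock filesystem-backed history setup.
# Paths and hostnames are examples; substitute your own.

# Record application events so the history server can replay them:
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs:///spark-logs

# Where the history server reads those events from:
spark.history.fs.logDirectory    hdfs:///spark-logs

# Lets the YARN RM link each finished application to its history UI:
spark.yarn.historyServer.address historyhost:18080
```

Then start the server with sbin/start-history-server.sh and it serves on port 18080 by default.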

> On 12 Jun 2015, at 03:01, Elkhan Dadashov <elkhan8...@gmail.com> wrote:
> 
> Hi all,
> 
> I wonder if anyone has used the MapReduce Job History server to show Spark jobs.
> 
> I can see my Spark jobs (Spark running on a YARN cluster) on the Resource 
> Manager (RM).
> 
> I start the Spark History Server, and then through Spark's web-based user 
> interface I can monitor the cluster (and track cluster and job statistics). 
> Basically the YARN RM gets linked to the Spark History Server, which enables 
> monitoring.
> 
> But instead of using the Spark History Server, is it possible to see Spark jobs 
> in the MapReduce Job History server? (in addition to seeing them on the RM)
> 
> (I know that through "yarn logs -applicationId <app ID>" we can get all logs 
> after a Spark job has completed, but my concern is to see the logs and completed 
> jobs through a common web UI - the MapReduce Job History server.)
> 
> Thanks in advance.
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
