Re: Detecting application restart when running in supervised cluster mode

2016-04-05 Thread Saisai Shao
Hi Deepak,

I don't think supervise works with YARN; it is a standalone- and
Mesos-specific feature.

Thanks
Saisai

On Tue, Apr 5, 2016 at 3:23 PM, Deepak Sharma wrote:

> Hi Rafael,
> If you are using YARN as the engine, you can always use the RM UI to see
> the application progress.
>
> Thanks
> Deepak
>
> On Tue, Apr 5, 2016 at 12:18 PM, Rafael Barreto wrote:
>
>> Hello,
>>
>> I have a driver deployed using `spark-submit` in supervised cluster mode.
>> Sometimes my application dies because of a transient problem, and the
>> restart works perfectly. However, it would be useful to get alerted when
>> that happens. Is there any out-of-the-box way of doing that? Perhaps a hook
>> that I can use to catch an event? I guess I could poll my application state
>> using the Spark REST API, but if there were something more elegant, I would
>> rather use it.
>>
>> Thanks in advance,
>> Rafael Barreto
>>
>
>
>
> --
> Thanks
> Deepak
> www.bigdatabig.com
> www.keosha.net
>


Re: Detecting application restart when running in supervised cluster mode

2016-04-05 Thread Deepak Sharma
Hi Rafael,
If you are using YARN as the engine, you can always use the RM UI to see
the application progress.

Thanks
Deepak

On Tue, Apr 5, 2016 at 12:18 PM, Rafael Barreto wrote:

> Hello,
>
> I have a driver deployed using `spark-submit` in supervised cluster mode.
> Sometimes my application dies because of a transient problem, and the
> restart works perfectly. However, it would be useful to get alerted when
> that happens. Is there any out-of-the-box way of doing that? Perhaps a hook
> that I can use to catch an event? I guess I could poll my application state
> using the Spark REST API, but if there were something more elegant, I would
> rather use it.
>
> Thanks in advance,
> Rafael Barreto
>



-- 
Thanks
Deepak
www.bigdatabig.com
www.keosha.net
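
For scripted monitoring rather than watching the RM UI by hand, the same
information is exposed by the YARN ResourceManager REST API. Below is a
minimal polling sketch against that API; it assumes the RM web service is
reachable on its default port 8088 without authentication, and the host
name and application id are placeholders for your cluster.

    import json
    import time
    import urllib.request

    RM_URL = "http://resourcemanager.example.com:8088"  # hypothetical RM host
    APP_ID = "application_1459800000000_0001"           # hypothetical app id

    def app_state(rm_url, app_id):
        # GET /ws/v1/cluster/apps/{appid} returns {"app": {..., "state": ...}}
        url = "%s/ws/v1/cluster/apps/%s" % (rm_url, app_id)
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["app"]["state"]

    last = None
    while True:
        state = app_state(RM_URL, APP_ID)
        if last is not None and state != last:
            print("state changed: %s -> %s" % (last, state))  # alerting hook
        last = state
        if state in ("FINISHED", "FAILED", "KILLED"):
            break
        time.sleep(30)

Note, per Saisai's reply above, that this only covers monitoring on YARN;
supervise-driven driver restarts do not apply there.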


Detecting application restart when running in supervised cluster mode

2016-04-04 Thread Rafael Barreto
Hello,

I have a driver deployed using `spark-submit` in supervised cluster mode.
Sometimes my application dies because of a transient problem, and the
restart works perfectly. However, it would be useful to get alerted when
that happens. Is there any out-of-the-box way of doing that? Perhaps a hook
that I can use to catch an event? I guess I could poll my application state
using the Spark REST API, but if there were something more elegant, I would
rather use it.

Thanks in advance,
Rafael Barreto
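
A minimal sketch of the polling approach Rafael mentions, for the standalone
case: the master's REST submission server exposes the driver's state at
/v1/submissions/status/{submission-id}. This assumes the REST server is
enabled on its default port 6066; the host name and submission id below are
placeholders (the id is the one `spark-submit` prints when the driver is
deployed in cluster mode).

    import json
    import time
    import urllib.request

    MASTER_URL = "http://spark-master.example.com:6066"  # hypothetical host
    SUBMISSION_ID = "driver-20160405121800-0001"         # hypothetical id

    def driver_state(master_url, submission_id):
        # GET /v1/submissions/status/{id} returns JSON with "driverState"
        url = "%s/v1/submissions/status/%s" % (master_url, submission_id)
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["driverState"]

    last = None
    while True:
        state = driver_state(MASTER_URL, SUBMISSION_ID)
        # a supervised driver being restarted typically reports RELAUNCHING
        if last is not None and state != last:
            print("driver state changed: %s -> %s" % (last, state))  # alert
        last = state
        time.sleep(30)

Watching for a transition through RELAUNCHING is one way to turn the
supervise restart into an alert, absent a built-in hook.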