Hi Rutuja,

I am not certain whether such a tool exists, but opening a JIRA may be
beneficial and would do no harm.

In the meantime you may look for a workaround. My understanding is that your
need is to monitor the health of the cluster?
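
For the workaround route, here is a minimal sketch along the lines of the jps
check in my earlier mail below. The host list, SSH access and the alerting
command are placeholders you would adapt to your own cluster, and it assumes
jps is on the PATH for a non-interactive SSH session:

  #!/bin/bash
  # Placeholder list of worker hosts; replace with your own node names.
  NODES="node1 node2 node3"

  for node in $NODES; do
    # jps lists the JVMs on that host; a standalone worker shows up as "Worker".
    if ssh "$node" jps 2>/dev/null | grep -qw Worker; then
      echo "$node: Worker running"
    else
      echo "$node: Worker DOWN"   # plug in mail/alerting of your choice here
    fi
  done

Also, since the master web UI on port 8080 comes up further down the thread:
depending on your Spark version the same status may be available as JSON (I
have seen it served at /json, but please verify on your build), which would
get you something close to the REST-style view you are after:

  # Assumed endpoint; check whether your master actually exposes it.
  curl -s http://<master-host>:8080/json

If it is there, the response includes a workers section (host, port, state,
cores, memory) that the same monitoring script could parse.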

HTH

Dr Mich Talebzadeh



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 9 June 2016 at 19:45, Rutuja Kulkarni <rutuja.kulkarn...@gmail.com>
wrote:

>
>
> Thanks again Mich!
> If there does not exist any interface like a REST API or CLI for this, I
> would like to open a JIRA to expose such a REST interface in Spark which
> would list all the worker nodes.
> Please let me know if this seems to be the right thing to do for the
> community.
>
>
> Regards,
> Rutuja Kulkarni
>
>
> On Wed, Jun 8, 2016 at 5:36 PM, Mich Talebzadeh <mich.talebza...@gmail.com
> > wrote:
>
>> The other way is to log in to the individual nodes and do
>>
>>  jps
>>
>> 24819 Worker
>>
>> and you will see the processes identified as Worker.
>>
>> Also you can use jmonitor to see what they are doing resource-wise.
>>
>> You can of course write a small shell script to see if the Worker(s) are up
>> and running on every node and alert if they are down.
>>
>> HTH
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn:
>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>>
>> On 9 June 2016 at 01:27, Rutuja Kulkarni <rutuja.kulkarn...@gmail.com>
>> wrote:
>>
>>> Thank you for the quick response.
>>> So the workers section would list all the running worker nodes in the
>>> standalone Spark cluster?
>>> I was also wondering whether this is the only way to retrieve worker nodes,
>>> or if there is something like a Web API or CLI I could use?
>>> Thanks.
>>>
>>> Regards,
>>> Rutuja
>>>
>>> On Wed, Jun 8, 2016 at 4:02 PM, Mich Talebzadeh <
>>> mich.talebza...@gmail.com> wrote:
>>>
>>>> Check port 8080 on the node where you started start-master.sh.
>>>>
>>>>
>>>>
>>>> [image: screenshot of the Spark standalone master web UI on port 8080]
>>>>
>>>> HTH
>>>>
>>>>
>>>> Dr Mich Talebzadeh
>>>>
>>>>
>>>>
>>>> LinkedIn:
>>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>>
>>>>
>>>>
>>>> http://talebzadehmich.wordpress.com
>>>>
>>>>
>>>>
>>>> On 8 June 2016 at 23:56, Rutuja Kulkarni <rutuja.kulkarn...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hello!
>>>>>
>>>>> I'm trying to set up a standalone Spark cluster and wondering how to
>>>>> track the status of all of its nodes. I wonder if something like the YARN
>>>>> REST API or the HDFS CLI exists in the Spark world that can provide the
>>>>> status of nodes on such a cluster. Any pointers would be greatly appreciated.
>>>>>
>>>>> --
>>>>> Regards,
>>>>> Rutuja Kulkarni
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> Regards,
>>> Rutuja Kulkarni
>>>
>>>
>>>
>>
>
>
> --
> Regards,
> Rutuja Kulkarni
>
>
>
