Thanks Ravi, that helped. BTW, only the first trick worked:

hadoop dfsadmin  -report | grep "Name:" | cut -d":" -f2

The 2nd one may not be applicable, as I need to automate this (hence the
need for a command-line utility).
The 3rd approach didn't work, as the commands are getting executed only on
the local slave node and not on all the slaves.
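For reference, the working pipeline can be sketched against a canned sample of the report (the node addresses below are hypothetical; a real `hadoop dfsadmin -report` prints one `Name: <ip>:<port>` line per datanode):

```shell
# Hypothetical sample of "hadoop dfsadmin -report" output; real reports
# list one "Name: <ip>:<port>" line per live datanode.
report='Name: 10.0.0.1:50010
Name: 10.0.0.2:50010'

# Keep the "Name:" lines, take the field between the two colons,
# and strip the leading space to leave bare IP addresses.
echo "$report" | grep "Name:" | cut -d":" -f2 | tr -d ' '
```

Piping the real report through the same `grep | cut | tr` stages yields one IP per line, which is easy to feed into a loop for automation.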

-Prasen

On Sun, Mar 7, 2010 at 7:05 AM, Ravi Phulari <rphul...@yahoo-inc.com> wrote:
> There are several ways to get slave IP addresses. (Not sure if you can use
> all of these on EC2.)
>
> 1. hadoop dfsadmin -report shows you the list of nodes and their status.
> 2. The NameNode's slaves page displays information about live nodes.
> 3. You can execute commands on slave nodes using bin/slaves.sh, e.g.:
>    bin/slaves.sh /sbin/ifconfig | grep "inet addr"
>
> -
> Ravi
>
> On 3/6/10 9:15 AM, "prasenjit mukherjee" <prasen....@gmail.com> wrote:
>
> I am using EC2 and don't see the slaves in the $HADOOP_HOME/conf/slaves file.
>
> On Sat, Mar 6, 2010 at 9:33 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>> check conf/slaves file on master:
>>
>> http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_%28Multi-Node_Cluster%29#conf.2Fslaves_.28master_only.29
>>
>> On Fri, Mar 5, 2010 at 7:13 PM, prasenjit mukherjee <
>> pmukher...@quattrowireless.com> wrote:
>>
>>> Is there any way (like a Hadoop command line or files) to know the IP
>>> addresses of all the cluster nodes (from the master)?
>>>
>>
>
>