I'm pretty sure failed datanodes won't be automatically restarted when
they go down. The namenode notices the missed heartbeats and re-replicates
that node's blocks elsewhere, but it's the sysadmin's responsibility to
deal with downed nodes and get them back into the cluster.
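
On the slave you can bring the datanode back by hand with
bin/hadoop-daemon.sh start datanode. If you want that automated, you'd
run an external watchdog yourself; here's a minimal sketch in Python
(the HADOOP_HOME path, the poll interval, and the jps-based check are
assumptions about your setup, not something Hadoop ships):

    #!/usr/bin/env python
    # Minimal watchdog sketch: restart the local DataNode if its JVM is gone.
    # Assumes jps is on the PATH and HADOOP_HOME points at your install.
    import subprocess
    import time

    HADOOP_HOME = "/usr/local/hadoop"   # adjust for your machine

    def datanode_running():
        # Same check you did by hand: look for a DataNode entry in jps output.
        jps = subprocess.Popen(["jps"], stdout=subprocess.PIPE)
        out = jps.communicate()[0].decode("utf-8", "ignore")
        return "DataNode" in out

    while True:
        if not datanode_running():
            # Same as running bin/hadoop-daemon.sh start datanode manually.
            subprocess.call([HADOOP_HOME + "/bin/hadoop-daemon.sh",
                             "start", "datanode"])
        time.sleep(60)   # poll once a minute

In practice most people lean on cron, monit, or their cluster management
scripts for this rather than a hand-rolled loop, but the idea is the same.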

Alex

On 10/27/08, wmitchell <[EMAIL PROTECTED]> wrote:
>
> Hi All,
>
> I've been working through Michael Noll's multi-node cluster setup example
> (Running_Hadoop_On_Ubuntu_Linux) for Hadoop, and I have a working setup. On
> my slave machine, which is currently running a datanode, I killed the
> process in an effort to simulate some sort of failure of the slave
> machine's datanode. I had assumed that the namenode would poll its
> datanodes and attempt to bring up any node that goes down. Looking at my
> slave machine, it seems that the datanode process is still down (I've
> checked with jps).
>
> Obviously I'm missing something! Does Hadoop look after its datanodes? Is
> there a config setting that I may have missed? Do I need to create some
> sort of external tool to poll and attempt to bring up nodes that have gone
> down?
>
> Thanks
> Will
>
