I am on Hadoop 0.20.
 
To add a data node to a cluster, if we do not use the include/exclude/slaves
files, do we need to do anything other than configuring hdfs-site.xml to
point to the name node and mapred-site.xml to point to the job tracker?
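For reference, this is roughly the configuration I have in mind on the new data node. The host names and ports below are hypothetical placeholders; also, in 0.20 the fs.default.name property conventionally lives in core-site.xml rather than hdfs-site.xml:

```xml
<!-- core-site.xml: tell the data node where the name node is -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://nn-host:9000</value> <!-- hypothetical namenode host/port -->
  </property>
</configuration>

<!-- mapred-site.xml: tell the task tracker where the job tracker is -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>jt-host:9001</value> <!-- hypothetical jobtracker host/port -->
  </property>
</configuration>
```

With these in place, my understanding is that starting the datanode and tasktracker daemons on the new machine should let them register with the running masters.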
 
For example, must the job tracker and name node always be restarted?
 
On a related note, if we restart a data node (one that has some blocks on it)
and the data node comes back with a new IP address, should we restart the
name node/job tracker for HDFS and MapReduce to function correctly?
Would the blocks on the restarted data node be detected, or would HDFS think
those blocks were lost and start re-replicating them?
 
Thanks,
Sumadhur
