Hello,

After formatting HDFS and removing several entries from the slaves file, starting $HADOOP/bin/start-all.sh (Hadoop 0.20) gives me the messages below in jobtracker.log. All the machines except acrux have been removed from the slaves file, and both acrux and the jobtracker have the same conf files.
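For reference, the trimmed slaves file described above would look something like this (a sketch; assuming the standard conf/slaves location and that acrux is the only remaining worker hostname):

```
# $HADOOP/conf/slaves -- one worker hostname per line
acrux
```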
Why does it still discover the old machines? Does it automatically discover new machines?

Thanks and Regards
Saptarshi

WARN org.apache.hadoop.mapred.JobTracker: Serious problem, cannot find record of 'previous' heartbeat for 'tracker_deneb.'; reinitializing the tasktracker
WARN org.apache.hadoop.mapred.JobTracker: Serious problem, cannot find record of 'previous' heartbeat for 'tracker_adhara.stat.purdue.edu:localhost.localdomain/127.0.0.1:37715'; reinitializing the tasktracker
WARN org.apache.hadoop.mapred.JobTracker: Serious problem, cannot find record of 'previous' heartbeat for 'tracker_castor'; reinitializing the tasktracker
2009-08-16 02:15:03,146 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/deneb.
2009-08-16 02:15:03,161 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/adhara
2009-08-16 02:15:03,165 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/castor
2009-08-16 02:15:03,193 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/acrux.
2009-08-16 02:15:15,158 ERROR org.apache.hadoop.mapred.PoolManager: Failed to reload allocations file - will use existing allocation
