Failing jobs with: "all datanodes are bad" and "too many fetch-failures"

2010-01-11 Thread himanshu chandola
Hi, I've been running sequential map and reduce jobs with sufficient storage for mapred.local.dir and HDFS (at least 50 GB for each on every node, and there are 30 nodes). When the expected output from one of the map-reduce jobs was close to 20 GB, the jobs failed with the
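
For later readers, a minimal sketch (Java, against the 0.20-era Configuration API; the directory paths are made up, not taken from the thread) of the two storage settings the post refers to: mapred.local.dir, where intermediate map output is spilled and later fetched by reducers, and dfs.data.dir, where each DataNode keeps its HDFS blocks. Spreading both across several disks is the usual way to keep a single volume from filling up, which can contribute to both of the error messages in the subject line.

    import org.apache.hadoop.conf.Configuration;

    public class StorageDirsSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Comma-separated lists of local directories; these paths are
            // illustrative only and are not from the original post.
            conf.set("mapred.local.dir", "/disk1/mapred/local,/disk2/mapred/local");
            conf.set("dfs.data.dir", "/disk1/hdfs/data,/disk2/hdfs/data");

            // mapred.local.dir holds intermediate map output that reducers fetch;
            // dfs.data.dir holds the DataNode's HDFS blocks (the final job output).
            System.out.println("mapred.local.dir = " + conf.get("mapred.local.dir"));
            System.out.println("dfs.data.dir     = " + conf.get("dfs.data.dir"));
        }
    }

In a real cluster these keys normally live in mapred-site.xml and hdfs-site.xml rather than being set in code; the snippet only shows which properties govern the storage the poster is sizing.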

Re: Questions about JobTracker and TaskTracker

2010-01-11 Thread Eric Sammer
On 1/11/10 9:23 AM, psdc1978 wrote: > Hi, > > I have some questions about the Hadoop MapRed architecture: > > 1 - Is there only one TaskTracker per JobTracker? Pedro: It is a one-JobTracker-to-many-TaskTrackers relationship. Generally, all slave (worker) machines in a cluster run TaskTrackers.
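
A minimal sketch (Java, old org.apache.hadoop.mapred API; the host and port are hypothetical) of how that one-to-many relationship shows up in configuration: every TaskTracker and every submitting client is pointed at the address of the single JobTracker via the mapred.job.tracker property.

    import org.apache.hadoop.mapred.JobConf;

    public class JobTrackerAddressSketch {
        public static void main(String[] args) {
            JobConf conf = new JobConf();
            // One JobTracker per cluster; all TaskTrackers and clients share
            // this address. The host:port below is an example, not a default.
            conf.set("mapred.job.tracker", "master.example.com:9001");
            System.out.println("JobTracker for this cluster: "
                    + conf.get("mapred.job.tracker"));
        }
    }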

Questions about JobTracker and TaskTracker

2010-01-11 Thread psdc1978
Hi, I have some questions about the Hadoop MapRed architecture: 1 - Is there only one TaskTracker per JobTracker? 2 - Are the TaskTracker and the JobTracker two different instances that are started only through the start-mapred.sh script? [snippet of start-mapred.sh] "$bin"/hadoop-daemon.sh --co
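
As a rough illustration of question 2 (class names from the 0.20 org.apache.hadoop.mapred package; the description of the scripts is a summary, not their exact contents): the JobTracker and the TaskTracker are two separate daemon classes, each started as its own JVM process. start-mapred.sh launches the JobTracker on the master and, by running hadoop-daemon.sh on each slave, one TaskTracker per worker.

    public class MapRedDaemonsSketch {
        public static void main(String[] args) {
            // Two distinct daemon classes, each with its own main() and each
            // run as a separate process by start-mapred.sh / hadoop-daemon.sh.
            System.out.println(org.apache.hadoop.mapred.JobTracker.class.getName());
            System.out.println(org.apache.hadoop.mapred.TaskTracker.class.getName());
        }
    }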