Cam Macdonell wrote:

Hi,

I'm getting the following warning when running the simple wordcount and grep examples.

09/04/15 16:54:16 INFO mapred.JobClient: Task Id : attempt_200904151649_0001_m_000019_0, Status : FAILED
Too many fetch-failures
09/04/15 16:54:16 WARN mapred.JobClient: Error reading task output http://localhost.localdomain:50060/tasklog?plaintext=true&taskid=attempt_200904151649_0001_m_000019_0&filter=stdout
09/04/15 16:54:16 WARN mapred.JobClient: Error reading task output http://localhost.localdomain:50060/tasklog?plaintext=true&taskid=attempt_200904151649_0001_m_000019_0&filter=stderr

The only advice I could find in other posts with similar errors was to set up /etc/hosts with the hostnames and IPs of the master and all slaves. I did this, but I still get the warning above. The job output seems to come out all right, though (I guess that's why it's only a warning).
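For reference, the entries I added look something like this (the names and addresses below are just placeholders for my setup):

    # /etc/hosts on the master and on every slave
    192.168.1.10   master
    192.168.1.11   slave1
    192.168.1.12   slave2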

I tried running wget on the http:// address from the warning message and got the following back:

2009-04-15 16:53:46 ERROR 400: Argument taskid is required.

So perhaps the wrong task ID is being passed in the HTTP request. Any ideas on how to get rid of these warnings?
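(One thing I noticed while testing: if the URL isn't quoted, the shell treats the & characters as command separators, so the taskid parameter never reaches the server, and that alone produces the 400 error above. Quoting the URL rules that out:)

    # quote the URL so the shell doesn't split it at the & characters
    wget -O - 'http://localhost.localdomain:50060/tasklog?plaintext=true&taskid=attempt_200904151649_0001_m_000019_0&filter=stdout'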

Thanks,
Cam

Well, for future Googlers, I'll answer my own post. Watch out for a real hostname tacked onto the end of the "localhost" line in /etc/hosts on the slaves. One of my slaves was registering itself with the jobtracker as "localhost.localdomain".
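Concretely, the broken slave had its own hostname on the loopback line, so its canonical name resolved to localhost.localdomain (hostname and IP below are placeholders for my setup):

    # Before (bad): the slave's hostname resolves to 127.0.0.1,
    # so it reports itself to the jobtracker as localhost.localdomain
    127.0.0.1    localhost.localdomain localhost slave1

    # After (fixed): keep only loopback names on 127.0.0.1 and
    # map the real hostname to the slave's routable address
    127.0.0.1    localhost.localdomain localhost
    192.168.1.11 slave1

You can check what a slave thinks its canonical name is with "hostname -f".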

Is there a way Hadoop could be made less dependent on /etc/hosts and rely on more dynamic hostname resolution instead?
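(For what it's worth, there seem to be configuration properties that let the daemons derive their reported hostname from a network interface or a nameserver instead of the local resolver; something like the following in hadoop-site.xml, though I haven't verified that it avoids this particular problem:)

    <property>
      <name>mapred.tasktracker.dns.interface</name>
      <value>eth0</value>
    </property>
    <property>
      <name>dfs.datanode.dns.interface</name>
      <value>eth0</value>
    </property>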

Cam
