Hi Manoj,

 

As you may be aware, this error means the reducers are unable to fetch intermediate map output from the TaskTrackers that ran the map tasks. You can try:

* increasing tasktracker.http.threads, so there are more TaskTracker threads available to handle fetch requests from the reducers;

* decreasing mapreduce.reduce.parallel.copies, so fewer copies/fetches are performed in parallel (a config sketch follows below).
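
For reference, here is a minimal mapred-site.xml sketch. The values are illustrative only (not from this thread), and property names vary a little between Hadoop versions, so check yours before applying:

    <!-- conf/mapred-site.xml; tasktracker.http.threads is read by the
         TaskTracker, so restart the TaskTrackers after changing it -->
    <property>
      <name>tasktracker.http.threads</name>
      <!-- default is 40; more threads can serve more concurrent fetches -->
      <value>80</value>
    </property>
    <property>
      <name>mapreduce.reduce.parallel.copies</name>
      <!-- default is 5; lower it if it was raised, to reduce fetch load -->
      <value>5</value>
    </property>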

 

It could also be due to a temporary DNS issue.

 

See slide 26 of this presentation for potential causes of this message:
http://www.slideshare.net/cloudera/hadoop-troubleshooting-101-kate-ting-cloudera

 

I'm not sure why you didn't hit this problem before. Was it the same data or
different data? Did you have other jobs running on your cluster?

 

Hope that helps

 

Regards

Vijay

 

From: Manoj Babu [mailto:manoj...@gmail.com] 
Sent: 01 February 2013 15:09
To: user@hadoop.apache.org
Subject: Reg Too many fetch-failures Error

 

Hi All,

 

I am getting a "Too many fetch-failures" exception.

What might be the reason for this exception? For the same size of data I didn't
face this error earlier, and there is a change in the code.

How can I avoid this?

 

Thanks in advance.

 

Cheers!

Manoj.
