Too many fetch-failures
From: David Parks
To: user@hadoop.apache.org
Sent: Monday, March 11, 2013 3:23 PM
Subject: Unexpected Hadoop behavior: map task re-running after reducer has been running
I can’t explain this behavior, can someone help me here:
Hi David,
The maps are getting re-triggered because one of the nodes holding map outputs is being lost during the reduce phase. As a result, the map outputs are no longer available from that node for the reduce tasks to fetch, so the affected map tasks are scheduled to run again.
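Bounding how many times a lost map output can force re-execution is mostly a cluster-tuning matter. As an illustration only, here is a minimal mapred-site.xml sketch for Hadoop 1.x; the property names use the old mapred.* namespace and the values are assumptions, not taken from this thread:

```xml
<!-- mapred-site.xml (Hadoop 1.x sketch; values are illustrative assumptions) -->
<configuration>
  <!-- Maximum attempts before a map task is declared failed for the job -->
  <property>
    <name>mapred.map.max.attempts</name>
    <value>4</value>
  </property>
  <!-- Maximum attempts for reduce tasks -->
  <property>
    <name>mapred.reduce.max.attempts</name>
    <value>4</value>
  </property>
  <!-- Milliseconds a task may run without reporting progress before being killed -->
  <property>
    <name>mapred.task.timeout</name>
    <value>600000</value>
  </property>
</configuration>
```

Note that these settings only bound retries; they do not fix the underlying cause of "Too many fetch-failures" (a node dropping out, or DNS/connectivity problems between TaskTrackers), which is usually worth diagnosing in the TaskTracker logs first.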
Can you try re tri
I can’t explain this behavior, can someone help me here:
Kind    % Complete  Num Tasks  Pending  Running  Complete  Killed  Failed/Killed Task Attempts
map     100.00%     23547      0        1        23546     0       247 / 0
reduce  62.40%      13738      3        0        6232      0       336