Title says it all... this isn't the first job I've written, either.
Very confused.
Lots of things can happen. If you have a cleanup method, it can fail
after map and reduce complete. Also, Hadoop writes the output of a task
to a temporary location and only commits the results of the individual
tasks to their final place in HDFS after they succeed, so you might be
failing on that copy to HDFS.
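Roughly what that looks like in the new mapreduce API (just a sketch;
FlushingMapper and flushSideData are made-up names, only there to show
where the failure can land):

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// A mapper whose cleanup() runs after every input record has been
// consumed. If cleanup() throws, the attempt is marked FAILED even
// though the map phase already reported 100%.
public class FlushingMapper
    extends Mapper<LongWritable, Text, Text, LongWritable> {

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    context.write(value, key); // per-record work finishes fine
  }

  @Override
  protected void cleanup(Context context)
      throws IOException, InterruptedException {
    // Anything flushed here (side files, external handles, ...) is a
    // post-100% failure point: an IOException thrown from cleanup()
    // fails the whole task attempt.
    flushSideData(); // hypothetical stand-in for real flush logic
  }

  private void flushSideData() throws IOException {
    // placeholder for whatever buffered output the job keeps around
  }
}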
On 11/13/
It could be that the result can't be written to HDFS. Is there any hint
in the log? I recently ran into this behavior when writing many files
back to HDFS.
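For illustration, the kind of job I mean looks roughly like this
(FanOutReducer is a made-up name, and I'm assuming the mapreduce
MultipleOutputs API here). Every per-key file only gets finalized when
mos.close() runs in cleanup(), so the task can still die there even
though reduce() itself finished:

import java.io.IOException;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

// Writes one output file per key. A job like this can look 100% done
// and then fail while the per-key files are being closed and
// committed back to HDFS.
public class FanOutReducer
    extends Reducer<Text, Text, NullWritable, Text> {

  private MultipleOutputs<NullWritable, Text> mos;

  @Override
  protected void setup(Context context) {
    mos = new MultipleOutputs<NullWritable, Text>(context);
  }

  @Override
  protected void reduce(Text key, Iterable<Text> values, Context context)
      throws IOException, InterruptedException {
    for (Text v : values) {
      // one file per key: an easy way to end up with thousands of files
      mos.write(NullWritable.get(), v, key.toString());
    }
  }

  @Override
  protected void cleanup(Context context)
      throws IOException, InterruptedException {
    mos.close(); // failure point: all the per-key files get closed here
  }
}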
Mike Kendall wrote:
> Title says it all... this isn't the first job I've written, either.
> Very confused.
Hi Mike,
The % reported represents the percentage of records read by the
framework, not the percentage of records processed. So, for the sake of
example, let's say you have only one record in the data: the framework
will report 100% as soon as that record is read, even though you might
still be doing a lot of processing on it, and that processing can still
fail afterwards.
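To make that concrete, here's a hypothetical sketch (HeavyWorkMapper
and expensiveStep are made-up names). With a one-record split, the
reported progress jumps to 100% as soon as the record is read, while
the real work is still running inside map():

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class HeavyWorkMapper
    extends Mapper<LongWritable, Text, Text, Text> {

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    for (int i = 0; i < 1000; i++) {
      expensiveStep(value); // hypothetical per-record computation
      // Heartbeat so the framework doesn't time the task out; note
      // this does NOT move the % shown, which tracks records read.
      context.progress();
    }
    context.write(new Text("done"), value);
  }

  private void expensiveStep(Text value) {
    // stand-in for real CPU-heavy processing
  }
}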
Hmm... let's collect some error messages. Looks like the same task
failed 4 times... is there a way I can get better logs about this task?
MapAttempt TASK_TYPE="MAP" TASKID="task_200911131440_0001_m_000307"
TASK_ATTEMPT_ID="attempt_200911131440_0001_m_000307_0" TASK_STATUS="FAILED"
FINISH_T
Oh, and just FYI: this is the only failed task. Everything else works
just fine. Maybe the data copied over incorrectly or was malformed...
/me checks
On Fri, Nov 13, 2009 at 3:03 PM, Mike Kendall wrote:
> Hmm... let's collect some error messages. Looks like the same task
> failed 4 times...