[ http://issues.apache.org/jira/browse/HADOOP-142?page=comments#action_12374831 ]
Runping Qi commented on HADOOP-142:
---
Ideally, it would be best if the tasktracker could diagnose whether the
failure was task-specific or a general one.
If it is a ge
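As a rough illustration of that idea (hypothetical names, not the actual tasktracker/jobtracker code): record which hosts a task attempt has already failed on, treat failures on several distinct hosts as evidence that the task itself is bad, and otherwise prefer rescheduling it on a host where it has not failed yet.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Hypothetical helper: a failure on a single host may be host-specific;
    // repeated failures on distinct hosts suggest the task itself is broken.
    public class FailedTaskHistory {
        private final Map<String, Set<String>> failedHostsByTask =
            new HashMap<String, Set<String>>();
        private final int maxDistinctHostFailures;

        public FailedTaskHistory(int maxDistinctHostFailures) {
            this.maxDistinctHostFailures = maxDistinctHostFailures;
        }

        /** Record that the given task failed on the given host. */
        public void recordFailure(String taskId, String host) {
            Set<String> hosts = failedHostsByTask.get(taskId);
            if (hosts == null) {
                hosts = new HashSet<String>();
                failedHostsByTask.put(taskId, hosts);
            }
            hosts.add(host);
        }

        /** A candidate host is acceptable only if the task has not failed there. */
        public boolean canScheduleOn(String taskId, String host) {
            Set<String> hosts = failedHostsByTask.get(taskId);
            return hosts == null || !hosts.contains(host);
        }

        /** Failures on several distinct hosts point to a task-specific problem. */
        public boolean looksTaskSpecific(String taskId) {
            Set<String> hosts = failedHostsByTask.get(taskId);
            return hosts != null && hosts.size() >= maxDistinctHostFailures;
        }
    }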
[ http://issues.apache.org/jira/browse/HADOOP-129?page=all ]
Doug Cutting updated HADOOP-129:
Attachment: path.patch
Here's a patch that replaces uses of java.io.File in Hadoop's FileSystem and
MapReduce APIs with a new class named Path. I left some e
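For readers following along, the practical effect is that file names in the public APIs become Path objects and all I/O goes through FileSystem rather than java.io.File. A minimal usage sketch, written against a later Hadoop release (the exact signatures at the time of this patch may differ):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PathExample {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);     // local or distributed, per the config
            Path dir = new Path("/user/demo");        // Path, not java.io.File
            Path file = new Path(dir, "part-00000");  // Paths compose like file names
            if (fs.exists(file)) {
                FSDataInputStream in = fs.open(file); // all I/O goes through FileSystem
                try {
                    System.out.println("first byte: " + in.read());
                } finally {
                    in.close();
                }
            }
        }
    }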
[ http://issues.apache.org/jira/browse/HADOOP-129?page=comments#action_12374824 ]
Doug Cutting commented on HADOOP-129:
-
> Does it make sense to create a class that would extend File and override
> unsupported operations to throw UnsupportedOperationException
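To make the alternative being asked about concrete, such a class would look roughly like the sketch below (illustrative only; which operations to reject is a hypothetical choice, not something taken from the patch):

    import java.io.File;

    // Sketch: keep the familiar File type but refuse operations that the
    // underlying FileSystem cannot meaningfully support.
    public class RestrictedFile extends File {
        public RestrictedFile(String pathname) {
            super(pathname);
        }

        @Override
        public boolean setReadOnly() {
            throw new UnsupportedOperationException("setReadOnly is not supported");
        }

        @Override
        public long lastModified() {
            throw new UnsupportedOperationException("lastModified is not supported");
        }

        @Override
        public boolean createNewFile() {
            throw new UnsupportedOperationException("createNewFile is not supported");
        }
    }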
[ http://issues.apache.org/jira/browse/HADOOP-141?page=comments#action_12374822 ]
paul sutter commented on HADOOP-141:
reword that first sentence:
Reduce progress grinds to a halt with lots of MapOutputProtocol timeouts and
transferring the same file
failed tasks should be rescheduled on different hosts after other jobs
--
Key: HADOOP-142
URL: http://issues.apache.org/jira/browse/HADOOP-142
Project: Hadoop
Type: Improvement
Components: mapred
Disk thrashing / task timeouts during map output copy phase
---
Key: HADOOP-141
URL: http://issues.apache.org/jira/browse/HADOOP-141
Project: Hadoop
Type: Bug
Components: mapred
Environment: linux
Repo
Hi,
what is the reason that each job with no mapper defined runs the
IdentityMapper?
Handling different formats (as discussed) between mapping and
reducing is difficult.
Having one job that just maps in one format and having another job
that just reduces
in another format would be a n
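For context, the identity mapper that gets substituted simply emits every input record unchanged, so a "reduce-only" job still fits the map/shuffle/reduce pipeline. A sketch against the old org.apache.hadoop.mapred interfaces (later releases add generics, so exact signatures vary):

    import java.io.IOException;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.io.WritableComparable;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    // Sketch of an identity mapper: every input pair is emitted unchanged,
    // so the job's real work happens in the shuffle/sort and the reducer.
    public class IdentityMapperSketch extends MapReduceBase implements Mapper {
        public void map(WritableComparable key, Writable value,
                        OutputCollector output, Reporter reporter) throws IOException {
            output.collect(key, value);  // pass the pair straight through
        }
    }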