[ https://issues.apache.org/jira/browse/HADOOP-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485269 ]
Doug Cutting commented on HADOOP-1172:
--------------------------------------
+0 If the logging disk is full, then the node is not useful. Optimizing for
the case where the disk fills just as a task completes, but before it logs
that completion, doesn't seem worth the effort to me.
> Reduce job failed due to error in logging
> -----------------------------------------
>
> Key: HADOOP-1172
> URL: https://issues.apache.org/jira/browse/HADOOP-1172
> Project: Hadoop
> Issue Type: Bug
> Reporter: Runping Qi
>
> Here is the stack trace:
> java.io.IOException: No space left on device
> at java.io.FileOutputStream.writeBytes(Native Method)
> at java.io.FileOutputStream.write(FileOutputStream.java:260)
> at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
> at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
> at org.apache.hadoop.mapred.TaskLog$Writer.writeIndexRecord(TaskLog.java:251)
> at org.apache.hadoop.mapred.TaskLog$Writer.close(TaskLog.java:235)
> at org.apache.hadoop.mapred.TaskRunner.runChild(TaskRunner.java:406)
> at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:281)
> A failure to log should not fail the task, especially when closing the
> log writer. At that point, the mapper had actually completed.
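
For illustration, here is a minimal sketch of the behavior the reporter is
asking for: catch the IOException raised while closing the log writer instead
of letting it propagate and fail a task that has already completed. The class
and method names (SafeLogClose, closeQuietly) are hypothetical stand-ins, not
the actual TaskRunner code.

    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.Writer;

    public class SafeLogClose {

        // Close the task's log writer without letting an IOException
        // (e.g. "No space left on device") fail the completed task.
        static void closeQuietly(Writer logWriter) {
            try {
                logWriter.close();
            } catch (IOException ioe) {
                // The task itself has finished; record the problem
                // and continue instead of rethrowing.
                System.err.println("WARN: failed to close task log: "
                        + ioe.getMessage());
            }
        }

        public static void main(String[] args) throws IOException {
            Writer w = new BufferedWriter(new FileWriter("/tmp/task.log"));
            w.write("task output\n");
            closeQuietly(w); // a logging failure here no longer propagates
        }
    }

Whether swallowing the exception is the right trade-off is exactly what the
+0 comment above questions: on a host whose logging disk is full, the saved
task may not be worth much anyway.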