[
https://issues.apache.org/jira/browse/HADOOP-2121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12595442#action_12595442
]
jason_attributor edited comment on HADOOP-2121 at 5/9/08 5:21 AM:
-------------------------------------------------------
It looks like this is not the issue now. Now the machine is throwing "too many
open files" errors. I think the patch allowed the underlying file descriptor limit
problem to become visible (a quick check is sketched below).
As far as 0.16 goes, I will move forward soon, but we have some production stuff
running, and finding the time to port and move the production app and cluster is
difficult.
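
A minimal way to confirm the descriptor pressure, assuming a Sun/Oracle JVM on a
Unix-like host: the com.sun.management bean used below is a JVM extension, not a
Hadoop API, and the class name FdUsage is just for this sketch. At the shell level,
ulimit -n on the same hosts shows the configured per-process limit.
{noformat}
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdUsage {
  public static void main(String[] args) {
    OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
    if (os instanceof UnixOperatingSystemMXBean) {
      UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
      // Open descriptors for this JVM versus the per-process limit it sees.
      System.out.println("open fds: " + unix.getOpenFileDescriptorCount()
          + " / limit: " + unix.getMaxFileDescriptorCount());
    } else {
      System.out.println("fd counts not available on this platform");
    }
  }
}
{noformat}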
was (Author: jason_attributor):
It looks like this is not the issue now. Now the machine is throwing "too many
open files" errors.
As far as 0.16 goes, I will move forward soon, but we have some production stuff
running, and finding the time to port and move the production app and cluster is
difficult.
> Unexpected IOException in DFSOutputStream.close()
> -------------------------------------------------
>
> Key: HADOOP-2121
> URL: https://issues.apache.org/jira/browse/HADOOP-2121
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.14.3
> Reporter: Raghu Angadi
> Assignee: Raghu Angadi
> Fix For: 0.16.0
>
> Attachments: HADOOP-2121.patch, HADOOP-2121.patch, HADOOP-2121.patch
>
>
> While running a test with datanodes with disk space limitations, Hairong
> noticed many IOExceptions like this:
> {noformat}
> java.io.IOException: Mismatch in writeChunk() args
>         at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.writeChunk(DFSClient.java:1575)
>         at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:140)
>         at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:122)
>         at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.close(DFSClient.java:1715)
>         at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:49)
>         at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:64)
>         at org.apache.hadoop.io.SequenceFile$Writer.close(SequenceFile.java:918)
>         at org.apache.hadoop.mapred.SequenceFileOutputFormat$1.close(SequenceFileOutputFormat.java:72)
>         at org.apache.hadoop.mapred.MapTask$DirectMapOutputCollector.close(MapTask.java:232)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:197)
>         at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1935)
> {noformat}
> I will submit a patch. With the patch, we will still see an IOException, but
> an expected one.
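
For reference, here is a hypothetical sketch of the kind of argument check that
produces the message in the stack trace above. The field names (bytesPerChecksum,
checksumSize), the 512/4 values, the method signature, and the class name
WriteChunkCheck are assumptions for illustration, not the actual DFSClient code.
{noformat}
import java.io.IOException;

// Hypothetical illustration only; not the actual DFSClient code.
public class WriteChunkCheck {
  private final int bytesPerChecksum = 512; // assumed io.bytes.per.checksum
  private final int checksumSize = 4;       // CRC32 checksum is 4 bytes

  // Callers are expected to hand over at most one checksum chunk per call,
  // with a checksum buffer of exactly checksumSize bytes; close() flushes the
  // remaining buffer through the same path, so a stale or oversized buffer
  // trips this check instead of silently writing bad data.
  void writeChunk(byte[] b, int off, int len, byte[] checksum) throws IOException {
    if (len > bytesPerChecksum || checksum.length != checksumSize) {
      throw new IOException("Mismatch in writeChunk() args: len=" + len
          + " bytesPerChecksum=" + bytesPerChecksum
          + " checksum.length=" + checksum.length
          + " checksumSize=" + checksumSize);
    }
    // ... normally the chunk would be queued for the datanode pipeline here ...
  }

  public static void main(String[] args) throws Exception {
    WriteChunkCheck w = new WriteChunkCheck();
    try {
      // Passing more than one chunk's worth of data triggers the exception.
      w.writeChunk(new byte[1024], 0, 1024, new byte[4]);
    } catch (IOException e) {
      System.out.println(e.getMessage());
    }
  }
}
{noformat}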
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.