[ https://issues.apache.org/jira/browse/HADOOP-1443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12501307 ]
Doug Cutting commented on HADOOP-1443:
--------------------------------------
My current theory is that this was caused by HADOOP-894, which is not in 0.13.
If that's the case, then this should not be merged into the 0.13 branch, and
the "Fix Version" should be 0.14, not 0.13. Does that sound right to others,
or is this really a problem in 0.13?
> TestFileCorruption fails with ArrayIndexOutOfBoundsException
> ------------------------------------------------------------
>
> Key: HADOOP-1443
> URL: https://issues.apache.org/jira/browse/HADOOP-1443
> Project: Hadoop
> Issue Type: Bug
> Components: dfs
> Reporter: Nigel Daley
> Assignee: Konstantin Shvachko
> Priority: Blocker
> Fix For: 0.13.0
>
> Attachments: 1443.patch, EmptyFile.patch
>
>
> org.apache.hadoop.dfs.TestFileCorruption.testFileCorruption failed once on Windows with this exception:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: 1
> at org.apache.hadoop.dfs.FSNamesystem.getBlockLocations(FSNamesystem.java:472)
> at org.apache.hadoop.dfs.FSNamesystem.getBlockLocations(FSNamesystem.java:436)
> at org.apache.hadoop.dfs.NameNode.getBlockLocations(NameNode.java:272)
> at org.apache.hadoop.dfs.NameNode.open(NameNode.java:259)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:341)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:567)
> at org.apache.hadoop.ipc.Client.call(Client.java:471)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:165)
> at org.apache.hadoop.dfs.$Proxy0.open(Unknown Source)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> at org.apache.hadoop.dfs.$Proxy0.open(Unknown Source)
> at org.apache.hadoop.dfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:590)
> at org.apache.hadoop.dfs.DFSClient$DFSInputStream.<init>(DFSClient.java:582)
> at org.apache.hadoop.dfs.DFSClient.open(DFSClient.java:273)
> at org.apache.hadoop.dfs.DistributedFileSystem$RawDistributedFileSystem.open(DistributedFileSystem.java:136)
> at org.apache.hadoop.fs.ChecksumFileSystem$FSInputChecker.<init>(ChecksumFileSystem.java:114)
> at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:340)
> at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:234)
> at org.apache.hadoop.dfs.DFSTestUtil.checkFiles(DFSTestUtil.java:132)
> at org.apache.hadoop.dfs.TestFileCorruption.testFileCorruption(TestFileCorruption.java:66)
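For context on the failure mode: the exception reports index 1 falling out of range while the NameNode builds block locations in FSNamesystem.getBlockLocations. The following is a purely hypothetical, self-contained sketch (none of the names below come from the Hadoop source) of how a caller that assumes a file spans at least two blocks can walk past a shorter block array and hit exactly this ArrayIndexOutOfBoundsException: 1:

public class BlockIndexDemo {
    public static void main(String[] args) {
        // Hypothetical sketch only -- not Hadoop code. A file short enough to
        // occupy a single (possibly empty) block yields a one-element array.
        long[] blockSizes = {0L};
        int startBlock = 0;
        int endBlock = 1;  // caller wrongly assumes a second block exists

        // The inclusive upper bound walks one slot past the end of the array,
        // throwing java.lang.ArrayIndexOutOfBoundsException: 1 at i == 1.
        for (int i = startBlock; i <= endBlock; i++) {
            System.out.println("block " + i + " has size " + blockSizes[i]);
        }
    }
}

Whether the real trigger is an empty or single-block file, as the EmptyFile.patch attachment name might suggest, is an assumption here rather than something the report states.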