[ https://issues.apache.org/jira/browse/HDFS-4067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Wang updated HDFS-4067:
------------------------------
    Component/s: test

> TestUnderReplicatedBlocks may fail due to ReplicaAlreadyExistsException
> -----------------------------------------------------------------------
>
>                 Key: HDFS-4067
>                 URL: https://issues.apache.org/jira/browse/HDFS-4067
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 2.0.0-alpha
>            Reporter: Eli Collins
>            Assignee: Jing Zhao
>              Labels: test-fail
>             Fix For: 3.0.0-alpha1
>
>         Attachments: HDFS-4067.trunk.001.patch
>
>
> After adding the timeout to TestUnderReplicatedBlocks in HDFS-4061, we can see 
> that the root cause of the failure is a ReplicaAlreadyExistsException:
> {noformat}
> org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException: Block BP-1541130889-172.29.121.238-1350435573411:blk_-3437032108997618258_1002 already exists in state FINALIZED and thus cannot be created.
>       at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:799)
>       at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:90)
>       at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:155)
>       at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:393)
>       at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:98)
>       at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:66)
>       at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:219)
> {noformat}
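
For context, a DataNode raises ReplicaAlreadyExistsException when it is asked to create a temporary replica for a block it already holds, for example in FINALIZED state. The snippet below is a minimal, self-contained sketch of that guard using hypothetical stand-in types (ReplicaMapSketch and a simplified ReplicaState enum); it is not the actual FsDatasetImpl code, only an illustration of the condition the failing test run is hitting:

{code:java}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-ins for the DataNode's replica bookkeeping; the real logic
// lives in FsDatasetImpl and its replica map, not in these simplified classes.
enum ReplicaState { TEMPORARY, RBW, FINALIZED }

class ReplicaAlreadyExistsException extends IOException {
  ReplicaAlreadyExistsException(String msg) { super(msg); }
}

class ReplicaMapSketch {
  private final Map<Long, ReplicaState> replicas = new ConcurrentHashMap<>();

  // Sketch of the guard that produces the error in the stack trace above: if the
  // DataNode already has any replica of the block (e.g. FINALIZED), the incoming
  // write is rejected instead of a new TEMPORARY replica being created.
  void createTemporary(long blockId) throws ReplicaAlreadyExistsException {
    ReplicaState existing = replicas.putIfAbsent(blockId, ReplicaState.TEMPORARY);
    if (existing != null) {
      throw new ReplicaAlreadyExistsException("Block " + blockId
          + " already exists in state " + existing + " and thus cannot be created.");
    }
  }

  void finalizeReplica(long blockId) {
    replicas.put(blockId, ReplicaState.FINALIZED);
  }

  public static void main(String[] args) throws Exception {
    ReplicaMapSketch dn = new ReplicaMapSketch();
    dn.createTemporary(1002L);
    dn.finalizeReplica(1002L);
    // A second write for the same block now fails, mirroring the stack trace above.
    dn.createTemporary(1002L);  // throws ReplicaAlreadyExistsException
  }
}
{code}

In the flaky run, re-replication presumably targets a DataNode that still holds a finalized copy of the block, so the scheduled write fails with this exception.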


