Mingliang Liu created HDFS-9767:
-----------------------------------

             Summary: TestFileAppend#testMultipleAppends fails intermittently
                 Key: HDFS-9767
                 URL: https://issues.apache.org/jira/browse/HDFS-9767
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: test
    Affects Versions: 3.0.0
            Reporter: Mingliang Liu
*Stacktrace*:
{code}
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:52139,DS-1db5fb50-ea3a-4ae2-8b37-ebda1a947c34,DISK], DatanodeInfoWithStorage[127.0.0.1:49736,DS-2929c4a8-389a-4ff3-92be-1539018d17d9,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:52139,DS-1db5fb50-ea3a-4ae2-8b37-ebda1a947c34,DISK], DatanodeInfoWithStorage[127.0.0.1:49736,DS-2929c4a8-389a-4ff3-92be-1539018d17d9,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1165)
	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1235)
	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1426)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1341)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1324)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:598)
{code}

See recent builds:
* https://builds.apache.org/job/PreCommit-HDFS-Build/14352/testReport/org.apache.hadoop.hdfs/TestFileAppend/testMultipleAppends/
* https://builds.apache.org/job/PreCommit-HDFS-Build/14392/testReport/org.apache.hadoop.hdfs/TestFileAppend/testMultipleAppends/
* https://builds.apache.org/job/PreCommit-HDFS-Build/14315/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
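For context on why the append test hits this: the DEFAULT policy described in the error message (per the {{dfs.client.block.write.replace-datanode-on-failure.policy}} entry in hdfs-default.xml) requires a replacement datanode for an hflushed/appended block whenever a pipeline node fails and fewer nodes remain than the replication factor. A minimal sketch of that documented decision rule (not the Hadoop source; class and method names here are illustrative only):

{code}
// Sketch of the DEFAULT replace-datanode-on-failure policy as described in
// hdfs-default.xml. r = target replication, n = datanodes left in the
// pipeline after the failure.
public class ReplaceDatanodePolicySketch {
    // Add a new datanode only if r >= 3 and either
    //   (1) floor(r/2) >= n, or
    //   (2) r > n and the block is hflushed/appended.
    static boolean shouldReplace(int r, int n, boolean isAppendOrHflushed) {
        if (r < 3) {
            return false;
        }
        return (r / 2 >= n) || (r > n && isAppendOrHflushed);
    }

    public static void main(String[] args) {
        // Appending to a 3-replica file with one pipeline node lost (n = 2):
        // a replacement is required, so with no spare datanode the write fails.
        System.out.println(shouldReplace(3, 2, true));   // true
        // A plain (non-appended) write in the same state needs no replacement.
        System.out.println(shouldReplace(3, 2, false));  // false
    }
}
{code}

Since TestFileAppend runs against a small MiniDFSCluster, losing even one datanode during an append leaves no spare node to satisfy the policy, producing the intermittent failure above.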