ZanderXu created HDFS-16598:
-------------------------------

             Summary: All datanodes [DatanodeInfoWithStorage[127.0.0.1:57448,DS-1b5f7e33-a2bf-4edc-9122-a74c995a99f5,DISK]] are bad. Aborting...
                 Key: HDFS-16598
                 URL: https://issues.apache.org/jira/browse/HDFS-16598
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: ZanderXu
            Assignee: ZanderXu
org.apache.hadoop.hdfs.testPipelineRecoveryOnRestartFailure failed with a stack trace like:
{code:java}
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:57448,DS-1b5f7e33-a2bf-4edc-9122-a74c995a99f5,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1667)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1601)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1587)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1371)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:674)
{code}
After tracing the root cause, this bug was introduced by [HDFS-16534|https://issues.apache.org/jira/browse/HDFS-16534]: when pipeline recovery fails, the block generation stamp (GS) held by the client may be smaller than the one held by the DataNode.
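To illustrate the failure mode, here is a minimal, simplified sketch of how a generation-stamp comparison on the DataNode side can reject a recovery attempt from a client whose GS lags behind; once every remaining node in the pipeline rejects the attempt, DataStreamer#handleBadDatanode reports all datanodes as bad. The class and method names below (Replica, acceptsRecovery) are hypothetical and for illustration only; they are not the actual HDFS-16534 code path.
{code:java}
// Hypothetical, simplified sketch of the generation-stamp (GS) check that can
// make a pipeline-recovery attempt fail. Names are illustrative, not the real
// HDFS classes or methods.
public class GenStampCheckSketch {

    /** Minimal stand-in for the replica state a DataNode keeps for a block. */
    static final class Replica {
        final long blockId;
        final long generationStamp;

        Replica(long blockId, long generationStamp) {
            this.blockId = blockId;
            this.generationStamp = generationStamp;
        }
    }

    /**
     * A DataNode only accepts a recovery whose GS is not older than the GS it
     * already holds for the replica. If the client presents a smaller GS, the
     * attempt is rejected and the client marks the node as bad.
     */
    static boolean acceptsRecovery(Replica onDataNode, long clientGenStamp) {
        return clientGenStamp >= onDataNode.generationStamp;
    }

    public static void main(String[] args) {
        // Assume the DN's GS was already bumped during an earlier recovery round,
        // while the client still holds the older, pre-recovery GS.
        Replica replicaOnDn = new Replica(1073741825L, 1005L);
        long clientGs = 1004L;

        if (!acceptsRecovery(replicaOnDn, clientGs)) {
            // With only one remaining DataNode in the pipeline, this surfaces on
            // the client as "All datanodes [...] are bad. Aborting...".
            System.out.println("Recovery rejected: client GS " + clientGs
                    + " < DataNode GS " + replicaOnDn.generationStamp);
        }
    }
}
{code}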