[ https://issues.apache.org/jira/browse/HDFS-14450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16958500#comment-16958500 ]
Wei-Chiu Chuang commented on HDFS-14450:
----------------------------------------

[~ferhui] [~marvelrock] [~ayushtkn] this looks quite similar to HDFS-14450. Could you confirm? If so, I'd like to close this as a duplicate. Thanks.

> Erasure Coding: decommissioning datanodes cause replicate a large number of
> duplicate EC internal blocks
> --------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-14450
>                 URL: https://issues.apache.org/jira/browse/HDFS-14450
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ec
>            Reporter: Wu Weiwei
>            Assignee: Wu Weiwei
>            Priority: Major
>         Attachments: HDFS-14450-000.patch
>
>
> {code:java}
> // [WARN] [RedundancyMonitor] : Failed to place enough replicas, still in need of 2 to reach 167 (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All required storage types are unavailable: unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> {code}
> In a large-scale cluster, decommissioning a large number of datanodes causes EC
> block groups to replicate a large number of duplicate internal blocks.
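> To make the symptom concrete, the sketch below is a standalone illustration, not HDFS source code: an RS-6-3 block group has nine distinct internal block indices (0-8), and "duplicate internal blocks" means the same index ends up stored on more than one DataNode after decommission-triggered reconstruction. The class name, method name, and sample indices are invented for this example.
> {code:java}
> import java.util.HashMap;
> import java.util.List;
> import java.util.Map;
>
> // Standalone sketch (hypothetical names): counts how many internal-block
> // indices of one EC block group are reported by more than one DataNode.
> public class DuplicateInternalBlockCheck {
>
>   static int countDuplicatedIndices(List<Integer> reportedIndices) {
>     // copies per internal block index (0..8 for RS-6-3)
>     Map<Integer, Integer> copiesPerIndex = new HashMap<>();
>     for (int idx : reportedIndices) {
>       copiesPerIndex.merge(idx, 1, Integer::sum);
>     }
>     int duplicated = 0;
>     for (int copies : copiesPerIndex.values()) {
>       if (copies > 1) {
>         duplicated++;
>       }
>     }
>     return duplicated;
>   }
>
>   public static void main(String[] args) {
>     // Indices reported for one RS-6-3 group after decommissioning:
>     // index 3 and index 5 each appear twice -> 2 duplicated internal blocks.
>     List<Integer> reported = List.of(0, 1, 2, 3, 3, 4, 5, 5, 6, 7, 8);
>     System.out.println("Duplicated internal block indices: "
>         + countDuplicatedIndices(reported));
>   }
> }
> {code}
> In a fully healthy group each index is stored exactly once, so any count above zero corresponds to the surplus internal blocks described above.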