[ https://issues.apache.org/jira/browse/HDFS-7645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14348756#comment-14348756 ]
Hadoop QA commented on HDFS-7645:
---------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12702780/HDFS-7645.03.patch
against trunk revision 5e9b814.

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:green}+1 javadoc{color}. There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.

{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

{color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:
    org.apache.hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade

The following test timeouts occurred in hadoop-hdfs-project/hadoop-hdfs:
    org.apache.hadoop.hdfs.TestAppendSnapshotTruncate

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9747//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9747//console

This message is automatically generated.
> Rolling upgrade is restoring blocks from trash multiple times
> -------------------------------------------------------------
>
>                 Key: HDFS-7645
>                 URL: https://issues.apache.org/jira/browse/HDFS-7645
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.6.0
>            Reporter: Nathan Roberts
>            Assignee: Keisuke Ogiwara
>         Attachments: HDFS-7645.01.patch, HDFS-7645.02.patch, HDFS-7645.03.patch
>
>
> When performing an HDFS rolling upgrade, the trash directory is getting restored twice, when under normal circumstances it shouldn't need to be restored at all. If I understand correctly, the only time these blocks should be restored is if we need to roll back a rolling upgrade.
> On a busy cluster, this can cause significant and unnecessary block churn, both on the datanodes and, more importantly, in the namenode.
> The two times this happens are:
> 1) Restart of the DN onto new software:
> {code}
>   private void doTransition(DataNode datanode, StorageDirectory sd,
>       NamespaceInfo nsInfo, StartupOption startOpt) throws IOException {
>     if (startOpt == StartupOption.ROLLBACK && sd.getPreviousDir().exists()) {
>       Preconditions.checkState(!getTrashRootDir(sd).exists(),
>           sd.getPreviousDir() + " and " + getTrashRootDir(sd) +
>           " should not both be present.");
>       doRollback(sd, nsInfo); // rollback if applicable
>     } else {
>       // Restore all the files in the trash. The restored files are retained
>       // during rolling upgrade rollback. They are deleted during rolling
>       // upgrade downgrade.
>       int restored = restoreBlockFilesFromTrash(getTrashRootDir(sd));
>       LOG.info("Restored " + restored + " block files from trash.");
>     }
> {code}
> 2) When the heartbeat response no longer indicates a rolling upgrade is in progress:
> {code}
>   /**
>    * Signal the current rolling upgrade status as indicated by the NN.
>    * @param inProgress true if a rolling upgrade is in progress
>    */
>   void signalRollingUpgrade(boolean inProgress) throws IOException {
>     String bpid = getBlockPoolId();
>     if (inProgress) {
>       dn.getFSDataset().enableTrash(bpid);
>       dn.getFSDataset().setRollingUpgradeMarker(bpid);
>     } else {
>       dn.getFSDataset().restoreTrash(bpid);
>       dn.getFSDataset().clearRollingUpgradeMarker(bpid);
>     }
>   }
> {code}
> HDFS-6800 and HDFS-6981 modified this behavior, making it not completely clear whether this is somehow intentional.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
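Since both of the paths quoted above can trigger a trash restore for the same block pool, one minimal sketch of a fix is to make the restore idempotent with a compare-and-set guard, so whichever path runs second becomes a no-op. This is only an illustration of the idea, not the approach taken in the attached patches; `TrashRestoreGuard` and `restoreOnce` are hypothetical names.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical guard (names are illustrative, not from the HDFS-7645 patches):
// ensures the trash-restore action runs at most once per block pool, even if
// both the DN-restart path and the heartbeat path request it.
class TrashRestoreGuard {
    private final AtomicBoolean restored = new AtomicBoolean(false);

    /** Runs the action only for the first caller; later calls are no-ops. */
    boolean restoreOnce(Runnable restoreAction) {
        if (restored.compareAndSet(false, true)) {
            restoreAction.run();
            return true;
        }
        return false;
    }
}

public class TrashRestoreGuardDemo {
    public static void main(String[] args) {
        TrashRestoreGuard guard = new TrashRestoreGuard();
        AtomicInteger restores = new AtomicInteger();

        // Path 1: DN restart onto new software asks for a restore.
        boolean first = guard.restoreOnce(restores::incrementAndGet);
        // Path 2: heartbeat says the rolling upgrade ended; asks again.
        boolean second = guard.restoreOnce(restores::incrementAndGet);

        // prints "true false restores=1"
        System.out.println(first + " " + second + " restores=" + restores.get());
    }
}
```

An `AtomicBoolean` is used rather than a plain flag because the two call sites run on different threads (startup versus the heartbeat handler); whether a flag alone is sufficient, versus tracking upgrade state persistently, depends on the rollback semantics the real patch has to preserve.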