[ https://issues.apache.org/jira/browse/HDFS-7395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14210155#comment-14210155 ]
Yongjun Zhang commented on HDFS-7395:
-------------------------------------

Hi [~wheat9],

Thanks for addressing this issue quickly. Since I was able to reproduce the failure locally, I tested your patch and confirmed that it fixes the issue.

I reviewed the patch and have one small suggestion: can we introduce a new private no-argument method {{resetGenerationStampV1Limit()}} (or initGe...) which does
{code}
generationStampV1Limit = GenerationStamp.GRANDFATHER_GENERATION_STAMP;
{code}
so that it can be shared by the {{BlockIdManager}} constructor and the {{clear()}} method? (A rough sketch follows at the end of this message.) Thanks.

> TestDFSUpgradeFromImage.testUpgradeFromCorruptRel22Image failed in latest
> Hadoop-Hdfs-trunk runs
> -------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-7395
>                 URL: https://issues.apache.org/jira/browse/HDFS-7395
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Yongjun Zhang
>            Assignee: Haohui Mai
>         Attachments: HDFS-7395.000.patch
>
>
> In the latest Jenkins jobs
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/1932/
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/1931/
> but not
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/1930/
> the following test failed the same way:
> {code}
> Failed
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.testUpgradeFromCorruptRel22Image
> Failing for the past 2 builds (Since Failed#1931 )
> Took 0.54 sec.
> Stacktrace
> java.lang.IllegalStateException: null
>     at com.google.common.base.Preconditions.checkState(Preconditions.java:129)
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockIdManager.setGenerationStampV1Limit(BlockIdManager.java:85)
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockIdManager.clear(BlockIdManager.java:206)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.clear(FSNamesystem.java:622)
>     at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:667)
>     at org.apache.hadoop.hdfs.server.namenode.FSImage.doUpgrade(FSImage.java:376)
>     at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:268)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:991)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:537)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:596)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:763)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:747)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1443)
>     at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1104)
>     at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:975)
>     at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:804)
>     at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:465)
>     at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:424)
>     at org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.upgradeAndVerify(TestDFSUpgradeFromImage.java:582)
>     at org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.testUpgradeFromCorruptRel22Image(TestDFSUpgradeFromImage.java:318)
> {code}
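For illustration, a minimal, self-contained sketch of the refactoring suggested above. This is not the actual {{BlockIdManager}} source: the class name, the stubbed constant, and the surrounding methods are assumptions based on the snippet in the comment and the reported stack trace.
{code}
import com.google.common.base.Preconditions;

// Sketch only: names and structure are assumed from the comment and the stack
// trace, not copied from the real
// org.apache.hadoop.hdfs.server.blockmanagement.BlockIdManager.
class BlockIdManagerSketch {

  // Stub standing in for GenerationStamp.GRANDFATHER_GENERATION_STAMP; the
  // real constant lives in the GenerationStamp class.
  private static final long GRANDFATHER_GENERATION_STAMP = 0;

  private long generationStampV1Limit;

  BlockIdManagerSketch() {
    // Shared initialization instead of assigning the field inline here.
    resetGenerationStampV1Limit();
  }

  // The suggested private helper: resets the limit to the "grandfather" stamp
  // so the constructor and clear() stay in sync.
  private void resetGenerationStampV1Limit() {
    generationStampV1Limit = GRANDFATHER_GENERATION_STAMP;
  }

  void setGenerationStampV1Limit(long stamp) {
    // The checkState in the reported stack trace fails when the limit has
    // already been set, e.g. when clear() runs during an upgrade reload.
    Preconditions.checkState(generationStampV1Limit == GRANDFATHER_GENERATION_STAMP);
    generationStampV1Limit = stamp;
  }

  void clear() {
    // Reuse the helper rather than calling setGenerationStampV1Limit(), whose
    // precondition would fail once a value is already present.
    resetGenerationStampV1Limit();
  }
}
{code}
The point of the helper is that the constructor and {{clear()}} share a single reset path, so {{setGenerationStampV1Limit()}}'s precondition is not tripped when the image is reloaded during an upgrade.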