[ https://issues.apache.org/jira/browse/HDFS-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947552#comment-13947552 ]

Hadoop QA commented on HDFS-6130:
---------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12636772/HDFS-6130.000.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:red}-1 core tests{color}.  The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

                  org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/6506//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6506//console

This message is automatically generated.

> NPE when upgrading namenode from fsimages older than -32
> --------------------------------------------------------
>
>                 Key: HDFS-6130
>                 URL: https://issues.apache.org/jira/browse/HDFS-6130
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.4.0
>            Reporter: Fengdong Yu
>            Assignee: Haohui Mai
>            Priority: Blocker
>         Attachments: HDFS-6130.000.patch, fsimage.tar.gz
>
>
> I want to upgrade an old cluster (0.20.2-cdh3u1) to a trunk instance.
> I can upgrade successfully if I don't configure HA, but with HA enabled,
> there is an NPE when I run 'hdfs namenode -initializeSharedEdits' (see the outline after the log below):
> {code}
> 14/03/20 15:06:41 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
> 14/03/20 15:06:41 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
> 14/03/20 15:06:41 INFO util.GSet: Computing capacity for map NameNodeRetryCache
> 14/03/20 15:06:41 INFO util.GSet: VM type       = 64-bit
> 14/03/20 15:06:41 INFO util.GSet: 0.029999999329447746% max memory 896 MB = 275.3 KB
> 14/03/20 15:06:41 INFO util.GSet: capacity      = 2^15 = 32768 entries
> 14/03/20 15:06:41 INFO namenode.AclConfigFlag: ACLs enabled? false
> 14/03/20 15:06:41 INFO common.Storage: Lock on /data/hadoop/data1/dfs/name/in_use.lock acquired by nodename 7326@10-150-170-176
> 14/03/20 15:06:42 INFO common.Storage: Lock on /data/hadoop/data2/dfs/name/in_use.lock acquired by nodename 7326@10-150-170-176
> 14/03/20 15:06:42 INFO namenode.FSImage: No edit log streams selected.
> 14/03/20 15:06:42 INFO namenode.FSImageFormatPBINode: Loading 1 INodes.
> 14/03/20 15:06:42 FATAL namenode.NameNode: Exception in namenode join
> java.lang.NullPointerException
>       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.isReservedName(FSDirectory.java:2984)
>       at org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.addToParent(FSImageFormatPBINode.java:205)
>       at org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeDirectorySection(FSImageFormatPBINode.java:162)
>       at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:243)
>       at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:168)
>       at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:120)
>       at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:895)
>       at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:881)
>       at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:704)
>       at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:642)
>       at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:271)
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:894)
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:653)
>       at org.apache.hadoop.hdfs.server.namenode.NameNode.initializeSharedEdits(NameNode.java:912)
>       at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1276)
>       at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1360)
> 14/03/20 15:06:42 INFO util.ExitUtil: Exiting with status 1
> 14/03/20 15:06:42 INFO namenode.NameNode: SHUTDOWN_MSG: 
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at 10-150-170-176/10.150.170.176
> ************************************************************/
> {code}
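> A rough reproduction outline is below; only {{-initializeSharedEdits}} appears in the log above, so the {{-upgrade}} step is an assumption about how the old image was brought forward, and the cluster/HA configuration steps are omitted:
> {code}
> # run with the new (trunk) binaries against the old 0.20.2-cdh3u1 name directories
> hdfs namenode -upgrade                  # succeeds when HA is not configured
> hdfs namenode -initializeSharedEdits    # with HA configured, fails with the NPE above
> {code}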



--
This message was sent by Atlassian JIRA
(v6.2#6252)
