I am converting a secure non-HA cluster into a secure HA cluster. After updating the configuration and starting all of the JournalNodes, I executed the following commands on the original NameNode (the JournalNode start step is shown below):
1. hdfs namenode -initializeSharedEdits   # this step succeeded
2. hadoop-daemon.sh start namenode        # this step failed
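
For completeness, the JournalNodes were started beforehand on each JN host (bgdt01, bgdt03 and bgdt04), roughly like this (assuming the stock Hadoop 2.x scripts are on the PATH):

# run on each JournalNode host
hadoop-daemon.sh start journalnode
jps | grep JournalNode   # confirm the JournalNode process is up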

The NameNode did not start successfully. I verified that my Kerberos principals are correct, and I checked that DNS is configured correctly: nslookup can both look up and reverse-look-up the NameNode and the JournalNodes.
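
These checks looked roughly like the following (the IP address is illustrative, not my real one):

# forward and reverse lookup for the NameNode; repeated for each JournalNode
nslookup bgdt01.dev.hrb
nslookup 192.168.0.101   # the address returned by the forward lookup

# list the entries in the service keytab
klist -kt /etc/hadoop/keytab/hadoop.service.keytab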

I also checked the logs. The JournalNodes did not report any ERRORs. The NameNode log reports some ERRORs, but I still could not determine the cause from them.
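
The scan itself was just a grep over the daemon logs, assuming the default log directory (adjust if yours differs):

grep ERROR $HADOOP_HOME/logs/hadoop-*-journalnode-*.log
grep ERROR $HADOOP_HOME/logs/hadoop-*-namenode-*.log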

Below I have listed the relevant part of my hdfs-site.xml and the error log from my NameNode. Could anyone help me figure it out?


Many Thanks!

**************The main part of my hdfs-site.xml*************************

<property>
<name>dfs.nameservices</name>
<value>bgdt-dev-hrb</value>
</property>

<property>
<name>dfs.ha.namenodes.bgdt-dev-hrb</name>
<value>nn1,nn2</value>
</property>

<property>
<name>dfs.namenode.rpc-address.bgdt-dev-hrb.nn1</name>
<value>bgdt01.dev.hrb:9000</value>
</property>

<property>
<name>dfs.namenode.rpc-address.bgdt-dev-hrb.nn2</name>
<value>bgdt02.dev.hrb:9000</value>
</property>

<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://bgdt01.dev.hrb:8485;bgdt03.dev.hrb:8485;bgdt04.dev.hrb:8485/bgdt-dev-hrb</value>
</property>

<property>
<name>dfs.client.failover.proxy.provider.bgdt-dev-hrb</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence
shell(/bin/true)
</value>
</property>

<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_rsa</value>
</property>

<property>
<name>dfs.journalnode.edits.dir</name>
<value>/bgdt/hadoop/hdfs/jn</value>
</property>

<property>
<name>dfs.permissions.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///bgdt/hadoop/hdfs/nn</value>
<final>true</final>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///bgdt/hadoop/hdfs/dn</value>
</property>

<property>
<name>dfs.namenode.http-address.bgdt-dev-hrb.nn1</name>
<value>bgdt01.dev.hrb:50070</value>
</property>

<property>
<name>dfs.namenode.http-address.bgdt-dev-hrb.nn2</name>
<value>bgdt02.dev.hrb:50070</value>
</property>

<property>
<name>dfs.permissions.superusergroup</name>
<value>bgdtgrp</value>
</property>

<property>
<name>dfs.block.access.token.enable</name>
<value>true</value>
</property>

<property>
<name>dfs.http.policy</name>
<value>HTTP_ONLY</value>
</property>

<property>
<name>dfs.namenode.https-address.bgdt-dev-hrb.nn1</name>
<value>bgdt01.dev.hrb:50470</value>
</property>

<property>
<name>dfs.namenode.https-address.bgdt-dev-hrb.nn2</name>
<value>bgdt02.dev.hrb:50470</value>
</property>

<property>
<name>dfs.namenode.keytab.file</name>
<value>/etc/hadoop/keytab/hadoop.service.keytab</value>
</property>
<property>
<name>dfs.namenode.kerberos.principal</name>
<value>hdfs/_h...@bgdt.dev.hrb</value>
</property>
<property>
<name>dfs.namenode.kerberos.https.principal</name>
<value>host/_h...@bgdt.dev.hrb</value>
</property>

<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>

<property>
<name>dfs.web.authentication.kerberos.principal</name>
<value>http/_h...@bgdt.dev.hrb</value>
</property>

<property>
<name>dfs.web.authentication.kerberos.keytab</name>
<value>/etc/hadoop/keytab/hadoop.service.keytab</value>
</property>

<property>
<name>dfs.journalnode.kerberos.principal</name>
<value>hdfs/_h...@bgdt.dev.hrb</value>
</property>

<property>
<name>dfs.journalnode.kerberos.https.principal</name>
<value>host/_h...@bgdt.dev.hrb</value>
</property>

<property>
<name>dfs.journalnode.kerberos.internal.spnego.principal</name>
<value>http/_h...@bgdt.dev.hrb</value>
</property>

<property>
<name>dfs.journalnode.keytab.file</name>
<value>/etc/hadoop/keytab/hadoop.service.keytab</value>
</property>
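
One extra check against the keytab referenced above: I confirmed that a ticket can actually be obtained for the NameNode principal (the hostname below stands in for whatever _HOST expands to on that node):

kinit -kt /etc/hadoop/keytab/hadoop.service.keytab hdfs/bgdt01.dev.hrb
klist
kdestroy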

*********************The Error Log from the NameNode******************************


2015-02-03 17:42:06,020 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3, http://bgdt01.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3
2015-02-03 17:42:06,024 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3, http://bgdt01.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3' to transaction ID 68994
2015-02-03 17:42:06,024 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3' to transaction ID 68994
2015-02-03 17:42:06,154 ERROR org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: caught exception initializing http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3
java.io.IOException: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:464)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:456)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
        at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:444)
        at org.apache.hadoop.security.SecurityUtil.doAsCurrentUser(SecurityUtil.java:438)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog.getInputStream(EditLogFileInputStream.java:455)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.init(EditLogFileInputStream.java:141)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:192)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:250)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
        at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
        at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:184)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:137)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:816)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:676)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:279)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:955)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:700)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:529)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:751)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:735)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1407)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
