Re: hdfs2.7.3 kerberos can not startup

2016-09-20 Thread Wei-Chiu Chuang
You need to run the kinit command to authenticate before running the hdfs dfs -ls command.
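
For example, a minimal sketch (assuming the hadoop principal that was created with kadmin.local further down this thread):

kinit hadoop     # obtain a Kerberos TGT; prompts for the principal's password
klist            # verify the ticket cache now holds a valid TGT
hdfs dfs -ls /   # should now authenticate via SASL/GSSAPI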

Wei-Chiu Chuang

> On Sep 20, 2016, at 6:59 PM, kevin wrote:
> 
> Thank you Brahma Reddy Battula.
> It was a problem with my hdfs-site config file and the HTTPS CA configuration.
> Now I can start the namenode and I can see the datanodes from the web UI.
> But when I try hdfs dfs -ls /:
> 
> [hadoop@dmp1 hadoop-2.7.3]$ hdfs dfs -ls /
> 16/09/20 07:56:48 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
> ls: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "dmp1.example.com/192.168.249.129"; destination host is: "dmp1.example.com":9000;
> 
> The current user is hadoop, which started HDFS, and I have added the hadoop principal with the command:
> kadmin.local -q "addprinc hadoop"

Re: hdfs2.7.3 kerberos can not startup

2016-09-20 Thread kevin
Thank you Brahma Reddy Battula.
It was a problem with my hdfs-site config file and the HTTPS CA configuration.
Now I can start the namenode and I can see the datanodes from the web UI.
But when I try hdfs dfs -ls /:

[hadoop@dmp1 hadoop-2.7.3]$ hdfs dfs -ls /
16/09/20 07:56:48 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
ls: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "dmp1.example.com/192.168.249.129"; destination host is: "dmp1.example.com":9000;

The current user is hadoop, which started HDFS, and I have added the hadoop principal with the command:
kadmin.local -q "addprinc hadoop"
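
To double-check the credential side, something along these lines should confirm the principal exists and that a ticket can be obtained for it (a sketch, assuming you are on the KDC host and can run kadmin.local):

kadmin.local -q "listprincs" | grep hadoop   # the hadoop principal should be listed
kinit hadoop                                 # obtain a TGT as the hadoop user
klist                                        # the ticket cache should now show the TGT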



RE: hdfs2.7.3 kerberos can not startup

2016-09-20 Thread Brahma Reddy Battula
Seems to be a property-name problem: it should be principal (the trailing “l” is missing).


<property>
  <name>dfs.secondary.namenode.kerberos.principa</name>
  <value>hadoop/_h...@example.com</value>
</property>
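
That is, only the property name changes (the value stays as configured); with the trailing “l” restored it reads:

<property>
  <name>dfs.secondary.namenode.kerberos.principal</name>
  <value>hadoop/_h...@example.com</value>
</property>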



For the namenode HTTP server start failure, please check Rakesh's comments:

This is probably due to some missing configuration.
Could you please re-check the ssl-server.xml, keystore and truststore 
properties:

ssl.server.keystore.location
ssl.server.keystore.keypassword
ssl.client.truststore.location
ssl.client.truststore.password


--Brahma Reddy Battula

From: kevin [mailto:kiss.kevin...@gmail.com]
Sent: 20 September 2016 16:53
To: Rakesh Radhakrishnan
Cc: user.hadoop
Subject: Re: hdfs2.7.3 kerberos can not startup

Thanks, but my issue is that the namenode could log in successfully while the secondary namenode couldn't, and the namenode's HttpServer.start() threw a non Bind IOException.


Re: hdfs2.7.3 kerberos can not startup

2016-09-20 Thread kevin
Thanks, but my issue is that the namenode could log in successfully while the secondary namenode couldn't, and the namenode's HttpServer.start() threw a non Bind IOException:

hdfs-site.xml:

<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>

<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>

<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hadoop/_h...@example.com</value>
</property>
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/etc/hadoop/conf/hdfs.keytab</value>
</property>
<property>
  <name>dfs.https.port</name>
  <value>50470</value>
</property>
<property>
  <name>dfs.namenode.https-address</name>
  <value>dmp1.example.com:50470</value>
</property>
<property>
  <name>dfs.namenode.kerberos.internal.spnego.principa</name>
  <value>HTTP/_h...@example.com</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/etc/hadoop/conf/hdfs.keytab</value>
</property>
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
<property>
  <name>dfs.https.enable</name>
  <value>true</value>
</property>

<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>dmp1.example.com:50090</value>
</property>
<property>
  <name>dfs.secondary.namenode.keytab.file</name>
  <value>/etc/hadoop/conf/hdfs.keytab</value>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.principa</name>
  <value>hadoop/_h...@example.com</value>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.internal.spnego.principal</name>
  <value>HTTP/_h...@example.com</value>
</property>
<property>
  <name>dfs.namenode.secondary.https-port</name>
  <value>50470</value>
</property>

<property>
  <name>dfs.journalnode.keytab.file</name>
  <value>/etc/hadoop/conf/hdfs.keytab</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.principa</name>
  <value>hadoop/_h...@example.com</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.internal.spnego.principa</name>
  <value>HTTP/_h...@example.com</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/etc/hadoop/conf/hdfs.keytab</value>
</property>

<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>hadoop/_h...@example.com</value>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/etc/hadoop/conf/hdfs.keytab</value>
</property>
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
</property>

<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:61004</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:61006</value>
</property>
<property>
  <name>dfs.datanode.https.address</name>
  <value>0.0.0.0:50470</value>
</property>

<property>
  <name>dfs.data.transfer.protection</name>
  <value>integrity</value>
</property>

<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/_h...@example.com</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/etc/hadoop/conf/hdfs.keytab</value>
</property>

and [hadoop@dmp1 hadoop-2.7.3]$ klist -ket /etc/hadoop/conf/hdfs.keytab


Keytab name: FILE:/etc/hadoop/conf/hdfs.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
   2 09/19/2016 16:00:41 hdfs/dmp1.example@example.com (aes256-cts-hmac-sha1-96)
   2 09/19/2016 16:00:41 hdfs/dmp1.example@example.com (aes128-cts-hmac-sha1-96)
   2 09/19/2016 16:00:41 hdfs/dmp1.example@example.com (des3-cbc-sha1)
   2 09/19/2016 16:00:41 hdfs/dmp1.example@example.com (arcfour-hmac)
   2 09/19/2016 16:00:41 hdfs/dmp2.example@example.com (aes256-cts-hmac-sha1-96)
   2 09/19/2016 16:00:41 hdfs/dmp2.example@example.com (aes128-cts-hmac-sha1-96)
   2 09/19/2016 16:00:41 hdfs/dmp2.example@example.com (des3-cbc-sha1)
   2 09/19/2016 16:00:41 hdfs/dmp2.example@example.com (arcfour-hmac)
   2 09/19/2016 16:00:41 hdfs/dmp3.example@example.com (aes256-cts-hmac-sha1-96)
   2 09/19/2016 16:00:41 hdfs/dmp3.example@example.com (aes128-cts-hmac-sha1-96)
   2 09/19/2016 16:00:41 hdfs/dmp3.example@example.com (des3-cbc-sha1)
   2 09/19/2016 16:00:41 hdfs/dmp3.example@example.com (arcfour-hmac)
   2 09/19/2016 16:00:41 HTTP/dmp1.example@example.com (aes256-cts-hmac-sha1-96)
   2 09/19/2016 16:00:41 HTTP/dmp1.example@example.com (aes128-cts-hmac-sha1-96)
   2 09/19/2016 16:00:41 HTTP/dmp1.example@example.com (des3-cbc-sha1)
   2 09/19/2016 16:00:41 HTTP/dmp1.example@example.com (arcfour-hmac)
   2 09/19/2016 16:00:41 HTTP/dmp2.example@example.com (aes256-cts-hmac-sha1-96)
   2 09/19/2016 16:00:41 HTTP/dmp2.example@example.com (aes128-cts-hmac-sha1-96)
   2 09/19/2016 16:00:41 HTTP/dmp2.example@example.com (des3-cbc-sha1)
   2 09/19/2016 16:00:41 HTTP/dmp2.example@example.com (arcfour-hmac)
   2 09/19/2016 16:00:41 HTTP/dmp3.example@example.com (aes256-cts-hmac-sha1-96)
   2 09/19/2016 16:00:41 HTTP/dmp3.example@example.com

Re: hdfs2.7.3 kerberos can not startup

2016-09-20 Thread Rakesh Radhakrishnan
>> Caused by: javax.security.auth.login.LoginException: Unable to obtain password from user

Could you please check that the Kerberos principal name is specified correctly in "hdfs-site.xml"; it is what is used to authenticate against Kerberos.

If the keytab file defined in "hdfs-site.xml" doesn't exist or its path is wrong, you will see this error. So please verify that the path and the keytab filename are configured correctly.
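
As a quick check, listing the keytab's contents (the path below is the one from your configuration) shows exactly which principals it holds, so they can be compared against the dfs.*.kerberos.principal values:

klist -ket /etc/hadoop/conf/hdfs.keytab   # -k: keytab mode, -e: show enctypes, -t: show timestamps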

I hope the Hadoop discussion thread https://goo.gl/M6l3vv may help you.


>>> 2016-09-20 00:54:06,665 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
>>> java.io.IOException: !JsseListener: java.lang.NullPointerException

This is probably due to some missing configuration.
Could you please re-check ssl-server.xml and the keystore and truststore properties below (a sketch follows the list):

ssl.server.keystore.location
ssl.server.keystore.keypassword
ssl.client.truststore.location
ssl.client.truststore.password
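
For reference, a sketch of what those entries might look like; the .jks paths and passwords are placeholders that must match your actual keystore and truststore files (the ssl.server.* properties live in ssl-server.xml, the ssl.client.* ones in ssl-client.xml):

<!-- ssl-server.xml: placeholder location/passwords -->
<property>
  <name>ssl.server.keystore.location</name>
  <value>/etc/hadoop/conf/keystore.jks</value>
</property>
<property>
  <name>ssl.server.keystore.password</name>
  <value>changeit</value>
</property>
<property>
  <name>ssl.server.keystore.keypassword</name>
  <value>changeit</value>
</property>

<!-- ssl-client.xml: placeholder location/password -->
<property>
  <name>ssl.client.truststore.location</name>
  <value>/etc/hadoop/conf/truststore.jks</value>
</property>
<property>
  <name>ssl.client.truststore.password</name>
  <value>changeit</value>
</property>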

Rakesh

On Tue, Sep 20, 2016 at 10:53 AM, kevin wrote:

> hi, all:
> My environment: CentOS 7.2, hadoop 2.7.3, JDK 1.8.
> After I configured HDFS with Kerberos, I can't start it up with sbin/start-dfs.sh.
> 
> The namenode log is below:
>
> STARTUP_MSG:   build = Unknown -r Unknown; compiled by 'root' on 2016-09-18T09:05Z
> STARTUP_MSG:   java = 1.8.0_102
> ************************************************************/
> 2016-09-20 00:54:05,822 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
> 2016-09-20 00:54:05,825 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
> 2016-09-20 00:54:06,078 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
> 2016-09-20 00:54:06,149 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
> 2016-09-20 00:54:06,149 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
> 2016-09-20 00:54:06,151 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://dmp1.example.com:9000
> 2016-09-20 00:54:06,152 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use dmp1.example.com:9000 to access this namenode/service.
> 2016-09-20 00:54:06,446 INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user hadoop/dmp1.example@example.com using keytab file /etc/hadoop/conf/hdfs.keytab
> 2016-09-20 00:54:06,472 INFO org.apache.hadoop.hdfs.DFSUtil: Starting web server as: HTTP/dmp1.example@example.com
> 2016-09-20 00:54:06,475 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: https://dmp1.example.com:50470
> 2016-09-20 00:54:06,517 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
> 2016-09-20 00:54:06,533 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
> 2016-09-20 00:54:06,542 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
> 2016-09-20 00:54:06,546 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> 2016-09-20 00:54:06,548 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
> 2016-09-20 00:54:06,548 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
> 2016-09-20 00:54:06,548 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
> 2016-09-20 00:54:06,653 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
> 2016-09-20 00:54:06,654 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
> 2016-09-20 00:54:06,657 INFO org.apache.hadoop.http.HttpServer2: Adding Kerberos (SPNEGO) filter to getDelegationToken
> 2016-09-20 00:54:06,658 INFO org.apache.hadoop.http.HttpServer2: Adding Kerberos (SPNEGO) filter to renewDelegationToken
> 2016-09-20 00:54:06,658 INFO org.apache.hadoop.http.HttpServer2: Adding Kerberos (SPNEGO) filter to cancelDelegationToken
> 2016-09-20 00:54:06,659 INFO