Begin forwarded message:

From: Aneela Saleem <ane...@platalytics.com>
Date: 29 June 2016 at 10:16:36 GMT+5
To: "sreebalineni ." <sreebalin...@gmail.com>
Subject: Re: datanode is unable to connect to namenode

Attached are the log files for the datanode and namenode. I have also attached the hdfs-site.xml for the namenode; please check whether there are any issues in the configuration file.

I have the following two Kerberos principals:

nn/hadoop-master
dn/hadoop-slave

I have copied kdc.conf and krb5.conf to both nodes, and I have copied the keytab file to the datanode. I am starting the services with the principal nn/hadoop-master.
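For context, Hadoop service principals take the form service/host@REALM, and the host part must match the node the service runs on. A minimal Python sketch for sanity-checking this (the helper names are mine, the principals are the ones from this setup):

```python
# Hypothetical helpers: check that a Hadoop service principal's host part
# matches the node it is used on. Principals look like service/host@REALM.
def principal_host(principal: str) -> str:
    """Return the host component of a service/host@REALM principal."""
    service_and_host = principal.split("@", 1)[0]  # e.g. "nn/hadoop-master"
    return service_and_host.split("/", 1)[1]       # e.g. "hadoop-master"

def principal_matches_node(principal: str, node_hostname: str) -> bool:
    """True if the principal was issued for this node's hostname."""
    return principal_host(principal) == node_hostname

# The two principals from this thread:
print(principal_matches_node("nn/hadoop-master@platalyticsrealm", "hadoop-master"))  # True
print(principal_matches_node("dn/hadoop-slave@platalyticsrealm", "hadoop-master"))   # False
```

This is only a naming check, not an authentication test; the keytab itself can still be verified on each node with `klist -kt`.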

On Wed, Jun 29, 2016 at 9:35 AM, sreebalineni . <sreebalin...@gmail.com> wrote:
Sharing both the namenode and datanode logs may help.

On Wed, Jun 29, 2016 at 10:02 AM, Aneela Saleem <ane...@platalytics.com> wrote:
The following is the result of telnet:

Trying 192.168.23.206...
Connected to hadoop-master.
Escape character is '^]'.
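The telnet check can also be scripted so it is easy to repeat from the datanode side. A minimal sketch (the function name is mine; host and port are the ones from this thread):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS resolution failures.
        return False

# e.g., run on the datanode (hadoop-slave):
# can_connect("hadoop-master", 8020)
```

A successful result here, like the telnet output above, only shows the port is open; it says nothing about whether Kerberos authentication will succeed once connected.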

On Wed, Jun 29, 2016 at 3:57 AM, Aneela Saleem <ane...@platalytics.com> wrote:
Thanks Sreebalineni for the response.

This is the result of the netstat -a | grep 8020 command

tcp        0      0 hadoop-master:8020      *:*                     LISTEN
tcp        0      0 hadoop-master:33356     hadoop-master:8020      ESTABLISHED
tcp        0      0 hadoop-master:8020      hadoop-master:33356     ESTABLISHED
tcp        0      0 hadoop-master:55135     hadoop-master:8020      TIME_WAIT

And this is my /etc/hosts file

#127.0.0.1      localhost
#127.0.1.1      vm6-VirtualBox
192.168.23.206  hadoop-master platalytics.com vm6-VirtualBox
192.168.23.207  hadoop-slave
# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
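One thing worth double-checking on a secured cluster is which name the master IP canonically resolves to: the first name after an IP in /etc/hosts is the canonical one, and it should match the host part of the Kerberos principal. A small sketch (the helper is hypothetical) that extracts the canonical name per IP from hosts-file text:

```python
def canonical_names(hosts_text: str) -> dict:
    """Map each IP in /etc/hosts-style text to its canonical (first) name."""
    names = {}
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        ip, *aliases = line.split()
        if aliases:
            names[ip] = aliases[0]  # first name after the IP is canonical
    return names

# The two relevant entries from the file above:
hosts = """192.168.23.206  hadoop-master platalytics.com vm6-VirtualBox
192.168.23.207  hadoop-slave"""
print(canonical_names(hosts)["192.168.23.206"])  # hadoop-master
```

Here hadoop-master is listed first on the 192.168.23.206 line, so the canonical name matches the nn/hadoop-master principal; if the aliases were ordered differently, reverse lookup could return a name the principal does not cover.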


Can you please tell me what's wrong with the above configuration, and how can I check whether it is a firewall issue?

Thanks

On Wed, Jun 29, 2016 at 12:11 AM, sreebalineni . <sreebalin...@gmail.com> wrote:

Are you able to telnet/ping? Check the firewalls as well.

On Jun 29, 2016 12:39 AM, "Aneela Saleem" <ane...@platalytics.com> wrote:
Hi all,

I have set up a two-node cluster with security enabled. Everything is running successfully: namenode, datanode, resourcemanager, nodemanager, jobhistoryserver, etc. But the datanode is unable to connect to the namenode, as I can see only one node on the web UI. Checking the logs of the datanode gives the following warning:

WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: hadoop-master/192.168.23.206:8020

Everything else looks fine. Please help me in this regard: what could be the issue?




2016-06-29 09:37:09,628 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
registered UNIX signal handlers for [TERM, HUP, INT]
2016-06-29 09:37:10,096 INFO org.apache.hadoop.security.UserGroupInformation: 
Login successful for user dn/hadoop-slave@platalyticsrealm using keytab file 
/etc/hadoop/conf/dn.keytab
2016-06-29 09:37:10,307 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: 
loaded properties from hadoop-metrics2.properties
2016-06-29 09:37:10,442 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: 
Scheduled snapshot period at 10 second(s).
2016-06-29 09:37:10,442 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: 
DataNode metrics system started
2016-06-29 09:37:10,453 INFO 
org.apache.hadoop.hdfs.server.datanode.BlockScanner: Initialized block scanner 
with targetBytesPerSec 1048576
2016-06-29 09:37:10,456 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Configured hostname is vm7-web
2016-06-29 09:37:10,468 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Starting DataNode with maxLockedMemory = 0
2016-06-29 09:37:10,503 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Opened streaming server at /192.168.23.207:1004
2016-06-29 09:37:10,508 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Balancing bandwith is 1048576 bytes/s
2016-06-29 09:37:10,508 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Number threads for balancing is 5
2016-06-29 09:37:10,659 INFO org.mortbay.log: Logging to 
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-06-29 09:37:10,678 INFO 
org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable 
to initialize FileSignerSecretProvider, falling back to use random secrets.
2016-06-29 09:37:10,689 INFO org.apache.hadoop.http.HttpRequestLog: Http 
request log for http.requests.datanode is not defined
2016-06-29 09:37:10,698 INFO org.apache.hadoop.http.HttpServer2: Added global 
filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-06-29 09:37:10,704 INFO org.apache.hadoop.http.HttpServer2: Added filter 
static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context datanode
2016-06-29 09:37:10,704 INFO org.apache.hadoop.http.HttpServer2: Added filter 
static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context static
2016-06-29 09:37:10,705 INFO org.apache.hadoop.http.HttpServer2: Added filter 
static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context logs
2016-06-29 09:37:10,883 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to 
port 59469
2016-06-29 09:37:10,883 INFO org.mortbay.log: jetty-6.1.26
2016-06-29 09:37:11,246 INFO org.mortbay.log: Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:59469
2016-06-29 09:37:11,468 INFO 
org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer: Listening HTTP 
traffic on /192.168.23.207:1006
2016-06-29 09:37:11,478 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
dnUserName = dn
2016-06-29 09:37:11,478 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
supergroup = supergroup
2016-06-29 09:37:11,551 INFO org.apache.hadoop.ipc.CallQueueManager: Using 
callQueue class java.util.concurrent.LinkedBlockingQueue
2016-06-29 09:37:11,581 INFO org.apache.hadoop.ipc.Server: Starting Socket 
Reader #1 for port 50020
2016-06-29 09:37:11,651 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Opened IPC server at /0.0.0.0:50020
2016-06-29 09:37:11,810 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Refresh request received for nameservices: null
2016-06-29 09:37:11,835 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Starting BPOfferServices for nameservices: <default>
2016-06-29 09:37:11,857 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Block pool <registering> (Datanode Uuid unassigned) service to 
hadoop-master/192.168.23.206:8020 starting to offer service
2016-06-29 09:37:11,873 INFO org.apache.hadoop.ipc.Server: IPC Server 
Responder: starting
2016-06-29 09:37:11,878 INFO org.apache.hadoop.ipc.Server: IPC Server listener 
on 50020: starting
2016-06-29 09:37:12,383 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
Problem connecting to server: hadoop-master/192.168.23.206:8020
2016-06-29 09:37:17,460 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
Problem connecting to server: hadoop-master/192.168.23.206:8020
2016-06-29 09:37:22,519 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
Problem connecting to server: hadoop-master/192.168.23.206:8020
2016-06-29 09:59:48,938 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
registered UNIX signal handlers for [TERM, HUP, INT]
2016-06-29 09:59:48,948 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
createNameNode []
2016-06-29 09:59:49,655 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: 
loaded properties from hadoop-metrics2.properties
2016-06-29 09:59:49,877 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: 
Scheduled snapshot period at 10 second(s).
2016-06-29 09:59:49,877 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: 
NameNode metrics system started
2016-06-29 09:59:49,884 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
fs.defaultFS is hdfs://192.168.23.206:8020
2016-06-29 09:59:49,884 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
Clients are to use 192.168.23.206:8020 to access this namenode/service.
2016-06-29 09:59:50,803 INFO org.apache.hadoop.security.UserGroupInformation: 
Login successful for user nn/hadoop-master@platalyticsrealm using keytab file 
/etc/hadoop/conf/hdfs.keytab
2016-06-29 09:59:50,867 INFO org.apache.hadoop.hdfs.DFSUtil: Starting web 
server as: HTTP/hadoop-master@platalyticsrealm
2016-06-29 09:59:50,867 INFO org.apache.hadoop.hdfs.DFSUtil: Starting 
Web-server for hdfs at: http://0.0.0.0:50070
2016-06-29 09:59:50,990 INFO org.mortbay.log: Logging to 
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-06-29 09:59:51,010 INFO 
org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable 
to initialize FileSignerSecretProvider, falling back to use random secrets.
2016-06-29 09:59:51,021 INFO org.apache.hadoop.http.HttpRequestLog: Http 
request log for http.requests.namenode is not defined
2016-06-29 09:59:51,032 INFO org.apache.hadoop.http.HttpServer2: Added global 
filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-06-29 09:59:51,042 INFO org.apache.hadoop.http.HttpServer2: Added filter 
static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context hdfs
2016-06-29 09:59:51,042 INFO org.apache.hadoop.http.HttpServer2: Added filter 
static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context static
2016-06-29 09:59:51,043 INFO org.apache.hadoop.http.HttpServer2: Added filter 
static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context logs
2016-06-29 09:59:51,084 INFO org.apache.hadoop.http.HttpServer2: Added filter 
'org.apache.hadoop.hdfs.web.AuthFilter' 
(class=org.apache.hadoop.hdfs.web.AuthFilter)
2016-06-29 09:59:51,086 INFO org.apache.hadoop.http.HttpServer2: 
addJerseyResourcePackage: 
packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources,
 pathSpec=/webhdfs/v1/*
2016-06-29 09:59:51,092 INFO org.apache.hadoop.http.HttpServer2: Adding 
Kerberos (SPNEGO) filter to getDelegationToken
2016-06-29 09:59:51,093 INFO org.apache.hadoop.http.HttpServer2: Adding 
Kerberos (SPNEGO) filter to renewDelegationToken
2016-06-29 09:59:51,095 INFO org.apache.hadoop.http.HttpServer2: Adding 
Kerberos (SPNEGO) filter to cancelDelegationToken
2016-06-29 09:59:51,096 INFO org.apache.hadoop.http.HttpServer2: Adding 
Kerberos (SPNEGO) filter to fsck
2016-06-29 09:59:51,098 INFO org.apache.hadoop.http.HttpServer2: Adding 
Kerberos (SPNEGO) filter to imagetransfer
2016-06-29 09:59:51,115 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to 
port 50070
2016-06-29 09:59:51,115 INFO org.mortbay.log: jetty-6.1.26
2016-06-29 09:59:51,425 INFO 
org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler: 
Login using keytab /etc/hadoop/conf/hdfs.keytab, for principal 
HTTP/hadoop-master@platalyticsrealm
2016-06-29 09:59:51,440 INFO 
org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler: 
Login using keytab /etc/hadoop/conf/hdfs.keytab, for principal 
HTTP/hadoop-master@platalyticsrealm
2016-06-29 09:59:51,454 INFO org.mortbay.log: Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2016-06-29 09:59:51,511 WARN 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage 
directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack 
of redundant storage directories!
2016-06-29 09:59:51,511 WARN 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits 
storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due 
to lack of redundant storage directories!
2016-06-29 09:59:51,584 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2016-06-29 09:59:51,585 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2016-06-29 09:59:51,653 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: 
dfs.block.invalidate.limit=1000
2016-06-29 09:59:51,653 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: 
dfs.namenode.datanode.registration.ip-hostname-check=true
2016-06-29 09:59:51,655 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2016-06-29 09:59:51,656 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion 
will start around 2016 Jun 29 09:59:51
2016-06-29 09:59:51,659 INFO org.apache.hadoop.util.GSet: Computing capacity 
for map BlocksMap
2016-06-29 09:59:51,659 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2016-06-29 09:59:51,662 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 
MB = 17.8 MB
2016-06-29 09:59:51,662 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 
= 2097152 entries
2016-06-29 09:59:51,673 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
dfs.block.access.token.enable=true
2016-06-29 09:59:51,673 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
dfs.block.access.key.update.interval=600 min(s), 
dfs.block.access.token.lifetime=600 min(s), 
dfs.encrypt.data.transfer.algorithm=null
2016-06-29 09:59:51,683 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication  
       = 2
2016-06-29 09:59:51,684 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication      
       = 512
2016-06-29 09:59:51,684 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication      
       = 1
2016-06-29 09:59:51,684 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
maxReplicationStreams      = 2
2016-06-29 09:59:51,684 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
replicationRecheckInterval = 3000
2016-06-29 09:59:51,684 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer 
       = false
2016-06-29 09:59:51,684 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog   
       = 1000
2016-06-29 09:59:51,688 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = 
nn/hadoop-master@platalyticsrealm (auth:KERBEROS)
2016-06-29 09:59:51,688 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = 
supergroup
2016-06-29 09:59:51,688 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2016-06-29 09:59:51,689 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2016-06-29 09:59:51,691 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2016-06-29 09:59:51,745 INFO org.apache.hadoop.util.GSet: Computing capacity 
for map INodeMap
2016-06-29 09:59:51,746 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2016-06-29 09:59:51,746 INFO org.apache.hadoop.util.GSet: 1.0% max memory 889 
MB = 8.9 MB
2016-06-29 09:59:51,746 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 
= 1048576 entries
2016-06-29 09:59:51,748 INFO 
org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2016-06-29 09:59:51,748 INFO 
org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2016-06-29 09:59:51,748 INFO 
org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 
16384
2016-06-29 09:59:51,748 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
Caching file names occuring more than 10 times
2016-06-29 09:59:51,760 INFO org.apache.hadoop.util.GSet: Computing capacity 
for map cachedBlocks
2016-06-29 09:59:51,760 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2016-06-29 09:59:51,760 INFO org.apache.hadoop.util.GSet: 0.25% max memory 889 
MB = 2.2 MB
2016-06-29 09:59:51,760 INFO org.apache.hadoop.util.GSet: capacity      = 2^18 
= 262144 entries
2016-06-29 09:59:51,762 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2016-06-29 09:59:51,763 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
dfs.namenode.safemode.min.datanodes = 0
2016-06-29 09:59:51,763 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
dfs.namenode.safemode.extension     = 30000
2016-06-29 09:59:51,768 INFO 
org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: 
dfs.namenode.top.window.num.buckets = 10
2016-06-29 09:59:51,768 INFO 
org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: 
dfs.namenode.top.num.users = 10
2016-06-29 09:59:51,768 INFO 
org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: 
dfs.namenode.top.windows.minutes = 1,5,25
2016-06-29 09:59:51,770 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is 
enabled
2016-06-29 09:59:51,771 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 
of total heap and retry cache entry expiry time is 600000 millis
2016-06-29 09:59:51,774 INFO org.apache.hadoop.util.GSet: Computing capacity 
for map NameNodeRetryCache
2016-06-29 09:59:51,774 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2016-06-29 09:59:51,774 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% 
max memory 889 MB = 273.1 KB
2016-06-29 09:59:51,774 INFO org.apache.hadoop.util.GSet: capacity      = 2^15 
= 32768 entries
2016-06-29 09:59:51,796 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock 
on /app/hadoop/tmp/dfs/name/in_use.lock acquired by nodename 9456@vm6-VirtualBox
2016-06-29 09:59:52,041 INFO 
org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering 
unfinalized segments in /app/hadoop/tmp/dfs/name/current
2016-06-29 09:59:52,489 INFO 
org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits 
file /app/hadoop/tmp/dfs/name/current/edits_inprogress_0000000000000002794 -> 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002794-0000000000000002794
2016-06-29 09:59:52,969 INFO 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 7 INodes.
2016-06-29 09:59:53,035 INFO 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 
0 seconds.
2016-06-29 09:59:53,035 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Loaded image for txid 2768 from 
/app/hadoop/tmp/dfs/name/current/fsimage_0000000000000002768
2016-06-29 09:59:53,036 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Reading 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@4087a061 
expecting start txid #2769
2016-06-29 09:59:53,037 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Start loading edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002769-0000000000000002772
2016-06-29 09:59:53,040 INFO 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding 
stream 
'/app/hadoop/tmp/dfs/name/current/edits_0000000000000002769-0000000000000002772'
 to transaction ID 2769
2016-06-29 09:59:53,045 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002769-0000000000000002772 
of size 110 edits # 4 loaded in 0 seconds
2016-06-29 09:59:53,046 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Reading 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@8326160 
expecting start txid #2773
2016-06-29 09:59:53,046 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Start loading edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002773-0000000000000002774
2016-06-29 09:59:53,046 INFO 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding 
stream 
'/app/hadoop/tmp/dfs/name/current/edits_0000000000000002773-0000000000000002774'
 to transaction ID 2769
2016-06-29 09:59:53,047 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002773-0000000000000002774 
of size 42 edits # 2 loaded in 0 seconds
2016-06-29 09:59:53,047 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Reading 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@2396e9d4 
expecting start txid #2775
2016-06-29 09:59:53,048 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Start loading edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002775-0000000000000002776
2016-06-29 09:59:53,048 INFO 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding 
stream 
'/app/hadoop/tmp/dfs/name/current/edits_0000000000000002775-0000000000000002776'
 to transaction ID 2769
2016-06-29 09:59:53,049 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002775-0000000000000002776 
of size 42 edits # 2 loaded in 0 seconds
2016-06-29 09:59:53,049 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Reading 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@8d16f8d 
expecting start txid #2777
2016-06-29 09:59:53,049 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Start loading edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002777-0000000000000002778
2016-06-29 09:59:53,050 INFO 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding 
stream 
'/app/hadoop/tmp/dfs/name/current/edits_0000000000000002777-0000000000000002778'
 to transaction ID 2769
2016-06-29 09:59:53,050 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002777-0000000000000002778 
of size 42 edits # 2 loaded in 0 seconds
2016-06-29 09:59:53,051 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Reading 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@6df29680 
expecting start txid #2779
2016-06-29 09:59:53,051 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Start loading edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002779-0000000000000002780
2016-06-29 09:59:53,051 INFO 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding 
stream 
'/app/hadoop/tmp/dfs/name/current/edits_0000000000000002779-0000000000000002780'
 to transaction ID 2769
2016-06-29 09:59:53,052 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002779-0000000000000002780 
of size 42 edits # 2 loaded in 0 seconds
2016-06-29 09:59:53,052 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Reading 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@5176e5e4 
expecting start txid #2781
2016-06-29 09:59:53,053 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Start loading edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002781-0000000000000002782
2016-06-29 09:59:53,053 INFO 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding 
stream 
'/app/hadoop/tmp/dfs/name/current/edits_0000000000000002781-0000000000000002782'
 to transaction ID 2769
2016-06-29 09:59:53,054 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002781-0000000000000002782 
of size 42 edits # 2 loaded in 0 seconds
2016-06-29 09:59:53,054 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Reading 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@54ef0584 
expecting start txid #2783
2016-06-29 09:59:53,054 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Start loading edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002783-0000000000000002784
2016-06-29 09:59:53,055 INFO 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding 
stream 
'/app/hadoop/tmp/dfs/name/current/edits_0000000000000002783-0000000000000002784'
 to transaction ID 2769
2016-06-29 09:59:53,055 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002783-0000000000000002784 
of size 42 edits # 2 loaded in 0 seconds
2016-06-29 09:59:53,055 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Reading 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@185348ac 
expecting start txid #2785
2016-06-29 09:59:53,056 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Start loading edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002785-0000000000000002786
2016-06-29 09:59:53,056 INFO 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding 
stream 
'/app/hadoop/tmp/dfs/name/current/edits_0000000000000002785-0000000000000002786'
 to transaction ID 2769
2016-06-29 09:59:53,057 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002785-0000000000000002786 
of size 42 edits # 2 loaded in 0 seconds
2016-06-29 09:59:53,057 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Reading 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@3c820ae 
expecting start txid #2787
2016-06-29 09:59:53,057 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Start loading edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002787-0000000000000002787
2016-06-29 09:59:53,057 INFO 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding 
stream 
'/app/hadoop/tmp/dfs/name/current/edits_0000000000000002787-0000000000000002787'
 to transaction ID 2769
2016-06-29 09:59:53,080 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002787-0000000000000002787 
of size 1048576 edits # 1 loaded in 0 seconds
2016-06-29 09:59:53,081 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Reading 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@47d98172 
expecting start txid #2788
2016-06-29 09:59:53,081 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Start loading edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002788-0000000000000002791
2016-06-29 09:59:53,081 INFO 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding 
stream 
'/app/hadoop/tmp/dfs/name/current/edits_0000000000000002788-0000000000000002791'
 to transaction ID 2769
2016-06-29 09:59:53,082 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002788-0000000000000002791 
of size 110 edits # 4 loaded in 0 seconds
2016-06-29 09:59:53,083 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Reading 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@18c18838 
expecting start txid #2792
2016-06-29 09:59:53,083 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Start loading edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002792-0000000000000002793
2016-06-29 09:59:53,083 INFO 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding 
stream 
'/app/hadoop/tmp/dfs/name/current/edits_0000000000000002792-0000000000000002793'
 to transaction ID 2769
2016-06-29 09:59:53,084 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002792-0000000000000002793 
of size 42 edits # 2 loaded in 0 seconds
2016-06-29 09:59:53,084 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Reading 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@49d6213a 
expecting start txid #2794
2016-06-29 09:59:53,084 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Start loading edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002794-0000000000000002794
2016-06-29 09:59:53,085 INFO 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding 
stream 
'/app/hadoop/tmp/dfs/name/current/edits_0000000000000002794-0000000000000002794'
 to transaction ID 2769
2016-06-29 09:59:53,087 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Edits file 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002794-0000000000000002794 
of size 1048576 edits # 1 loaded in 0 seconds
2016-06-29 09:59:53,096 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? 
false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2016-06-29 09:59:53,263 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Starting log segment at 2795
2016-06-29 09:59:53,334 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: 
initialized with 0 entries 0 lookups
2016-06-29 09:59:53,335 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage 
in 1553 msecs
2016-06-29 09:59:53,647 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
RPC server is binding to hadoop-master:8020
2016-06-29 09:59:53,659 INFO org.apache.hadoop.ipc.CallQueueManager: Using 
callQueue class java.util.concurrent.LinkedBlockingQueue
2016-06-29 09:59:53,680 INFO org.apache.hadoop.ipc.Server: Starting Socket 
Reader #1 for port 8020
2016-06-29 09:59:53,929 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered 
FSNamesystemState MBean
2016-06-29 09:59:53,968 INFO 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under 
construction: 0
2016-06-29 09:59:53,968 INFO 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under 
construction: 0
2016-06-29 09:59:53,969 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication 
queues
2016-06-29 09:59:53,974 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving 
safe mode after 2 secs
2016-06-29 09:59:53,974 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network 
topology has 0 racks and 0 datanodes
2016-06-29 09:59:53,975 INFO org.apache.hadoop.hdfs.StateChange: STATE* 
UnderReplicatedBlocks has 0 blocks
2016-06-29 09:59:53,975 INFO 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
 Updating the current master key for generating delegation tokens
2016-06-29 09:59:53,987 INFO 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
 Starting expired delegation token remover thread, tokenRemoverScanInterval=60 
min(s)
2016-06-29 09:59:53,988 INFO 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
 Updating the current master key for generating delegation tokens
2016-06-29 09:59:54,017 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of 
failed storage changes from 0 to 0
2016-06-29 09:59:54,027 INFO 
org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager: Updating 
block keys
2016-06-29 09:59:54,042 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of 
blocks            = 0
2016-06-29 09:59:54,042 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid 
blocks          = 0
2016-06-29 09:59:54,042 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of 
under-replicated blocks = 0
2016-06-29 09:59:54,043 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of  
over-replicated blocks = 0
2016-06-29 09:59:54,043 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks 
being written    = 0
2016-06-29 09:59:54,043 INFO org.apache.hadoop.hdfs.StateChange: STATE* 
Replication Queue initialization scan for invalid, over- and under-replicated 
blocks completed in 67 msec
2016-06-29 09:59:54,179 INFO org.apache.hadoop.ipc.Server: IPC Server 
Responder: starting
2016-06-29 09:59:54,236 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
NameNode RPC up at: hadoop-master/192.168.23.206:8020
2016-06-29 09:59:54,238 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required 
for active state
2016-06-29 09:59:54,188 INFO org.apache.hadoop.ipc.Server: IPC Server listener 
on 8020: starting
2016-06-29 09:59:54,288 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting 
CacheReplicationMonitor with interval 30000 milliseconds



2016-06-29 10:01:35,700 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth 
successful for nn/hadoop-master@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:01:35,744 INFO 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization successful for nn/hadoop-master@platalyticsrealm (auth:KERBEROS) 
for protocol=interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol
2016-06-29 10:01:35,955 WARN org.apache.hadoop.security.UserGroupInformation: 
No groups available for user nn
2016-06-29 10:01:36,843 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
registerDatanode: from DatanodeRegistration(192.168.23.206:1004, 
datanodeUuid=2915016b-a9ac-45bd-9b4d-2950dd1eef83, infoPort=1006, 
infoSecurePort=0, ipcPort=50020, 
storageInfo=lv=-56;cid=CID-9e3dfb66-24f4-4c8b-a23c-1bc859347503;nsid=356370123;c=0)
 storage 2915016b-a9ac-45bd-9b4d-2950dd1eef83
2016-06-29 10:01:36,844 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of 
failed storage changes from 0 to 0
2016-06-29 10:01:36,845 INFO org.apache.hadoop.net.NetworkTopology: Adding a 
new node: /default-rack/192.168.23.206:1004
2016-06-29 10:01:36,983 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of 
failed storage changes from 0 to 0
2016-06-29 10:01:36,983 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new 
storage ID DS-f969acfe-9062-484b-aba8-ffee100bf01b for DN 192.168.23.206:1004
2016-06-29 10:01:37,054 INFO BlockStateChange: BLOCK* processReport: from 
storage DS-f969acfe-9062-484b-aba8-ffee100bf01b node 
DatanodeRegistration(192.168.23.206:1004, 
datanodeUuid=2915016b-a9ac-45bd-9b4d-2950dd1eef83, infoPort=1006, 
infoSecurePort=0, ipcPort=50020, 
storageInfo=lv=-56;cid=CID-9e3dfb66-24f4-4c8b-a23c-1bc859347503;nsid=356370123;c=0),
 blocks: 0, hasStaleStorage: false, processing time: 3 msecs
2016-06-29 10:01:37,474 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth 
successful for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:01:37,512 WARN 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization failed for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) for 
protocol=interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, 
expected client Kerberos principal is nn/hadoop-slave@platalyticsrealm
2016-06-29 10:01:37,514 INFO org.apache.hadoop.ipc.Server: Connection from 
192.168.23.207:32807 for protocol 
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol is unauthorized for 
user dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:01:37,516 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for 
port 8020: readAndProcess from client 192.168.23.207 threw exception 
[org.apache.hadoop.security.authorize.AuthorizationException: User 
dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) is not authorized for protocol 
interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, expected 
client Kerberos principal is nn/hadoop-slave@platalyticsrealm]
2016-06-29 10:01:42,578 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth 
successful for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:01:42,586 WARN 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization failed for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) for 
protocol=interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, 
expected client Kerberos principal is nn/hadoop-slave@platalyticsrealm
2016-06-29 10:01:42,586 INFO org.apache.hadoop.ipc.Server: Connection from 
192.168.23.207:56672 for protocol 
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol is unauthorized for 
user dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:01:42,588 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for 
port 8020: readAndProcess from client 192.168.23.207 threw exception 
[org.apache.hadoop.security.authorize.AuthorizationException: User 
dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) is not authorized for protocol 
interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, expected 
client Kerberos principal is nn/hadoop-slave@platalyticsrealm]
2016-06-29 10:01:47,636 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth 
successful for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:01:47,640 WARN 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization failed for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) for 
protocol=interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, 
expected client Kerberos principal is nn/hadoop-slave@platalyticsrealm
2016-06-29 10:01:47,640 INFO org.apache.hadoop.ipc.Server: Connection from 
192.168.23.207:54455 for protocol 
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol is unauthorized for 
user dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:01:47,641 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for 
port 8020: readAndProcess from client 192.168.23.207 threw exception 
[org.apache.hadoop.security.authorize.AuthorizationException: User 
dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) is not authorized for protocol 
interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, expected 
client Kerberos principal is nn/hadoop-slave@platalyticsrealm]
2016-06-29 10:01:52,680 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth 
successful for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:01:52,686 WARN 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization failed for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) for 
protocol=interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, 
expected client Kerberos principal is nn/hadoop-slave@platalyticsrealm
2016-06-29 10:01:52,686 INFO org.apache.hadoop.ipc.Server: Connection from 
192.168.23.207:56744 for protocol 
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol is unauthorized for 
user dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:01:52,687 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for 
port 8020: readAndProcess from client 192.168.23.207 threw exception 
[org.apache.hadoop.security.authorize.AuthorizationException: User 
dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) is not authorized for protocol 
interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, expected 
client Kerberos principal is nn/hadoop-slave@platalyticsrealm]
2016-06-29 10:01:57,727 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth 
successful for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:01:57,733 WARN 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization failed for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) for 
protocol=interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, 
expected client Kerberos principal is nn/hadoop-slave@platalyticsrealm
2016-06-29 10:01:57,733 INFO org.apache.hadoop.ipc.Server: Connection from 
192.168.23.207:36398 for protocol 
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol is unauthorized for 
user dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:01:57,734 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for 
port 8020: readAndProcess from client 192.168.23.207 threw exception 
[org.apache.hadoop.security.authorize.AuthorizationException: User 
dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) is not authorized for protocol 
interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, expected 
client Kerberos principal is nn/hadoop-slave@platalyticsrealm]
2016-06-29 10:02:00,990 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth 
successful for nn/hadoop-master@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:02:01,010 INFO 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization successful for nn/hadoop-master@platalyticsrealm (auth:KERBEROS) 
for protocol=interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
2016-06-29 10:02:01,024 WARN org.apache.hadoop.security.UserGroupInformation: 
No groups available for user nn
2016-06-29 10:02:01,041 WARN org.apache.hadoop.security.UserGroupInformation: 
No groups available for user nn
2016-06-29 10:02:01,041 WARN org.apache.hadoop.security.UserGroupInformation: 
No groups available for user nn
2016-06-29 10:02:01,041 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 
192.168.23.206
2016-06-29 10:02:01,041 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Rolling edit logs
2016-06-29 10:02:01,042 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Ending log segment 2795
2016-06-29 10:02:01,043 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 4 Total time for transactions(ms): 22 Number of 
transactions batched in Syncs: 0 Number of syncs: 4 SyncTimes(ms): 22
2016-06-29 10:02:01,046 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 4 Total time for transactions(ms): 22 Number of 
transactions batched in Syncs: 0 Number of syncs: 5 SyncTimes(ms): 25
2016-06-29 10:02:01,049 INFO 
org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits 
file /app/hadoop/tmp/dfs/name/current/edits_inprogress_0000000000000002795 -> 
/app/hadoop/tmp/dfs/name/current/edits_0000000000000002795-0000000000000002798
2016-06-29 10:02:01,050 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Starting log segment at 2799
2016-06-29 10:02:01,141 WARN org.apache.hadoop.security.UserGroupInformation: 
No groups available for user nn
2016-06-29 10:02:02,777 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth 
successful for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:02:02,782 WARN 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization failed for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) for 
protocol=interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, 
expected client Kerberos principal is nn/hadoop-slave@platalyticsrealm
2016-06-29 10:02:02,783 INFO org.apache.hadoop.ipc.Server: Connection from 
192.168.23.207:42447 for protocol 
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol is unauthorized for 
user dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:02:02,784 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for 
port 8020: readAndProcess from client 192.168.23.207 threw exception 
[org.apache.hadoop.security.authorize.AuthorizationException: User 
dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) is not authorized for protocol 
interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, expected 
client Kerberos principal is nn/hadoop-slave@platalyticsrealm]
2016-06-29 10:02:07,821 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth 
successful for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:02:07,832 WARN 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization failed for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) for 
protocol=interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, 
expected client Kerberos principal is nn/hadoop-slave@platalyticsrealm
2016-06-29 10:02:07,832 INFO org.apache.hadoop.ipc.Server: Connection from 
192.168.23.207:45369 for protocol 
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol is unauthorized for 
user dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:02:07,833 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for 
port 8020: readAndProcess from client 192.168.23.207 threw exception 
[org.apache.hadoop.security.authorize.AuthorizationException: User 
dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) is not authorized for protocol 
interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, expected 
client Kerberos principal is nn/hadoop-slave@platalyticsrealm]
2016-06-29 10:02:12,869 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth 
successful for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:02:12,874 WARN 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization failed for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) for 
protocol=interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, 
expected client Kerberos principal is nn/hadoop-slave@platalyticsrealm
2016-06-29 10:02:12,874 INFO org.apache.hadoop.ipc.Server: Connection from 
192.168.23.207:47455 for protocol 
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol is unauthorized for 
user dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:02:12,874 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for 
port 8020: readAndProcess from client 192.168.23.207 threw exception 
[org.apache.hadoop.security.authorize.AuthorizationException: User 
dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) is not authorized for protocol 
interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, expected 
client Kerberos principal is nn/hadoop-slave@platalyticsrealm]
2016-06-29 10:02:17,928 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth 
successful for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:02:17,934 WARN 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization failed for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) for 
protocol=interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, 
expected client Kerberos principal is nn/hadoop-slave@platalyticsrealm
2016-06-29 10:02:17,935 INFO org.apache.hadoop.ipc.Server: Connection from 
192.168.23.207:54760 for protocol 
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol is unauthorized for 
user dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:02:17,935 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for 
port 8020: readAndProcess from client 192.168.23.207 threw exception 
[org.apache.hadoop.security.authorize.AuthorizationException: User 
dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) is not authorized for protocol 
interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, expected 
client Kerberos principal is nn/hadoop-slave@platalyticsrealm]
2016-06-29 10:02:22,987 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth 
successful for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:02:22,992 WARN 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization failed for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) for 
protocol=interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, 
expected client Kerberos principal is nn/hadoop-slave@platalyticsrealm
2016-06-29 10:02:22,993 INFO org.apache.hadoop.ipc.Server: Connection from 
192.168.23.207:37729 for protocol 
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol is unauthorized for 
user dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:02:22,994 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for 
port 8020: readAndProcess from client 192.168.23.207 threw exception 
[org.apache.hadoop.security.authorize.AuthorizationException: User 
dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) is not authorized for protocol 
interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, expected 
client Kerberos principal is nn/hadoop-slave@platalyticsrealm]
2016-06-29 10:02:28,037 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth 
successful for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:02:28,041 WARN 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization failed for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) for 
protocol=interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, 
expected client Kerberos principal is nn/hadoop-slave@platalyticsrealm
2016-06-29 10:02:28,041 INFO org.apache.hadoop.ipc.Server: Connection from 
192.168.23.207:36156 for protocol 
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol is unauthorized for 
user dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)
2016-06-29 10:02:28,041 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for 
port 8020: readAndProcess from client 192.168.23.207 threw exception 
[org.apache.hadoop.security.authorize.AuthorizationException: User 
dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) is not authorized for protocol 
interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, expected 
client Kerberos principal is nn/hadoop-slave@platalyticsrealm]
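The repeated AuthorizationException above comes from how the NameNode expands the configured principal pattern for DatanodeProtocol. In the hdfs-site.xml that follows, dfs.datanode.kerberos.principal is nn/_HOST@platalyticsrealm; Hadoop substitutes _HOST with the connecting host's name (SecurityUtil.getServerPrincipal), so the NameNode expects nn/hadoop-slave@platalyticsrealm while the DataNode actually authenticates as dn/hadoop-slave@platalyticsrealm. A minimal sketch of that substitution (expand_principal is a hypothetical helper written for illustration, not Hadoop's actual code):

```python
# Hedged sketch of Hadoop-style "_HOST" principal expansion; the real logic
# lives in org.apache.hadoop.security.SecurityUtil.getServerPrincipal.
def expand_principal(pattern: str, hostname: str) -> str:
    """Replace the _HOST placeholder in a Kerberos principal pattern."""
    service, rest = pattern.split("/", 1)
    host, realm = rest.split("@", 1)
    if host == "_HOST":
        host = hostname.lower()
    return f"{service}/{host}@{realm}"

# With the attached config, the NameNode expects this from the slave:
print(expand_principal("nn/_HOST@platalyticsrealm", "hadoop-slave"))
# nn/hadoop-slave@platalyticsrealm  <- the "expected client Kerberos principal"
# ...but the DataNode logs in as:
print(expand_principal("dn/_HOST@platalyticsrealm", "hadoop-slave"))
# dn/hadoop-slave@platalyticsrealm
```

The mismatch between those two expansions is exactly the pattern the log shows: Kerberos authentication succeeds, then ServiceAuthorizationManager rejects the principal.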
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
	<property>
    		<name>dfs.replication</name>
		<value>2</value>
	</property>
 	<property>
                <name>fs.permissions.umask-mode</name>
                <value>000</value>
        </property>

        <!-- General HDFS security config -->
        <property>
                <name>dfs.block.access.token.enable</name>
                <value>true</value>
        </property>
        <!-- NameNode security config -->
        <property>
                <name>dfs.https.address</name>
                <value>192.168.23.206:8020</value>
        </property>
        <property>
                <name>dfs.https.port</name>
                <value>8020</value>
        </property>
        <property>
                <name>dfs.namenode.keytab.file</name>
                <value>/etc/hadoop/conf/hdfs.keytab</value> <!-- path to the HDFS keytab -->
        </property>
        <property>
                <name>dfs.namenode.kerberos.principal</name>
                <value>nn/_HOST@platalyticsrealm</value>
        </property>
        <property>
                <name>dfs.namenode.kerberos.https.principal</name>
                <value>nn/_HOST@platalyticsrealm</value>
        </property>
        <!-- Secondary NameNode security config -->
        <property>
                <name>dfs.secondary.http.address</name>
                <value>192.168.23.206:50495</value>
        </property>
        <property>
                <name>dfs.secondary.http.port</name>
                <value>50495</value>
        </property>
        <property>
                <name>dfs.secondary.namenode.keytab.file</name>
                <value>/etc/hadoop/conf/hdfs.keytab</value> <!-- path to the HDFS keytab -->
        </property>
        <property>
                <name>dfs.secondary.namenode.kerberos.principal</name>
                <value>nn/_HOST@platalyticsrealm</value>
        </property>
        <property>
                <name>dfs.secondary.namenode.kerberos.https.principal</name>
                <value>HTTP/_HOST@palatlyticsrealm</value>
        </property>
        <!-- DataNode security config -->
        <property>
                <name>dfs.datanode.data.dir.perm</name>
                <value>777</value>
        </property>
        <property>
                <name>dfs.datanode.address</name>
                <value>192.168.23.206:1004</value>
        </property>
        <property>
                <name>dfs.datanode.http.address</name>
                <value>192.168.23.206:1006</value>
        </property>
        <property>
                <name>dfs.datanode.keytab.file</name>
                <value>/etc/hadoop/conf/hdfs.keytab</value> <!-- path to the HDFS keytab -->
        </property>
        <property>
                <name>dfs.datanode.kerberos.principal</name>
                <value>nn/_HOST@platalyticsrealm</value>
        </property>
        <property>
                <name>dfs.datanode.kerberos.https.principal</name>
		<value>HTTP/_HOST@platalyticsrealm</value>
        </property>
        <property>
                <name>dfs.web.authentication.kerberos.principal</name>
                <value>HTTP/_HOST@platalyticsrealm</value>
        </property>
        <property>
		<name>dfs.namenode.acls.enabled</name>
		<value>false</value>
	</property>
	<property>
        	<name>dfs.permissions.enabled</name>
        	<value>true</value>
    	</property>
    	<property>
        	<name>dfs.permissions</name>
        	<value>true</value>
    	</property>
</configuration>
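One likely issue in the file above, matching the repeated authorization failure in the NameNode log: the DataNode principal is configured with the nn service name. A corrected DataNode section would presumably read as follows (a sketch, assuming the realm is platalyticsrealm as used elsewhere in the file, and not verified against this cluster):

```xml
        <!-- DataNode should authenticate as dn/_HOST, matching the
             dn/hadoop-slave principal created in the KDC -->
        <property>
                <name>dfs.datanode.kerberos.principal</name>
                <value>dn/_HOST@platalyticsrealm</value>
        </property>
```

Two smaller points worth checking: dfs.secondary.namenode.kerberos.https.principal spells the realm "palatlyticsrealm", and if this same hdfs-site.xml is copied to hadoop-slave, dfs.datanode.address and dfs.datanode.http.address pin the bind address to the master's IP (192.168.23.206), which the slave cannot bind. Running klist -kt /etc/hadoop/conf/hdfs.keytab on each node would confirm which principals the copied keytab actually contains.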
