Hi Mathu,

Please find the attached NN log.

I have copied all the JARs to the /usr/hdp/current/hadoop-hdfs-namenode/lib location.
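
For reference, a rough check like the sketch below could confirm whether the plugin JARs and the xasecure-*.xml config files are where the NameNode expects them. This is only a sketch: the paths, the JAR name patterns, and the config file names other than xasecure-audit.xml are assumptions based on HDP 2.2 defaults; xasecure-audit.xml is the file the attached log shows the NameNode failing to find under /etc/hadoop/conf.empty.

#!/usr/bin/env python
# Hypothetical sanity check (not part of the Ranger install scripts): verify
# that the Ranger/XASecure plugin JARs and the xasecure-*.xml config files are
# visible to the NameNode. Paths and file names are assumed HDP 2.2 defaults
# and may differ on your cluster; only xasecure-audit.xml is confirmed by the
# attached log.
import glob
import os

LIB_DIR = "/usr/hdp/current/hadoop-hdfs-namenode/lib"  # where the plugin JARs were copied
CONF_DIR = "/etc/hadoop/conf.empty"                    # conf dir the NameNode is reading, per the log

# Config files the plugin is expected to read; names other than
# xasecure-audit.xml are assumptions based on the HDP 2.2 plugin layout.
REQUIRED_CONFS = [
    "xasecure-audit.xml",
    "xasecure-hdfs-security.xml",
    "xasecure-policymgr-ssl.xml",
]

def main():
    jars = sorted(set(glob.glob(os.path.join(LIB_DIR, "ranger-*.jar")) +
                      glob.glob(os.path.join(LIB_DIR, "*xasecure*.jar"))))
    print("plugin JARs found in %s: %d" % (LIB_DIR, len(jars)))
    for jar in jars:
        print("  %s" % os.path.basename(jar))

    for name in REQUIRED_CONFS:
        path = os.path.join(CONF_DIR, name)
        print("%-30s %s" % (name, "OK" if os.path.isfile(path) else "MISSING"))

if __name__ == "__main__":
    main()

If the config files show up as MISSING in the directory the NameNode is actually reading, that would match the FileNotFoundException and the NoClassDefFoundError for XaSecureFSPermissionChecker in the attached log.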

Please advise on the right solution for this issue.

Thanks,
Shaik

On 6 March 2015 at 15:48, Muthu Pandi <[email protected]> wrote:

> Could you post the logs of your active NN, or of the NN where you deployed
> your Ranger plugin?
>
> Also, make sure you have copied your JARs to the respective folders and
> restarted the cluster.
>
>
>
> *Regards,*
> *Muthupandi.K*
>
>  Think before you print.
>
>
>
> On Fri, Mar 6, 2015 at 1:08 PM, Hadoop Solutions <[email protected]>
> wrote:
>
>> Hi Amithsha,
>>
>> I have deployed the ranger-hdfs-plugin again with the HA NN URL.
>>
>> But the agents are not listed under Ranger Agents. I am using HDP 2.2.
>>
>> Please advise on how to resolve this issue.
>>
>> Thanks,
>> Shaik
>>
>> On 6 March 2015 at 14:48, Amith sha <[email protected]> wrote:
>>
>>> Hi Shaik,
>>>
>>> The steps below, from the Ranger Guide, describe how to enable the Ranger
>>> plugin in a Hadoop HA cluster:
>>>
>>>
>>> To enable Ranger in the HDFS HA environment, an HDFS plugin must be
>>> set up in each NameNode, and then pointed to the same HDFS repository
>>> set up in the Security Manager. Any policies created within that HDFS
>>> repository are automatically synchronized to the primary and secondary
>>> NameNodes through the installed Apache Ranger plugin. That way, if the
>>> primary NameNode fails, the secondary NameNode takes over and the
>>> Ranger plugin at that NameNode begins to enforce the same policies for
>>> access control.
>>> When creating the repository, you must include the fs.default.name for
>>> the primary NameNode. If the primary NameNode fails during policy
>>> creation, you can then temporarily use the fs.default.name of the
>>> secondary NameNode in the repository details to enable directory
>>> lookup for policy creation.
>>>
>>> Thanks & Regards
>>> Amithsha
>>>
>>>
>>> On Fri, Mar 6, 2015 at 12:00 PM, Hadoop Solutions
>>> <[email protected]> wrote:
>>> > Hi,
>>> >
>>> > I have installed Ranger from the Git repo and started the Ranger
>>> > console.
>>> >
>>> > I am trying to deploy the ranger-hdfs plugin on the active NN, but the
>>> > plugin agent is unable to contact Ranger.
>>> >
>>> > Can you please let me know the right procedure for ranger-hdfs plugin
>>> > deployment on an HA NN cluster?
>>> >
>>> >
>>> > Regards,
>>> > Shaik
>>>
>>
>>
>
2015-03-06 09:09:28,520 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:09:28,658 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:09:30,230 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:09:30,654 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:09:31,683 INFO  ipc.Server (Server.java:saslProcess(1306)) - Auth 
successful for hive/[email protected] (auth:KERBEROS)
2015-03-06 09:09:31,689 INFO  authorize.ServiceAuthorizationManager 
(ServiceAuthorizationManager.java:authorize(118)) - Authorization successful 
for hive/[email protected] (auth:KERBEROS) for 
protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
2015-03-06 09:09:31,721 FATAL conf.Configuration 
(Configuration.java:loadResource(2512)) - error parsing conf 
file:/etc/hadoop/conf.empty/xasecure-audit.xml
java.io.FileNotFoundException: /etc/hadoop/conf.empty/xasecure-audit.xml (No 
such file or directory)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.<init>(FileInputStream.java:146)
        at java.io.FileInputStream.<init>(FileInputStream.java:101)
        at 
sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
        at 
sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
        at java.net.URL.openStream(URL.java:1037)
        at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2342)
        at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2410)
        at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2376)
        at 
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2283)
        at org.apache.hadoop.conf.Configuration.get(Configuration.java:1110)
        at 
org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.<clinit>(XaSecureFSPermissionChecker.java:57)
        at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4143)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:838)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:821)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
2015-03-06 09:09:31,722 WARN  ipc.Server (Server.java:run(2058)) - IPC Server 
handler 27 on 8020, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 
10.193.153.223:48019 Call#999 Retry#6
java.lang.ExceptionInInitializerError
        at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4143)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:838)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:821)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
Caused by: java.lang.RuntimeException: java.io.FileNotFoundException: 
/etc/hadoop/conf.empty/xasecure-audit.xml (No such file or directory)
        at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2513)
        at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2376)
        at 
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2283)
        at org.apache.hadoop.conf.Configuration.get(Configuration.java:1110)
        at 
org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.<clinit>(XaSecureFSPermissionChecker.java:57)
        ... 14 more
Caused by: java.io.FileNotFoundException: 
/etc/hadoop/conf.empty/xasecure-audit.xml (No such file or directory)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.<init>(FileInputStream.java:146)
        at java.io.FileInputStream.<init>(FileInputStream.java:101)
        at 
sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
        at 
sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
        at java.net.URL.openStream(URL.java:1037)
        at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2342)
        at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2410)
        ... 18 more
2015-03-06 09:09:38,402 WARN  ipc.Server 
(Server.java:run(2058)) - IPC Server handler 28 on 8020, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 
10.193.153.223:48019 Call#1000 Retry#0
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker
        at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4143)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:838)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:821)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
2015-03-06 09:09:38,970 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:09:39,410 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:09:40,522 INFO  ipc.Server (Server.java:saslProcess(1306)) - Auth 
successful for oozie/[email protected] (auth:KERBEROS)
2015-03-06 09:09:40,529 INFO  authorize.ServiceAuthorizationManager 
(ServiceAuthorizationManager.java:authorize(118)) - Authorization successful 
for oozie/[email protected] (auth:KERBEROS) for 
protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
2015-03-06 09:09:40,531 WARN  ipc.Server (Server.java:run(2058)) - IPC Server 
handler 18 on 8020, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from 
10.193.153.223:33177 Call#4718 Retry#6
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker
        at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
2015-03-06 09:09:48,074 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:09:48,318 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:09:50,267 WARN  ipc.Server (Server.java:run(2058)) - IPC Server 
handler 33 on 8020, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from 
10.193.153.220:58994 Call#4737 Retry#8
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker
        at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
2015-03-06 09:09:57,016 WARN  ipc.Server (Server.java:run(2058)) - IPC Server 
handler 15 on 8020, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from 
10.193.153.220:58994 Call#4738 Retry#0
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker
        at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
2015-03-06 09:09:57,070 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:09:57,121 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:09:57,286 INFO  blockmanagement.CacheReplicationMonitor 
(CacheReplicationMonitor.java:run(178)) - Rescanning after 30001 milliseconds
2015-03-06 09:09:57,287 INFO  blockmanagement.CacheReplicationMonitor 
(CacheReplicationMonitor.java:run(201)) - Scanned 0 directive(s) and 0 block(s) 
in 1 millisecond(s).
2015-03-06 09:09:57,403 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:10:05,963 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:10:06,393 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:10:09,047 INFO  ipc.Server (Server.java:saslProcess(1306)) - Auth 
successful for hive/[email protected] (auth:KERBEROS)
2015-03-06 09:10:09,051 INFO  authorize.ServiceAuthorizationManager 
(ServiceAuthorizationManager.java:authorize(118)) - Authorization successful 
for hive/[email protected] (auth:KERBEROS) for 
protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
2015-03-06 09:10:09,053 WARN  ipc.Server (Server.java:run(2058)) - IPC Server 
handler 36 on 8020, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 
10.193.153.223:36094 Call#1001 Retry#0
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker
        at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4143)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:838)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:821)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
2015-03-06 09:10:14,947 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:10:15,363 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:10:23,964 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:10:24,351 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:10:26,563 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:10:27,282 INFO  ipc.Server (Server.java:saslProcess(1306)) - Auth 
successful for jhs/[email protected] (auth:KERBEROS)
2015-03-06 09:10:27,286 INFO  blockmanagement.CacheReplicationMonitor 
(CacheReplicationMonitor.java:run(178)) - Rescanning after 30000 milliseconds
2015-03-06 09:10:27,287 INFO  blockmanagement.CacheReplicationMonitor 
(CacheReplicationMonitor.java:run(201)) - Scanned 0 directive(s) and 0 block(s) 
in 1 millisecond(s).
2015-03-06 09:10:27,288 INFO  authorize.ServiceAuthorizationManager 
(ServiceAuthorizationManager.java:authorize(118)) - Authorization successful 
for jhs/[email protected] (auth:KERBEROS) for 
protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
2015-03-06 09:10:27,290 WARN  ipc.Server (Server.java:run(2058)) - IPC Server 
handler 26 on 8020, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from 
10.193.153.220:49460 Call#4739 Retry#0
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker
        at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
2015-03-06 09:10:34,094 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:10:34,518 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:10:39,037 WARN  ipc.Server (Server.java:run(2058)) - IPC Server 
handler 7 on 8020, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 
10.193.153.223:36094 Call#1002 Retry#0
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker
        at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4143)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:838)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:821)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
2015-03-06 09:10:40,562 INFO  ipc.Server (Server.java:saslProcess(1306)) - Auth 
successful for oozie/[email protected] (auth:KERBEROS)
2015-03-06 09:10:40,565 INFO  authorize.ServiceAuthorizationManager 
(ServiceAuthorizationManager.java:authorize(118)) - Authorization successful 
for oozie/[email protected] (auth:KERBEROS) for 
protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
2015-03-06 09:10:40,567 WARN  ipc.Server (Server.java:run(2058)) - IPC Server 
handler 18 on 8020, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from 
10.193.153.223:34070 Call#4720 Retry#0
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker
        at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
2015-03-06 09:10:42,933 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:10:43,314 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:10:50,920 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:10:51,314 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:10:56,820 WARN  ipc.Server (Server.java:run(2058)) - IPC Server 
handler 28 on 8020, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from 
10.193.153.220:49460 Call#4740 Retry#0
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker
        at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
2015-03-06 09:10:56,846 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:10:57,286 INFO  blockmanagement.CacheReplicationMonitor 
(CacheReplicationMonitor.java:run(178)) - Rescanning after 30000 milliseconds
2015-03-06 09:10:57,287 INFO  blockmanagement.CacheReplicationMonitor 
(CacheReplicationMonitor.java:run(201)) - Scanned 0 directive(s) and 0 block(s) 
in 2 millisecond(s).
2015-03-06 09:10:58,891 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:10:59,356 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:11:00,695 INFO  hdfs.StateChange 
(FSNamesystem.java:reportStatus(6006)) - STATE* Safe mode ON.
The reported blocks 1 needs additional 68748 blocks to reach the threshold 
0.9900 of total blocks 69443.
The number of live datanodes 3 has reached the minimum number 0. Safe mode will 
be turned off automatically once the thresholds have been reached.
2015-03-06 09:11:01,498 INFO  hdfs.StateChange 
(FSNamesystem.java:reportStatus(6006)) - STATE* Safe mode extension entered.
The reported blocks 68748 has reached the threshold 0.9900 of total blocks 
69443. The number of live datanodes 3 has reached the minimum number 0. In safe 
mode extension. Safe mode will be turned off automatically in 29 seconds.
2015-03-06 09:11:01,501 INFO  blockmanagement.BlockManager 
(BlockManager.java:processReport(1815)) - BLOCK* processReport: Received first 
block report from 
DatanodeStorage[DS-afa7887c-eff9-48c3-9720-ca7b5a099571,DISK,NORMAL] after 
starting up or becoming active. Its block contents are no longer considered 
stale
2015-03-06 09:11:01,501 INFO  BlockStateChange 
(BlockManager.java:processReport(1831)) - BLOCK* processReport: from storage 
DS-afa7887c-eff9-48c3-9720-ca7b5a099571 node 
DatanodeRegistration(10.193.153.213, 
datanodeUuid=c7d9b403-7ddf-45f1-9838-2f372be8794e, infoPort=1022, ipcPort=8010, 
storageInfo=lv=-56;cid=CID-8f491729-74b1-4696-8a8b-36c6fe066568;nsid=1403566432;c=0),
 blocks: 69443, hasStaleStorages: false, processing time: 818 msecs
2015-03-06 09:11:03,890 INFO  blockmanagement.BlockManager 
(BlockManager.java:processReport(1815)) - BLOCK* processReport: Received first 
block report from 
DatanodeStorage[DS-725c7ef8-21f2-44f1-9397-f461ebf00e34,DISK,NORMAL] after 
starting up or becoming active. Its block contents are no longer considered 
stale
2015-03-06 09:11:03,891 INFO  BlockStateChange 
(BlockManager.java:processReport(1831)) - BLOCK* processReport: from storage 
DS-725c7ef8-21f2-44f1-9397-f461ebf00e34 node 
DatanodeRegistration(10.193.153.214, 
datanodeUuid=605e1d48-41d0-4dce-a89e-0f2404d26a8c, infoPort=1022, ipcPort=8010, 
storageInfo=lv=-56;cid=CID-8f491729-74b1-4696-8a8b-36c6fe066568;nsid=1403566432;c=0),
 blocks: 69443, hasStaleStorages: false, processing time: 273 msecs
2015-03-06 09:11:07,890 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:11:08,329 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:11:09,077 INFO  ipc.Server (Server.java:saslProcess(1306)) - Auth 
successful for hive/[email protected] (auth:KERBEROS)
2015-03-06 09:11:09,080 INFO  authorize.ServiceAuthorizationManager 
(ServiceAuthorizationManager.java:authorize(118)) - Authorization successful 
for hive/[email protected] (auth:KERBEROS) for 
protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
2015-03-06 09:11:09,082 WARN  ipc.Server (Server.java:run(2058)) - IPC Server 
handler 32 on 8020, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 
10.193.153.223:60694 Call#1003 Retry#0
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker
        at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4143)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:838)
2015-03-06 09:11:40,603 WARN  ipc.Server 
(Server.java:run(2058)) - IPC Server handler 36 on 8020, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from 
10.193.153.223:42246 Call#4721 Retry#0
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker
        at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
2015-03-06 09:11:41,908 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:11:42,279 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:11:49,008 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:11:49,338 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:11:56,031 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:11:56,867 INFO  ipc.Server (Server.java:saslProcess(1306)) - Auth 
successful for jhs/[email protected] (auth:KERBEROS)
2015-03-06 09:12:03,709 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:12:03,887 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:12:09,051 WARN  ipc.Server (Server.java:run(2058)) - IPC Server 
handler 25 on 8020, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 
10.193.153.223:42963 Call#1005 Retry#0
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker
        at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4143)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:838)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:821)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
2015-03-06 09:12:11,034 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:12:11,501 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:12:18,248 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:12:18,614 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:12:25,353 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:12:25,712 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:12:26,727 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.
2015-03-06 09:12:27,286 INFO  blockmanagement.CacheReplicationMonitor 
(CacheReplicationMonitor.java:run(178)) - Rescanning after 30000 milliseconds
2015-03-06 09:12:27,287 INFO  blockmanagement.CacheReplicationMonitor 
(CacheReplicationMonitor.java:run(201)) - Scanned 0 directive(s) and 0 block(s) 
in 1 millisecond(s).
2015-03-06 09:12:31,161 INFO  ipc.Server (Server.java:saslProcess(1306)) - Auth 
successful for nn/[email protected] (auth:KERBEROS)
2015-03-06 09:12:31,169 INFO  authorize.ServiceAuthorizationManager 
(ServiceAuthorizationManager.java:authorize(118)) - Authorization successful 
for nn/[email protected] (auth:KERBEROS) for 
protocol=interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
2015-03-06 09:12:31,170 INFO  namenode.FSNamesystem 
(FSNamesystem.java:rollEditLog(6345)) - Roll Edit Log from 10.193.153.225
2015-03-06 09:12:31,171 INFO  namenode.FSEditLog 
(FSEditLog.java:rollEditLog(1157)) - Rolling edit logs
2015-03-06 09:12:31,171 INFO  namenode.FSEditLog 
(FSEditLog.java:endCurrentLogSegment(1214)) - Ending log segment 50023226
2015-03-06 09:12:31,215 INFO  namenode.FSEditLog 
(FSEditLog.java:printStatistics(691)) - Number of transactions: 4 Total time 
for transactions(ms): 5 Number of transactions batched in Syncs: 0 Number of 
syncs: 4 SyncTimes(ms): 127 22
2015-03-06 09:12:31,249 INFO  namenode.FileJournalManager 
(FileJournalManager.java:finalizeLogSegment(133)) - Finalizing edits file 
/opt/data/hadoop/namenode/current/edits_inprogress_0000000000050023226 -> 
/opt/data/hadoop/namenode/current/edits_0000000000050023226-0000000000050023229
2015-03-06 09:12:31,250 INFO  namenode.FSEditLog 
(FSEditLog.java:startLogSegment(1173)) - Starting log segment at 50023230
2015-03-06 09:12:33,704 INFO  namenode.FSNamesystem 
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
blocks.