[ https://issues.apache.org/jira/browse/AMBARI-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15623343#comment-15623343 ]

Hadoop QA commented on AMBARI-18706:
------------------------------------

{color:green}+1 overall{color}.  Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12836203/AMBARI-18706.v2.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+1 tests included{color}.  The patch appears to include 1 new or modified test file.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:green}+1 core tests{color}.  The patch passed unit tests in ambari-web.

Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/9066//testReport/
Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/9066//console

This message is automatically generated.

> Ranger Audit Handler not working as expected as NN HA wizard does not set a 
> few properties correctly
> ----------------------------------------------------------------------------------------------------
>
>                 Key: AMBARI-18706
>                 URL: https://issues.apache.org/jira/browse/AMBARI-18706
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-web
>    Affects Versions: 2.5.0
>            Reporter: Vivek Ratnavel Subramanian
>            Assignee: Vivek Ratnavel Subramanian
>            Priority: Critical
>             Fix For: 2.5.0
>
>         Attachments: AMBARI-18706.v1.patch, AMBARI-18706.v2.patch
>
>
> When Ranger is installed and the user enables NameNode HA in HDFS through the 
> wizard, the following exception is encountered because a property is not 
> configured correctly in all of the dependent services; a sketch of the 
> corrected configuration follows the log below.
> {code}
> 2016-08-23 00:24:42,923 INFO  hdfs.StateChange 
> (FSNamesystem.java:completeFile(3503)) - DIR* completeFile: 
> /spark-history/.7927fb59-c4fe-4328-9b7a-0d435df56690 is closed by 
> DFSClient_NONMAPREDUCE_-1097091097_1
> 2016-08-23 00:24:43,801 INFO  BlockStateChange 
> (UnderReplicatedBlocks.java:chooseUnderReplicatedBlocks(394)) - 
> chooseUnderReplicatedBlocks selected  Total=0 Reset bookmarks? true
> 2016-08-23 00:24:43,802 INFO  BlockStateChange 
> (BlockManager.java:computeReplicationWorkForBlocks(1527)) - BLOCK* 
> neededReplications = 0, pendingReplications = 0.
> 2016-08-23 00:24:46,802 INFO  BlockStateChange 
> (UnderReplicatedBlocks.java:chooseUnderReplicatedBlocks(394)) - 
> chooseUnderReplicatedBlocks selected  Total=0 Reset bookmarks? true
> 2016-08-23 00:24:46,802 INFO  BlockStateChange 
> (BlockManager.java:computeReplicationWorkForBlocks(1527)) - BLOCK* 
> neededReplications = 0, pendingReplications = 0.
> 2016-08-23 00:24:49,498 INFO  ipc.Server (Server.java:saslProcess(1386)) - 
> Auth successful for 
> hive/natr76-swxs-dgtoeriesecha-r7-11.openstacklo...@example.com 
> (auth:KERBEROS)
> 2016-08-23 00:24:49,533 INFO  authorize.ServiceAuthorizationManager 
> (ServiceAuthorizationManager.java:authorize(135)) - Authorization successful 
> for hive/natr76-swxs-dgtoeriesecha-r7-11.openstacklo...@example.com 
> (auth:KERBEROS) for protocol=interface 
> org.apache.hadoop.hdfs.protocol.ClientProtocol
> 2016-08-23 00:24:49,804 INFO  BlockStateChange 
> (UnderReplicatedBlocks.java:chooseUnderReplicatedBlocks(394)) - 
> chooseUnderReplicatedBlocks selected  Total=0 Reset bookmarks? true
> 2016-08-23 00:24:49,805 INFO  BlockStateChange 
> (BlockManager.java:computeReplicationWorkForBlocks(1527)) - BLOCK* 
> neededReplications = 0, pendingReplications = 0.
> 2016-08-23 00:24:50,362 INFO  provider.BaseAuditHandler 
> (BaseAuditHandler.java:logStatus(312)) - Audit Status Log: 
> name=hdfs.async.multi_dest.batch.hdfs, interval=01:00.130 minutes, events=1, 
> deferredCount=1, totalEvents=4, totalDeferredCount=4
> 2016-08-23 00:24:50,363 INFO  destination.HDFSAuditDestination 
> (HDFSAuditDestination.java:createConfiguration(263)) - Returning HDFS 
> Filesystem Config: Configuration: core-default.xml, core-site.xml, 
> hdfs-default.xml, hdfs-site.xml, mapred-default.xml, mapred-site.xml, 
> yarn-default.xml, yarn-site.xml
> 2016-08-23 00:24:50,444 INFO  destination.HDFSAuditDestination 
> (HDFSAuditDestination.java:getLogFileStream(224)) - Checking whether log file 
> exists. 
> hdfPath=hdfs://natr76-swxs-dgtoeriesecha-r7-14.openstacklocal:8020/ranger/audit/hdfs/20160823/hdfs_ranger_audit_natr76-swxs-dgtoeriesecha-r7-10.openstacklocal.log,
>  UGI=nn/natr76-swxs-dgtoeriesecha-r7-10.openstacklo...@example.com 
> (auth:KERBEROS)
> 2016-08-23 00:24:50,622 ERROR provider.BaseAuditHandler 
> (BaseAuditHandler.java:logError(329)) - Error writing to log file.
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category READ is not supported in state standby
>       at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
>       at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1932)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1313)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3861)
>       at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1076)
>       at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:843)
>       at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>       at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:415)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)
>       at org.apache.hadoop.ipc.Client.call(Client.java:1427)
>       at org.apache.hadoop.ipc.Client.call(Client.java:1358)
>       at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>       at com.sun.proxy.$Proxy29.getFileInfo(Unknown Source)
>       at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>       at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:606)
>       at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:252)
>       at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>       at com.sun.proxy.$Proxy30.getFileInfo(Unknown Source)
>       at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116)
>       at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1315)
>       at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1311)
>       at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>       at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1311)
>       at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
>       at 
> org.apache.ranger.audit.destination.HDFSAuditDestination.getLogFileStream(HDFSAuditDestination.java:226)
>       at 
> org.apache.ranger.audit.destination.HDFSAuditDestination.logJSON(HDFSAuditDestination.java:123)
>       at 
> org.apache.ranger.audit.queue.AuditFileSpool.sendEvent(AuditFileSpool.java:890)
>       at 
> org.apache.ranger.audit.queue.AuditFileSpool.runDoAs(AuditFileSpool.java:838)
>       at 
> org.apache.ranger.audit.queue.AuditFileSpool$2.run(AuditFileSpool.java:759)
>       at 
> org.apache.ranger.audit.queue.AuditFileSpool$2.run(AuditFileSpool.java:757)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:356)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
>       at 
> org.apache.ranger.audit.queue.AuditFileSpool.run(AuditFileSpool.java:765)
>       at java.lang.Thread.run(Thread.java:745)
> 2016-08-23 00:24:50,623 ERROR queue.AuditFileSpool 
> (AuditFileSpool.java:logError(710)) - Error sending logs to consumer. 
> provider=hdfs.async.multi_dest.batch, 
> consumer=hdfs.async.multi_dest.batch.hdfs
> 2016-08-23 00:24:50,625 INFO  queue.AuditFileSpool 
> (AuditFileSpool.java:runDoAs(780)) - Destination is down. sleeping for 30000 
> milli seconds. indexQueue=0, queueName=hdfs.async.multi_dest.batch, 
> consumer=hdfs.async.multi_dest.batch.hdfs
> {code}
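>
> The root cause is visible in the hdfPath above: after enabling NameNode HA, the 
> Ranger HDFS audit destination still points at a single NameNode host, which may 
> be in standby state after a failover. Below is a minimal sketch of the kind of 
> change the wizard should apply, assuming the relevant property is 
> {{xasecure.audit.destination.hdfs.dir}} in the Ranger plugin audit configs of 
> the dependent services (the exact property names and values here are 
> illustrative assumptions, not taken from the patch):
> {code}
> <!-- Hypothetical illustration; host and nameservice values are examples. -->
> <!-- Before NN HA: the audit directory names one NameNode host directly, -->
> <!-- so writes fail with StandbyException once that node becomes standby. -->
> <property>
>   <name>xasecure.audit.destination.hdfs.dir</name>
>   <value>hdfs://namenode-1.example.com:8020/ranger/audit</value>
> </property>
>
> <!-- After NN HA: the value should use the HA nameservice ID (the value of -->
> <!-- dfs.nameservices), so the client resolves the active NameNode itself. -->
> <property>
>   <name>xasecure.audit.destination.hdfs.dir</name>
>   <value>hdfs://mycluster/ranger/audit</value>
> </property>
> {code}
> With the nameservice form of the URL, the HDFS client locates the active 
> NameNode via {{dfs.ha.namenodes.<nameservice>}} and the configured failover 
> proxy provider, so audit writes survive a failover instead of hitting 
> StandbyException.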



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
