[jira] [Updated] (AMBARI-18706) Ranger Audit Handler not working as expected as NN HA wizard does not set a few properties correctly

2016-10-31, Jaimin Jetly (JIRA)

 [ https://issues.apache.org/jira/browse/AMBARI-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jaimin Jetly updated AMBARI-18706:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

The patch received a +1 on ReviewBoard and has been committed to branch-2.5 and trunk.

> Ranger Audit Handler not working as expected as NN HA wizard does not set a 
> few properties correctly
> 
>
> Key: AMBARI-18706
> URL: https://issues.apache.org/jira/browse/AMBARI-18706
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
> Fix For: 2.5.0
>
> Attachments: AMBARI-18706.v1.patch, AMBARI-18706.v2.patch
>
>
> When Ranger is installed and the user tries to enable NameNode HA in HDFS, the
> following exception is encountered because the HA wizard does not configure a
> property correctly in all of the dependent services.
> {code}
> 2016-08-23 00:24:42,923 INFO  hdfs.StateChange 
> (FSNamesystem.java:completeFile(3503)) - DIR* completeFile: 
> /spark-history/.7927fb59-c4fe-4328-9b7a-0d435df56690 is closed by 
> DFSClient_NONMAPREDUCE_-1097091097_1
> 2016-08-23 00:24:43,801 INFO  BlockStateChange 
> (UnderReplicatedBlocks.java:chooseUnderReplicatedBlocks(394)) - 
> chooseUnderReplicatedBlocks selected  Total=0 Reset bookmarks? true
> 2016-08-23 00:24:43,802 INFO  BlockStateChange 
> (BlockManager.java:computeReplicationWorkForBlocks(1527)) - BLOCK* 
> neededReplications = 0, pendingReplications = 0.
> 2016-08-23 00:24:46,802 INFO  BlockStateChange 
> (UnderReplicatedBlocks.java:chooseUnderReplicatedBlocks(394)) - 
> chooseUnderReplicatedBlocks selected  Total=0 Reset bookmarks? true
> 2016-08-23 00:24:46,802 INFO  BlockStateChange 
> (BlockManager.java:computeReplicationWorkForBlocks(1527)) - BLOCK* 
> neededReplications = 0, pendingReplications = 0.
> 2016-08-23 00:24:49,498 INFO  ipc.Server (Server.java:saslProcess(1386)) - 
> Auth successful for 
> hive/natr76-swxs-dgtoeriesecha-r7-11.openstacklo...@example.com 
> (auth:KERBEROS)
> 2016-08-23 00:24:49,533 INFO  authorize.ServiceAuthorizationManager 
> (ServiceAuthorizationManager.java:authorize(135)) - Authorization successful 
> for hive/natr76-swxs-dgtoeriesecha-r7-11.openstacklo...@example.com 
> (auth:KERBEROS) for protocol=interface 
> org.apache.hadoop.hdfs.protocol.ClientProtocol
> 2016-08-23 00:24:49,804 INFO  BlockStateChange 
> (UnderReplicatedBlocks.java:chooseUnderReplicatedBlocks(394)) - 
> chooseUnderReplicatedBlocks selected  Total=0 Reset bookmarks? true
> 2016-08-23 00:24:49,805 INFO  BlockStateChange 
> (BlockManager.java:computeReplicationWorkForBlocks(1527)) - BLOCK* 
> neededReplications = 0, pendingReplications = 0.
> 2016-08-23 00:24:50,362 INFO  provider.BaseAuditHandler 
> (BaseAuditHandler.java:logStatus(312)) - Audit Status Log: 
> name=hdfs.async.multi_dest.batch.hdfs, interval=01:00.130 minutes, events=1, 
> deferredCount=1, totalEvents=4, totalDeferredCount=4
> 2016-08-23 00:24:50,363 INFO  destination.HDFSAuditDestination 
> (HDFSAuditDestination.java:createConfiguration(263)) - Returning HDFS 
> Filesystem Config: Configuration: core-default.xml, core-site.xml, 
> hdfs-default.xml, hdfs-site.xml, mapred-default.xml, mapred-site.xml, 
> yarn-default.xml, yarn-site.xml
> 2016-08-23 00:24:50,444 INFO  destination.HDFSAuditDestination 
> (HDFSAuditDestination.java:getLogFileStream(224)) - Checking whether log file 
> exists. 
> hdfPath=hdfs://natr76-swxs-dgtoeriesecha-r7-14.openstacklocal:8020/ranger/audit/hdfs/20160823/hdfs_ranger_audit_natr76-swxs-dgtoeriesecha-r7-10.openstacklocal.log,
>  UGI=nn/natr76-swxs-dgtoeriesecha-r7-10.openstacklo...@example.com 
> (auth:KERBEROS)
> 2016-08-23 00:24:50,622 ERROR provider.BaseAuditHandler 
> (BaseAuditHandler.java:logError(329)) - Error writing to log file.
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category READ is not supported in state standby
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1932)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1313)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3861)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1076)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:843)
>   at ...
> {code}
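
The root cause is visible in the last log lines: the audit handler is writing to a single NameNode host (hdfs://natr76-swxs-dgtoeriesecha-r7-14.openstacklocal:8020), so once HA is enabled the write fails with a StandbyException whenever that host is not the active NameNode. Below is a minimal sketch of the HA-aware configuration the wizard should produce; the nameservice name "mycluster" is a placeholder, the hostnames are reused from the log for illustration, and the property names are the standard Ranger/HDFS ones rather than text taken from the attached patches.

{code}
<!-- ranger-hdfs-audit.xml (and the ranger-<service>-audit config of each
     plugin-enabled service): point the audit destination at the logical
     HA nameservice URI instead of a single NameNode host:port. -->
<property>
  <name>xasecure.audit.destination.hdfs.dir</name>
  <value>hdfs://mycluster/ranger/audit</value>
</property>

<!-- hdfs-site.xml: client-side HA settings that make hdfs://mycluster
     resolve to whichever NameNode is currently active. -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>natr76-swxs-dgtoeriesecha-r7-10.openstacklocal:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>natr76-swxs-dgtoeriesecha-r7-14.openstacklocal:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
{code}

With the logical URI in place, the audit destination's DFS client fails over to the other NameNode on a StandbyException instead of failing the write.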

[jira] [Updated] (AMBARI-18706) Ranger Audit Handler not working as expected as NN HA wizard does not set a few properties correctly

2016-10-31, Vivek Ratnavel Subramanian (JIRA)

 [ https://issues.apache.org/jira/browse/AMBARI-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vivek Ratnavel Subramanian updated AMBARI-18706:

Status: Patch Available  (was: Open)


[jira] [Updated] (AMBARI-18706) Ranger Audit Handler not working as expected as NN HA wizard does not set a few properties correctly

2016-10-31, Vivek Ratnavel Subramanian (JIRA)

 [ https://issues.apache.org/jira/browse/AMBARI-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vivek Ratnavel Subramanian updated AMBARI-18706:

Attachment: AMBARI-18706.v2.patch


[jira] [Updated] (AMBARI-18706) Ranger Audit Handler not working as expected as NN HA wizard does not set a few properties correctly

2016-10-31, Vivek Ratnavel Subramanian (JIRA)

 [ https://issues.apache.org/jira/browse/AMBARI-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vivek Ratnavel Subramanian updated AMBARI-18706:

Status: Open  (was: Patch Available)


[jira] [Updated] (AMBARI-18706) Ranger Audit Handler not working as expected as NN HA wizard does not set a few properties correctly

2016-10-26, Vivek Ratnavel Subramanian (JIRA)

 [ https://issues.apache.org/jira/browse/AMBARI-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vivek Ratnavel Subramanian updated AMBARI-18706:

Attachment: AMBARI-18706.v1.patch


[jira] [Updated] (AMBARI-18706) Ranger Audit Handler not working as expected as NN HA wizard does not set a few properties correctly

2016-10-26, Vivek Ratnavel Subramanian (JIRA)

 [ https://issues.apache.org/jira/browse/AMBARI-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vivek Ratnavel Subramanian updated AMBARI-18706:

Status: Patch Available  (was: In Progress)
