From the error, what I see is that your Policy Manager URL is null.

Can you check the value of the following parameter in xasecure-hdfs-security.xml 
under /etc/hadoop/conf and let me know?

<name>xasecure.hdfs.policymgr.url</name>

It looks like when you enabled the plugin you did not fill in the correct URL for 
the Policy Manager in the install.properties file.
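
For reference, here is a minimal sketch of what the two places should look like. 
The host, port, and repository name below are placeholders rather than values from 
your cluster, and the exact REST path in the URL can differ between Ranger 
versions; the key point is that the value must point at your Policy Manager host 
instead of being null.

In install.properties (read when the plugin is enabled):

    POLICY_MGR_URL=http://ranger-admin-host:6080
    REPOSITORY_NAME=hadoopdev

which should end up in xasecure-hdfs-security.xml as something like:

    <property>
        <name>xasecure.hdfs.policymgr.url</name>
        <value>http://ranger-admin-host:6080/service/assets/policyList/hadoopdev</value>
    </property>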

Thanks
Ramesh


On Mar 6, 2015, at 5:45 AM, Hadoop Solutions <[email protected]> wrote:

> I saw the following exception related to Ranger:
> 
> 2015-03-06 13:21:36,414 INFO  ipc.Server (Server.java:saslProcess(1306)) - 
> Auth successful for jhs/[email protected] 
> (auth:KERBEROS)
> 2015-03-06 13:21:36,422 INFO  authorize.ServiceAuthorizationManager 
> (ServiceAuthorizationManager.java:authorize(118)) - Authorization successful 
> for jhs/[email protected] (auth:KERBEROS) for 
> protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
> 2015-03-06 13:21:36,528 INFO  provider.AuditProviderFactory 
> (AuditProviderFactory.java:<init>(60)) - AuditProviderFactory: creating..
> 2015-03-06 13:21:36,529 INFO  provider.AuditProviderFactory 
> (AuditProviderFactory.java:init(90)) - AuditProviderFactory: initializing..
> 2015-03-06 13:21:36,645 INFO  provider.AuditProviderFactory 
> (AuditProviderFactory.java:init(107)) - AuditProviderFactory: Audit not 
> enabled..
> 2015-03-06 13:21:36,660 INFO  config.PolicyRefresher 
> (PolicyRefresher.java:<init>(60)) - Creating PolicyRefreshser with url: null, 
> refreshInterval: 60000, sslConfigFileName: null, lastStoredFileName: null
> 2015-03-06 13:21:36,668 ERROR config.PolicyRefresher 
> (PolicyRefresher.java:checkFileWatchDogThread(138)) - Unable to start the 
> FileWatchDog for path [null]
> java.lang.NullPointerException
>         at 
> com.xasecure.pdp.config.ConfigWatcher.getAgentName(ConfigWatcher.java:474)
>         at 
> com.xasecure.pdp.config.ConfigWatcher.<init>(ConfigWatcher.java:124)
>         at 
> com.xasecure.pdp.config.PolicyRefresher$1.<init>(PolicyRefresher.java:124)
>         at 
> com.xasecure.pdp.config.PolicyRefresher.checkFileWatchDogThread(PolicyRefresher.java:124)
>         at 
> com.xasecure.pdp.config.PolicyRefresher.<init>(PolicyRefresher.java:69)
>         at com.xasecure.pdp.hdfs.URLBasedAuthDB.<init>(URLBasedAuthDB.java:84)
>         at 
> com.xasecure.pdp.hdfs.URLBasedAuthDB.getInstance(URLBasedAuthDB.java:67)
>         at 
> com.xasecure.pdp.hdfs.XASecureAuthorizer.<clinit>(XASecureAuthorizer.java:28)
>         at java.lang.Class.forName0(Native Method)
>         at java.lang.Class.forName(Class.java:190)
>         at 
> com.xasecure.authorization.hadoop.HDFSAccessVerifierFactory.getInstance(HDFSAccessVerifierFactory.java:43)
>         at 
> org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.AuthorizeAccessForUser(XaSecureFSPermissionChecker.java:137)
>         at 
> org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.check(XaSecureFSPermissionChecker.java:108)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:171)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
> 2015-03-06 13:21:36,670 INFO  hadoop.HDFSAccessVerifierFactory 
> (HDFSAccessVerifierFactory.java:getInstance(44)) - Created a new instance of 
> class: [com.xasecure.pdp.hdfs.XASecureAuthorizer] for HDFS Access 
> verification.
> 2015-03-06 13:21:37,212 INFO  namenode.FSNamesystem 
> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
> blocks.
> 2015-03-06 13:21:37,718 INFO  namenode.FSNamesystem 
> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
> blocks.
> 2015-03-06 13:21:38,974 INFO  ipc.Server (Server.java:saslProcess(1306)) - 
> Auth successful for oozie/[email protected] 
> (auth:KERBEROS)
> 2015-03-06 13:21:38,984 INFO  authorize.ServiceAuthorizationManager 
> (ServiceAuthorizationManager.java:authorize(118)) - Authorization successful 
> for oozie/[email protected] (auth:KERBEROS) for 
> protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
> 2015-03-06 13:21:44,515 INFO  namenode.FSNamesystem 
> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
> blocks.
> 2015-03-06 13:21:45,000 INFO  namenode.FSNamesystem 
> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
> blocks.
> 2015-03-06 13:21:50,709 INFO  blockmanagement.CacheReplicationMonitor 
> (CacheReplicationMonitor.java:run(178)) - Rescanning after 30000 milliseconds
> 2015-03-06 13:21:50,710 INFO  blockmanagement.CacheReplicationMonitor 
> (CacheReplicationMonitor.java:run(201)) - Scanned 0 directive(s) and 0 
> block(s) in 1 millisecond(s).
> 
> 
> On 6 March 2015 at 21:38, Hadoop Solutions <[email protected]> wrote:
> After setting xasecure.add-hadoop-authorization to true, I am able to access 
> the Hadoop file system.
> 
> I have restarted HDFS and Ranger Admin, but I am still not able to see the agents 
> in the Ranger console.
> 
> On 6 March 2015 at 21:07, Amith sha <[email protected]> wrote:
> Set xasecure.add-hadoop-authorization to true. After editing the 
> configuration files, first restart Hadoop, then restart Ranger, and then try to 
> access HDFS again.
> 
> Thanks & Regards
> Amithsha
> 
> On Fri, Mar 6, 2015 at 6:29 PM, Muthu Pandi <[email protected]> wrote:
> Did you get the plugin working? Are you able to see the agent in the Ranger 
> console?
> 
> It seems you have disabled Hadoop authorization in the audit file, so change 
> xasecure.add-hadoop-authorization to true in the audit file.
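> 
> For example, the entry would look roughly like this (a sketch only; the exact 
> file it lives in depends on how the plugin was enabled, typically one of the 
> xasecure-*.xml files under /etc/hadoop/conf):
> 
>     <property>
>         <name>xasecure.add-hadoop-authorization</name>
>         <value>true</value>
>     </property>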
> 
> 
> 
> Regards
> Muthupandi.K
> 
> Think before you print.
> 
> On Fri, Mar 6, 2015 at 6:13 PM, Hadoop Solutions <[email protected]> 
> wrote:
> Thank you for your help, Muthu.
> 
> I am using HDP 2.2 and I have added the audit.xml file. After that I am seeing 
> the following error messages.
> 
> 2015-03-06 12:40:51,119 INFO  namenode.FSNamesystem 
> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
> blocks.
> 2015-03-06 12:40:51,485 INFO  namenode.FSNamesystem 
> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file 
> blocks.
> 2015-03-06 12:40:56,888 INFO  ipc.Server (Server.java:run(2060)) - IPC Server 
> handler 16 on 8020, call 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from 
> 10.193.153.220:50271 Call#5020 Retry#0
> com.xasecure.authorization.hadoop.exceptions.XaSecureAccessControlException: 
> Permission denied: principal{user=mapred,groups: [hadoop]}, access=EXECUTE, 
> directory="/"
>         at 
> org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.check(XaSecureFSPermissionChecker.java:112)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:171)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
> 
> 
> Can you please let me know what this error relates to.
> 
> Thanks,
> Shaik
> 
> 
> On 6 March 2015 at 18:31, Muthu Pandi <[email protected]> wrote:
> From your logs it looks like you are using HDP, and the audit.xml file is not 
> on the CLASSPATH. What version of HDP are you using?
> 
> This link covers the Ranger installation on HDP 2.2: 
> http://hortonworks.com/blog/apache-ranger-audit-framework/  Make sure you 
> have followed everything; below is the snippet from that link which deals 
> with placing the xml file on the correct path.
> 
> <image.png>
> 
> Regards
> Muthupandi.K
> 
> Think before you print.
> 
> On Fri, Mar 6, 2015 at 2:55 PM, Hadoop Solutions <[email protected]> 
> wrote:
> Hi Muthu,
> 
> Please find the attached NN log.
> 
> I have copied all the jars to the /usr/hdp/current/hadoop-hdfs-namenode/lib location.
> 
> Please suggest the right solution for this issue.
> 
> Thanks,
> Shaik
> 
> On 6 March 2015 at 15:48, Muthu Pandi <[email protected]> wrote:
> Could you post the logs of your active NN, or the NN where you deployed your 
> Ranger plugin?
> 
> Also make sure you have copied the JARs to the respective folders and restarted 
> the cluster.
> 
> Regards
> Muthupandi.K
> 
> Think before you print.
> 
> On Fri, Mar 6, 2015 at 1:08 PM, Hadoop Solutions <[email protected]> 
> wrote:
> Hi Amithsha,
> 
> I have deployed the ranger-hdfs-plugin again with the HA NN URL.
> 
> But the agents are still not listed under Ranger Agents. I am using HDP 2.2.
> 
> Please advise on how to resolve this issue.
> 
> Thanks,
> Shaik
> 
> On 6 March 2015 at 14:48, Amith sha <[email protected]> wrote:
> Hi Shaik,
> 
> The following steps from the Ranger Guide describe how to enable the Ranger
> plugin in a Hadoop HA cluster:
> 
> 
> To enable Ranger in the HDFS HA environment, an HDFS plugin must be
> set up in each NameNode, and then pointed to the same HDFS repository
> set up in the Security Manager. Any policies created within that HDFS
> repository are automatically synchronized to the primary and secondary
> NameNodes through the installed Apache Ranger plugin. That way, if the
> primary NameNode fails, the secondary namenode takes over and the
> Ranger plugin at that NameNode begins to enforce the same policies for
> access control.
> When creating the repository, you must include the fs.default.name for
> the primary NameNode. If the primary NameNode fails during policy
> creation, you can then temporarily use the fs.default.name of the
> secondary NameNode in the repository details to enable directory
> lookup for policy creation.
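> 
> As an illustration (hostnames below are placeholders, not values from your
> cluster), the repository's HDFS configuration in the Ranger console would
> carry the primary NameNode address, for example:
> 
>     fs.default.name = hdfs://nn1.example.com:8020
> 
> while the plugin's install.properties on each NameNode should point at the
> same POLICY_MGR_URL and REPOSITORY_NAME, so both NameNodes download the same
> policies.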
> 
> Thanks & Regards
> Amithsha
> 
> 
> On Fri, Mar 6, 2015 at 12:00 PM, Hadoop Solutions
> <[email protected]> wrote:
> > Hi,
> >
> > I have installed Ranger from the Git repo and I have started the Ranger console.
> >
> > I am trying to deploy the ranger-hdfs plugin on the active NN, but the plugin
> > agent is unable to contact Ranger.
> >
> > Can you please let me know the right procedure for ranger-hdfs plugin
> > deployment on an HA NN cluster?
> >
> >
> > Regards,
> > Shaik
> 

