After setting xasecure.add-hadoop-authorization to true, I am able to
access the Hadoop file system.
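
For reference, a minimal sketch of how that property looks in the audit
configuration file (the exact file name and location depend on how the
plugin was installed):

  <property>
    <!-- when true, fall back to the native HDFS permission checks
         if no Ranger policy covers the request -->
    <name>xasecure.add-hadoop-authorization</name>
    <value>true</value>
  </property>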

I have restarted HDFS and Ranger Admin, but I am still not able to see the
agents in the Ranger console.
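
For completeness, these are the settings that, as I understand it, control
how the plugin agent reports to Ranger Admin, from the plugin's
install.properties (the values below are placeholders, not my real ones):

  POLICY_MGR_URL=http://ranger-admin.example.com:6080   # must be reachable from the NameNode
  REPOSITORY_NAME=hadoopdev                             # must match the repository name in the Ranger console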

On 6 March 2015 at 21:07, Amith sha <[email protected]> wrote:

> Set xasecure.add-hadoop-authorization to true and, after editing the
> configuration files, first restart Hadoop, then restart Ranger, and then try
> to access HDFS.
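>
> A rough sketch of that sequence on a manually managed cluster (script names
> and paths vary by install, so treat this as illustrative only):
>
>   # restart HDFS first so the edited plugin configuration is picked up
>   su - hdfs -c "hadoop-daemon.sh stop namenode"
>   su - hdfs -c "hadoop-daemon.sh start namenode"
>   # then restart Ranger Admin, e.g. with the scripts shipped in the build:
>   ews/stop-ranger-admin.sh && ews/start-ranger-admin.sh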
>
> Thanks & Regards
> Amithsha
>
> On Fri, Mar 6, 2015 at 6:29 PM, Muthu Pandi <[email protected]> wrote:
>
>> Did you get the plugin working? Are you able to see the agent in the Ranger
>> console?
>>
>> It seems you have disabled Hadoop authorization in the audit file, so change
>>
>> xasecure.add-hadoop-authorization to true in the audit file.
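>>
>> Once that is changed and everything is restarted, a quick sanity check
>> (just a sketch; run it as the user that was denied in your logs):
>>
>>   su - mapred -c "hdfs dfs -ls /"
>>
>> With the fallback to native HDFS permissions enabled, the listing should
>> succeed instead of failing with XaSecureAccessControlException.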
>>
>>
>>
>>
>>
>> *Regards*
>> *Muthupandi.K*
>>
>>  Think before you print.
>>
>>
>>
>> On Fri, Mar 6, 2015 at 6:13 PM, Hadoop Solutions <[email protected]>
>> wrote:
>>
>>> Thank you for your help, Muthu.
>>>
>>> I am using HDP 2.2 and I have added the audit.xml file. After that I am
>>> seeing the following error messages.
>>>
>>> 2015-03-06 12:40:51,119 INFO  namenode.FSNamesystem
>>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>>> blocks.
>>> 2015-03-06 12:40:51,485 INFO  namenode.FSNamesystem
>>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>>> blocks.
>>> 2015-03-06 12:40:56,888 INFO  ipc.Server (Server.java:run(2060)) - IPC
>>> Server handler 16 on 8020, call
>>> org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from
>>> 10.193.153.220:50271 Call#5020 Retry#0
>>> com.xasecure.authorization.hadoop.exceptions.XaSecureAccessControlException:
>>> Permission denied: principal{user=mapred,groups: [hadoop]}, access=EXECUTE,
>>> directory="/"
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.check(XaSecureFSPermissionChecker.java:112)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:171)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
>>>         at
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
>>>         at
>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>>         at
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>         at javax.security.auth.Subject.doAs(Subject.java:415)
>>>         at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>>>
>>>
>>> Can you please let me know what this error relates to?
>>>
>>> Thanks,
>>> Shaik
>>>
>>>
>>> On 6 March 2015 at 18:31, Muthu Pandi <[email protected]> wrote:
>>>
>>>> From your logs it looks like you are using HDP, and the audit.xml file
>>>> is not on the CLASSPATH. What version of HDP are you using?
>>>>
>>>> This link covers Ranger installation on HDP 2.2:
>>>> http://hortonworks.com/blog/apache-ranger-audit-framework/  Make sure
>>>> you have followed everything; below is the snippet from that link
>>>> which deals with placing the xml file on the correct path.
>>>>
>>>> [image: Inline image 1]
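>>>>
>>>> In case the inline image does not come through on the list archive, the
>>>> placement step it refers to amounts to something like this (source paths
>>>> are hypothetical; the target paths assume an HDP 2.2 layout):
>>>>
>>>>   # copy the plugin jars next to the NameNode's other libraries
>>>>   cp /usr/local/ranger-hdfs-plugin/lib/*.jar /usr/hdp/current/hadoop-hdfs-namenode/lib/
>>>>   # place the xasecure config files somewhere on the NameNode's CLASSPATH
>>>>   cp xasecure-audit.xml xasecure-hdfs-security.xml /etc/hadoop/conf/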
>>>>
>>>>
>>>>
>>>> *Regards*
>>>> *Muthupandi.K*
>>>>
>>>>  Think before you print.
>>>>
>>>>
>>>>
>>>> On Fri, Mar 6, 2015 at 2:55 PM, Hadoop Solutions <
>>>> [email protected]> wrote:
>>>>
>>>>> Hi Muthu,
>>>>>
>>>>> Please find the attached NN log.
>>>>>
>>>>> I have copied all the jars to the
>>>>> /usr/hdp/current/hadoop-hdfs-namenode/lib location.
>>>>>
>>>>> Please provide the right solution for this issue.
>>>>>
>>>>> Thanks,
>>>>> Shaik
>>>>>
>>>>> On 6 March 2015 at 15:48, Muthu Pandi <[email protected]> wrote:
>>>>>
>>>>>> Could you post the logs of your active NN, or of the NN where you
>>>>>> deployed Ranger?
>>>>>>
>>>>>> Also make sure you have copied the JARs to the respective folders and
>>>>>> restarted the cluster.
>>>>>>
>>>>>>
>>>>>>
>>>>>> *Regards*
>>>>>> *Muthupandi.K*
>>>>>>
>>>>>>  Think before you print.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Mar 6, 2015 at 1:08 PM, Hadoop Solutions <
>>>>>> [email protected]> wrote:
>>>>>>
>>>>>>> Hi Amithsha,
>>>>>>>
>>>>>>> I have deployed the ranger-hdfs-plugin again with the HA NN URL.
>>>>>>>
>>>>>>> But the agents are still not listed under Ranger Agents. I am using HDP 2.2.
>>>>>>>
>>>>>>> Please advise how to resolve this issue.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Shaik
>>>>>>>
>>>>>>> On 6 March 2015 at 14:48, Amith sha <[email protected]> wrote:
>>>>>>>
>>>>>>>> Hi Shaik,
>>>>>>>>
>>>>>>>> The steps below, from the Ranger Guide, describe how to enable the
>>>>>>>> Ranger plugin in a Hadoop HA cluster:
>>>>>>>>
>>>>>>>>
>>>>>>>> To enable Ranger in the HDFS HA environment, an HDFS plugin must be
>>>>>>>> set up on each NameNode and then pointed to the same HDFS repository
>>>>>>>> set up in the Security Manager. Any policies created within that HDFS
>>>>>>>> repository are automatically synchronized to the primary and secondary
>>>>>>>> NameNodes through the installed Apache Ranger plugin. That way, if the
>>>>>>>> primary NameNode fails, the secondary NameNode takes over and the
>>>>>>>> Ranger plugin at that NameNode begins to enforce the same policies for
>>>>>>>> access control.
>>>>>>>>
>>>>>>>> When creating the repository, you must include the fs.default.name of
>>>>>>>> the primary NameNode. If the primary NameNode fails during policy
>>>>>>>> creation, you can then temporarily use the fs.default.name of the
>>>>>>>> secondary NameNode in the repository details to enable directory
>>>>>>>> lookup for policy creation.
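>>>>>>>>
>>>>>>>> As an illustration (hostnames made up), the repository details in the
>>>>>>>> Security Manager would normally carry the primary NameNode's address:
>>>>>>>>
>>>>>>>>   fs.default.name = hdfs://nn1.example.com:8020
>>>>>>>>
>>>>>>>> Only if the primary is down during policy creation would you temporarily
>>>>>>>> switch it to the secondary, e.g. hdfs://nn2.example.com:8020.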
>>>>>>>>
>>>>>>>> Thanks & Regards
>>>>>>>> Amithsha
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Mar 6, 2015 at 12:00 PM, Hadoop Solutions
>>>>>>>> <[email protected]> wrote:
>>>>>>>> > Hi,
>>>>>>>> >
>>>>>>>> > I have installed Ranger from the Git repo, and I have started the
>>>>>>>> Ranger console.
>>>>>>>> >
>>>>>>>> > I am trying to deploy the ranger-hdfs plugin on the active NN, but
>>>>>>>> the plugin agent is unable to contact Ranger.
>>>>>>>> >
>>>>>>>> > Can you please let me know the right procedure for ranger-hdfs
>>>>>>>> plugin
>>>>>>>> > deployment on an HA NN cluster?
>>>>>>>> >
>>>>>>>> >
>>>>>>>> > Regards,
>>>>>>>> > Shaik
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
