Hi Nicolas,

Do you see in the NameNode logs that the Ranger HDFS plugin is able to pull
the policies from the Ranger Admin API? (With the HDFS plugin it is the
NameNode, not Spark, that downloads the policies.)
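
One quick way to check is the local policy cache the plugin writes on the
NameNode host. A minimal sketch, assuming the default cache location
(/etc/ranger/<service>/policycache/ -- the exact path depends on your install):

    # Sketch: print what the Ranger HDFS plugin has cached locally.
    # The glob assumes the default cache directory; adjust to your install.
    import glob, json

    for path in glob.glob("/etc/ranger/*/policycache/*.json"):
        with open(path) as f:
            cache = json.load(f)
        print(path, "policyVersion:", cache.get("policyVersion"))
        for policy in cache.get("policies") or []:
            print("  -", policy.get("name"), policy.get("resources"))

If that cache is missing or stale, the plugin never received the policies,
which would be consistent with the POSIX-only behaviour you describe.
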
Either way, could you please share the policies you defined in Ranger for
your user?
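
To share them, you can dump the policies of the HDFS service from the Ranger
Admin REST API. A minimal sketch (host, credentials and serviceName are
placeholders; the public v2 API should be available in 0.7):

    # Sketch: dump the Ranger policies defined for the HDFS service.
    # URL, credentials and serviceName below are placeholders.
    import base64, json, urllib.request

    url = ("http://ranger-admin.example.com:6080"
           "/service/public/v2/api/policy?serviceName=hdfs_cluster")
    req = urllib.request.Request(url)
    token = base64.b64encode(b"admin:admin").decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        print(json.dumps(json.load(resp), indent=2))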

Thanks,


Loïc CHANEL
System Big Data engineer
Vision 360 Degrés (Lyon, France)


On Mon, Mar 9, 2020 at 13:41, Nicolas Paris <nicolas.pa...@riseup.net> wrote:

> Hello,
>
> I am using Apache Ranger 0.7.0. AFAIK, I cannot trivially upgrade to a
> newer version.
>
> I have set up the Hive plugin, and both audit and authorisation work
> fine.
>
> I tried to set up the HDFS plugin on a kerberised HDFS cluster. Most of
> the setup works fine:
> - the connection test to HDFS is successful
> - I can see the HDFS audit JSON populating the HDFS folder
> - I also added some rules (in particular, read access for user nicolas
> on the folder `/tmp/test`)
>
> However, the Ranger rules do not seem to apply: the HDFS POSIX
> permissions (or the HDFS ACLs) still take effect instead.
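>
> For reference, the access that triggers the failure is a plain Spark read
> of the folder. A rough sketch (my shorthand, not the literal job):
>
>     # Minimal PySpark read against the protected folder; listing the
>     # directory triggers the READ_EXECUTE check shown in the trace below.
>     from pyspark.sql import SparkSession
>
>     spark = SparkSession.builder.appName("ranger-test").getOrCreate()
>     spark.read.text("hdfs:///tmp/test").show()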
>
> Here is a typical stack trace I get from Apache Spark when I ask for
> access to an HDFS folder (the RangerHdfsAuthorizer frames show the plugin
> is active):
>
> > Caused by: org.apache.hadoop.ipc.RemoteException: Permission denied: user=nicolas, access=READ_EXECUTE, inode="/tmp/test":hdfs:hdfs:drwxrwx---
> >       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:353)
> >       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:252)
> >       at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkDefaultEnforcer(RangerHdfsAuthorizer.java:427)
> >       at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:303)
> >       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
> >       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1950)
> >       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1934)
> >       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1908)
> >       at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getListingInt(FSDirStatAndListingOp.java:77)
> >       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4820)
> >       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:1124)
> >       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:657)
> >       at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> >       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
> >       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
> >       at java.security.AccessController.doPrivileged(Native Method)
> >       at javax.security.auth.Subject.doAs(Subject.java:422)
> >       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
> >       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
>
> Thanks for your help,
>
> --
> nicolas paris
>
