Hi all,

               I solved this error by changing the dfs.permissions property
in hdfs-site.xml from "true" to "false". Is this the correct fix? After this
change, Hive authorized the Ranger users.

<property>
        <name>dfs.permissions</name>
        <value>false</value>
</property>
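
Note that a change to hdfs-site.xml only takes effect after the NameNode is
restarted. A minimal sketch, assuming a plain Apache Hadoop install managed
with the bundled sbin scripts (adjust if you use Ambari or another cluster
manager):

# restart HDFS so the NameNode picks up the new hdfs-site.xml
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh

# then verify ownership and mode of the warehouse directory
hadoop fs -ls /user/hive/warehouse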

Thanks
Mahesh.S

On Fri, Jan 16, 2015 at 12:45 PM, Mahesh Sankaran <[email protected]>
wrote:

> Hi all,
>       I am configuring the ranger-hive plugin. The Hive agent is created
> and auditing works. I created a policy granting select, update, create,
> and drop permissions on the database named mahesh to the user admin. When
> I try to create a table, the Ranger audit log (Audit --> Access) shows
> access type CREATE with result "allowed", but in HiveServer2 I get the
> following error. When I change the owner of "/user/hive/warehouse/mahesh.db"
> to admin (hadoop fs -chown -R admin:admin /user/hive/warehouse/mahesh.db),
> it works. It seems Hive does not authorize the Ranger user. Kindly help me
> solve this problem.
>
> 0: jdbc:hive2://10.10.10.63:10000> create table t2 (id int);
> Error: Error while processing statement: FAILED: Execution Error, return
> code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
> MetaException(message:Got exception:
> org.apache.hadoop.security.AccessControlException Permission denied:
> user=admin, access=WRITE, inode=:hadoop2:supergroup:drwxr-xr-x
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6512)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6494)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6446)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4248)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4218)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
> ) (state=08S01,code=1)
> 0: jdbc:hive2://10.10.10.63:10000>
>
> Regards
> Mahesh.S
>