[ https://issues.apache.org/jira/browse/RANGER-3058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uma Maheswara Rao G updated RANGER-3058:
----------------------------------------
    Attachment: 0001-RANGER-3058-ranger-hive-create-table-fails-with-ViewHDFS.patch

> [ranger-hive] create table fails when ViewDFS (client-side HDFS mounting fs) mount points target Ozone/S3 FS
> ----------------------------------------------------------------------------------------------------------------------
>
>                 Key: RANGER-3058
>                 URL: https://issues.apache.org/jira/browse/RANGER-3058
>             Project: Ranger
>          Issue Type: Bug
>          Components: plugins, Ranger
>    Affects Versions: 2.1.0
>            Reporter: Uma Maheswara Rao G
>            Assignee: Uma Maheswara Rao G
>            Priority: Major
>             Fix For: 2.1.1
>
>         Attachments: 0001-RANGER-3058-ranger-hive-create-table-fails-with-ViewHDFS.patch
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently RangerHiveAuthorizer has separate logic flows for HDFS and S3/Ozone.
> If the fs scheme is part of hivePlugin#getFSScheme [1], then the authorizer goes on to check privileges via the filesystem.
> [1] private static String RANGER_PLUGIN_HIVE_ULRAUTH_FILESYSTEM_SCHEMES_DEFAULT = "hdfs:,file:";
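>
> For illustration, a minimal sketch of how such a scheme allow-list check can be modeled (the method name isSchemeAuthorizedViaFS and its standalone shape are my assumptions; only the default list "hdfs:,file:" comes from the constant above):
> {code:java}
> // Sketch (assumed shape, not the actual Ranger code): decide whether a URI's
> // scheme is in the allow-list of filesystems Ranger authorizes via the fs itself.
> private static final String FS_SCHEMES_DEFAULT = "hdfs:,file:";
>
> static boolean isSchemeAuthorizedViaFS(String uri) {
>     for (String scheme : FS_SCHEMES_DEFAULT.split(",")) {
>         // e.g. "hdfs://ns1/test" starts with "hdfs:", so it takes the fs-check path
>         if (uri.startsWith(scheme)) {
>             return true;
>         }
>     }
>     return false;
> }
> {code}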
> The flow then reaches the following code piece:
> {code:java}
> if (!isURIAccessAllowed(user, permission, path, fs)) {
>     throw new HiveAccessControlException(String.format(
>             "Permission denied: user [%s] does not have [%s] privilege on [%s]",
>             user, permission.name(), path));
> }
> continue;
> {code}
>  
> But when we have paths mounted to another fs, like Ozone, the current path will be an HDFS-based path while in reality that path is an Ozone fs path; that resolution happens later inside the mount fs. At that time fs#access will be called to check permissions. Currently the access API is implemented only in HDFS: once the resolution happens, the call will be delegated to OzoneFs, but OzoneFS has not implemented the access API.
> So the default abstract FileSystem implementation just checks that the file permissions match the expected mode.
> Here the expected action mode for createTable is ALL (rwx), but Ozone/S3 paths will not have rwx permissions on keys. So it will fail.
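>
> For reference, that default fallback compares only the stored permission bits against the requested mode, roughly as below (a paraphrased sketch of the behavior of org.apache.hadoop.fs.FileSystem's default access() path, not the verbatim Hadoop source):
> {code:java}
> import java.io.IOException;
> import java.util.Arrays;
>
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.permission.FsAction;
> import org.apache.hadoop.fs.permission.FsPermission;
> import org.apache.hadoop.security.AccessControlException;
> import org.apache.hadoop.security.UserGroupInformation;
>
> // Paraphrased sketch: the abstract FileSystem has no real access() support, so it
> // just matches rwx bits. Ozone/S3 keys do not carry rwx bits, so ALL (rwx) fails.
> static void checkAccessPermissions(FileStatus stat, FsAction mode)
>         throws AccessControlException, IOException {
>     FsPermission perm = stat.getPermission();
>     UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
>     String user = ugi.getShortUserName();
>     if (user.equals(stat.getOwner())) {
>         if (perm.getUserAction().implies(mode)) {
>             return;
>         }
>     } else if (Arrays.asList(ugi.getGroupNames()).contains(stat.getGroup())) {
>         if (perm.getGroupAction().implies(mode)) {
>             return;
>         }
>     } else if (perm.getOtherAction().implies(mode)) {
>         return;
>     }
>     throw new AccessControlException(String.format(
>             "Permission denied: user=%s, path=\"%s\":%s", user, stat.getPath(), perm));
> }
> {code}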
> {code}
> 0: jdbc:hive2://umag-1.umag.root.xxx.site:218> CREATE EXTERNAL TABLE testtable1 (order_id BIGINT, user_id STRING, item STRING, state STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE LOCATION '/test';
> Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [systest] does not have [ALL] privilege on [hdfs://ns1/test] (state=42000,code=40000)
> 0: jdbc:hive2://umag-1.umag.root.xxx.site:218>
> {code}
> My mount point on HDFS is configured as follows:
> fs.viewfs.mounttable.ns1.link./test --> o3fs://bucket.volume.ozone1/test
> hdfs://ns1/test will be resolved to o3fs://bucket.volume.ozone1/test.
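>
> This resolution can be reproduced programmatically; the sketch below assumes the mount key from the config above and a client set up for ViewHDFS (i.e. fs.hdfs.impl pointing at ViewDistributedFileSystem):
> {code:java}
> import java.net.URI;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class MountResolutionDemo {
>     public static void main(String[] args) throws Exception {
>         Configuration conf = new Configuration();
>         // Mount link from the description; with ViewHDFS the hdfs://ns1 client
>         // transparently follows it (assumed cluster/config, for illustration only).
>         conf.set("fs.viewfs.mounttable.ns1.link./test", "o3fs://bucket.volume.ozone1/test");
>
>         FileSystem fs = FileSystem.get(URI.create("hdfs://ns1/"), conf);
>         // resolvePath() follows the mount link; the result is the o3fs path, whose
>         // FileSystem (OzoneFileSystem) has no access() override of its own.
>         Path resolved = fs.resolvePath(new Path("/test"));
>         System.out.println(resolved); // expected: o3fs://bucket.volume.ozone1/test
>     }
> }
> {code}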
> So checkPrivileges will fail:
> {code:java}
> Caused by: org.apache.hadoop.hive.ql.security.authorization.plugin.HiveAccessControlException: Permission denied: user [systest] does not have [ALL] privilege on [hdfs://ns1/test]
>       at org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizer.checkPrivileges(RangerHiveAuthorizer.java:810) ~[?:?]
>       at org.apache.hadoop.hive.ql.security.authorization.command.CommandAuthorizerV2.doAuthorization(CommandAuthorizerV2.java:77) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
>       at org.apache.hadoop.hive.ql.security.authorization.command.CommandAuthorizer.doAuthorization(CommandAuthorizer.java:58) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
>       at org.apache.hadoop.hive.ql.Compiler.authorize(Compiler.java:406) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
>       at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:109) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
>       at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:188) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
>       at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:600) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
>       at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:546) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
>       at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:540) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
>       at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:127) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
>       at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:199) ~[hive-service-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
>       ... 15 more
> {code}
> I will add more trace details in the comments.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
