bluzy opened a new issue, #6750:
URL: https://github.com/apache/iceberg/issues/6750

   ### Apache Iceberg version
   
   1.1.0 (latest release)
   
   ### Query engine
   
   Hive
   
   ### Please describe the bug 🐞
   
   We provide HiveServer2 for queries against Iceberg tables.
   Impersonation is enabled, and each user has their own permissions to access tables.
   
   Problem:
   Sometimes a `Failed to get table info from metastore` error occurs for valid users.
   
   I found related logs in HiveServer2.
   The username in this log is different from the requesting user.
   
   ```
   Caused by: org.apache.hadoop.hive.metastore.api.MetaException: java.security.AccessControlException: Permission denied: user=****, access=EXECUTE, inode=****------
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:315)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:242)
        at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkDefaultEnforcer(RangerHdfsAuthorizer.java:589)
        at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:377)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1852)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1836)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1786)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAccess(FSNamesystem.java:7800)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.checkAccess(NameNodeRpcServer.java:2217)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.checkAccess(ClientNamenodeProtocolServerSideTranslatorPB.java:1659)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)

        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_req_result$get_table_req_resultStandardScheme.read(ThriftHiveMetastore.java) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_req_result$get_table_req_resultStandardScheme.read(ThriftHiveMetastore.java) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_req_result.read(ThriftHiveMetastore.java) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_table_req(ThriftHiveMetastore.java:2133) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_table_req(ThriftHiveMetastore.java:2120) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1674) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1666) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at sun.reflect.GeneratedMethodAccessor239.invoke(Unknown Source) ~[?:?]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_112]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:208) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at com.sun.proxy.$Proxy113.getTable(Unknown Source) ~[?:?]
        at org.apache.iceberg.hive.HiveTableOperations.lambda$doRefresh$0(HiveTableOperations.java:193) ~[iceberg-hive-runtime-0.14.0.jar:?]
        at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:58) ~[iceberg-hive-runtime-0.14.0.jar:?]
        at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:51) ~[iceberg-hive-runtime-0.14.0.jar:?]
        at org.apache.iceberg.hive.CachedClientPool.run(CachedClientPool.java:76) ~[iceberg-hive-runtime-0.14.0.jar:?]
        at org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:193) ~[iceberg-hive-runtime-0.14.0.jar:?]
        at org.apache.iceberg.BaseMetastoreTableOperations.refresh(BaseMetastoreTableOperations.java:96) ~[iceberg-hive-runtime-0.14.0.jar:?]
        at org.apache.iceberg.BaseMetastoreTableOperations.current(BaseMetastoreTableOperations.java:79) ~[iceberg-hive-runtime-0.14.0.jar:?]
        at org.apache.iceberg.BaseMetastoreCatalog.loadTable(BaseMetastoreCatalog.java:44) ~[iceberg-hive-runtime-0.14.0.jar:?]
        at org.apache.iceberg.mr.Catalogs.loadTable(Catalogs.java:115) ~[iceberg-hive-runtime-0.14.0.jar:?]
        at org.apache.iceberg.mr.Catalogs.loadTable(Catalogs.java:105) ~[iceberg-hive-runtime-0.14.0.jar:?]
        at org.apache.iceberg.mr.hive.HiveIcebergStorageHandler.overlayTableProperties(HiveIcebergStorageHandler.java:254) ~[iceberg-hive-runtime-0.14.0.jar:?]
        at org.apache.iceberg.mr.hive.HiveIcebergStorageHandler.configureInputJobProperties(HiveIcebergStorageHandler.java:87) ~[iceberg-hive-runtime-0.14.0.jar:?]
        at org.apache.hadoop.hive.ql.plan.PlanUtils.configureJobPropertiesForStorageHandler(PlanUtils.java:928) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.plan.PlanUtils.configureInputJobPropertiesForStorageHandler(PlanUtils.java:897) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.plan.PartitionDesc.PartitionDescConstructorHelper(PartitionDesc.java:126) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.plan.PartitionDesc.<init>(PartitionDesc.java:86) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.exec.Utilities.getPartitionDesc(Utilities.java:790) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.optimizer.GenMapRedUtils.setMapWork(GenMapRedUtils.java:520) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.parse.GenTezUtils.setupMapWork(GenTezUtils.java:206) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.parse.GenTezUtils.createMapWork(GenTezUtils.java:185) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.parse.GenTezWork.process(GenTezWork.java:128) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.parse.GenTezWorkWalker.walk(GenTezWorkWalker.java:90) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.parse.GenTezWorkWalker.walk(GenTezWorkWalker.java:109) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.parse.GenTezWorkWalker.walk(GenTezWorkWalker.java:109) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.parse.GenTezWorkWalker.walk(GenTezWorkWalker.java:109) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.parse.GenTezWorkWalker.walk(GenTezWorkWalker.java:109) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.parse.GenTezWorkWalker.walk(GenTezWorkWalker.java:109) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.parse.GenTezWorkWalker.startWalking(GenTezWorkWalker.java:72) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.parse.TezCompiler.generateTaskTree(TezCompiler.java:594) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:245) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12448) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:360) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:289) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:664) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1869) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1816) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1811) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126) ~[hive-exec-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:197) ~[hive-service-3.1.0.3.1.0-6.jar:3.1.0.3.1.0-6]
        ... 47 more
   ```
   
   Looking into the Iceberg code, I suspect that a cached `RetryingMetaStoreClient` is being reused with a previous user's identity.
   Is that possible?
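
   For illustration, here is a minimal, hypothetical Java sketch of the suspected failure mode (the names `PooledClient` and `clientFor` are my own, not Iceberg's API): if a client pool is keyed only on connection settings such as the metastore URI, and not on the calling user, the second user receives a client that was created under the first user's identity.
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   
   // Hypothetical sketch, NOT Iceberg code: a pool keyed only by metastore URI.
   public class ClientPoolSketch {
       // Stands in for a pooled metastore client; remembers who it was created as.
       static final class PooledClient {
           final String createdAsUser;
           PooledClient(String user) { this.createdAsUser = user; }
       }
   
       // Cache key ignores the caller entirely -- only the URI matters.
       static final Map<String, PooledClient> pool = new HashMap<>();
   
       // Returns the cached client for this URI, creating it under the
       // *current* caller's identity only on the first request.
       static PooledClient clientFor(String metastoreUri, String currentUser) {
           return pool.computeIfAbsent(metastoreUri, uri -> new PooledClient(currentUser));
       }
   
       public static void main(String[] args) {
           // userA's query creates the client under userA's identity.
           PooledClient a = clientFor("thrift://metastore:9083", "userA");
           // userB's query hits the cache and gets the very same client.
           PooledClient b = clientFor("thrift://metastore:9083", "userB");
           System.out.println(a == b);          // same pooled instance
           System.out.println(b.createdAsUser); // still bound to userA
       }
   }
   ```
   
   If the real pool behaves this way under impersonation, it would explain why the `Permission denied` log shows a different username than the requesting user.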


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
