Hi Quanlong,

You're running into https://issues.apache.org/jira/browse/HADOOP-7050. I found that JIRA via https://kb.informatica.com/solution/23/Pages/61/510035.aspx (and Google).
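The effect of that bug is easy to reproduce in isolation. Below is a sketch, not the Hadoop code itself: the "hadoop.proxyuser." prefix and the ".users"/".groups" suffixes are assumed to be the values DefaultImpersonationProvider uses, and the class name here is made up for the demo.

```java
import java.util.regex.Pattern;

public class ProxyUserRegexDemo {
    public static void main(String[] args) {
        // Mirrors the regex construction linked below: the config prefix is
        // dot-escaped, then the username portion is matched with [^.]*,
        // which by definition cannot contain a dot.
        String prefixRegEx = "hadoop.proxyuser.".replace(".", "\\.");
        String usersGroupsRegEx = prefixRegEx + "[^.]*("
                + Pattern.quote(".users") + "|" + Pattern.quote(".groups") + ")";
        Pattern p = Pattern.compile(usersGroupsRegEx);

        // A plain username's proxyuser key matches and is picked up:
        System.out.println(p.matcher("hadoop.proxyuser.alice.users").matches());          // true
        // A dotted username's key never matches, so its config is ignored
        // and impersonation is denied:
        System.out.println(p.matcher("hadoop.proxyuser.quanlong.huang.users").matches()); // false
    }
}
```

In other words, the configuration key for a user like quanlong.huang is silently dropped, which is consistent with the "not allowed to impersonate" error quoted below.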
What's surprising to me is that this code has been around in Hadoop for a long time. I think HiveServer2 must have changed: it now invokes the impersonation code, whereas it did not before. Here are some pointers:

testdata/cluster/node_templates/common/etc/hadoop/conf/core-site.xml.tmpl:
    <name>hadoop.proxyuser.${USER}.hosts</name>

https://github.com/apache/hadoop/blob/dc8e3432013153ac11d31d6b462aa96f8ca2c443/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/DefaultImpersonationProvider.java#L78:
    String usersGroupsRegEx = prefixRegEx + "[^.]*(" + Pattern.quote(CONF_USERS) + "|" + Pattern.quote(CONF_GROUPS) + ")";

To my eye, that regular expression looks wrong, and it is what disallows usernames with periods/dots in them. (Once it's loosened, any code that parses these config keys may also need to be fixed.)

-- Philip

On Fri, Mar 30, 2018 at 7:34 PM, Quanlong Huang <huang_quanl...@126.com> wrote:

> I failed to start the minicluster too but encountered another error.
> HiveServer2 failed to launch and kept warning:
>
> 2018-03-30T18:54:05,526 WARN [HiveServer2-Handler-Pool: Thread-49] thrift.ThriftCLIService: Error opening session:
> java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: quanlong.huang is not allowed to impersonate foo
>         at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:89) ~[hive-service-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at org.apache.hive.service.cli.session.HiveSessionProxy.
> access$000(HiveSessionProxy.java:36) ~[hive-service-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63) ~[hive-service-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_121]
>         at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_121]
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962) ~[hadoop-common-3.0.0-cdh6.x-20180302.191654-1.jar:?]
>         at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59) ~[hive-service-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at com.sun.proxy.$Proxy37.open(Unknown Source) ~[?:?]
>         at org.apache.hive.service.cli.session.SessionManager.createSession(SessionManager.java:411) [hive-service-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at org.apache.hive.service.cli.session.SessionManager.openSession(SessionManager.java:363) [hive-service-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at org.apache.hive.service.cli.CLIService.openSessionWithImpersonation(CLIService.java:189) [hive-service-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at org.apache.hive.service.cli.thrift.ThriftCLIService.getSessionHandle(ThriftCLIService.java:423) [hive-service-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:312) [hive-service-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at org.apache.hive.service.rpc.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1377) [hive-exec-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at org.apache.hive.service.rpc.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1362) [hive-exec-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) [hive-exec-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) [hive-exec-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56) [hive-service-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) [hive-exec-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
> Caused by: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: quanlong.huang is not allowed to impersonate foo
>         at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:596) ~[hive-exec-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:539) ~[hive-exec-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at org.apache.hive.service.cli.session.HiveSessionImpl.open(HiveSessionImpl.java:169) ~[hive-service-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_121]
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_121]
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_121]
>         at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_121]
>         at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78) ~[hive-service-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         ... 21 more
> Caused by: org.apache.hadoop.ipc.RemoteException: User: quanlong.huang is not allowed to impersonate foo
>         at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1491) ~[hadoop-common-3.0.0-cdh6.x-20180302.191654-1.jar:?]
>         at org.apache.hadoop.ipc.Client.call(Client.java:1437) ~[hadoop-common-3.0.0-cdh6.x-20180302.191654-1.jar:?]
>         at org.apache.hadoop.ipc.Client.call(Client.java:1347) ~[hadoop-common-3.0.0-cdh6.x-20180302.191654-1.jar:?]
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228) ~[hadoop-common-3.0.0-cdh6.x-20180302.191654-1.jar:?]
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) ~[hadoop-common-3.0.0-cdh6.x-20180302.191654-1.jar:?]
>         at com.sun.proxy.$Proxy31.getFileInfo(Unknown Source) ~[?:?]
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:875) ~[hadoop-hdfs-client-3.0.0-cdh6.x-20180302.192732-2.jar:?]
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_121]
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_121]
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_121]
>         at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_121]
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) ~[hadoop-common-3.0.0-cdh6.x-20180302.191654-1.jar:?]
>         at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) ~[hadoop-common-3.0.0-cdh6.x-20180302.191654-1.jar:?]
>         at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) ~[hadoop-common-3.0.0-cdh6.x-20180302.191654-1.jar:?]
>         at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) ~[hadoop-common-3.0.0-cdh6.x-20180302.191654-1.jar:?]
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) ~[hadoop-common-3.0.0-cdh6.x-20180302.191654-1.jar:?]
>         at com.sun.proxy.$Proxy32.getFileInfo(Unknown Source) ~[?:?]
>         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1643) ~[hadoop-hdfs-client-3.0.0-cdh6.x-20180302.192732-2.jar:?]
>         at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1494) ~[hadoop-hdfs-client-3.0.0-cdh6.x-20180302.192732-2.jar:?]
>         at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1491) ~[hadoop-hdfs-client-3.0.0-cdh6.x-20180302.192732-2.jar:?]
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-3.0.0-cdh6.x-20180302.191654-1.jar:?]
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1506) ~[hadoop-hdfs-client-3.0.0-cdh6.x-20180302.192732-2.jar:?]
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1668) ~[hadoop-common-3.0.0-cdh6.x-20180302.191654-1.jar:?]
>         at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:701) ~[hive-exec-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:640) ~[hive-exec-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:572) ~[hive-exec-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:539) ~[hive-exec-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at org.apache.hive.service.cli.session.HiveSessionImpl.open(HiveSessionImpl.java:169) ~[hive-service-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_121]
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_121]
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_121]
>         at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_121]
>         at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78) ~[hive-service-2.1.1-cdh6.x-SNAPSHOT.jar:2.1.1-cdh6.x-SNAPSHOT]
>         ... 21 more
>
> I just rebased the code and ran ./buildall.sh -format.
> Any thoughts?
>
> Thanks,
> Quanlong
>
>
> At 2018-03-30 13:08:33, "Dimitris Tsirogiannis" <dtsirogian...@cloudera.com> wrote:
> >I enabled full logging in my postgres that hosts the sentry and metastore
> >db and I don't see the table being created. If anyone has gone through the
> >process, can you: a) verify that relation SENTRY_ROLE exists in your
> >sentry_policy db, and b) tell me how many relations are in your policy_db.
> >
> >Thanks
> >Dimitris
> >
> >On Thu, Mar 29, 2018 at 9:32 PM, Dimitris Tsirogiannis <
> >dtsirogian...@cloudera.com> wrote:
> >
> >> Good point. I used -format, which in theory handles both the metastore and
> >> the sentry policy db. The sentry_policy db is created and has some tables
> >> but not the SENTRY_ROLE.
> >>
> >> Dimitris
> >>
> >> On Thu, Mar 29, 2018 at 6:29 PM Jim Apple <jbap...@cloudera.com> wrote:
> >>
> >>> I think I might have once fixed that using
> >>>
> >>> ./buildall.sh -notests -format_metastore -format_sentry_policy_db
> >>>
> >>>
> >>> On Thu, Mar 29, 2018 at 6:15 PM, Dimitris Tsirogiannis <
> >>> dtsirogian...@cloudera.com> wrote:
> >>>
> >>> > I tried rebuilding my minicluster but Sentry refuses to start. I get
> >>> > "ERROR: relation "SENTRY_ROLE" does not exist" in the sentry logs. Does
> >>> > that ring any bells?
> >>> >
> >>> > Thanks
> >>> > Dimitris
> >>> >
> >>> > On Tue, Mar 27, 2018 at 2:50 PM, Philip Zeyliger <phi...@cloudera.com>
> >>> > wrote:
> >>> >
> >>> > > Hi folks,
> >>> > >
> >>> > > I just sent off https://gerrit.cloudera.org/#/c/9743/ and
> >>> > > https://issues.apache.org/jira/browse/IMPALA-4277 for GVD, which
> >>> > > changes the default minicluster to be based on Hadoop 3.0, Hive 2.1,
> >>> > > Sentry 2.0, and so on. This change *will not* be back-ported to 2.x.
> >>> > >
> >>> > > When you pull that change in, you'll need to re-build your minicluster
> >>> > > with, e.g., ./buildall.sh -testdata -format -notests. This will pull
> >>> > > in the new dependencies, format your cluster, and load up all the
> >>> > > data. As you know, it takes 1-2 hours.
> >>> > >
> >>> > > If you want to hold off, you can also set export
> >>> > > IMPALA_MINICLUSTER_PROFILE_OVERRIDE=2 in your environment.
> >>> > >
> >>> > > Note that this choice between versions happens at build time, and
> >>> > > CMake depends on it. So, switching back and forth requires
> >>> > > re-running CMake.
> >>> > >
> >>> > > Please let me know if you run into any trouble. This is a big enough
> >>> > > change that there may be some bumps on the road.
> >>> > >
> >>> > > -- Philip
> >>> > >
> >>> >
> >>>
> >>
> >