[ https://issues.apache.org/jira/browse/FLINK-26827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17512175#comment-17512175 ]
luoyuxia commented on FLINK-26827:
----------------------------------

[~zhushifeng] For hive-exec-2.1.0.jar, this exception is thrown whenever the Hadoop major version isn't 2. Maybe you can try hive-exec-2.3.4.jar instead; it'll be fine if the major version is 3. Hope it can help.

> FlinkSQL and Hive integration error
> -----------------------------------
>
>                 Key: FLINK-26827
>                 URL: https://issues.apache.org/jira/browse/FLINK-26827
>             Project: Flink
>          Issue Type: Bug
>      Components: Table SQL / API
>    Affects Versions: 1.13.3
>        Environment: CDH 6.2.1, Linux, JDK 1.8
>            Reporter: zhushifeng
>            Priority: Major
>         Attachments: image-2022-03-24-09-33-31-786.png
>
>
> Topic: FlinkSQL combined with Hive
>
> *step1:*
> Environment:
> Hive 2.1
> Flink 1.13.3
> Flink CDC 2.1
> CDH 6.2.1
>
> *step2:*
> When I do the following I come across some problems. For example,
> copy the following jars to /flink-1.13.3/lib/:
> // Flink's Hive connector
> flink-connector-hive_2.11-1.13.3.jar
> // Hive dependencies
> hive-exec-2.1.0.jar (here actually hive-exec-2.1.1-cdh6.2.1.jar)
> // add antlr-runtime if you need to use the Hive dialect
> antlr-runtime-3.5.2.jar
> !image-2022-03-24-09-33-31-786.png!
>
> *step3:* restart the Flink cluster
> # ./start-cluster.sh
> # Starting cluster.
> # Starting standalonesession daemon on host xuehai-cm.
> # Starting taskexecutor daemon on host xuehai-cm.
> # Starting taskexecutor daemon on host xuehai-nn.
> # Starting taskexecutor daemon on host xuehai-dn.
>
> *step4:*
> CREATE CATALOG myhive WITH (
>     'type' = 'hive',
>     'default-database' = 'default',
>     'hive-conf-dir' = '/etc/hive/conf'
> );
> -- set the HiveCatalog as the current catalog of the session
> USE CATALOG myhive;
>
> *step5:* use Hive
> Flink SQL> select * from rptdata.basic_xhsys_user;
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: Unexpected exception. This is a bug. Please consider filing an issue.
> at org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:201)
> at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161)
> Caused by: java.lang.ExceptionInInitializerError
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at org.apache.flink.connectors.hive.HiveSourceFileEnumerator.createMRSplits(HiveSourceFileEnumerator.java:94)
> at org.apache.flink.connectors.hive.HiveSourceFileEnumerator.createInputSplits(HiveSourceFileEnumerator.java:71)
> at org.apache.flink.connectors.hive.HiveTableSource.lambda$getDataStream$1(HiveTableSource.java:212)
> at org.apache.flink.connectors.hive.HiveParallelismInference.logRunningTime(HiveParallelismInference.java:107)
> at org.apache.flink.connectors.hive.HiveParallelismInference.infer(HiveParallelismInference.java:95)
> at org.apache.flink.connectors.hive.HiveTableSource.getDataStream(HiveTableSource.java:207)
> at org.apache.flink.connectors.hive.HiveTableSource$1.produceDataStream(HiveTableSource.java:123)
> at org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecTableSourceScan.translateToPlanInternal(CommonExecTableSourceScan.java:96)
> at org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:134)
> at org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:247)
> at org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.java:114)
> at org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:134)
> at org.apache.flink.table.planner.delegation.StreamPlanner.$anonfun$translateToPlan$1(StreamPlanner.scala:70)
> at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
> at scala.collection.Iterator.foreach(Iterator.scala:937)
> at scala.collection.Iterator.foreach$(Iterator.scala:937)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
> at scala.collection.IterableLike.foreach(IterableLike.scala:70)
> at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at scala.collection.TraversableLike.map(TraversableLike.scala:233)
> at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:69)
> at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:165)
> at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1518)
> at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeQueryOperation(TableEnvironmentImpl.java:791)
> at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1225)
> at org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeOperation$3(LocalExecutor.java:213)
> at org.apache.flink.table.client.gateway.context.ExecutionContext.wrapClassLoader(ExecutionContext.java:90)
> at org.apache.flink.table.client.gateway.local.LocalExecutor.executeOperation(LocalExecutor.java:213)
> at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQuery(LocalExecutor.java:235)
> at org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:479)
> at org.apache.flink.table.client.cli.CliClient.callOperation(CliClient.java:412)
> at org.apache.flink.table.client.cli.CliClient.lambda$executeStatement$0(CliClient.java:327)
> at java.util.Optional.ifPresent(Optional.java:159)
> at org.apache.flink.table.client.cli.CliClient.executeStatement(CliClient.java:327)
> at org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:297)
> at org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:221)
> at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:151)
> at org.apache.flink.table.client.SqlClient.start(SqlClient.java:95)
> at org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187)
> ... 1 more
> Caused by: java.lang.RuntimeException: java.lang.IllegalArgumentException: Unrecognized Hadoop major version number: 3.0.0-cdh6.2.1
> at org.apache.hadoop.hive.shims.ShimLoader.getHadoopShims(ShimLoader.java:102)
> at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.<clinit>(OrcInputFormat.java:161)
> ... 45 more
> Caused by: java.lang.IllegalArgumentException: Unrecognized Hadoop major version number: 3.0.0-cdh6.2.1
> at org.apache.hadoop.hive.shims.ShimLoader.getMajorVersion(ShimLoader.java:177)
> at org.apache.hadoop.hive.shims.ShimLoader.loadShims(ShimLoader.java:144)
> at org.apache.hadoop.hive.shims.ShimLoader.getHadoopShims(ShimLoader.java:99)
> ... 46 more
> Shutting down the session...
> done.


-- This message was sent by Atlassian Jira (v8.20.1#820001)
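The root cause in the trace is `ShimLoader.getMajorVersion`, which takes the Hadoop version string and dispatches on the part before the first dot; hive-exec-2.1.x only recognizes major version 2, so CDH 6's Hadoop 3.0.0-cdh6.2.1 falls through to the `IllegalArgumentException` above. A minimal shell sketch of that dispatch (an illustration of the logic, not Hive's actual Java code; the version string is the one from this report):

```shell
# What VersionInfo.getVersion() reports on the reporter's CDH 6.2.1 cluster.
hadoop_version="3.0.0-cdh6.2.1"

# Take everything before the first dot, as the shim loader's major-version check does.
major="${hadoop_version%%.*}"

case "$major" in
  2) echo "ok: hive-exec-2.1.x has shims for Hadoop ${major}.x" ;;
  *) echo "Unrecognized Hadoop major version number: ${hadoop_version}" ;;
esac
```

Per the comment above, the fix is to remove the hive-exec-2.1.x jar from /flink-1.13.3/lib/, drop in hive-exec-2.3.4.jar (whose shim loader also accepts major version 3), and restart the cluster with ./start-cluster.sh.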