This problem also exists on the latest master branch. I filed an Issue describing the whole flow: https://issues.apache.org/jira/browse/FLINK-19995
hl9...@126.com <hl9...@126.com> wrote on Mon, Sep 28, 2020 at 18:06:
> I retried following your suggestion and got another error:
>
> Flink SQL> CREATE TABLE tx (
>     account_id BIGINT,
>     amount BIGINT,
>     transaction_time TIMESTAMP(3),
>     WATERMARK FOR transaction_time AS transaction_time - INTERVAL '5' SECOND
> ) WITH (
>     'connector.type' = 'kafka',
>     'connector.version' = 'universal',
>     'connector.topic' = 'heli01',
>     'connector.properties.group.id' = 'heli-test',
>     'connector.properties.bootstrap.servers' = '10.100.51.56:9092',
>     'connector.startup-mode' = 'earliest-offset',
>     'format.type' = 'csv'
> );
> [INFO] Table has been created.
>
> Flink SQL> show tables;
> tx
>
> Flink SQL> select * from tx;
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.kafka.shaded.org.apache.kafka.common.KafkaException:
> org.apache.kafka.common.serialization.ByteArrayDeserializer is not an
> instance of
> org.apache.flink.kafka.shaded.org.apache.kafka.common.serialization.Deserializer
>
> Attached: list of jars in lib
> [test@rcx51101 lib]$ pwd
> /opt/flink-1.10.2/lib
>
> flink-csv-1.10.2.jar
> flink-dist_2.12-1.10.2.jar
> flink-jdbc_2.12-1.10.2.jar
> flink-json-1.10.2.jar
> flink-shaded-hadoop-2-uber-2.6.5-10.0.jar
> flink-sql-connector-kafka_2.11-1.10.2.jar
> flink-table_2.12-1.10.2.jar
> flink-table-blink_2.12-1.10.2.jar
> log4j-1.2.17.jar
> mysql-connector-java-5.1.48.jar
> slf4j-log4j12-1.7.15.jar
>
> hl9...@126.com
>
> From: Leonard Xu
> Sent: 2020-09-28 16:36
> To: user-zh
> Subject: Re: sql-cli执行sql报错
>
> Hi,
> Benchao's reply is correct. When using the SQL client, you do not need the
> DataStream connector jars; the SQL connector jar
> flink-*sql*-connector-kafka***.jar alone is enough, so remove the other
> jars you added:
>
> > Related lib jars:
> > flink-connector-kafka_2.12-1.10.2.jar
> > kafka-clients-0.11.0.3.jar
>
> Best,
> Leonard
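Leonard's advice boils down to removing the DataStream connector jar and the standalone kafka-clients jar so that only the fat SQL connector jar remains. A minimal sketch of that cleanup follows; it simulates the `lib/` directory in a temp folder rather than touching a real `/opt/flink-1.10.2/lib`, and the jar names are taken from the thread:

```shell
# Sketch only: mimic the lib/ directory from the thread in a temp dir.
LIB=$(mktemp -d)
touch "$LIB/flink-sql-connector-kafka_2.11-1.10.2.jar" \
      "$LIB/flink-connector-kafka_2.12-1.10.2.jar" \
      "$LIB/kafka-clients-0.11.0.3.jar"

# Remove the DataStream connector jar and the raw kafka-clients jar;
# the flink-sql-connector-kafka fat jar already bundles a shaded Kafka client,
# and keeping both is what triggers the "not an instance of ... Deserializer" clash.
rm -f "$LIB/flink-connector-kafka_2.12-1.10.2.jar" \
      "$LIB/kafka-clients-0.11.0.3.jar"

ls "$LIB"
```

On a real installation you would run the `rm` against the actual `lib/` directory and then restart the SQL client so the classpath is rebuilt.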