Please confirm that the standalone cluster and the SQL client are running off the same flink/blink binaries.
As I recall, a mismatch between the two can cause some strange problems.
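
A quick way to verify (a sketch; the install path is copied from the ps output quoted further down, and every JobManager/TaskManager host should report the same hash):

md5sum /bigdata/flink-1.5.1/lib/flink-dist_2.11-1.5.1.jar
# run the same command on every host in the standalone cluster; all hashes should match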


On Mon, Feb 25, 2019 at 9:56 AM 张洪涛 <hongtao12...@163.com> wrote:

>
>
> The sql-client.sh launch command already includes the Kafka-related jars on the
> classpath, and it additionally passes every connector jar via --jar.
>
>
> These jars are uploaded to the cluster's blob store when sql-client submits the
> job, so it is strange that the class still cannot be found.
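>
> One way to confirm this on the TaskManager side (a sketch; the blobStore-* naming
> and the /tmp default for blob.storage.directory are assumptions, check your
> flink-conf.yaml):
>
> for f in /tmp/blobStore-*/job_*/blob_*; do
>   unzip -l "$f" 2>/dev/null | grep -q Crc32C && echo "Crc32C found in $f"
> done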
>
>
>  00:00:06 /usr/lib/jvm/java-1.8.0-openjdk/bin/java
> -Dlog.file=/bigdata/flink-1.5.1/log/flink-root-sql-client-gpu06.log
> -Dlog4j.configuration=file:/bigdata/flink-1.5.1/conf/log4j-cli.properties
> -Dlogback.configurationFile=file:/bigdata/flink-1.5.1/conf/logback.xml
> -classpath
> /bigdata/flink-1.5.1/lib/flink-python_2.11-1.5.1.jar:/bigdata/flink-1.5.1/lib/flink-shaded-hadoop2-uber-1.5.1.jar:/bigdata/flink-1.5.1/lib/log4j-1.2.17.jar:/bigdata/flink-1.5.1/lib/slf4j-log4j12-1.7.7.jar:/bigdata/flink-1.5.1/lib/flink-dist_2.11-1.5.1.jar::/bigdata/hadoop-2.7.5/etc/hadoop::/bigdata/flink-1.5.1/opt/connectors/kafka011/flink-connector-kafka-0.11_2.11-1.5.1-sql-jar.jar:/bigdata/flink-1.5.1/opt/connectors/kafka010/flink-connector-kafka-0.10_2.11-1.5.1-sql-jar.jar:/bigdata/flink-1.5.1/opt/connectors/kafka09/flink-connector-kafka-0.9_2.11-1.5.1-sql-jar.jar:/bigdata/flink-1.5.1/opt/connectors/kafka08/flink-connector-kafka-0.8_2.11-1.5.1.jar:/bigdata/flink-1.5.1/opt/connectors/flink-hbase_2.11-1.5.1.jar:/bigdata/flink-1.5.1/opt/connectors/flink-connector-hadoop-compatibility_2.11-1.5.1.jar:/bigdata/flink-1.5.1/opt/connectors/flink-connector-hive_2.11-1.5.1.jar:/bigdata/flink-1.5.1/opt/sql-client/datanucleus-api-jdo-4.2.4.jar:/bigdata/flink-1.5.1/opt/sql-client/javax.jdo-3.2.0-m3.jar:/bigdata/flink-1.5.1/opt/sql-client/datanucleus-core-4.1.17.jar:/bigdata/flink-1.5.1/opt/sql-client/datanucleus-rdbms-4.1.19.jar:/bigdata/flink-1.5.1/opt/sql-client/flink-sql-client-1.5.1.jar
> org.apache.flink.table.client.SqlClient embedded -d
> conf/sql-client-defaults.yaml --jar
> /bigdata/flink-1.5.1/opt/connectors/kafka011/flink-connector-kafka-0.11_2.11-1.5.1-sql-jar.jar
> --jar
> /bigdata/flink-1.5.1/opt/connectors/kafka010/flink-connector-kafka-0.10_2.11-1.5.1-sql-jar.jar
> --jar
> /bigdata/flink-1.5.1/opt/connectors/kafka09/flink-connector-kafka-0.9_2.11-1.5.1-sql-jar.jar
> --jar
> /bigdata/flink-1.5.1/opt/connectors/kafka08/flink-connector-kafka-0.8_2.11-1.5.1.jar
> --jar /bigdata/flink-1.5.1/opt/connectors/flink-hbase_2.11-1.5.1.jar --jar
> /bigdata/flink-1.5.1/opt/connectors/flink-connector-hadoop-compatibility_2.11-1.5.1.jar
> --jar
> /bigdata/flink-1.5.1/opt/connectors/flink-connector-hive_2.11-1.5.1.jar
> --jar /bigdata/flink-1.5.1/opt/sql-client/flink-sql-client-1.5.1.jar
>
>
>
>
>
>
> On 2019-02-22 19:32:18, "Becket Qin" <becket....@gmail.com> wrote:
> >Could you take a look at the arguments the running sql-client.sh process was
> >started with? Concretely:
> >
> >run sql-client.sh
> >ps aux | grep sql-client
> >
> >and check whether the flink-connector-kafka-0.11 jar is among them.
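> >
> >A one-liner to pull just the --jar arguments out of the process list (a sketch):
> >
> >ps aux | grep SqlClient | grep -o -e '--jar [^ ]*'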
> >
> >Jiangjie (Becket) Qin
> >
> >On Fri, Feb 22, 2019 at 6:54 PM 张洪涛 <hongtao12...@163.com> wrote:
> >
> >>
> >>
> >> Yes, it does contain the class:
> >>
> >>
> >> jar -tf flink-connector-kafka-0.11_2.11-*.jar | grep Crc32C
> >>
> >> org/apache/flink/kafka011/shaded/org/apache/kafka/common/utils/Crc32C$1.class
> >> org/apache/flink/kafka011/shaded/org/apache/kafka/common/utils/Crc32C$ChecksumFactory.class
> >> org/apache/flink/kafka011/shaded/org/apache/kafka/common/utils/Crc32C$Java9ChecksumFactory.class
> >> org/apache/flink/kafka011/shaded/org/apache/kafka/common/utils/Crc32C$PureJavaChecksumFactory.class
> >> org/apache/flink/kafka011/shaded/org/apache/kafka/common/utils/Crc32C.class
> >> org/apache/flink/kafka011/shaded/org/apache/kafka/common/utils/PureJavaCrc32C.class
> >>
> >>
> >>
> >>
> >>
> >>
> >> On 2019-02-22 18:03:18, "Zhenghua Gao" <doc...@gmail.com> wrote:
> >> >Could you check whether the corresponding jar actually contains this class?
> >> >For example (assuming your blink installation is under /tmp/blink):
> >> >
> >> >cd /tmp/blink/opt/connectors/kafka011
> >> >jar -tf flink-connector-kafka-0.11_2.11-*.jar | grep Crc32C
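> >> >
> >> >To check all the kafka connector jars in one pass (a sketch over the same layout):
> >> >
> >> >for j in /tmp/blink/opt/connectors/kafka*/flink-connector-kafka-*.jar; do
> >> >  echo "$j"; jar -tf "$j" | grep -c Crc32C
> >> >done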
> >> >
> >> >On Fri, Feb 22, 2019 at 2:56 PM 张洪涛 <hongtao12...@163.com> wrote:
> >> >
> >> >>
> >> >>
> >> >> Hi all!
> >> >>
> >> >>
> >> >> I am testing the Blink SQL client Kafka sink connector, but writes are
> >> >> failing. Here are my steps.
> >> >>
> >> >>
> >> >> Environment:
> >> >> blink standalone mode
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> 1. Configure the environment and start the SQL client
> >> >>
> >> >>
> >> >> 2. Create the Kafka sink table:
> >> >> CREATE TABLE kafka_sink (
> >> >>    messageKey VARBINARY,
> >> >>    messageValue VARBINARY,
> >> >>    PRIMARY KEY (messageKey)
> >> >> ) with (
> >> >>    type = 'KAFKA011',
> >> >>    topic = 'sink-topic',
> >> >>    `bootstrap.servers` = '172.19.0.108:9092',
> >> >>    retries = '3'
> >> >> );
> >> >>
> >> >>
> >> >> 3. Create and execute the query:
> >> >> INSERT INTO kafka_sink
> >> >> SELECT CAST('123' AS VARBINARY) AS key,
> >> >> CAST(CONCAT_WS(',', 'HELLO', 'WORLD') AS VARBINARY) AS msg;
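> >> >>
> >> >> Once the INSERT goes through, the sink can be verified end to end (a sketch;
> >> >> the Kafka installation path is an assumption):
> >> >>
> >> >> $KAFKA_HOME/bin/kafka-console-consumer.sh \
> >> >>   --bootstrap-server 172.19.0.108:9092 \
> >> >>   --topic sink-topic --from-beginning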
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> Error log (from the task executor log):
> >> >>
> >> >>
> >> >> The immediate problem is that a class under the Kafka common package cannot
> >> >> be found. But the Kafka connector jars were already on the classpath when the
> >> >> SQL client started, and at job submission these jars are uploaded to the
> >> >> cluster together with the jobgraph, so in theory all of these classes should
> >> >> be loadable.
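> >> >>
> >> >> A local sanity check that the class inside the sql-jar is present and readable
> >> >> (a sketch; run it on the task executor host with the same JDK):
> >> >>
> >> >> javap -cp /bigdata/flink-1.5.1/opt/connectors/kafka011/flink-connector-kafka-0.11_2.11-1.5.1-sql-jar.jar \
> >> >>   org.apache.flink.kafka011.shaded.org.apache.kafka.common.utils.Crc32C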
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> 2019-02-22 14:37:18,356 ERROR org.apache.flink.kafka011.shaded.org.apache.kafka.common.utils.KafkaThread - Uncaught exception in kafka-producer-network-thread | producer-1:
> >> >> java.lang.NoClassDefFoundError: org/apache/flink/kafka011/shaded/org/apache/kafka/common/utils/Crc32C
> >> >>         at org.apache.flink.kafka011.shaded.org.apache.kafka.common.record.DefaultRecordBatch.writeHeader(DefaultRecordBatch.java:468)
> >> >>         at org.apache.flink.kafka011.shaded.org.apache.kafka.common.record.MemoryRecordsBuilder.writeDefaultBatchHeader(MemoryRecordsBuilder.java:339)
> >> >>         at org.apache.flink.kafka011.shaded.org.apache.kafka.common.record.MemoryRecordsBuilder.close(MemoryRecordsBuilder.java:293)
> >> >>         at org.apache.flink.kafka011.shaded.org.apache.kafka.clients.producer.internals.ProducerBatch.close(ProducerBatch.java:391)
> >> >>         at org.apache.flink.kafka011.shaded.org.apache.kafka.clients.producer.internals.RecordAccumulator.drain(RecordAccumulator.java:485)
> >> >>         at org.apache.flink.kafka011.shaded.org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:254)
> >> >>         at org.apache.flink.kafka011.shaded.org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:223)
> >> >>         at org.apache.flink.kafka011.shaded.org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
> >> >>         at java.lang.Thread.run(Thread.java:748)
> >> >> Caused by: java.lang.ClassNotFoundException: org.apache.flink.kafka011.shaded.org.apache.kafka.common.utils.Crc32C
> >> >>         at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> >> >>         at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> >> >>         at org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$ChildFirstClassLoader.loadClass(FlinkUserCodeClassLoaders.java:120)
> >> >>         at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >>   Best Regards
> >> >>   Hongtao
> >> >>
> >> >>
> >> >
> >> >--
> >> >If criticism is not free, praise is meaningless!
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >> --
> >>   Best Regards,
> >>   HongTao
> >>
> >>
>
>
>
>
>
>
>
> --
>   Best Regards,
>   HongTao
>
>
