Flink 1.9 SQL job submission fails

2019-09-09 Thread 越张
Code:

    EnvironmentSettings bsSettings = EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
    StreamExecutionEnvironment streamEnv = StreamExecutionEnvironment.getExecutionEnvironment();
    TableEnvironment tableEnv = StreamTableEnvironment.create(streamEnv, bsSettings);
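The stack trace is not included in the preview, but one frequent cause of submission failures with this setup (a guess, since the error is missing) is that the Blink planner is not on the job classpath; in 1.9 it ships as a separate artifact. A minimal sbt sketch (version and scope are assumptions; the Maven equivalent works the same way):

    // the Blink planner is a separate dependency in Flink 1.9
    libraryDependencies += "org.apache.flink" %% "flink-table-planner-blink" % "1.9.0" % "provided"

The "provided" scope assumes the planner jar already sits in the cluster's lib/ directory; bundle it into the fat jar instead if it does not.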

Flink job submission question (subject garbled)

2019-09-09 Thread Evan
Running Flink 1.7.1 on CentOS 7; the cluster was started with start-cluster.sh on test04, and the job is submitted with:

    $ bin/flink run -m test04:8081 -c org.apache.flink.quickstart.factory.FlinkConsumerKafkaSinkToKuduMainClass flink-scala-project1.jar

How to handle avro BYTES type in flink

2019-09-09 Thread Catlyn Kong
Hi fellow streamers, I'm trying to support the Avro BYTES type in my Flink application. Since ByteBuffer isn't a supported type, I'm converting the field to an Array[Byte]:

    case Type.BYTES => (avroObj: AnyRef) => {
      if (avroObj == null) {
        null
      } else {
        val byteBuffer =
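The preview cuts off mid-conversion. The subtle point is that an Avro BYTES field arrives as a java.nio.ByteBuffer whose backing array may be shared or offset, so copying via remaining()/get() is safer than calling array(). A minimal Java sketch of the conversion (the method name is illustrative):

    import java.nio.ByteBuffer;

    static byte[] toBytes(Object avroObj) {
        if (avroObj == null) {
            return null;
        }
        ByteBuffer byteBuffer = (ByteBuffer) avroObj;
        byte[] bytes = new byte[byteBuffer.remaining()]; // copy only the readable window
        byteBuffer.get(bytes);  // advances position; call duplicate() first if the buffer is reused
        return bytes;
    }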

Flink SQL problem

2019-09-09 Thread davran.muzavarov
Hi, I have encountered a problem with Flink SQL. My code:

    DataSet dataSet0 = env.fromCollection( infos0 );
    tableEnv.registerDataSet( "table0", dataSet0 );
    String sql = "select closePrice from table0";
    Table table = tableEnv.sql( sql );
    tableEnv.registerTable( tableName, table );
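For reference, a compilable sketch of the pattern being described, assuming Flink 1.9's batch Table API (PriceInfo, infos0, and the table names are hypothetical; since Flink 1.5 the method is sqlQuery rather than sql):

    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.java.BatchTableEnvironment;

    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    BatchTableEnvironment tableEnv = BatchTableEnvironment.create(env);

    DataSet<PriceInfo> dataSet0 = env.fromCollection(infos0); // POJO with a closePrice field
    tableEnv.registerDataSet("table0", dataSet0);

    Table table = tableEnv.sqlQuery("select closePrice from table0");
    tableEnv.registerTable("result0", table);

Note that a raw (untyped) DataSet gives the planner no field information, which is one plausible cause of the problem.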

Using FlinkKafkaConsumer API

2019-09-09 Thread Vishwas Siravara
I am using the flink-kafka-connector and this is my dependency: "org.apache.flink" %% "flink-connector-kafka" % "1.7.0". When I look at my dependency tree, the kafka client version is org.apache.kafka:kafka-clients:2.0.1, which comes from the above package. However, when I run my code in the
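If the intent is to control which kafka-clients version ends up on the classpath (an assumption, since the message is truncated), sbt can pin it with an override. A sketch:

    // force a specific kafka-clients version regardless of what the connector pulls in
    dependencyOverrides += "org.apache.kafka" % "kafka-clients" % "2.0.1"

Keep in mind that the universal flink-connector-kafka is compiled against a particular client line, so pinning to a distant version may fail at runtime.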

StreamingFileSink rolling callback

2019-09-09 Thread Anton Parkhomenko
Hello, I’m writing a Flink job that reads heterogeneous data (one row contains several types that need to be partitioned downstream) from AWS Kinesis and writes to an S3 directory structure like s3://bucket/year/month/day/hour/type. This all works great with StreamingFileSink in Flink 1.9, but
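For the directory layout described, a custom BucketAssigner is the usual mechanism; a minimal Java sketch (the Event type with getType()/getTimestamp() is hypothetical):

    import java.time.Instant;
    import java.time.ZoneOffset;
    import java.time.format.DateTimeFormatter;
    import org.apache.flink.api.common.serialization.SimpleStringEncoder;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.core.io.SimpleVersionedSerializer;
    import org.apache.flink.streaming.api.functions.sink.filesystem.BucketAssigner;
    import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
    import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.SimpleVersionedStringSerializer;

    StreamingFileSink<Event> sink = StreamingFileSink
        .forRowFormat(new Path("s3://bucket"), new SimpleStringEncoder<Event>("UTF-8"))
        .withBucketAssigner(new BucketAssigner<Event, String>() {
            private final DateTimeFormatter fmt =
                DateTimeFormatter.ofPattern("yyyy/MM/dd/HH").withZone(ZoneOffset.UTC);

            @Override
            public String getBucketId(Event element, Context context) {
                // year/month/day/hour/type, as in the layout above
                return fmt.format(Instant.ofEpochMilli(element.getTimestamp()))
                    + "/" + element.getType();
            }

            @Override
            public SimpleVersionedSerializer<String> getSerializer() {
                return SimpleVersionedStringSerializer.INSTANCE;
            }
        })
        .build();

As for a callback on rolling: as far as I know there is no public hook on StreamingFileSink in 1.9, so downstream notification usually has to happen outside the sink.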

How can Flink read LZO-compressed files stored on HDFS?

2019-09-09 Thread Kevin Liao
Hi all, we want to migrate a set of legacy MapReduce programs to the Flink platform. The raw logs are stored on HDFS, LZO-compressed, and we'd like to read them into a DataStream for processing. StreamExecutionEnvironment.readFile(FileInputFormat inputFormat, String filePath) looks like a fit, but which package should be used for LZO decompression? Google didn't turn up any clear leads. Any advice is appreciated, thanks.
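One possible route (a sketch under assumptions: the hadoop-lzo package providing com.hadoop.mapreduce.LzoTextInputFormat is on the classpath, along with the native LZO libraries) is to go through Flink's Hadoop compatibility wrapper instead of a plain FileInputFormat:

    import org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormat;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import com.hadoop.mapreduce.LzoTextInputFormat; // from the hadoop-lzo package

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    Job job = Job.getInstance();
    FileInputFormat.addInputPath(job, new org.apache.hadoop.fs.Path("hdfs:///logs/input")); // hypothetical path
    HadoopInputFormat<LongWritable, Text> input = new HadoopInputFormat<>(
        new LzoTextInputFormat(), LongWritable.class, Text.class, job);
    DataStream<Tuple2<LongWritable, Text>> lines = env.createInput(input);

This needs the flink-hadoop-compatibility dependency in addition to hadoop-lzo.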

Re: Will there be a Flink 1.9.1 release ?

2019-09-09 Thread Kurt Young
Hi Debasish, I think there is a good chance of a 1.9.1; the only question is when. 1.9.0 was released ~2 weeks ago, and I think some users are still in the middle of migrating to it. Waiting another 1 or 2 weeks, and also seeing whether any critical bugs show up in 1.9.0, sounds reasonable

Re: Will there be a Flink 1.9.1 release ?

2019-09-09 Thread Kostas Kloudas
Hi Debasish, So far I am not aware of any concrete timeline for Flink 1.9.1, but I think that Gordon and Kurt (cc'ed), who were the release-1.9 managers, are the best people to answer this question. Cheers, Kostas On Mon, Sep 9, 2019 at 9:38 AM Debasish Ghosh wrote: > > Hello - > > Is there a plan for a

Re: Re: Kafka and exactly-once

2019-09-09 Thread Jimmy Wong
Hi, my understanding is this: the problem is that source replay makes Flink recompute internally, the duplicates flow downstream, and they end up reflected in the results; in other words, this kind of operation cannot guarantee exactly-once internally? For example, within the 5-minute window [8:00, 8:05), some task fails at 8:03 and is then restarted. Recovering from the checkpoint yields the Kafka offset from before 8:00, but the messages in [8:00, 8:03) have already been consumed and sent downstream. After the restart the source replays for some reason, so the data in [8:00, 8:03) is consumed again and sent downstream again.

Re: Re: Kafka and exactly-once

2019-09-09 Thread Jimmy Wong
Hi, could you elaborate on the "idempotent consumption on the backend" approach? On 2019-09-09 14:37:55, "chang chan" wrote: > Message queues themselves can hardly guarantee that messages are never duplicated. > Exactly-once can be achieved with the queue's at-least-once delivery plus idempotent consumption on the backend. > Also, Kafka transactions are not recommended, as they slow down consumption. > > Jimmy Wong wrote on Mon, Sep 9, 2019 at 11:50 AM: >> Hi all: >> A question: my checkpoint interval is 5 minutes. If within those 5 minutes some task

Re: Kafka and exactly-once

2019-09-09 Thread Jimmy Wong
Hi, ... exactly-once ... | Jimmy | wangzmk...@163.com | 2019-09-09 (remainder garbled)

Will there be a Flink 1.9.1 release ?

2019-09-09 Thread Debasish Ghosh
Hello - Is there a plan for a Flink 1.9.1 release in the short term? We are using Flink and Avro, with Avrohugger generating Scala case classes from Avro schemas. Hence we need https://github.com/apache/flink/pull/9565, which has been closed recently. regards. -- Debasish Ghosh

Re: suggestion of FLINK-10868

2019-09-09 Thread Till Rohrmann
Hi Anyang, I think we cannot take your proposal, because it would mean that whenever we want to call notifyAllocationFailure while there is a connection problem between the RM and the JM, we fail the whole cluster. This is something a robust and resilient system should not do, because connection

Re: Kafka and exactly-once

2019-09-09 Thread chang chan
Message queues themselves can hardly guarantee that messages are never duplicated. Exactly-once can be achieved with the queue's at-least-once delivery plus idempotent consumption on the backend. Also, Kafka transactions are not recommended, as they slow down consumption. Jimmy Wong wrote on Mon, Sep 9, 2019 at 11:50 AM: > Hi all: > A question: my checkpoint interval is 5 minutes. If a task fails within those 5 minutes and is restarted, can I assume that the data recovered from the > checkpoint reflects the Kafka offset from before those 5 minutes, but those 5 >
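As a concrete illustration of "at-least-once plus idempotent consumption", one common pattern inside Flink is to deduplicate replayed records by a business key using keyed state; a minimal Java sketch (Event and getId() are hypothetical):

    import org.apache.flink.api.common.functions.RichFlatMapFunction;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.util.Collector;

    public class DedupById extends RichFlatMapFunction<Event, Event> {
        private transient ValueState<Boolean> seen; // one flag per key (event id)

        @Override
        public void open(Configuration parameters) {
            seen = getRuntimeContext().getState(
                new ValueStateDescriptor<>("seen", Boolean.class));
        }

        @Override
        public void flatMap(Event event, Collector<Event> out) throws Exception {
            if (seen.value() == null) { // first time this id is observed
                seen.update(true);
                out.collect(event);     // replayed duplicates are silently dropped
            }
        }
    }
    // usage: stream.keyBy(Event::getId).flatMap(new DedupById())

In practice the dedup state needs a TTL or periodic cleanup so it does not grow without bound.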

Re: Kafka and exactly-once

2019-09-09 Thread (sender name garbled)
... exactly once ... exactly once ... (remainder garbled)

Re: Kafka and exactly-once

2019-09-09 Thread jasine chen
Hello, this should be done together with Kafka transactions: commit the transaction at checkpoint time, and have downstream consumers read only committed messages, which guarantees exactly-once at the cost of some latency, of course. Jasine Chen jasinec...@gmail.com Beijing On Sep 9, 2019, 11:50 AM +0800, Jimmy Wong , wrote: > Hi all: > A question: my checkpoint interval is 5 minutes. If a task fails within those 5 minutes and is restarted, can I assume that the data recovered from the > checkpoint reflects the Kafka
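A sketch of the transactional setup described above, using the universal Kafka connector (broker address, topic, and timeout are placeholders; constructor variants differ between connector versions):

    import java.util.Properties;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.CheckpointingMode;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
    import org.apache.flink.streaming.util.serialization.KeyedSerializationSchemaWrapper;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.enableCheckpointing(5 * 60 * 1000, CheckpointingMode.EXACTLY_ONCE); // transactions commit on checkpoint

    Properties props = new Properties();
    props.setProperty("bootstrap.servers", "broker:9092");  // placeholder
    props.setProperty("transaction.timeout.ms", "900000");  // must comfortably cover the checkpoint interval

    FlinkKafkaProducer<String> producer = new FlinkKafkaProducer<>(
        "output-topic",
        new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()),
        props,
        FlinkKafkaProducer.Semantic.EXACTLY_ONCE);

    // downstream consumers must set isolation.level=read_committed
    // to see only committed (checkpointed) data

Note the trade-off mentioned above: results become visible to read_committed consumers only after the checkpoint completes, which adds latency proportional to the checkpoint interval.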