Code:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;

// Blink planner in streaming mode (Flink 1.9+).
EnvironmentSettings bsSettings =
    EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
StreamExecutionEnvironment streamEnv =
    StreamExecutionEnvironment.getExecutionEnvironment();
TableEnvironment tableEnv = StreamTableEnvironment.create(streamEnv, bsSettings);
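The mail does not show how tableEnv is used afterwards. For context, a minimal hedged sketch of a typical continuation; the table name, query, and field names are illustrative, not from the original mail:

import org.apache.flink.api.java.tuple.{Tuple2 => JTuple2}
import org.apache.flink.streaming.api.datastream.DataStream
import org.apache.flink.table.api.Table
import org.apache.flink.table.api.java.StreamTableEnvironment
import org.apache.flink.types.Row

// create() above actually returns a StreamTableEnvironment, so the cast is safe.
val stEnv = tableEnv.asInstanceOf[StreamTableEnvironment]

// An aggregating query yields a retract stream: the Boolean flag marks
// whether a result row is being added (true) or retracted (false).
val result: Table = stEnv.sqlQuery(
  "SELECT user_id, COUNT(*) AS cnt FROM clicks GROUP BY user_id")
val updates: DataStream[JTuple2[java.lang.Boolean, Row]] =
  stEnv.toRetractStream(result, classOf[Row])
updates.print()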
Flink version: 1.7.1
OS: CentOS 7
The job cluster was started with start-cluster.sh on host test04.
Command used to submit the Flink job:
$ bin/flink run -m test04:8081 -c
org.apache.flink.quickstart.factory.FlinkConsumerKafkaSinkToKuduMainClass
flink-scala-project1.jar
Hi fellow streamers,
I'm trying to support the Avro BYTES type in my Flink application. Since
ByteBuffer isn't a supported type, I'm converting the field to an
Array[Byte]:
case Type.BYTES =>
  (avroObj: AnyRef) => {
    if (avroObj == null) {
      null
    } else {
      val byteBuffer =
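The mail is cut off at this point. Presumably the branch casts the value to a ByteBuffer and copies its readable bytes into a fresh array; a minimal hedged sketch of that conversion, with an illustrative helper name:

import java.nio.ByteBuffer

// Copy the readable bytes of an Avro BYTES value into a standalone
// Array[Byte]. duplicate() leaves the source buffer's position untouched.
def toByteArray(avroObj: AnyRef): Array[Byte] = {
  if (avroObj == null) {
    null
  } else {
    val byteBuffer = avroObj.asInstanceOf[ByteBuffer].duplicate()
    val bytes = new Array[Byte](byteBuffer.remaining())
    byteBuffer.get(bytes)
    bytes
  }
}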
Hi, I have encountered a problem with Flink SQL.
My code:
DataSet dataSet0 = env.fromCollection(infos0);
tableEnv.registerDataSet("table0", dataSet0);
String sql = "select closePrice from table0";
Table table = tableEnv.sql(sql);
tableEnv.registerTable(tableName, table);
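The mail breaks off here. For context, a hedged sketch of reading the query result back out with the pre-1.9 batch Table API; this assumes tableEnv is a Java BatchTableEnvironment (sqlQuery is the non-deprecated replacement for sql()):

import org.apache.flink.api.java.DataSet
import org.apache.flink.table.api.Table
import org.apache.flink.types.Row

// Run the query and convert the result Table back into a DataSet of Rows.
val result: Table = tableEnv.sqlQuery("select closePrice from table0")
val rows: DataSet[Row] = tableEnv.toDataSet(result, classOf[Row])
rows.print()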
I am using flink-kafka-connector and this is my dependency:
"org.apache.flink" %% "flink-connector-kafka" % "1.7.0",
When I look at my dependency tree, the kafka client version is
org.apache.kafka:kafka-clients:2.0.1, which comes from the above package.
However when I run my code in the
Hello,
I'm writing a Flink job that reads heterogeneous data (one row contains several
types that need to be partitioned downstream) from AWS Kinesis and
writes to an S3 directory structure like s3://bucket/year/month/day/hour/type.
This all works great with StreamingFileSink in Flink 1.9, but
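The mail is cut off here. For readers landing on this thread: the usual way to get such a layout in Flink 1.9 is a custom BucketAssigner. A minimal hedged sketch; TypedEvent and all field names are illustrative, not from the original job:

import java.time.format.DateTimeFormatter
import java.time.{Instant, ZoneOffset}

import org.apache.flink.api.common.serialization.SimpleStringEncoder
import org.apache.flink.core.fs.Path
import org.apache.flink.core.io.SimpleVersionedSerializer
import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.SimpleVersionedStringSerializer
import org.apache.flink.streaming.api.functions.sink.filesystem.{BucketAssigner, StreamingFileSink}

// Hypothetical event type carrying a timestamp and a "type" discriminator.
case class TypedEvent(timestampMillis: Long, eventType: String, payload: String)

// Routes each record into year/month/day/hour/type under the base path.
class TimeAndTypeBucketAssigner extends BucketAssigner[TypedEvent, String] {
  private val fmt =
    DateTimeFormatter.ofPattern("yyyy/MM/dd/HH").withZone(ZoneOffset.UTC)

  override def getBucketId(element: TypedEvent,
                           context: BucketAssigner.Context): String =
    fmt.format(Instant.ofEpochMilli(element.timestampMillis)) + "/" + element.eventType

  override def getSerializer: SimpleVersionedSerializer[String] =
    SimpleVersionedStringSerializer.INSTANCE
}

// Wiring it into a row-format StreamingFileSink:
val sink: StreamingFileSink[TypedEvent] = StreamingFileSink
  .forRowFormat(new Path("s3://bucket"), new SimpleStringEncoder[TypedEvent]("UTF-8"))
  .withBucketAssigner(new TimeAndTypeBucketAssigner)
  .build()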
Hi all,
I want to migrate a set of legacy MapReduce programs to the Flink platform.
The raw logs are stored on HDFS, compressed with LZO, and I'd like to read
them as a DataStream for processing.
StreamExecutionEnvironment.readFile(FileInputFormat inputFormat, String
filePath) looks like it fits, but which package should handle the LZO
decompression? Google didn't turn up any clear leads.
Any advice would be appreciated, thanks.
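Not something confirmed in this thread, but one commonly used option is the hadoop-lzo package, whose LzoTextInputFormat can be plugged in through Flink's flink-hadoop-compatibility module instead of readFile. A hedged sketch; the HDFS path is illustrative:

import org.apache.flink.api.java.tuple.{Tuple2 => JTuple2}
import org.apache.flink.hadoopcompatibility.HadoopInputs
import org.apache.flink.streaming.api.datastream.DataStream
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment
import org.apache.hadoop.io.{LongWritable, Text}

val env = StreamExecutionEnvironment.getExecutionEnvironment

// hadoop-lzo's input format splits and decompresses .lzo files the way
// TextInputFormat handles plain text; each record is (byte offset, line).
val lines: DataStream[JTuple2[LongWritable, Text]] = env.createInput(
  HadoopInputs.readHadoopFile(
    new com.hadoop.mapreduce.LzoTextInputFormat,
    classOf[LongWritable],
    classOf[Text],
    "hdfs:///logs/raw"))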
Hi Debasish,
I think there is a good chance of a 1.9.1; the only question is when.
1.9.0 was released ~2 weeks ago, and I think some users are still in the
middle of migrating if they want to use 1.9.0. Waiting another 1 or 2 weeks,
and also seeing whether any critical bugs turn up in 1.9.0, sounds reasonable
Hi Debasish,
So far I am not aware of any concrete timeline for Flink 1.9.1, but
I think that Gordon and Kurt (cc'ed), who were the release-1.9
managers, are best placed to answer this question.
Cheers,
Kostas
On Mon, Sep 9, 2019 at 9:38 AM Debasish Ghosh wrote:
>
> Hello -
>
> Is there a plan for a
Hi, my understanding is this: the problem is that replaying the source makes
Flink recompute internally, and the duplicates propagate downstream and
eventually show up in the results; in other words, this kind of operation
cannot guarantee exactly-once internally. For example, within the five-minute
window [8:00, 8:05), some task fails at 8:03 and is then restarted. The Kafka
offset recovered from the checkpoint is the one from before 8:00, but the
messages in [8:00, 8:03) have already been consumed and sent downstream.
After the restart the source replays, so the data in [8:00, 8:03) is consumed
again and sent downstream a second time.
On
Hi, could you elaborate on the "backend idempotent consumption" scheme?
On 2019-09-09 14:37:55, "chang chan" wrote:
>The message queue itself can hardly guarantee that messages are never duplicated
>exactly once can be achieved with the queue's at-least-once delivery plus
>idempotent consumption on the backend
>Also, Kafka transactions are not recommended, as they slow down consumption
>
>Jimmy Wong wrote on Mon, Sep 9, 2019 at 11:50 AM:
>
>> Hi all:
>> A question: I set the checkpoint interval to 5 minutes. If within those 5 minutes some task
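For readers: "backend idempotent consumption" generally means making the final write deduplicate on a key, so a replayed message overwrites the row it already wrote instead of double-counting. A minimal hedged sketch with plain JDBC and a MySQL-style upsert; the table, columns, and URL are illustrative:

import java.sql.DriverManager

// event_id is the table's primary key, so replaying the same message
// rewrites the same row: at-least-once delivery plus this write path
// yields an effectively exactly-once stored result.
val upsert =
  "INSERT INTO results (event_id, event_value) VALUES (?, ?) " +
  "ON DUPLICATE KEY UPDATE event_value = VALUES(event_value)"

def write(eventId: String, value: Long): Unit = {
  val conn = DriverManager.getConnection("jdbc:mysql://db:3306/app")
  try {
    val stmt = conn.prepareStatement(upsert)
    try {
      stmt.setString(1, eventId)
      stmt.setLong(2, value)
      stmt.executeUpdate()
    } finally stmt.close()
  } finally conn.close()
}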
Hello -
Is there a plan for a Flink 1.9.1 release in the short term? We are using
Flink and Avro, with Avrohugger generating Scala case classes from Avro
schemas. Hence we need https://github.com/apache/flink/pull/9565, which has
been closed recently.
regards.
--
Debasish Ghosh
Hi Anyang,
I think we cannot take your proposal, because it would mean that whenever
we want to call notifyAllocationFailure while there is a connection problem
between the RM and the JM, we fail the whole cluster. This is
something a robust and resilient system should not do, because connection
The message queue itself can hardly guarantee that messages are never duplicated
exactly once can be achieved with the queue's at-least-once delivery plus
idempotent consumption on the backend
Also, Kafka transactions are not recommended, as they slow down consumption
Jimmy Wong wrote on Mon, Sep 9, 2019 at 11:50 AM:
> Hi all:
> A question: I set the checkpoint interval to 5 minutes. If within those 5 minutes some task fails and is then restarted, am I right that at that point
> the Kafka offset recovered from the checkpoint data is the one from before those 5 minutes, but those 5
>
Hi, it should be done together with Kafka transactions: commit the transaction at checkpoint time, and have downstream consumers read only committed messages. That guarantees exactly-once, though at the cost of some latency.
Jasine Chen
jasinec...@gmail.com
Beijing
On Sep 9, 2019, 11:50 AM +0800, Jimmy Wong , wrote:
> Hi all:
> A question: I set the checkpoint interval to 5 minutes. If within those 5 minutes some task fails and is then restarted, am I right that at that point what is
> recovered from the checkpoint data is the 5-minutes-earlier Kafka
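A hedged sketch of the transactional setup described above, using the universal Kafka connector available in Flink 1.9; broker address, topic, and timeout are illustrative:

import java.util.Properties

import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchemaWrapper

val producerProps = new Properties()
producerProps.setProperty("bootstrap.servers", "broker:9092")
// The Kafka transaction must be able to outlive the checkpoint interval.
producerProps.setProperty("transaction.timeout.ms", "900000")

// EXACTLY_ONCE writes inside a Kafka transaction that is committed
// when the corresponding checkpoint completes.
val producer = new FlinkKafkaProducer[String](
  "output-topic",
  new KeyedSerializationSchemaWrapper(new SimpleStringSchema()),
  producerProps,
  FlinkKafkaProducer.Semantic.EXACTLY_ONCE)

// Downstream consumers must skip uncommitted messages:
val consumerProps = new Properties()
consumerProps.setProperty("isolation.level", "read_committed")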