Re: Re: flink1.9.0 LOCAL_WEBSERVER issue

2019-08-22 Thread hb
Before version 1.9 this always worked fine, and 1.9 still has this API. At 2019-08-23 12:28:14, "Zili Chen" wrote: >Where did you see this configuration? I checked the code and this option doesn't even have a usage point (x > >Best, >tison. > > >hb <343122...@163.com> wrote on Fri, Aug 23, 2019 at 1:22 PM: > >> Under flink1.9.0, the local web UI home page returns 404. Code: >> ``` >> var config = new Configuration() >>

flink1.9.0 LOCAL_WEBSERVER issue

2019-08-22 Thread hb
Under flink1.9.0, the local web UI home page returns 404. Code: ``` var config = new Configuration() config.setBoolean(ConfigConstants.LOCAL_START_WEBSERVER, true) config.setInteger(RestOptions.PORT, 8089) val env = StreamExecutionEnvironment.createLocalEnvironment(8, config) ``` Opening http://localhost:8089/ shows {"errors":["Not
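For reference, a minimal sketch of starting the local web UI in Flink 1.9 without the legacy LOCAL_START_WEBSERVER flag; it assumes the flink-runtime-web dependency is on the classpath and keeps port 8089 from the snippet above.

```
import org.apache.flink.configuration.{Configuration, RestOptions}
import org.apache.flink.streaming.api.scala._

object LocalWebUiSketch {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    // Serve the REST API / web UI on 8089 instead of the default 8081.
    conf.setInteger(RestOptions.PORT, 8089)

    // Starts a local mini cluster with the web UI, provided the
    // flink-runtime-web dependency is on the classpath.
    val env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(conf)

    env.fromElements(1, 2, 3).print()
    env.execute("local-webui-sketch")
  }
}
```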

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Peter Huang
It is great news for the community. Thanks to everyone who contributed to the release management. Congratulations! On Thu, Aug 22, 2019 at 9:14 PM Haibo Sun wrote: > Great news! Thanks Gordon and Kurt! > > Best, > Haibo > > > At 2019-08-22 20:03:26, "Tzu-Li (Gordon) Tai" wrote: > >The Apache

Re:[ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Haibo Sun
Great news! Thanks Gordon and Kurt! Best, Haibo At 2019-08-22 20:03:26, "Tzu-Li (Gordon) Tai" wrote: >The Apache Flink community is very happy to announce the release of Apache >Flink 1.9.0, which is the latest major release. > >Apache Flink® is an open-source stream processing framework for

Re: Flink Kafka Connector questions

2019-08-22 Thread 戴鑫铉
Hi Victor: I have received your reply, thank you for the detailed explanation! Much appreciated! Victor Wong wrote on Fri, Aug 23, 2019 at 10:20 AM: > Hi 鑫铉: > Let me try to answer; > > 1. Does this mean that FlinkKafkaConsumer09 must have checkpointing enabled, otherwise it will not periodically commit offsets to kafka? > According to the official documentation [1], checkpointing offsets is a Flink feature, while auto commit offset is a kafka > client feature; the two overlap in what they do, so the behavior differs depending on the configuration; >

FLINK TABLE API custom aggregate function (UDAF): restoring the job from a checkpoint reports a state serializer incompatibility (For heap backends, the new state serializer must not be incompatible)

2019-08-22 Thread orlando qi
Hi everyone: While implementing a group by aggregation with the flink table api, I defined a custom UDAF. It runs correctly the first time the job is started on the cluster, but restoring the job from a checkpoint throws the error below, while built-in functions have no such problem. How can I resolve this? java.lang.RuntimeException: Error while getting state at org.apache.flink.runtime.state.DefaultKeyedStateStore.getState(DefaultKeyedStateStore.java:62) at

FLINK TABLE API UDAF QUESTION, For heap backends, the new state serializer must not be incompatible

2019-08-22 Thread orlando qi
Hello everyone: I defined a UDAF function while using the FLINK TABLE API to implement an aggregation operation. The task runs without problems when started from the beginning on the cluster, but it throws an exception when the task is restarted from a checkpoint. How can I resolve it?
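For context, a minimal sketch of the kind of Table API UDAF involved; the function name, accumulator and field are hypothetical. The serializer incompatibility usually comes from the accumulator type changing between the run that wrote the checkpoint and the run that restores it.

```
import org.apache.flink.table.functions.AggregateFunction

// Hypothetical accumulator; adding, removing or retyping its fields between
// runs changes the accumulator's serializer and can make restored state
// incompatible ("the new state serializer must not be incompatible").
case class SumAcc(var sum: Long)

class LongSum extends AggregateFunction[Long, SumAcc] {
  override def createAccumulator(): SumAcc = SumAcc(0L)
  override def getValue(acc: SumAcc): Long = acc.sum

  // accumulate(...) is discovered by the Table API via reflection.
  def accumulate(acc: SumAcc, value: Long): Unit = {
    acc.sum += value
  }
}

// Registration, assuming a StreamTableEnvironment named tEnv:
//   tEnv.registerFunction("longSum", new LongSum)
```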

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Dian Fu
Great news! Thanks Gordon and Kurt for pushing this forward and everybody who contributed to this release. Regards, Dian > On Aug 23, 2019, at 9:41 AM, Guowei Ma wrote: > > Congratulations!! > Best, > Guowei > > > Congxian Qiu mailto:qcx978132...@gmail.com>> > wrote on Fri, Aug 23, 2019 at 9:32 AM: >

Re: Flink Kafka Connector questions

2019-08-22 Thread Victor Wong
Hi 鑫铉: Let me try to answer; 1. Does this mean that FlinkKafkaConsumer09 must have checkpointing enabled, otherwise it will not periodically commit offsets to kafka? According to the official documentation [1], checkpointing offsets is a Flink feature, while auto commit offset is a kafka client feature; the two overlap in what they do, so the behavior differs depending on the configuration; if checkpointing is not enabled in Flink, saving the offsets relies on the kafka client's auto commit offset;
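A sketch of the two behaviours described above, with placeholder broker, group and topic names: with checkpointing enabled, offsets are committed back to Kafka on checkpoint completion; without it, offset committing falls back to the kafka client's enable.auto.commit.

```
import java.util.Properties
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09

object KafkaOffsetSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // With checkpointing on, offsets are committed when a checkpoint completes.
    env.enableCheckpointing(60000L)

    val props = new Properties()
    props.setProperty("bootstrap.servers", "broker:9092") // placeholder
    props.setProperty("group.id", "my-group")             // placeholder
    // Only relevant when checkpointing is NOT enabled:
    props.setProperty("enable.auto.commit", "true")
    props.setProperty("auto.commit.interval.ms", "5000")

    val consumer = new FlinkKafkaConsumer09[String](
      "my-topic", new SimpleStringSchema(), props)
    // Ties offset committing to checkpoints (the default when checkpointing is on).
    consumer.setCommitOffsetsOnCheckpoints(true)

    env.addSource(consumer).print()
    env.execute("kafka-offset-sketch")
  }
}
```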

Re: Externalized checkpoints

2019-08-22 Thread Congxian Qiu
Hi, Vishwas As Zhu Zhu said, you can set "state.checkpoints.num-retained"[1] to specify the maximum number of completed checkpoints to retain. You could also refer to the externalized checkpoint cleanup type config [2] for how the retained checkpoints are cleaned up. [1]
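For reference, a sketch of the two settings mentioned: the cleanup mode is chosen in code, while the number of retained checkpoints is a cluster option (the interval, count and paths below are placeholders).

```
import org.apache.flink.streaming.api.environment.CheckpointConfig
import org.apache.flink.streaming.api.scala._

object RetainedCheckpointsSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.enableCheckpointing(60000L)

    // Keep checkpoint data when the job is cancelled, so it can be used
    // later with `flink run -s <checkpoint-path> ...`.
    env.getCheckpointConfig.enableExternalizedCheckpoints(
      CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)

    // How many completed checkpoints are kept is a cluster setting,
    // e.g. in flink-conf.yaml:
    //   state.checkpoints.num-retained: 3
    //   state.checkpoints.dir: hdfs:///flink/checkpoints   (placeholder path)

    env.fromElements(1, 2, 3).print()
    env.execute("retained-checkpoints-sketch")
  }
}
```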

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Guowei Ma
Congratulations!! Best, Guowei Congxian Qiu wrote on Fri, Aug 23, 2019 at 9:32 AM: > Congratulations, and thanks to everyone who made this release possible. > Best, > Congxian > > > Kurt Young wrote on Fri, Aug 23, 2019 at 8:13 AM: > >> Great to hear! Thanks Gordon for driving the release, and it's been a >> great

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Congxian Qiu
Congratulations, and thanks to everyone who made this release possible. Best, Congxian Kurt Young wrote on Fri, Aug 23, 2019 at 8:13 AM: > Great to hear! Thanks Gordon for driving the release, and it's been a > great pleasure to work with you as release managers for the last couple of > weeks. And thanks

Flink continuously monitoring a directory for new files misses some files

2019-08-22 Thread 王佩
On Flink 1.8.0, I use env.readFile to continuously monitor a directory for new files and process them. Out of more than 5,000 files, 25 were missed. The logic is as follows: 1. One Flink program continuously writes small files into directory A. 2. Another Flink program continuously monitors directory A via env.readFile in PROCESS_CONTINUOUSLY mode and then performs other operations. The second Flink program occasionally misses files. My question: why are files lost, and what could be the cause? Parallelism?
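For reference, a sketch of the monitoring setup described (directory and interval are placeholders). One possible cause worth checking: in PROCESS_CONTINUOUSLY mode the monitor tracks files by modification timestamp, so files that land with a timestamp not newer than the last scan can be skipped; this is a hypothesis to verify, not a confirmed diagnosis.

```
import org.apache.flink.api.java.io.TextInputFormat
import org.apache.flink.core.fs.Path
import org.apache.flink.streaming.api.functions.source.FileProcessingMode
import org.apache.flink.streaming.api.scala._

object MonitorDirectorySketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val dirA = "hdfs:///data/dirA" // placeholder path
    val format = new TextInputFormat(new Path(dirA))

    // Re-scan directory A every 10 seconds and emit newly discovered files.
    env
      .readFile(format, dirA, FileProcessingMode.PROCESS_CONTINUOUSLY, 10000L)
      .print()

    env.execute("monitor-directory-sketch")
  }
}
```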

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Kurt Young
Great to hear! Thanks Gordon for driving the release, and it's been a great pleasure to work with you as release managers for the last couple of weeks. And thanks everyone who contributed to this version, you're making Flink an even better project! Best, Kurt Yun Tang wrote on Fri, Aug 23, 2019 at 02:17: >

Exception when trying to change StreamingFileSink S3 bucket

2019-08-22 Thread sidhartha saurav
Hi, We are trying to change our StreamingFileSink S3 bucket, say from s3://eu1/output_old to s3://eu2/output_new. When we do so we get an exception and the taskmanager goes into a restart loop. We suspect that it tries to restore state and gets the bucketid from saved state [final BucketID
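For context, a sketch of a row-format StreamingFileSink pointed at the new bucket (bucket paths are taken from the message, the uid is hypothetical). Giving the sink a new operator uid and restoring with --allowNonRestoredState is one workaround people use to drop the old sink state, at the cost of losing the in-progress files tracked by that state; treat it as a sketch, not a recommended migration path.

```
import org.apache.flink.api.common.serialization.SimpleStringEncoder
import org.apache.flink.core.fs.Path
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink
import org.apache.flink.streaming.api.scala._

object S3SinkSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.enableCheckpointing(60000L) // StreamingFileSink requires checkpointing

    val sink: StreamingFileSink[String] = StreamingFileSink
      .forRowFormat(new Path("s3://eu2/output_new"),
        new SimpleStringEncoder[String]("UTF-8"))
      .build()

    env.fromElements("a", "b", "c")
      .addSink(sink)
      .uid("s3-sink-new") // a new uid avoids mapping the old sink state onto it
      .name("s3-sink")

    env.execute("s3-sink-sketch")
  }
}
```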

Re: TaskManager not connecting to ResourceManager in HA mode

2019-08-22 Thread Zili Chen
Nice to hear :-) Best, tison. Aleksandar Mastilovic wrote on Fri, Aug 23, 2019 at 2:22 AM: > Thanks for all the help, people - you made me go through my code once > again and discover that I switched argument positions for job manager and > resource manager addresses :-) > > The docker ensemble now starts

Re: TaskManager not connecting to ResourceManager in HA mode

2019-08-22 Thread Aleksandar Mastilovic
Thanks for all the help, people - you made me go through my code once again and discover that I switched argument positions for job manager and resource manager addresses :-) The docker ensemble now starts fine, I’m working on ironing out the bugs now. I’ll participate in the survey too! > On

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Yun Tang
Glad to hear this. I really appreciate Gordon and Kurt's drive on this release, and thanks to everyone who contributed to this release. Best Yun Tang From: Becket Qin Sent: Friday, August 23, 2019 0:19 To: 不常用邮箱 Cc: Yang Wang ; user Subject: Re:

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Becket Qin
Cheers!! Thanks Gordon and Kurt for driving the release! On Thu, Aug 22, 2019 at 5:36 PM 不常用邮箱 wrote: > Good news! > > Best. > -- > Louis > Email: xu_soft39211...@163.com > > On Aug 22, 2019, at 22:10, Yang Wang wrote: > > Glad to hear that. > Thanks Gordon, Kurt and everyone who had made

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread 不常用邮箱
Good news! Best. -- Louis Email: xu_soft39211...@163.com > On Aug 22, 2019, at 22:10, Yang Wang wrote: > > Glad to hear that. > Thanks Gordon, Kurt and everyone who had made contributions to the great > version. > > > Best, > Yang > > > Biao Liu mailto:mmyy1...@gmail.com>> 于2019年8月22日周四

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Yang Wang
Glad to hear that. Thanks Gordon, Kurt and everyone who had made contributions to the great version. Best, Yang Biao Liu wrote on Thu, Aug 22, 2019 at 9:33 PM: > Great news! > > Thank you Gordon & Kurt for being the release managers! > Thanks to all contributors who worked on this release! > > Thanks, > Biao

Re: Can I use watermarkers to have a global trigger of different ProcessFunction's?

2019-08-22 Thread Felipe Gutierrez
Thanks for the detailed explanation! I removed my implementation of the watermark, which is not necessary in my case. I will only use watermarks if I am dealing with out-of-order events. *--* *-- Felipe Gutierrez* *-- skype: felipe.o.gutierrez* *--* *https://felipeogutierrez.blogspot.com

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Biao Liu
Great news! Thank you Gordon & Kurt for being the release managers! Thanks to all contributors who worked on this release! Thanks, Biao /'bɪ.aʊ/ On Thu, 22 Aug 2019 at 21:14, Paul Lam wrote: > Well done! Thanks to everyone who contributed to the release! > > Best, > Paul Lam > > Yu Li

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Paul Lam
Well done! Thanks to everyone who contributed to the release! Best, Paul Lam Yu Li wrote on Thu, Aug 22, 2019 at 9:03 PM: > Thanks for the update Gordon, and congratulations! > > Great thanks to all for making this release possible, especially to our > release managers! > > Best Regards, > Yu > > > On Thu,

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Yu Li
Thanks for the update Gordon, and congratulations! Great thanks to all for making this release possible, especially to our release managers! Best Regards, Yu On Thu, 22 Aug 2019 at 14:55, Xintong Song wrote: > Congratulations! > Thanks Gordon and Kurt for being the release managers, and

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Xintong Song
Congratulations! Thanks Gordon and Kurt for being the release managers, and thanks all the contributors. Thank you~ Xintong Song On Thu, Aug 22, 2019 at 2:39 PM Yun Gao wrote: > Congratulations ! > > Very thanks for Gordon and Kurt for managing the release and very > thanks for

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Yun Gao
Congratulations! Many thanks to Gordon and Kurt for managing the release, and many thanks to everyone for the contributions! Best, Yun -- From:Zhu Zhu Send Time:2019 Aug. 22 (Thu.) 20:18 To:Eliza Cc:user

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Zhu Zhu
Thanks Gordon for the update. Congratulations on having Flink 1.9.0 released! Thanks to all the contributors. Thanks, Zhu Zhu Eliza wrote on Thu, Aug 22, 2019 at 8:10 PM: > > > On Thursday 2019/8/22 at 8:03 PM, Tzu-Li (Gordon) Tai wrote: > > The Apache Flink community is very happy to announce the release of > >

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Zili Chen
Congratulations! Thanks Gordon and Kurt for being the release managers. Thanks to all the contributors who have made this release possible. Best, tison. Jark Wu wrote on Thu, Aug 22, 2019 at 8:11 PM: > Congratulations! > > Thanks Gordon and Kurt for being the release managers and thanks a lot to > all the

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread JingsongLee
Congratulations~~~ Thanks Gordon and everyone~ Best, Jingsong Lee -- From:Oytun Tez Send Time:2019-08-22 (Thu.) 14:06 To:Tzu-Li (Gordon) Tai Cc:dev ; user ; announce Subject:Re: [ANNOUNCE] Apache Flink 1.9.0 released

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Oytun Tez
Ah, I was worried State Processor API would work only on savepoints, but this confirmed the opposite: The new State Processor API covers all variations of snapshots: savepoints, full checkpoints and incremental checkpoints. Now this is a good day for me. I can imagine creating some abstractions
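A minimal sketch of reading state from an existing snapshot with the State Processor API in 1.9 (path, uid, state name and state backend are placeholders; the backend should match the one the job used).

```
import org.apache.flink.api.common.typeinfo.Types
import org.apache.flink.api.java.ExecutionEnvironment
import org.apache.flink.runtime.state.memory.MemoryStateBackend
import org.apache.flink.state.api.Savepoint

object ReadSnapshotSketch {
  def main(args: Array[String]): Unit = {
    // The State Processor API runs as a batch (DataSet) program.
    val bEnv = ExecutionEnvironment.getExecutionEnvironment

    // Works for savepoints as well as retained full/incremental checkpoints.
    val snapshot = Savepoint.load(
      bEnv, "hdfs:///flink/checkpoints/chk-42", new MemoryStateBackend())

    // Read an operator's ListState by its uid and state name (placeholders).
    snapshot.readListState("my-operator-uid", "counts", Types.LONG).print()
  }
}
```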

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Jark Wu
Congratulations! Thanks Gordon and Kurt for being the release managers and thanks a lot to all the contributors. Cheers, Jark On Thu, 22 Aug 2019 at 20:06, Oytun Tez wrote: > Congratulations team; thanks for the update, Gordon. > > --- > Oytun Tez > > *M O T A W O R D* > The World's Fastest

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Eliza
On Thursday 2019/8/22 at 8:03 PM, Tzu-Li (Gordon) Tai wrote: The Apache Flink community is very happy to announce the release of Apache Flink 1.9.0, which is the latest major release. Congratulations and thanks~ regards.

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Oytun Tez
Congratulations team; thanks for the update, Gordon. --- Oytun Tez *M O T A W O R D* The World's Fastest Human Translation Platform. oy...@motaword.com — www.motaword.com On Thu, Aug 22, 2019 at 8:03 AM Tzu-Li (Gordon) Tai wrote: > The Apache Flink community is very happy to announce the

[ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Tzu-Li (Gordon) Tai
The Apache Flink community is very happy to announce the release of Apache Flink 1.9.0, which is the latest major release. Apache Flink® is an open-source stream processing framework for distributed, high-performing, always-available, and accurate data streaming applications. The release is

Re: timeout error while connecting to Kafka

2019-08-22 Thread miki haiat
Can you try to remove this from your pom file: the org.apache.flink flink-connector-kafka_2.11 1.7.0 dependency? Is there any reason why you are using flink 1.5 and not the latest release? Best, Miki On Thu, Aug 22, 2019 at 2:19 PM Eyal Pe'er wrote: > Hi

Flink Kafka Connector相关问题

2019-08-22 Thread 戴鑫铉
Hello, I am writing mainly to ask a few questions about the Flink Kafka Connector:

RE: timeout error while connecting to Kafka

2019-08-22 Thread Eyal Pe'er
Hi Miki, First, I would like to thank you for the fast response. I rechecked Kafka and it is up and running fine. I'm still getting the same error (Timeout expired while fetching topic metadata). Maybe my Flink version is wrong (the Kafka version is 0.9)? org.apache.flink

Re: Multiple trigger events on keyed window

2019-08-22 Thread David Anderson
If you still need help diagnosing the cause of the misbehavior, please share more of the code with us. On Wed, Aug 21, 2019 at 6:24 PM Eric Isling wrote: > > I should add that the behaviour persists, even when I force parallelism to 1. > > On Wed, Aug 21, 2019 at 5:19 PM Eric Isling wrote: >>

Re: timeout error while connecting to Kafka

2019-08-22 Thread Qi Kang
The code itself is fine. Turning the app’s log level to DEBUG will give you more information. BTW, please make sure that the addresses of Kafka brokers are properly resolved. > On Aug 22, 2019, at 15:45, Eyal Pe'er wrote: > > Hi, > > I'm trying to consume events using Apache Flink. > > The

Re: combineGroup get false results

2019-08-22 Thread Fabian Hueske
Hi, If all key fields are primitive types (long) or String, their hash values should be deterministic. There are two things that can go wrong: 1) Records are assigned to the wrong group. 2) The computation of a group is buggy. I'd first check that 1) is correct. Can you replace the sum function

Re: combineGroup get false results

2019-08-22 Thread anissa moussaoui
Hi Fabian, My GroupReduce function sums one column of the input rows of each group. My key fields are an array of multiple types, in this case string and long. The result that I'm posting just represents a sample of the output dataset. Thank you in advance! Anissa On Thu, Aug 22, 2019 at 11:24 AM,

Flink-Netty-Connector

2019-08-22 Thread 278391968
(Message body garbled in the archive; it concerns the Flink-Netty-Connector and Socket.)

Re: combineGroup get false results

2019-08-22 Thread Fabian Hueske
Hi Anissa, This looks strange. If I understand your code correctly, your GroupReduce function is summing up a field. Looking at the results that you posted, it seems as if there is some data missing (the total sum does not seem to match). For groupReduce it is important that the grouping keys

Re: Multiple trigger events on keyed window

2019-08-22 Thread David Anderson
The role of the watermarks in your job will be to trigger the closing of the sliding event time windows. In order to play that role properly, they should be based on the timestamps in the events, rather than some arbitrary constant (L). The reason why the same object is responsible for
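A sketch of what "based on the timestamps in the events" looks like in code; the MyEvent type and the 5-second out-of-orderness bound are assumptions.

```
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor
import org.apache.flink.streaming.api.windowing.time.Time

// Hypothetical event type carrying an epoch-millisecond timestamp.
case class MyEvent(key: String, timestampMillis: Long, value: Double)

// The watermark trails the highest event timestamp seen by 5 seconds,
// so sliding event-time windows are triggered once events catch up.
class MyEventWatermarks
    extends BoundedOutOfOrdernessTimestampExtractor[MyEvent](Time.seconds(5)) {
  override def extractTimestamp(e: MyEvent): Long = e.timestampMillis
}

// Usage, assuming a DataStream[MyEvent] named events:
//   events.assignTimestampsAndWatermarks(new MyEventWatermarks)
```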

Re: Maximal watermark when two streams are connected

2019-08-22 Thread Fabian Hueske
Hi Sung, There is no switch to configure the WM to be the max of both streams and it would also in fact violate the core principles of the mechanism. Watermarks are used to track the progress of event time in streams. The implementations of operators rely on the fact that (almost) all records

Re: combineGroup get false results

2019-08-22 Thread anissa moussaoui
Thanks for your feedback! Sorry, I actually used reduceGroup, but it gives different results when I change the parallelism to 8 (more than 1); the correct results come with parallelism 1, and I want to set it to 8. I do not know how to get the same result while changing the parallelism

Re: How to shorten MATCH_RECOGNIZE's DEFINE clause

2019-08-22 Thread Fabian Hueske
Hi Dongwon, I'm not super familiar with Flink's MATCH_RECOGNIZE support, but Dawid (in CC) might have some ideas about it. Best, Fabian On Wed, Aug 21, 2019 at 07:23, Dongwon Kim < eastcirc...@gmail.com> wrote: > Hi, > > Flink relational APIs with MATCH_RECOGNIZE look very attractive

Re: Error while sinking results to Cassandra using Flink Cassandra Connector

2019-08-22 Thread Fabian Hueske
Hi Manvi, A NoSuchMethodError typically indicates a version mismatch. I would check if the Flink versions of your program, the client, and the cluster are the same. Best, Fabian On Tue, Aug 20, 2019 at 21:09, manvmali wrote: > Hi, I am facing the issue of writing the data stream result

Re: combineGroup get false results

2019-08-22 Thread Fabian Hueske
Hi Anissa, Are you using combineGroup or reduceGroup? Your question refers to combineGroup, but the code only shows reduceGroup. combineGroup is non-deterministic by design to enable efficient partial results without network and disk IO. reduceGroup is deterministic given a deterministic key
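To illustrate the distinction, a small sketch with made-up records: reduceGroup sees every record of a group at once, so the per-group sum should not depend on parallelism, whereas combineGroup may only see per-partition subsets and is meant for pre-aggregation.

```
import org.apache.flink.api.scala._

object GroupSumSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    env.setParallelism(8)

    // Hypothetical records: (stringKey, longKey, amount)
    val rows = env.fromElements(
      ("a", 1L, 10.0), ("a", 1L, 5.0), ("b", 2L, 7.0))

    // Deterministic per-group sum, independent of parallelism.
    rows
      .groupBy(0, 1)
      .reduceGroup { records =>
        val list = records.toList
        val (k1, k2, _) = list.head
        (k1, k2, list.map(_._3).sum)
      }
      .print()
  }
}
```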

Re: timeout error while connecting to Kafka

2019-08-22 Thread miki haiat
Can you double check that the kafka instance is up? The code looks fine. Best, Miki On Thu, Aug 22, 2019 at 10:45 AM Eyal Pe'er wrote: > Hi, > > I'm trying to consume events using Apache Flink. > > The code is very basic, trying to connect to the topic, split words by space > and print them to

timeout error while connecting to Kafka

2019-08-22 Thread Eyal Pe'er
Hi, I'm trying to consume events using Apache Flink. The code is very basic, trying to connect to the topic, split words by space and print them to the console. The Kafka version is 0.9. import org.apache.flink.api.common.functions.FlatMapFunction; import

Re: Maximal watermark when two streams are connected

2019-08-22 Thread Sung Gon Yi
I use assignTimestampsAndWatermarks after connecting two streams and it works well. Thank you. > On 22 Aug 2019, at 3:26 PM, Jark Wu wrote: > > Hi Sung, > > Watermark will be advanced only when records come in if you are using > ".assignTimestampsAndWatermarks()". > One way to solve this

Re: Maximal watermark when two streams are connected

2019-08-22 Thread Jark Wu
Hi Sung, Watermark will be advanced only when records come in if you are using ".assignTimestampsAndWatermarks()". One way to solve this problem is you should call ".assignTimestampsAndWatermarks()" before the condition to make sure there are messages. Best, Jark On Thu, 22 Aug 2019 at 13:52,

Reply: Re: When joining a stream table with a dimension table, the dimension table is not queried dynamically

2019-08-22 Thread sjlsumait...@163.com
The dimension-table join approach I have tested so far broadcasts the dimension table as state, which makes it possible to update the dimension table dynamically; I don't know how you are implementing this at the moment. Also, when discussing this with others, some people rely on a third-party database for the dimension-table join, but I think a third-party database could become a bottleneck. sjlsumait...@163.com From: 雒正林 Sent: 2019-07-04 16:53 To: qi luo Cc: user-zh Subject: Re: When joining a stream table with a dimension table, the dimension table is not queried dynamically It is a source class, not a JDBCSinkFunction. My understanding is that a UDTF should be used for data transformation; can it do dynamic queries?
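For reference, a sketch of the broadcast-state approach described above; the Fact and DimRow types are hypothetical. The dimension stream is broadcast and kept in broadcast state, so updates on it are visible to subsequent main-stream records.

```
import org.apache.flink.api.common.state.MapStateDescriptor
import org.apache.flink.api.common.typeinfo.{BasicTypeInfo, TypeInformation}
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction
import org.apache.flink.streaming.api.scala._
import org.apache.flink.util.Collector

// Hypothetical types: facts carry a dimension id, dimension rows map id -> name.
case class Fact(dimId: String, amount: Double)
case class DimRow(dimId: String, name: String)

object BroadcastDimJoinSketch {
  val dimDescriptor = new MapStateDescriptor[String, DimRow](
    "dim-table", BasicTypeInfo.STRING_TYPE_INFO, TypeInformation.of(classOf[DimRow]))

  class DimJoin extends BroadcastProcessFunction[Fact, DimRow, String] {
    override def processElement(
        fact: Fact,
        ctx: BroadcastProcessFunction[Fact, DimRow, String]#ReadOnlyContext,
        out: Collector[String]): Unit = {
      // Look up the current version of the dimension row for this fact.
      val dim = Option(ctx.getBroadcastState(dimDescriptor).get(fact.dimId))
      out.collect(s"${fact.dimId} -> ${dim.map(_.name).getOrElse("unknown")}")
    }

    override def processBroadcastElement(
        row: DimRow,
        ctx: BroadcastProcessFunction[Fact, DimRow, String]#Context,
        out: Collector[String]): Unit = {
      // Every record on the dimension stream refreshes the broadcast state.
      ctx.getBroadcastState(dimDescriptor).put(row.dimId, row)
    }
  }

  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val facts = env.fromElements(Fact("d1", 1.0), Fact("d2", 2.0))
    val dims = env.fromElements(DimRow("d1", "first"), DimRow("d2", "second"))

    facts
      .connect(dims.broadcast(dimDescriptor))
      .process(new DimJoin)
      .print()

    env.execute("broadcast-dim-join-sketch")
  }
}
```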