Re: Failing to build Flink 1.9 using Scala 2.12

2022-12-28 Thread Milind Vaidya
Hi David, Thanks for this useful information. This unblocked me, and the local build now succeeds, which was not the case earlier. I have a few more questions, though. - Are there any requirements for the Maven and/or plugin versions? - Another error says *Failure to find

Re: Failing to build Flink 1.9 using Scala 2.12

2022-12-24 Thread David Anderson
Flink only officially supports Scala 2.12 up to 2.12.7 -- you are running into the binary compatibility check, intended to keep you from unknowingly running into problems. You can disable japicmp, and everything will hopefully work: mvn clean install -DskipTests -Djapicmp.skip -Dscala-2.12

Failing to build Flink 1.9 using Scala 2.12

2022-12-23 Thread Milind Vaidya
Hi First of all, I do understand that I am using a very old version. But as of now the team can not help it. We need to move to Scala 2.12 first and then we will move forward towards the latest version of Flink. I have added following things to main pom.xml 2.11.12 2.11 Under Scala-2.11
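For context, the two stripped values in the preview above are almost certainly the Scala version properties from Flink's root pom.xml; a sketch of the relevant block as it would read under the scala-2.11 profile (property names from the Flink build, exact placement inferred):

    <properties>
        <scala.version>2.11.12</scala.version>
        <scala.binary.version>2.11</scala.binary.version>
    </properties>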

Re: Failing to compile Flink 1.9 with Scala 2.12

2022-08-30 Thread Martijn Visser
Vaidya" > *收件人: *"Weihua Hu" > *抄送: *"User" > *发送时间: *星期五, 2022年 8 月 19日 上午 1:26:45 > *主题: *Re: Failing to compile Flink 1.9 with Scala 2.12 > > Hi Weihua, > > Thanks for the update. I do understand that, but right now it is not > possible to

Re: Failing to compile Flink 1.9 with Scala 2.12

2022-08-18 Thread yuxia
At least for Flink 1.15, it's recommended to use Maven 3.2.5, so I guess you can try using a lower Maven version. Best regards, Yuxia From: "Milind Vaidya" To: "Weihua Hu" Cc: "User" Sent: Friday, August 19, 2022 1:26:45 AM Subject: Re: Failing to compile

Re: Failing to compile Flink 1.9 with Scala 2.12

2022-08-18 Thread Milind Vaidya
Hi Weihua, Thanks for the update. I do understand that, but right now it is not possible to update immediately to 1.15, so I wanted to know what the way out is. - Milind On Thu, Aug 18, 2022 at 7:19 AM Weihua Hu wrote: > Hi > Flink 1.9 has not been updated since 2020-04-24; it's recommended

Re: Failing to compile Flink 1.9 with Scala 2.12

2022-08-18 Thread Weihua Hu
Hi Flink 1.9 has not been updated since 2020-04-24; it's recommended to use the latest stable version, 1.15.1. Best, Weihua On Thu, Aug 18, 2022 at 5:36 AM Milind Vaidya wrote: > Hi > > Trying to compile and build Flink jars based on Scala 2.12. > > Settings : > Java 8 >

Failing to compile Flink 1.9 with Scala 2.12

2022-08-17 Thread Milind Vaidya
Hi Trying to compile and build Flink jars based on Scala 2.12. Settings : Java 8 Maven 3.6.3 / 3.8.6 Many online posts suggest using Java 8, which is already in place. Building using Jenkins. Any clues as to how to get rid of the error? net.alchim31.maven : scala-maven-plugin : 3.3.2 -nobootcp
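The stripped XML in the preview points at the scala-maven-plugin coordinates; a sketch of what that plugin block plausibly looks like in the pom.xml (the full configuration is not shown in the mail, so the args section is an assumption):

    <plugin>
        <groupId>net.alchim31.maven</groupId>
        <artifactId>scala-maven-plugin</artifactId>
        <version>3.3.2</version>
        <configuration>
            <args>
                <!-- do not put the Scala library on the boot classpath -->
                <arg>-nobootcp</arg>
            </args>
        </configuration>
    </plugin>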

Re: idleTimeMsPerSecond on Flink 1.9?

2021-03-10 Thread Till Rohrmann
Hi Lakshmi, as you have said, the StreamTask code base has evolved quite a bit between Flink 1.9 and Flink 1.12. With the mailbox model it now works quite differently. Moreover, the community no longer actively maintains versions < 1.11. Hence, if possible, I would recommend that you upgrade to

idleTimeMsPerSecond on Flink 1.9?

2021-03-08 Thread Lakshmi Gururaja Rao
Hi I'm trying to understand the implementation of idleTimeMsPerSecond. Specifically, what I'm trying to do is adapt this metric to be used with Flink 1.9 (for a fork). I tried an approach similar to this PR <https://github.com/apache/flink/pull/11564/files> and measuring the time to r

Re: flink 1.9: a question about retract streams

2020-09-17 Thread godfrey he
ate? > > star <3149768...@qq.com> wrote on Monday, June 8, 2020 at 9:38 AM: > > > Many thanks, that is exactly what I wanted. Thanks also to Jinzhu for sharing! > > -- Original message -- > From: "Sun.Zhu"<17626017...@163.com> > Sent: Sunday, June 7, 2020 0:02 >

Re: flink 1.9: a question about retract streams

2020-09-15 Thread Shengkai Fang
017...@163.com> > Sent: Sunday, June 7, 2020 0:02 > To: "user-zh@flink.apache.org" Cc: "user-zh@flink.apache.org" Subject: Re: flink 1.9: a question about retract streams > > Hi, star > Jinzhu published an article rewriting the KafkaConnector implementation to support upsert mode; you can refer to it [1] > > [1]https://mp.weixin.qq.com/s/MSs7HSaegyW

Re: Pushing metrics to Influx from Flink 1.9 on AWS EMR(5.28)

2020-08-14 Thread bat man
Hello Arvid, Thanks, I'll check my config, use the correct reporter, and test it out. Thanks, Hemant On Fri, 14 Aug 2020 at 6:57 PM, Arvid Heise wrote: > Hi Hemant, > > according to the influx section of the 1.9 metric documentation [1], you > should use the reporter without a factory. The

Re: Pushing metrics to Influx from Flink 1.9 on AWS EMR(5.28)

2020-08-14 Thread Arvid Heise
Hi Hemant, according to the influx section of the 1.9 metric documentation [1], you should use the reporter without a factory. The factory was added later. metrics.reporter.influxdb.class: org.apache.flink.metrics.influxdb.InfluxdbReporter metrics.reporter.influxdb.host:
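Spelled out, the factory-less configuration from the Flink 1.9 docs looks like the sketch below in flink-conf.yaml (the host, port, and database values here are placeholders, not from the original mail):

    metrics.reporter.influxdb.class: org.apache.flink.metrics.influxdb.InfluxdbReporter
    metrics.reporter.influxdb.host: localhost
    metrics.reporter.influxdb.port: 8086
    metrics.reporter.influxdb.db: flink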

Re: Pushing metrics to Influx from Flink 1.9 on AWS EMR(5.28)

2020-08-13 Thread bat man
Has anyone integrated metrics with external systems for Flink running on AWS EMR? Can you share whether this is a configuration issue or an EMR-specific issue? Thanks, Hemant On Wed, Aug 12, 2020 at 9:55 PM bat man wrote: > An update: in the yarn logs I could see the below - > > Classpath: >

Re: Pushing metrics to Influx from Flink 1.9 on AWS EMR(5.28)

2020-08-12 Thread bat man
An update: in the yarn logs I could see the below - Classpath:

Pushing metrics to Influx from Flink 1.9 on AWS EMR(5.28)

2020-08-12 Thread bat man
Hello Experts, I am running Flink 1.9.0 on AWS EMR (emr-5.28.1). I want to push metrics to InfluxDB. I followed the documentation [1]. I added the configuration to /usr/lib/flink/conf/flink-conf.yaml and copied the jar to the /usr/lib/flink/lib folder on the master node. However, I also understand that

Re: Handle idle kafka source in Flink 1.9

2020-08-05 Thread bat man
Hello Arvid, Thanks for the suggestion/reference, and my apologies for the late reply. With this I am able to process the data even with some topics not having regular data. Obviously, late data is being handled via a side output, and there is a process for it. One challenge is to handle the back-fill as

Re: Any change in behavior related to the "web.upload.dir" behavior between Flink 1.9 and 1.11

2020-08-03 Thread Avijit Saha
the image? This exception should only > be thrown if there is already a file with the same path, and I don't think > Flink would do that. > > On 03/08/2020 21:43, Avijit Saha wrote: > > Hello, > > Has there been any change in behavior related to the "web.upload.dir"

Re: Any change in behavior related to the "web.upload.dir" behavior between Flink 1.9 and 1.11

2020-08-03 Thread Chesnay Schepler
change in behavior related to the "web.upload.dir" behavior between Flink 1.9 and 1.11? I have a failure case where when build an image using "flink:1.11.0-scala_2.12" in Dockerfile, the job manager job submissions fail with the following Exception but the same flow works

Any change in behavior related to the "web.upload.dir" behavior between Flink 1.9 and 1.11

2020-08-03 Thread Avijit Saha
Hello, Has there been any change in behavior related to the "web.upload.dir" behavior between Flink 1.9 and 1.11? I have a failure case where, when building an image using "flink:1.11.0-scala_2.12" in the Dockerfile, job manager job submissions fail with the following Excepti

Re: Between Flink 1.9 and 1.11 - any behavior change for web.upload.dir

2020-08-03 Thread Avijit Saha
Hello, Has there been any change in behavior related to the "web.upload.dir" behavior between Flink 1.9 and 1.11? I have a failure case where, when building an image using "flink:1.11.0-scala_2.12" in the Dockerfile, job manager job submissions fail with the following Excepti

Re: Handle idle kafka source in Flink 1.9

2020-07-30 Thread Arvid Heise
Hi Hemant, sorry for the late reply. You can just create your own watermark assigner and either copy the assigner from Flink 1.11 or take the one that we use in our trainings [1]. [1]
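For reference, a minimal sketch of such a hand-rolled assigner on the Flink 1.9 API (MyEvent, its getEventTime() accessor, and both timeout values are made-up placeholders, not code from the thread or the linked training):

    import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks;
    import org.apache.flink.streaming.api.watermark.Watermark;

    public class IdleAwareAssigner implements AssignerWithPeriodicWatermarks<MyEvent> {
        private final long maxOutOfOrderness = 5_000L; // 5s tolerance, an assumption
        private final long maxIdleTime = 60_000L;      // advance after 1 min without data
        private long maxTimestamp = Long.MIN_VALUE + 5_000L;
        private long lastElementAt = System.currentTimeMillis();

        @Override
        public long extractTimestamp(MyEvent element, long previousElementTimestamp) {
            lastElementAt = System.currentTimeMillis();
            maxTimestamp = Math.max(maxTimestamp, element.getEventTime());
            return element.getEventTime();
        }

        @Override
        public Watermark getCurrentWatermark() {
            // if no element arrived recently, advance using wall-clock time so that
            // idle partitions do not hold back downstream event time
            boolean idle = System.currentTimeMillis() - lastElementAt > maxIdleTime;
            long ts = idle ? System.currentTimeMillis() : maxTimestamp;
            return new Watermark(ts - maxOutOfOrderness);
        }
    }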

Re: Handle idle kafka source in Flink 1.9

2020-07-23 Thread bat man
Thanks Niels for a great talk. You have covered two of my pain areas - slim and broken streams - since I am dealing with device data from on-prem data centers. The first option of generating fabricated watermark events is fine; however, as mentioned in your talk, how are you handling forwarding it to

Re: Handle idle kafka source in Flink 1.9

2020-07-23 Thread Niels Basjes
Have a look at this presentation I gave a few weeks ago. https://youtu.be/bQmz7JOmE_4 Niels Basjes On Wed, 22 Jul 2020, 08:51 bat man, wrote: > Hi Team, > > Can someone share their experiences handling this. > > Thanks. > > On Tue, Jul 21, 2020 at 11:30 AM bat man wrote: > >> Hello, >> >> I

Re: Handle idle kafka source in Flink 1.9

2020-07-22 Thread bat man
Hi Team, Can someone share their experiences handling this? Thanks. On Tue, Jul 21, 2020 at 11:30 AM bat man wrote: > Hello, > > I have a pipeline which consumes data from a Kafka source. Since the > partitions are partitioned by device_id, in case a group of devices is down > some partitions

Handle idle kafka source in Flink 1.9

2020-07-21 Thread bat man
Hello, I have a pipeline which consumes data from a Kafka source. Since the partitions are partitioned by device_id, in case a group of devices is down some partitions will not get a normal flow of data. I understand from the documentation here [1] that in flink 1.11 one can declare the source idle -
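For comparison, this is roughly what the Flink 1.11 API referenced above looks like (a sketch; MyEvent and both durations are placeholders):

    import java.time.Duration;
    import org.apache.flink.api.common.eventtime.WatermarkStrategy;

    // partitions that deliver no data for 1 minute are marked idle and
    // no longer hold back the watermark
    WatermarkStrategy<MyEvent> strategy = WatermarkStrategy
        .<MyEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
        .withIdleness(Duration.ofMinutes(1));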

Handling nested-field aliases when registering with StreamTableEnvironment registerDataStream in flink 1.9

2020-07-03 Thread Jun Zou
Hi, when registering a table with the flink 1.9 StreamTableEnvironment, I want to specify a column alias for a nested field, for example: String fieldExprsStr = "modbus.parsedResponse,timestamp"; tableEnv.registerDataStream(src.getName(), srcStream, fieldExprsStr); While validating modbus.parsedResponse the

Re: flink 1.9 custom UDAF: is the state management logic implemented for you?

2020-06-07 Thread star
[unrecoverable mojibake] -- Original message -- From: "Benchao Li"

Re: flink 1.9 custom UDAF: is the state management logic implemented for you?

2020-06-07 Thread Benchao Li
also checkpointed > > -- Original message -- > From: "Benchao Li" Sent: Monday, June 8, 2020 10:46 AM > To: "user-zh" > Subject: Re: flink 1.9 custom UDAF: is the state management logic implemented for you? > > I don't fully understand your question. > Are you asking how a UDAF's state is managed by Flink? > Or how, if you use state inside a UDAF, you should manage it yourself? >

Re: flink 1.9 custom UDAF: is the state management logic implemented for you?

2020-06-07 Thread star
[mostly unrecoverable mojibake; the fragments mention the UDAF, an ArrayList-based accumulator, and checkpointing] -- Original message -- From: "Benchao Li"

Re: flink 1.9 custom UDAF: is the state management logic implemented for you?

2020-06-07 Thread Benchao Li
I don't fully understand your question. Are you asking how a UDAF's state is managed by Flink? Or how, if you use state inside a UDAF, you should manage it yourself? star <3149768...@qq.com> wrote on Mon, Jun 8, 2020 at 10:44 AM: > A question for the list: > > for a custom UDAF in flink 1.9, is the state management logic implemented for you? > > Or do you manage the state yourself, as with sql? > > class MyFunc extends AggregateFunction{ > createAccu

flink 1.9 custom UDAF: is the state management logic implemented for you?

2020-06-07 Thread star
Hi all, for a custom UDAF in flink 1.9, is the state management logic implemented for you, or do you manage the state yourself, as with sql? class MyFunc extends AggregateFunction{ createAccumulator accumulate getValue merge }
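As an aside, the skeleton in the question expands to roughly the sketch below (names invented here; the accumulator returned by createAccumulator() is snapshotted by Flink itself, so the function needs no explicit state handling):

    import org.apache.flink.table.functions.AggregateFunction;

    public class MySum extends AggregateFunction<Long, MySum.Acc> {
        public static class Acc { public long sum; }

        @Override
        public Acc createAccumulator() { return new Acc(); }

        // called for every input row; the accumulator is Flink-managed state
        public void accumulate(Acc acc, Long value) { acc.sum += value; }

        @Override
        public Long getValue(Acc acc) { return acc.sum; }

        // used when accumulators need to be combined, e.g. session window merging
        public void merge(Acc acc, Iterable<Acc> others) {
            for (Acc other : others) { acc.sum += other.sum; }
        }
    }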

Re: flink 1.9: a question about retract streams

2020-06-06 Thread Sun.Zhu
Hi, star - Jinzhu published an article rewriting the KafkaConnector implementation to support upsert mode; see [1] [1]https://mp.weixin.qq.com/s/MSs7HSaegyWWU3Fig2PYYA | Sun.Zhu | 17626017...@163.com | On Jun 3, 2020 at 14:47, star<3149768...@qq.com> wrote

Re: Re: flink 1.9: a question about retract streams

2020-06-04 Thread Michael Ran
[mostly unrecoverable mojibake; the fragments mention a topic and tables] -- From: "godfrey he" Sent: Wednesday, June 3, 2020 3:40 PM To: "user-zh" Subject: Re: flink 1

Re: flink 1.9: a question about retract streams

2020-06-03 Thread star
[mostly unrecoverable mojibake; the fragments mention append streams, state, and distinct]

Re: flink 1.9: a question about retract streams

2020-06-03 Thread LakeShen
> 发件人:"godfrey he" 发送时间:2020年6月3日(星期三) 下午3:40 > 收件人:"user-zh" > 主题:Re: flink 1.9 关于回撤流的问题 > > > > hi star, > Flink 1.11 开始已经支持 table source 读取 retract 消息,update 消息。 > 目前支持 Debezium format,Canal format [1],其他的情况目前需要自己实现。 > >

Re: flink 1.9: a question about retract streams

2020-06-03 Thread star
[mojibake around a recoverable query] select year,month,day,province,sub_name,sum(amount),count(*) as cou from mytable group by year,month,day,province,sub_name; [remainder unrecoverable]

Re: flink 1.9: a question about retract streams

2020-06-03 Thread godfrey he
tar"<3149768...@qq.com; > 发送时间:2020年6月3日(星期三) 下午2:47 > 收件人:"user-zh@flink.apache.org" > 主题:flink 1.9 关于回撤流的问题 > > > > 大家好, > > > > 在做实时统计分析的时候我将基础汇总层做成了一个回撤流(toRetractStream)输出到kafka里(因为后面很多实时报表依赖这个流,所以就输出了) > 问题是这个kafka里到回撤流还能转成flink 的RetractStream流吗,转成后我想注册成一张表,然后基于这个表再做聚合分析 > > > > > 谢谢

Re: flink 1.9: a question about retract streams

2020-06-03 Thread 1048262223
Hi [mostly unrecoverable mojibake; the fragments mention the Flink RetractStream, an update-aware sink, and Kafka] Best, Yichao Yang -- Original message -- From: "star"<3149768...@qq.com> Sent: Wednesday, June 3, 2020 2:47 PM

flink 1.9: a question about retract streams

2020-06-03 Thread star
When doing real-time analytics I turned the base aggregation layer into a retract stream (toRetractStream) and wrote it to Kafka. Can the retract stream in Kafka be converted back into a Flink RetractStream, registered as a table, and aggregated further?

Re: Re: flink 1.9/1.10 on yarn: how to set up a client on CDH

2020-05-28 Thread air23
link on yarn works fine, jobs run fine, and you can even use Flink on > Hive! > flink-shaded-hadoop-2-uber-2.6.5-10.0.jar > From: 111 > Sent: 2020-05-28 09:13 > To: user-zh@flink.apache.org > Subject: Re: flink 1.9/1.10 on yarn: how to set up a client on CDH > Hi, > Generally, as long as you have a YARN environment, download the Flink distribution on any machine,

Re: flink 1.9/1.10 on yarn: how to set up a client on CDH

2020-05-27 Thread wangweigu...@stevegame.cn
ject: Re: flink 1.9/1.10 on yarn: how to set up a client on CDH Hi, Generally, as long as you have a YARN environment, download the Flink distribution on any machine and set the HADOOP_CONF environment variable, and it is ready to use. For session mode: start a yarn session cluster with yarn-session.sh, then submit programs with flink run xxx. For per-job mode: just use flink run directly. best, Xinghalo

Re: flink 1.9/1.10 on yarn: how to set up a client on CDH

2020-05-27 Thread 111
Hi, Generally, as long as you have a YARN environment, download the Flink distribution on any machine and set the HADOOP_CONF environment variable, and it is ready to use. For session mode: start a yarn session cluster with yarn-session.sh, then submit programs with flink run xxx. For per-job mode: just use flink run directly. best, Xinghalo
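Concretely, the two submission modes described above look something like this (paths, memory sizes, and the jar are placeholders):

    # session mode: start a long-running YARN session, then submit into it
    ./bin/yarn-session.sh -jm 1024m -tm 2048m
    ./bin/flink run ./examples/streaming/WordCount.jar

    # per-job mode: each submission brings up its own YARN application
    ./bin/flink run -m yarn-cluster ./examples/streaming/WordCount.jar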

flink 1.9/1.10 on yarn: how to set up a client on CDH

2020-05-27 Thread 王飞
How do I set up a client for flink 1.9/1.10 on CDH? I need a client to start flink on yarn. Version 1.7 works fine, but 1.9 and 1.10 have problems starting on-yarn jobs. My environment is CDH Hadoop. Thanks for any answers

Re: flink 1.9 conflict jackson version

2020-04-07 Thread Fanbin Bu
Customized via NetEase Mail Master <https://mail.163.com/dashi/dlpro.html?from=mail81> >> On 12/17/2019 08:10, Fanbin Bu >> wrote: >> Hi, >> After I upgraded flink 1.9, I got the following error message on EMR; it >> works locally in IntelliJ. >

Re: flink 1.9 conflict jackson version

2020-04-07 Thread aj
Signature customized via NetEase Mail Master <https://mail.163.com/dashi/dlpro.html?from=mail81> > On 12/17/2019 08:10, Fanbin Bu > wrote: > Hi, > After I upgraded flink 1.9, I got the following error message on EMR,

Flink(≥1.9) Table/SQL Trigger

2020-03-30 Thread Jimmy Wong
Hi all: As I recall, Flink (≥1.9) SQL/Table does not support custom triggers such as CountTrigger.of(1). For Flink (≥1.9) SQL/Table, how can one implement a custom trigger, e.g. CountTrigger (a per-record trigger) or ContinuousEventTimeTrigger (a specific-time trigger)? | Jimmy Wong | wangzmk...@163.com | Signature customized via NetEase Mail Master

Re: flink 1.9 with the FsStateBackend state backend: a warning when modifying a checkpoint

2020-03-27 Thread Congxian Qiu
Hi, judging from the error you are using the State Processor API, and some of its functions (here getMetricGroup) are not supported, hence this warning. If you did not call this function explicitly, it may be a bug. Best, Congxian guanyq wrote on Wed, Mar 25, 2020 at 3:24 PM: >

flink 1.9 with the FsStateBackend state backend: a warning when modifying a checkpoint

2020-03-25 Thread guanyq
package com.guanyq.study.libraries.stateProcessorApi.FsStateBackend; import org.apache.flink.api.common.functions.FlatMapFunction; import org.apache.flink.api.common.state.ListState; import org.apache.flink.api.common.state.ListStateDescriptor; import

Re: Hadoop user jar for flink 1.9 plus

2020-03-20 Thread Vishal Santoshi
.8.x on production and were planning to go to > > flink 1.9 or above. We have always used hadoop uber jar from > > > https://mvnrepository.com/artifact/org.apache.flink/flink-shaded-hadoop2-uber > but > > > it seems they go up to 1.8.3 and their distribution ends 2

Re: Hadoop user jar for flink 1.9 plus

2020-03-17 Thread Chesnay Schepler
You can download flink-shaded-hadoop from the downloads page: https://flink.apache.org/downloads.html#additional-components On 17/03/2020 15:56, Vishal Santoshi wrote: We have been on flink 1.8.x on production and were planning to go to flink 1.9 or above. We have always used hadoop uber jar

Hadoop user jar for flink 1.9 plus

2020-03-17 Thread Vishal Santoshi
We have been on flink 1.8.x on production and were planning to go to flink 1.9 or above. We have always used hadoop uber jar from https://mvnrepository.com/artifact/org.apache.flink/flink-shaded-hadoop2-uber but it seems they go up to 1.8.3 and their distribution ends 2019. How do or where do we

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-02-27 Thread godfrey he
>>> Sorry, 1.10 does not support "CREATE VIEW" in raw SQL either. The workaround is: >>> - Using TableEnvironment.createTemporaryView... >>> - Or using "create view" and "drop view" in the sql-client. >
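The first workaround, sketched against a Flink 1.10 TableEnvironment (the table names are taken from the code quoted later in this thread):

    // register the big intermediate query as a temporary view instead of CREATE VIEW
    Table joined = tableEnv.sqlQuery(
        "SELECT * FROM sample1 FULL OUTER JOIN sample2 ON sample1.f0 = sample2.f0");
    tableEnv.createTemporaryView("my_view", joined);
    Table result = tableEnv.sqlQuery("SELECT * FROM my_view");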

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-02-27 Thread kant kodali
>> FLIP-71 will be finished in 1.11 soon. >> Best, >> Jingsong Lee >> On Sun, Jan 19, 2020 at 4:10 PM kant kodali >> wrote: >

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-02-27 Thread kant kodali
will be finished in 1.11 soon. >>>>> Best, >>>>> Jingsong Lee >>>>> On Sun, Jan 19, 2020 at 4:10 PM kant kodali >>>>> wrote: >>>>>> I tried the following. >>

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-02-26 Thread godfrey he
ableEnv.sqlUpdate("CREATE VIEW my_view AS SELECT * FROM sample1 FULL >>>>> OUTER JOIN sample2 on sample1.f0=sample2.f0"); >>>>> >>>>> Table result = bsTableEnv.sqlQuery("select * from my_view"); >>>>> >>>>

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-02-25 Thread kant kodali
sult = bsTableEnv.sqlQuery("select * from my_view"); >>>> >>>> It looks like >>>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-71+-+E2E+View+support+in+FLINK+SQL >>>> Views >>>> are not supported. Can I expect them to be supporte

【checkpoint】Flink 1.9 , checkpoint is declined for exception with message 'Pending record count must be zero at this point'

2020-02-20 Thread tao wang
Hi all, can someone help me? Thanks. The full exception is as follows. > 2020-02-21 08:32:15,738 INFO > org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Discarding > checkpoint 941 of job 0e16cf38a0bff313544e1f31d078f75b. > org.apache.flink.runtime.checkpoint.CheckpointException:

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-01-20 Thread Jingsong Li
>>> It looks like >>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-71+-+E2E+View+support+in+FLINK+SQL >>> Views are not supported. Can I expect them to be supported in Flink 1.10? >>> >>> Currently, with Spark SQL when the query g

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-01-20 Thread Jark Wu
> Currently, with Spark SQL when the query gets big I break it down into views, and this is one of the most important features my application relies on. Is there any workaround for this at the moment? > > Thanks! > > On Sat, Jan 18, 2020 at 6:24 PM kant kodali <kanth...@gmail.com> wrote: > Hi All, > > Does Flink 1.9 support create or replace views syntax in raw SQL? like spark > streaming does? > > Thanks! > > -- > Best, Jingsong Lee

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-01-20 Thread kant kodali
1+-+E2E+View+support+in+FLINK+SQL >> Views >> are not supported. Can I expect them to be supported in Flink 1.10? >> >> Currently, with Spark SQL when the query gets big I break it down into >> views and this is one of the most important features my application relies >

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-01-20 Thread Jingsong Li
hen the query gets big I break it down into > views and this is one of the most important features my application relies > on. is there any workaround for this at the moment? > > Thanks! > > On Sat, Jan 18, 2020 at 6:24 PM kant kodali wrote: > >> Hi All, >> >&

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-01-19 Thread kant kodali
the moment? Thanks! On Sat, Jan 18, 2020 at 6:24 PM kant kodali wrote: > Hi All, > > Does Flink 1.9 support create or replace views syntax in raw SQL? like > spark streaming does? > > Thanks! >

Does Flink 1.9 support create or replace views syntax in raw sql?

2020-01-18 Thread kant kodali
Hi All, Does Flink 1.9 support create or replace views syntax in raw SQL? like spark streaming does? Thanks!

Re: are blink changes merged into flink 1.9?

2020-01-12 Thread Benchao Li
ner >>>- Old Planner (Legacy Planner) >>> >>> You can try out blink planner by [2]. >>> Hope this helps. >>> >>> [1] >>> https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/common.html#main-differences-between-the-two

Re: are blink changes merged into flink 1.9?

2020-01-12 Thread Benchao Li
n try out blink planner by [2]. >> Hope this helps. >> >> [1] >> https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/common.html#main-differences-between-the-two-planners >> [2] >> https://ci.apache.org/projects/flink/flink-docs-release-1.9/d

Re: are blink changes merged into flink 1.9?

2020-01-12 Thread kant kodali
ommon.html#main-differences-between-the-two-planners > [2] > https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/common.html#create-a-tableenvironment > > > kant kodali 于2020年1月12日周日 上午7:48写道: > >> Hi All, >> >> Are blink changes merged into fli

Re: are blink changes merged into flink 1.9?

2020-01-11 Thread Benchao Li
/common.html#main-differences-between-the-two-planners [2] https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/common.html#create-a-tableenvironment kant kodali 于2020年1月12日周日 上午7:48写道: > Hi All, > > Are blink changes merged into flink 1.9? It looks like there are a lot of &
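Per the "create a TableEnvironment" docs linked as [2], opting into the Blink planner on Flink 1.9 looks like this sketch:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    // select the Blink planner instead of the legacy planner
    EnvironmentSettings settings = EnvironmentSettings.newInstance()
        .useBlinkPlanner()
        .inStreamingMode()
        .build();
    TableEnvironment tableEnv = TableEnvironment.create(settings);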

are blink changes merged into flink 1.9?

2020-01-11 Thread kant kodali
Hi All, Are the Blink changes merged into flink 1.9? It looks like there are a lot of features and optimizations in Blink, and if they aren't merged into flink 1.9 I am not sure which one to use. Is there any plan to merge them? Thanks!

Re: Migrate custom partitioner from Flink 1.7 to Flink 1.9

2020-01-08 Thread Arvid Heise
Hi Salva, I already answered on SO [1], but I'll replicate it here: With Flink 1.9, you cannot dynamically broadcast to all channels anymore. Your StreamPartitioner has to statically specify if it's a broadcast with isBroadcast. Then, selectChannel is never invoked. Do you have a specific use

Re: Migrate custom partitioner from Flink 1.7 to Flink 1.9

2020-01-03 Thread Salva Alcántara
ation(), new MyDynamicPartitioner()) ) ``` The problem when migrating to Flink 1.9 is that MyDynamicPartitioner cannot handle broadcasted elements as explained in the question description. So, based on your reply, I guess I could do something like this: ``` resultSingleChannel = ne

Re: Migrate custom partitioner from Flink 1.7 to Flink 1.9

2020-01-03 Thread Chesnay Schepler
adcast().union(singleChannel) // apply operations on result On 26/12/2019 08:20, Salva Alcántara wrote: I am trying to migrate a custom dynamic partitioner from Flink 1.7 to Flink 1.9. The original partitioner implemented the `selectChannels` method within the `StreamPartitioner` interfac
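Chesnay's workaround, expanded into a sketch (the Event type, its methods, and the partitioner class are hypothetical): split the traffic, statically broadcast one branch, custom-partition the other, and union the two.

    // Flink 1.9 partitioners can no longer return all channels dynamically from
    // selectChannel(), so broadcast is declared statically on one branch instead
    DataStream<Event> toAll = input.filter(e -> e.isForEveryone()).broadcast();
    DataStream<Event> toOne = input.filter(e -> !e.isForEveryone())
            .partitionCustom(new MySingleChannelPartitioner(), Event::getKey);
    DataStream<Event> result = toAll.union(toOne);
    // apply operations on result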

Migrate custom partitioner from Flink 1.7 to Flink 1.9

2019-12-25 Thread Salva Alcántara
I am trying to migrate a custom dynamic partitioner from Flink 1.7 to Flink 1.9. The original partitioner implemented the `selectChannels` method within the `StreamPartitioner` interface like this: ```java // Original: working for Flink 1.7 //@Override public int[] selectChannels

Re: Flink 1.9 SQL Kafka Connector, JSON format: how to deal with non-JSON messages?

2019-12-25 Thread Jark Wu
Hi LakeShen, I'm sorry, there is no such configuration for the json format currently. I think it makes sense to add a configuration like the 'format.ignore-parse-errors' option of the csv format. I created FLINK-15396 [1] to track this. Best, Jark [1]: https://issues.apache.org/jira/browse/FLINK-15396 On Thu,

Flink 1.9 SQL Kafka Connector, JSON format: how to deal with non-JSON messages?

2019-12-25 Thread LakeShen
Hi community, when I write flink ddl sql like this: CREATE TABLE kafka_src ( id varchar, a varchar, b TIMESTAMP, c TIMESTAMP ) with ( ... 'format.type' = 'json', 'format.property-version' = '1', 'format.derive-schema' = 'true', 'update-mode' = 'append' ); If the

Re: Re: FLINK 1.9 + YARN + SessionWindows + large data volume + OOM after running for a while

2019-12-18 Thread Xintong Song
- "TaskManager分配用于排序,hash表和缓存中间结果的内存位于JVM堆外" 这个是针对 batch (dataset / blink sql) 作业的,我看你跑的应该是 streaming 作业,把 taskmanager.memory.off-heap 设成 true 只是单纯为了减小 jvm heap size,留出空间给 rocksdb。 - 有一个 flink-examples 的目录,里面有一些示例作业,不过主要是展示 api 用法的。部署、资源调优方面的示例暂时还没有。 - 另外,我在上一封邮件里描述的解决方案

Re: Re: FLINK 1.9 + YARN + SessionWindows + large data volume + OOM after running for a while

2019-12-18 Thread USERNAME
@tonysong...@gmail.com Thanks for the reply. Looking at the meaning of the parameter: taskmanager.memory.off-heap: if set to true, the TaskManager allocates the memory used for sorting, hash tables, and caching of intermediate results outside the JVM heap. For setups with larger memory this can improve the efficiency of operations performed on that memory (default false). The JVM on-heap memory is capped by YARN while off-heap memory is not; if so, that indeed explains my problem. I have made the change and am testing it. Many thanks, tonysong...@gmail.com

Re: FLINK 1.9 + YARN + SessionWindows + large data volume + OOM after running for a while

2019-12-17 Thread Xintong Song
This is not an OOM: the container exceeded its memory budget and was killed by YARN. JVM memory cannot be overused - that would throw an OOM - so the overage most likely comes from RocksDB's growing memory usage. Suggestions: 1. Add the following configuration: taskmanager.memory.off-heap: true taskmanager.memory.preallocate: false 2. If you already use this configuration, or the problem persists after the change, try increasing the following setting (default 0.25): containerized.heap-cutoff-ratio Thank you~ Xintong Song
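Collected into flink-conf.yaml, the suggested changes look like this sketch (the cutoff value 0.35 is an illustrative choice, not from the mail):

    taskmanager.memory.off-heap: true
    taskmanager.memory.preallocate: false
    # fraction of container memory reserved outside the JVM heap; default 0.25
    containerized.heap-cutoff-ratio: 0.35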

FLINK 1.9 + YARN + SessionWindows + large data volume + OOM after running for a while

2019-12-17 Thread USERNAME
Version: flink 1.9.1 -- Run command flink run -d -m yarn-cluster -yn 40 -ys 2 -- Partial code env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime); RocksDBStateBackend backend = new RocksDBStateBackend(CHECKPOINT_PATH, true); .keyBy("imei") // 100K+

flink 1.9 conflict jackson version

2019-12-16 Thread Fanbin Bu
Hi, After I upgraded to flink 1.9, I got the following error message on EMR; it works locally in IntelliJ. I'm explicitly declaring the dependency as implementation 'com.fasterxml.jackson.module:jackson-module-scala_2.11:2.10.1' and I have implementation group: 'com.amazonaws', name: 'aws-java-sdk

Re: Flink 1.9 Sql Rowtime Error

2019-11-01 Thread OpenInx
Hi Polarisary. Checked the flink codebase and your stacktraces; it seems you need to format the timestamp as: "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'" The code is here:

Flink 1.9 Sql Rowtime Error

2019-11-01 Thread Polarisary
Hi All: I have defined a kafka connector Descriptor and registered the Table tEnv.connect(new Kafka() .version("universal") .topic(tableName) .startFromEarliest() .property("zookeeper.connect", "xxx") .property("bootstrap.servers", "xxx")

Re: Flink 1.9 measuring time taken by each operator in DataStream API

2019-10-25 Thread Fabian Hueske
Hi Komal, Measuring latency is always a challenge. The problem here is that your functions are chained, meaning that the result of a function is directly passed on to the next function, and only when the last function emits the result is the first function called with a new record. This makes
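One way to get per-operator numbers in this situation (at a performance cost, so a diagnostic measure rather than a production setting) is to break the chains so each operator runs, and is measured, separately; the function name below is a placeholder:

    // job-wide: no two operators share a chain
    env.disableOperatorChaining();

    // or selectively, for a single operator only
    stream.map(new ParseFn()).disableChaining();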

Flink 1.9 measuring time taken by each operator in DataStream API

2019-10-24 Thread Komal Mariam
Hello, I have a few questions regarding flink's dashboard and monitoring tools. I have a fixed number of records that I process through the DataStream API on my standalone cluster, and I want to know how long it takes to process them. My questions are: 1) How can I see the time taken in

Re: flink 1.9

2019-10-18 Thread Gezim Sejdiu
Hi Chesnay, I see. Many thanks for your prompt reply. Will make use of the flink-shaded-hadoop-uber jar when deploying Flink using Docker, starting from Flink v1.8.0. Best regards, On Fri, Oct 18, 2019 at 1:30 PM Chesnay Schepler wrote: > We will not release Flink versions bundling Hadoop. > > The

Re: flink 1.9

2019-10-18 Thread Chesnay Schepler
We will not release Flink versions bundling Hadoop. The versioning for flink-shaded-hadoop-uber is entirely decoupled from the Flink version. You can just use the flink-shaded-hadoop-uber jar linked on the downloads page with any Flink version. On 18/10/2019 13:25, GezimSejdiu wrote: Hi Flink

Re: flink 1.9

2019-10-18 Thread GezimSejdiu
Hi Flink community, I'm aware of the split done for the Flink binary releases starting from the Flink 1.8.0 version, i.e. there are no hadoop-shaded binaries available on the apache dist. archive: https://archive.apache.org/dist/flink/flink-1.8.0/. Are there any plans to move the hadoop-pre-build binaries

Re: Flink 1.9 SQL/Table API: setting uids and updating state

2019-10-16 Thread ????????
Oct 17, 2019 (Thu) 11:05 To: "user-zh" Subject: Re: Flink 1.9 SQL/Table API: setting uids and updating state Hi: 1. [unrecoverable mojibake; the fragments mention tables, the Table API, and SQL] 3. HiveTableInputFormat reache

Re: Flink 1.9 SQL/Table API: setting uids and updating state

2019-10-16 Thread Jark Wu
only needs to join stream data, but a small part, besides joining stream data, also needs to consider "historical" state - for example, only emit the global min/max/distinct value, without considering retraction. How is this kind of practice usually solved "elegantly" or "transparently at the platform level"? Many thanks. -- Original message -- From: "Jark Wu" Sent: Wednesday, Oct 16, 2019 4:04 PM To: "user-zh"

Re: Flink 1.9 SQL/Table API: setting uids and updating state

2019-10-16 Thread Jark Wu
<12214...@qq.com> wrote: > Hi ~, > > When using Flink 1.9 SQL, I need to combine a large amount of external data with the current stream for Join, TopN, and Distinct operations, and I'm considering initializing the state of the relevant operators. I ran into the following questions: > 1. Do SQL and the Table API forbid setting a uid or uidhash? Even setting a uid or uidhash on the Kafka > DataStreamSource has no effect? > 2. Without changing the graph, for a GroupAggregato

Re: Batch Job in a Flink 1.9 Standalone Cluster

2019-10-14 Thread Timothy Victor
Thanks for the insight Roman, and also for the GC tips. There are 2 reasons why I wanted to see this memory released. First as a way to just confirm my understanding of Flink memory segment handling. Second is that I run a single standalone cluster that runs both streaming and batch jobs, and

Re: Batch Job in a Flink 1.9 Standalone Cluster

2019-10-14 Thread Roman Grebennikov
Forced GC does not mean that the JVM will even try to release the freed memory back to the operating system. This depends highly on the JVM and garbage collector used for your Flink setup, but most probably it's JVM 8 with the ParallelGC collector. ParallelGC is known to be not that aggressive
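If releasing heap back to the OS matters more than raw throughput, one hypothetical tweak along these lines is switching the TaskManager JVM to G1, which can uncommit heap after a full GC (the option key is from Flink's configuration; the flag values are illustrative, not from the thread):

    env.java.opts.taskmanager: -XX:+UseG1GC -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40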

Flink 1.9 "Failed to take leadership with session id" exception

2019-10-12 Thread 王佩
A Flink 1.9 DataStream program reports the following error after running for a while: 2019-10-09 21:07:44 INFO org.apache.flink.runtime.jobmaster.JobMaster dissolveResourceManagerConnection 1010 Close ResourceManager connection be4e0b96b331165ff9f4bd7ef4868d94: JobManager is no longer the leader.. 2019-10-09 21:07:44 INFO

Re: Batch Job in a Flink 1.9 Standalone Cluster

2019-10-12 Thread Timothy Victor
This part about the GC not cleaning up after the job finishes makes sense. However, I observed that even after I run "jcmd GC.run" on the task manager process ID, the memory is still not released. This is what concerns me. Tim On Sat, Oct 12, 2019, 2:53 AM Xintong Song wrote: > Generally

Re: Batch Job in a Flink 1.9 Standalone Cluster

2019-10-11 Thread Timothy Victor
Thanks Xintong! In my case both of those parameters are set to false (default). I think I am sort of following what's happening here. I have one TM with heap size set to 1GB. When the cluster is started the TM doesn't use that 1GB (no allocations). Once the first batch job is submitted I can

Re: Batch Job in a Flink 1.9 Standalone Cluster

2019-10-10 Thread Xintong Song
I think it depends on your configuration. - Are you using on-heap or off-heap managed memory? (configured by 'taskmanager.memory.off-heap', by default false) - Is managed memory pre-allocated? (configured by 'taskmanager.memory.preallocate', by default false) If managed memory is

Re: Batch Job in a Flink 1.9 Standalone Cluster

2019-10-10 Thread Yang Wang
Hi Tim, Do you mean the user heap memory used by the tasks of finished jobs is not freed up? If that were the case, the memory usage of the taskmanager would increase as more and more jobs finish. However, this does not happen; the memory will be freed up by JVM GC. BTW, flink has its own memory
