Re: OVER operator filtering out records

2019-08-23 Thread Vinod Mehra
Things improved during bootstrapping, when event volume was larger, but as soon as the traffic slowed down the events got stuck (buffered?) at the OVER operator for a very long time, several hours. On Fri, Aug 23, 2019 at 5:04 PM Vinod Mehra wrote: > (Forgot to mention that we

Re: Problem with Flink on Yarn

2019-08-23 Thread Rong Rong
This seems like your Kerberos server is starting to issue invalid tokens to your job manager. Can you share how your Kerberos setup is configured? This might also relate to how your KDC servers are configured. -- Rong On Fri, Aug 23, 2019 at 7:00 AM Zhu Zhu wrote: > Hi Juan, > > Have you

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-23 Thread Gavin Lee
Got it. Thanks Till & Zili. +1 for having the release notes cover such issues. On Fri, Aug 23, 2019 at 11:01 PM Oytun Tez wrote: > Hi all, > > We also had to rollback our upgrade effort for 2 reasons: > > - Official Docker container is not ready yet > - This artefact is not published with >

Re: Per Partition Watermarking source idleness

2019-08-23 Thread Eduardo Winpenny Tejedor
Hi Prakhar, Everything is probably working as expected, if a partition does not receive any messages then the watermark of the operator does not advance (as it is the minimum across all partitions). You'll need to define a strategy for the watermark to advance even when no messages are received
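The "minimum across all partitions" behavior Eduardo describes can be modeled without Flink. This is an illustrative sketch of the rule, not Flink's actual implementation: one idle partition pins the operator watermark, so windows never fire.

```java
import java.util.Arrays;

// Illustrative model (not Flink internals): an operator's event-time
// watermark is the minimum of its input partitions' watermarks, so a
// partition that never receives data pins the operator watermark at
// Long.MIN_VALUE and downstream windows never fire.
public class OperatorWatermark {
    public static long of(long... partitionWatermarks) {
        return Arrays.stream(partitionWatermarks).min().orElse(Long.MIN_VALUE);
    }

    public static void main(String[] args) {
        // Partitions 0 and 1 advance; partition 2 is idle.
        System.out.println(of(5_000L, 7_000L, Long.MIN_VALUE)); // Long.MIN_VALUE
        // Once the idle partition receives data, the watermark can advance.
        System.out.println(of(5_000L, 7_000L, 6_000L)); // 5000
    }
}
```

This is why an idleness strategy is needed: without one, the minimum never moves past the idle partition.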

Re: OVER operator filtering out records

2019-08-23 Thread Vinod Mehra
(Forgot to mention that we are using Flink 1.4) Update: Earlier the OVER operator was assigned a parallelism of 64. I reduced it to 1 and the problem went away! Now the OVER operator is not filtering/buffering the events anymore. Can someone explain this please? Thanks, Vinod On Fri, Aug 23,

Re: [SURVEY] How do you use high-availability services in Flink?

2019-08-23 Thread Aleksandar Mastilovic
Hi all, Since I’m currently working on an implementation of HighAvailabilityServicesFactory I thought it would be good to report here about my experience so far. Our use case is cloud based, where we package Flink and our supplementary code into a docker image, then run those images through

flink sql syntax for accessing object

2019-08-23 Thread Fanbin Bu
Hi, I have a table with schema being a scala case class or a Map. How do I access the field? Tried the following and it doesn't work. case class MyObject(myField: String) case class Event(myObject: MyObject, myMap: Map[String, String]) table = tableEnv.fromDataStream[Event](myStream, 'myObject,
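Assuming the case class fields register as composite types when the stream is converted, Flink SQL can usually reach nested fields with dot notation and map entries with bracket notation. A sketch only; the table name `events` and the key `'someKey'` are made up for illustration:

```sql
-- Nested composite field: dot notation.
SELECT myObject.myField FROM events;

-- Map value access: bracket notation with the key.
SELECT myMap['someKey'] FROM events;
```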

OVER operator filtering out records

2019-08-23 Thread Vinod Mehra
We have a SQL-based Flink job which consumes a very low-volume stream (1 or 2 events in a few hours): SELECT user_id, COUNT(*) OVER (PARTITION BY user_id ORDER BY rowtime RANGE INTERVAL '30' DAY PRECEDING) as count_30_days, COALESCE(occurred_at, logged_at) AS latency_marker,
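The query quoted above is truncated by the archive; its visible portion follows the standard Flink SQL bounded OVER aggregation shape. Reformatted as a sketch, restricted to the columns actually shown (the `FROM events` clause is assumed, not from the original):

```sql
SELECT
  user_id,
  COUNT(*) OVER (
    PARTITION BY user_id
    ORDER BY rowtime
    RANGE INTERVAL '30' DAY PRECEDING
  ) AS count_30_days,
  COALESCE(occurred_at, logged_at) AS latency_marker
FROM events;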

kinesis table connector support

2019-08-23 Thread Fanbin Bu
Hi, Looks like Flink table connectors do not include `kinesis`. (only FileSystem, Kafka, ES) see https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/connect.html#table-connectors . I also found some examples for Kafka:

Use logback instead of log4j

2019-08-23 Thread Vishwas Siravara
Hi, From the Flink doc, in order to use logback instead of log4j: "Users willing to use logback instead of log4j can just exclude log4j (or delete it from the lib/ folder)." https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/logging.html . However, when I delete it from the lib
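For user code built with Maven, the Flink logging docs describe excluding the log4j bridge from Flink dependencies and pulling in logback instead. A sketch of that pom fragment; the artifactId and version numbers here are illustrative, not prescriptive (for the distribution itself, the lib/ folder additionally needs the logback jars after the log4j ones are removed):

```xml
<!-- Sketch: keep log4j out of a Flink dependency and use logback. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-java_2.11</artifactId>
  <version>1.8.1</version>
  <exclusions>
    <exclusion>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-classic</artifactId>
  <version>1.2.3</version>
</dependency>
```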

Hive version in Flink

2019-08-23 Thread Yebgenya Lazarkhosrouabadi
Hello, I'm using Flink on Cloudera-quickstart-vm-5.13 and need to access the Hive-Tables. The version of hive on Cloudera is 1.1.0 , but in order to access the data of the Hive-Tables, a higher version of hive is needed. Unfortunately it is not possible to easily change the version of Hive on

type error with generics ..

2019-08-23 Thread Debasish Ghosh
Hello - I have the following call to addSource where I pass a Custom SourceFunction .. env.addSource( new CollectionSourceFunctionJ(data, TypeInformation.of(new TypeHint(){})) ) where data is List and CollectionSourceFunctionJ is a Scala case class .. case class
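A common cause of type errors with `TypeHint` is instantiating it raw, without a type argument, so erasure leaves nothing to recover. The anonymous-subclass trick that `TypeHint` relies on can be shown in plain Java without Flink; `TypeToken` here is a stand-in class invented for illustration:

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.List;

// "Super type token" pattern: an anonymous subclass pins the generic
// parameter in its class metadata, so it survives erasure and can be
// read back reflectively. A raw `new TypeToken(){}` would carry no
// type argument and this lookup would fail.
public class TypeToken<T> {
    public Type captured() {
        ParameterizedType superType =
                (ParameterizedType) getClass().getGenericSuperclass();
        return superType.getActualTypeArguments()[0];
    }

    public static void main(String[] args) {
        Type t = new TypeToken<List<String>>() {}.captured();
        // Prints the captured type, e.g. java.util.List<java.lang.String>
        System.out.println(t);
    }
}
```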

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-23 Thread Oytun Tez
Hi all, We also had to rollback our upgrade effort for 2 reasons: - Official Docker container is not ready yet - This artefact is not published with scala: org.apache.flink:flink-queryable-state-client-java_2.11:jar:1.9.0 --- Oytun Tez *M O T A W O R D* The World's Fastest Human

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-23 Thread Zili Chen
Hi Till, Did we mention this in the release notes (or maybe the previous release notes, where we did the exclusion)? Best, tison. Till Rohrmann wrote on Fri, Aug 23, 2019 at 10:28 PM: > Hi Gavin, > > if I'm not mistaken, then the community excluded the Scala FlinkShell > since a couple of versions for Scala 2.12. The

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-23 Thread Till Rohrmann
Hi Gavin, if I'm not mistaken, then the community excluded the Scala FlinkShell since a couple of versions for Scala 2.12. The problem seems to be that some of the tests failed. See here [1] for more information. [1] https://issues.apache.org/jira/browse/FLINK-10911 Cheers, Till On Fri, Aug

Re: Problem with Flink on Yarn

2019-08-23 Thread Zhu Zhu
Hi Juan, Have you tried a Flink release built with Hadoop 2.7 or a later version? If you are using Flink 1.8/1.9, it should be the Pre-bundled Hadoop 2.7+ jar, which can be found on the Flink download page. I think YARN-3103 is about AMRMClientImp.class and it is in the flink shaded hadoop jar. Thanks,

Are there any news on custom trigger support for SQL/Table API?

2019-08-23 Thread Theo Diefenthal
Hi there, I am currently evaluating letting our experienced system users write Flink SQL queries directly. Currently, all queries our users need are implemented programmatically. There is one major problem preventing us from just giving SQL to our users directly. Almost all queries of our users

Re: I'm not able to make a stream-stream Time windows JOIN in Flink SQL

2019-08-23 Thread Theo Diefenthal
Hi Fabian, Hi Zhenghua, Thank you for your suggestions and for telling me that I was on the right track. It is also good to know how to find out whether something yields a time-bounded or a regular join. @Fabian: Regarding your suggested first option: Isn't that exactly what my first try was? With this

Problem with Flink on Yarn

2019-08-23 Thread Juan Gentile
Hello! We are running Flink on Yarn and we are currently getting the following error: 2019-08-23 06:11:01,534 WARN org.apache.hadoop.security.UserGroupInformation - PriviledgedActionException as: (auth:KERBEROS)

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-23 Thread Gavin Lee
I used the package on the Apache official site, with mirror [1]; the difference is I used the Scala 2.12 version. I also tried to build from source for both Scala 2.11 and 2.12. When building 2.12 the FlinkShell.class is in the flink-dist jar file, but after running mvn clean package -Dscala-2.12, this class was

Per Partition Watermarking source idleness

2019-08-23 Thread Prakhar Mathur
Hi, We are using Flink v1.6. We are facing data loss issues while consuming data from older offsets in Kafka with windowing. We are exploring a per-partition watermarking strategy. But we noticed that when we try to consume from multiple topics, if any of the partitions is not receiving

Re: flink-1.9 packaging issue

2019-08-23 Thread 苏 欣
Try adding this mirror to settings.xml: confluent confluent confluent http://packages.confluent.io/maven sean...@live.com From: Jimmy Wong Sent: 2019-08-23 16:56 To: user-zh Subject: flink-1.9 packaging issue Hi all, I used Alibaba's settings.xml

flink-1.9 packaging issue

2019-08-23 Thread Jimmy Wong
Hi all, I built flink-1.9 with Alibaba's settings.xml, using the following command: > mvn clean install -DskipTests However, the build failed at the end with: > [ERROR] Failed to execute goal on project flink-avro-confluent-registry: > Could not resolve dependencies for project > org.apache.flink:flink-avro-confluent-registry:jar:1.9-SNAPSHOT: Failure

Re: Re: Re: Re: Re: Re: flink1.9.0 LOCAL_WEBSERVER issue

2019-08-23 Thread Zili Chen
Filed a corresponding issue [1] for LOCAL_WEBSERVER. Best, tison. [1] https://issues.apache.org/jira/browse/FLINK-13828 hb <343122...@163.com> wrote on Fri, Aug 23, 2019 at 3:26 PM: > Thanks, that was indeed it, a missing dependency, haha > > > On 2019-08-23 14:20:54, "Zili Chen" wrote: > >That's because the web files are packaged under the resources of flink-runtime-web_${scala.binary.version} >

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-23 Thread Zili Chen
I downloaded the 1.9.0 dist from here [1] and didn't see the issue. Where did you download it? Could you try to download the dist from [1] and see whether the problem persists? Best, tison. [1] http://mirrors.tuna.tsinghua.edu.cn/apache/flink/flink-1.9.0/flink-1.9.0-bin-scala_2.11.tgz Gavin Lee

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-23 Thread Gavin Lee
Thanks for your reply @Zili. I'm afraid it's not the same issue. I found that the FlinkShell.class was not included in the flink-dist jar file in the 1.9.0 version. The class file is nowhere to be found inside the jar, either in the opt or lib directory under the root folder of the Flink distribution. On Fri, Aug 23, 2019

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-23 Thread Zili Chen
Hi Gavin, I also found a problem in the shell: if the directory contains whitespace, then the final command to run is incorrect. Could you check the final command to be executed? FYI, here is the ticket [1]. Best, tison. [1] https://issues.apache.org/jira/browse/FLINK-13827 Gavin Lee wrote on Fri, Aug 23, 2019

Re: Can I use watermarkers to have a global trigger of different ProcessFunction's?

2019-08-23 Thread David Anderson
If you want to use event time processing with in-order data, then you can use an AscendingTimestampExtractor [1]. David [1] https://ci.apache.org/projects/flink/flink-docs-stable/dev/event_timestamp_extractors.html#assigners-with-ascending-timestamps On Thu, Aug 22, 2019 at 4:03 PM Felipe
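The documented behavior behind ascending-timestamp assigners is simple: the watermark trails the largest timestamp seen so far by 1 ms. A dependency-free sketch of that logic (a model of the behavior, not Flink's `AscendingTimestampExtractor` class itself):

```java
// Minimal model of ascending-timestamps watermarking: the current
// watermark is the maximum event timestamp seen so far minus 1 ms,
// so an occasional out-of-order event never regresses the watermark.
public class AscendingWatermark {
    private long maxTimestamp = Long.MIN_VALUE;

    // Record an event timestamp and return the resulting watermark.
    public long onEvent(long timestampMillis) {
        if (timestampMillis > maxTimestamp) {
            maxTimestamp = timestampMillis;
        }
        return currentWatermark();
    }

    public long currentWatermark() {
        return maxTimestamp == Long.MIN_VALUE ? Long.MIN_VALUE : maxTimestamp - 1;
    }

    public static void main(String[] args) {
        AscendingWatermark w = new AscendingWatermark();
        System.out.println(w.onEvent(1000L)); // 999
        System.out.println(w.onEvent(500L));  // still 999: no regression
    }
}
```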

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-23 Thread Gavin Lee
Why does bin/start-scala-shell.sh local return the following error? bin/start-scala-shell.sh local Error: Could not find or load main class org.apache.flink.api.scala.FlinkShell For Flink 1.8.1 and previous versions there were no such issues. On Fri, Aug 23, 2019 at 2:05 PM qi luo wrote: > Congratulations and

Re:Re: Re: Re: Re: Re: flink1.9.0 LOCAL_WEBSERVER issue

2019-08-23 Thread hb
Thanks, that was indeed it, a missing dependency, haha. On 2019-08-23 14:20:54, "Zili Chen" wrote: >That's because the web files are packaged under the resources of flink-runtime-web_${scala.binary.version}; as long as it is correctly depended on, downloaded, and discovered, it works. > >You could do without it before probably because that module was already in your dependencies. > >Best, >tison. > > >Zili Chen wrote on Fri, Aug 23, 2019 at 3:19 PM: > >> Adding this dependency fixes it >> >> >> org.apache.flink >>

Sudden checkpoint failures even though state size is only around 10 MB

2019-08-23 Thread Sushant Sawant
Hi all, I'm facing two issues which I believe are correlated. 1. The Kafka source shows high backpressure. 2. Sudden checkpoint failure for the entire day until restart. My job does the following: a. Read from Kafka b. AsyncIO to an external system c. Dumping into Cassandra, Elasticsearch

Re: Re: Re: Re: Re: flink1.9.0 LOCAL_WEBSERVER issue

2019-08-23 Thread Zili Chen
That's because the web files are packaged under the resources of flink-runtime-web_${scala.binary.version}; as long as it is correctly depended on, downloaded, and discovered, it works. You could do without it before probably because that module was already in your dependencies. Best, tison. Zili Chen wrote on Fri, Aug 23, 2019 at 3:19 PM: > Adding this dependency fixes it > > > org.apache.flink > flink-runtime-web_2.11 > 1.9.0 > > > Best, > tison. > > > Zili Chen

Re: Re: Re: Re: Re: flink1.9.0 LOCAL_WEBSERVER issue

2019-08-23 Thread Zili Chen
Adding this dependency fixes it: org.apache.flink flink-runtime-web_2.11 1.9.0 Best, tison. Zili Chen wrote on Fri, Aug 23, 2019 at 3:12 PM: > This is probably related to 1.9 using the new WebUI; I'm not sure. You could file an issue on JIRA with before/after screenshots of 1.9 and earlier versions so the relevant > Flink developers can take a look. > > As for the later question, I found it by reading the source (x > > Best, > tison. > > > hb <343122...@163.com> on Fri, Aug 23, 2019

Re: Re: Re: Re: Re: flink1.9.0 LOCAL_WEBSERVER issue

2019-08-23 Thread Zili Chen
This is probably related to 1.9 using the new WebUI; I'm not sure. You could file an issue on JIRA with before/after screenshots of 1.9 and earlier versions so the relevant Flink developers can take a look. As for the later question, I found it by reading the source (x Best, tison. hb <343122...@163.com> wrote on Fri, Aug 23, 2019 at 3:05 PM: > May I ask where you saw that this config option has no effect, by debugging the program? > > > > > > > > > On 2019-08-23 14:01:32, "Zili Chen" wrote: > >I'll see whether adding a dependency or manually placing the web files can solve this. > >

Re:Re: Re: Re: Re: flink1.9.0 LOCAL_WEBSERVER issue

2019-08-23 Thread hb
May I ask where you saw that this config option has no effect, by debugging the program? On 2019-08-23 14:01:32, "Zili Chen" wrote: >I'll see whether adding a dependency or manually placing the web files can solve this. > >Also, "the config option has no effect" means this option is not used anywhere in the code, so however you set it, it has no influence on the program; it does not mean that however you set it, the WebUI ends up unusable. > >Best, >tison. > > >Zili Chen wrote on Fri, Aug 23, 2019 at 2:59 PM: > >> Oh, I see: the /taskmanagers you accessed is a REST endpoint; Flink

Re: Re: Re: Re: flink1.9.0 LOCAL_WEBSERVER issue

2019-08-23 Thread Zili Chen
I'll see whether adding a dependency or manually placing the web files can solve this. Also, "the config option has no effect" means this option is not used anywhere in the code, so however you set it, it has no influence on the program; it does not mean that however you set it, the WebUI ends up unusable. Best, tison. Zili Chen wrote on Fri, Aug 23, 2019 at 2:59 PM: > Oh, I see: the /taskmanagers you accessed is a REST endpoint, and Flink's WebUI is actually fine, so it responds to you normally. > > When you access the home page, loading it requires the corresponding html and other files, which Flink cannot find, so it tells you not found >

Re: Re: Re: Re: flink1.9.0 LOCAL_WEBSERVER issue

2019-08-23 Thread Zili Chen
Oh, I see: the /taskmanagers you accessed is a REST endpoint, and Flink's WebUI is actually fine, so it responds to you normally. When you access the home page, loading it requires the corresponding html and other files, which Flink cannot find, so it tells you not found. Best, tison. hb <343122...@163.com> wrote on Fri, Aug 23, 2019 at 2:51 PM: > My understanding is that being able to reach the RESTful API on this port should mean the setting took effect, but the home page is 404 > > > Accessing http://localhost:8089/ : 404 > Accessing:

Re:Re: Re: Re: flink1.9.0 LOCAL_WEBSERVER issue

2019-08-23 Thread hb
My understanding is that being able to reach the RESTful API on this port should mean the setting took effect, but the home page is 404. Accessing http://localhost:8089/ : 404. Accessing http://localhost:8089/taskmanagers/ : works.

Re: Re: Re: flink1.9.0 LOCAL_WEBSERVER issue

2019-08-23 Thread Zili Chen
The option not being marked deprecated in the source is probably a Flink issue; you can file it on JIRA [1]. This option indeed has no effect. It sounds like your program is a test program; could you provide the source? If you can reach /taskmanagers afterwards, there may not actually be a problem (x Best, tison. [1] https://issues.apache.org/jira/browse/ hb <343122...@163.com> wrote on Fri, Aug 23, 2019 at 2:27 PM: > I downloaded the dependencies with Maven in IDEA and ran the Flink program in IDEA; the source doesn't mark it as deprecated > ```package >

Re:Re:Re: Re: flink1.9.0 LOCAL_WEBSERVER issue

2019-08-23 Thread hb
I downloaded the dependencies with Maven in IDEA and ran the Flink program in IDEA. The source doesn't mark it as deprecated: ``` package org.apache.flink.configuration public final class ConfigConstants { ... /** * @deprecated Use {@link ResourceManagerOptions#LOCAL_NUMBER_RESOURCE_MANAGER} instead. */ @Deprecated public static final int

Re: Re: flink1.9.0 LOCAL_WEBSERVER issue

2019-08-23 Thread Zili Chen
Another question: did you download the binary zip, or build and install from source? Best, tison. Zili Chen wrote on Fri, Aug 23, 2019 at 2:04 PM: > I checked the 1.9 code; ConfigConstants.LOCAL_START_WEBSERVER is a dead setting, and setting it has no effect at all. > > So the problem should be unrelated to this option. For example, can you recover by refreshing localhost:8089? > > Best, > tison. > > > hb <343122...@163.com> wrote on Fri, Aug 23, 2019 at 1:47 PM: > >> 1.9

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-23 Thread qi luo
Congratulations and thanks for the hard work! Qi > On Aug 22, 2019, at 8:03 PM, Tzu-Li (Gordon) Tai wrote: > > The Apache Flink community is very happy to announce the release of Apache > Flink 1.9.0, which is the latest major release. > > Apache Flink® is an open-source stream processing

Re: Re: flink1.9.0 LOCAL_WEBSERVER issue

2019-08-23 Thread Zili Chen
I checked the 1.9 code; ConfigConstants.LOCAL_START_WEBSERVER is a dead setting, and setting it has no effect at all. So the problem should be unrelated to this option. For example, can you recover by refreshing localhost:8089? Best, tison. hb <343122...@163.com> wrote on Fri, Aug 23, 2019 at 1:47 PM: > Before 1.9 it could be used this way and worked fine, and 1.9 still has this API > On 2019-08-23 12:28:14, "Zili Chen" wrote: > >Where did you see this config? I checked the code and this option has no usages at all (x