https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/streaming/time_attributes.html#event-time
On Sun, Feb 21, 2021 at 8:43 AM Aeden Jameson
wrote:
> In my job graph viewed through the Flink UI I see a task named,
>
> rowtime field: (#11: event_time TIME ATTRIBUTE(ROWTIME))
>
> that
Hi,
Is there some way that I can configure an operator based on the key in a
stream?
E.g., if the key is 'abcd', create a count window of size X; if the key
is 'bfgh', create a count window of size Y.
Is this scenario possible in Flink?
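For what it's worth, the built-in `countWindow` uses one fixed size for the whole stream, but a per-key size can be emulated with a `KeyedProcessFunction` that buffers elements in keyed state. A rough sketch; the `Event` type, the key values, and the sizes are illustrative, and the Flink imports (`org.apache.flink.*`) are omitted:

```java
// Sketch only: per-key count "windows" with different sizes, emulated with
// keyed ListState instead of the built-in countWindow (whose size is fixed).
public class PerKeyCountWindow
        extends KeyedProcessFunction<String, Event, List<Event>> {

    private transient ListState<Event> buffer;

    @Override
    public void open(Configuration parameters) {
        buffer = getRuntimeContext().getListState(
                new ListStateDescriptor<>("buffer", Event.class));
    }

    // Illustrative mapping; in practice this could come from configuration.
    private int sizeForKey(String key) {
        return "abcd".equals(key) ? 100 : 50; // X and Y from the question
    }

    @Override
    public void processElement(Event value, Context ctx,
                               Collector<List<Event>> out) throws Exception {
        buffer.add(value);
        List<Event> elements = new ArrayList<>();
        buffer.get().forEach(elements::add);
        if (elements.size() >= sizeForKey(ctx.getCurrentKey())) {
            out.collect(elements); // emit the "window" and start over
            buffer.clear();
        }
    }
}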
In my job graph viewed through the Flink UI I see a task named,
rowtime field: (#11: event_time TIME ATTRIBUTE(ROWTIME))
that has an upstream Kafka source task. What exactly does the rowtime task do?
--
Thank you,
Aeden
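For context on what that operator does: when a table defines an event-time (rowtime) attribute, the planner inserts an operator that materializes the rowtime column into the internal record timestamp, so downstream time-based operators (windows, joins) can use it. A minimal DDL that produces such an attribute; connector options and names here are illustrative:

```sql
CREATE TABLE events (
  user_id    STRING,
  event_time TIMESTAMP(3),
  -- declares event_time as the rowtime attribute and defines watermarks
  WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic'     = 'events',
  'format'    = 'json'
);
```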
Hi
Here is an analysis article about OOM kills that I read a while ago; I'm not sure whether it will be useful to you:
http://www.whitewood.me/2021/01/02/%E8%AF%A6%E8%A7%A3-Flink-%E5%AE%B9%E5%99%A8%E5%8C%96%E7%8E%AF%E5%A2%83%E4%B8%8B%E7%9A%84-OOM-Killed/
On Thu, Feb 18, 2021 at 9:01 AM lian wrote:
> Hi everyone:
> 1. Background: using Flink
>
I have a table with two BIGINT fields for start and end of an event as UNIX
time in milliseconds. I want to be able to have a resulting column with the
delta in milliseconds and group by that difference. Also, I want to be able
to have aggregations with window functions based upon the `end` field.
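One possible shape for this, as a sketch: the table and column names are assumptions, `end` is a reserved word in SQL so it is escaped with backticks, and the exact epoch-millis-to-timestamp conversion may differ by Flink version:

```sql
CREATE TABLE events (
  `start` BIGINT,
  `end`   BIGINT,
  -- computed column turning epoch millis into a TIMESTAMP(3) rowtime
  end_time AS TO_TIMESTAMP(FROM_UNIXTIME(`end` / 1000)),
  WATERMARK FOR end_time AS end_time - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic'     = 'events',
  'format'    = 'json'
);

-- delta in milliseconds, grouped by that delta within tumbling windows
SELECT
  `end` - `start`                              AS delta_ms,
  TUMBLE_START(end_time, INTERVAL '1' MINUTE)  AS window_start,
  COUNT(*)                                     AS cnt
FROM events
GROUP BY `end` - `start`, TUMBLE(end_time, INTERVAL '1' MINUTE);
```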
Adding "list" to the verbs helps; do I need to add anything else?
From: Alexey Trenikhun
Sent: Saturday, February 20, 2021 2:10 PM
To: Flink User Mail List
Subject: stop job with Savepoint
Hello,
I'm running per job Flink cluster, JM is deployed as Kubernetes Job with
restartPolicy: Never, and high-availability is KubernetesHaServicesFactory. The job runs
fine for some time, configmaps are created etc. Now in order to upgrade Flink
job, I'm trying to stop job with savepoint (flink stop
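For reference, the stop-with-savepoint invocation being described looks roughly like this (the target directory and job ID are placeholders):

```shell
# Stop the job, taking a savepoint first; -p/--savepointPath sets the target.
flink stop -p s3://my-bucket/savepoints <job-id>
```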
Hello,
I launched a job with a larger load on a Hadoop YARN cluster.
The job finished after running 5 hours; I didn't find any error in the
JobManager log besides this connection exception.
2021-02-20 13:20:14,110 WARN akka.remote.transport.netty.NettyTransport - Remote connection
I'm using the latest Flink 1.12, and the timestamp precision comes from
Debezium, which I think emits standard ISO-8601 timestamps.
On Thu, 18 Feb 2021 at 16:19, Timo Walther wrote:
> Hi Sebastián,
>
> which Flink version are you using? And which precision do the timestamps
> have?
>
> This
I mean having the SQL queries validated when I run `mvn compile` (or any
target that triggers it), so that basic syntax checking is performed without
having to submit the job to the cluster.
On Thu, 18 Feb 2021 at 16:17, Timo Walther wrote:
> Hi Sebastián,
>
> what do you consider as compile time? If
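One way to approximate this kind of build-time check is a plain unit test that only plans the query against a `TableEnvironment`, without submitting anything. A sketch; the table definitions are illustrative and the exact `EnvironmentSettings` builder calls depend on the Flink version:

```java
// Sketch: fails at `mvn test` time on SQL syntax/validation errors.
// Flink imports (org.apache.flink.table.api.*) are omitted.
TableEnvironment tEnv = TableEnvironment.create(
        EnvironmentSettings.newInstance().inStreamingMode().build());

// Register the schemas the query needs; 'datagen' avoids external systems.
tEnv.executeSql(
        "CREATE TABLE src (id INT, name STRING) WITH ('connector' = 'datagen')");

// explainSql parses, validates, and plans the statement without executing it,
// so a syntax or schema error surfaces here as an exception.
String plan = tEnv.explainSql("SELECT id, name FROM src WHERE id > 10");
```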
Hello, Jörn Franke.
Thank you for your reply.
If I understand your answer correctly, Flink's watermark mechanism should
help me sort events in the order I need, so I should dig deeper into that.
For example, I read three topics, perform joins, and then get two events for
the same user in this order:
You can also set this when building the image.
Best,
Yang
Michael Ran wrote on Fri, Feb 19, 2021 at 7:35 PM:
> It is set on the k8s side.
> On 2021-02-19 09:37:28, "casel.chen" wrote:
> >Currently it is in the UTC timezone; how can I set it to the local UTC+8 timezone? Thanks!
> >
> >
> >2021-02-19 01:34:21,259 INFO akka.event.slf4j.Slf4jLogger
> [] - Slf4jLogger started
> >2021-02-19 01:34:22,155
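Regarding setting the timezone at image build time, one common pattern is along these lines (the base image tag and zone are illustrative):

```dockerfile
FROM flink:1.12-scala_2.11
# Switch the container to UTC+8 (Asia/Shanghai) so log timestamps are local
RUN ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
    && echo "Asia/Shanghai" > /etc/timezone
```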
I think you mean standalone session mode; in that mode a TM runs tasks from different jobs,
and the framework logs inside the TM are all mixed together. If your job classes live in different packages,
you can use log4j2 to configure a different logger and appender per package, writing to different paths.
Best,
Yang
xingoo <23603...@qq.com> wrote on Sat, Feb 20, 2021 at 5:31 PM:
> Dear All:
> Currently our Flink deployment mainly uses standalone mode; I'd like to know how to separate the logs of individual jobs within the same taskmanager.
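A log4j2 fragment illustrating the per-package routing described above (the package name, appender name, and file suffix are assumptions):

```properties
# Route everything under com.example.joba to its own file
appender.joba.type = File
appender.joba.name = JobAFile
appender.joba.fileName = ${sys:log.file}.job-a
appender.joba.layout.type = PatternLayout
appender.joba.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n

logger.joba.name = com.example.joba
logger.joba.level = INFO
logger.joba.appenderRef.file.ref = JobAFile
logger.joba.additivity = false
```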
Hi,
What is the way to run code from Eclipse or IntelliJ IDEA so that it shows
up in the Apache Flink UI?
Thank you!
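If the goal is to see the web UI while running from the IDE, Flink has a local environment variant with an embedded UI; it needs the `flink-runtime-web` dependency on the classpath. A sketch (imports omitted, job content illustrative):

```java
// Sketch: local execution from the IDE with the web UI on http://localhost:8081
Configuration conf = new Configuration();
StreamExecutionEnvironment env =
        StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(conf);

env.fromElements(1, 2, 3).print();
env.execute("local-job-with-ui");
```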
Parallelism settings have a priority order: the client-level setting has lower priority than the operator-level setting. So the parallelism set explicitly on the upstream source operator takes effect, while the downstream operators still use the client-level parallelism.
On 2021-02-20 18:23:52, "guaishushu1...@163.com" wrote:
> These days I have been studying the source code that translates a Flink Table into stream nodes, and found that an operator's parallelism depends on the parallelism of the previous operator.
> But in actual testing I found that, with a window aggregate statement, that operator's parallelism does not match the upstream source and instead matches the parallelism configured on the CLI.
> Why is that?
You are working in a distributed system, so ordering events by time may not
be sufficient (and most likely is not). Due to network delays, devices going
offline, etc., an event can arrive much later even though it happened earlier.
Check out watermarks in Flink and read up on at-least-once and at-most-once
delivery semantics.
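The watermark idea mentioned above can be sketched like this; the 30-second bound, the `Event` type, and its accessor are assumptions, and Flink imports are omitted:

```java
// Sketch: tolerate events arriving up to 30 seconds out of order.
WatermarkStrategy<Event> strategy = WatermarkStrategy
        .<Event>forBoundedOutOfOrderness(Duration.ofSeconds(30))
        .withTimestampAssigner((event, ts) -> event.getTimestampMillis());

DataStream<Event> withTimestamps =
        stream.assignTimestampsAndWatermarks(strategy);
```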
These days I have been studying the source code that translates a Flink Table into stream nodes, and found that an operator's parallelism depends on the parallelism of the previous operator.
But in actual testing I found that, with a window aggregate statement, that operator's parallelism does not match the upstream source and instead matches the parallelism configured on the CLI.
Why is that?
guaishushu1...@163.com
Dear All:
Currently our Flink deployment mainly uses standalone mode; I'd like to know how to separate the logs of individual jobs within the same taskmanager.
--
Sent from: http://apache-flink.147419.n8.nabble.com/