The Apache Flink community is very happy to announce the release of Apache
Flink 1.14.5, which is the fourth bugfix release for the Apache Flink 1.14
series.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
Matyas and Gyula have shared much great information about how to make the
Flink Kubernetes Operator work on EKS.
One more input about how to prepare the user jars: if you are more familiar
with K8s, you could use a persistent volume to provide the user jars and then
mount the volume to
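A sketch of that persistent-volume approach as an operator pod-template fragment (the claim name, jar name, and mount path here are illustrative assumptions, not from the thread); the job can then reference the jar with a local:// URI:

```yaml
# Pod template fragment: mount a pre-provisioned PVC containing the user
# jars into the Flink main container. Names and paths are illustrative.
podTemplate:
  spec:
    containers:
      - name: flink-main-container
        volumeMounts:
          - name: user-jars
            mountPath: /opt/flink/usrlib
    volumes:
      - name: user-jars
        persistentVolumeClaim:
          claimName: user-jars-pvc   # assumed pre-existing claim
```

With this in place, the job spec could point at e.g. local:///opt/flink/usrlib/my-job.jar.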
Hi Matt,
I believe an artifact fetcher (e.g.
https://hub.docker.com/r/agiledigital/s3-artifact-fetcher ) + the pod
template (
https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/custom-resource/pod-template/#pod-template )
is an elegant way to solve your problem.
The
Thank you very much for the help Matyas and Gyula!
I just saw a video today where you were presenting the FKO. Really nice
stuff!
So I'm guessing we're executing "flink run" at some point on the master and
that this is when we need the jar file to be local?
Am I right in assuming that this
A small addition to what Matyas has said:
The limitation of only supporting the local scheme comes from Flink's
Kubernetes Application mode directly and is not related to the operator
itself.
Once this feature is added to Flink itself the operator can also support it
for newer Flink versions.
Hi Matt,
- In FlinkDeployments you can utilize an init container to download your
artifact onto a shared volume, then you can refer to it as local:/.. from
the main container. FlinkDeployments come with pod template support
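The init-container pattern above can be sketched roughly as follows (the fetcher image, bucket, jar name, and paths are all illustrative assumptions; only the FlinkDeployment structure and the local:// scheme come from the thread):

```yaml
# Sketch: an init container downloads the job jar into a shared emptyDir,
# and the job then references it via the local:// scheme.
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: init-container-example
spec:
  image: flink:1.14
  flinkVersion: v1_14
  serviceAccount: flink
  podTemplate:
    spec:
      initContainers:
        - name: fetch-jar
          image: my-registry/artifact-fetcher:latest   # hypothetical fetcher image
          args: ["s3://my-bucket/my-job.jar", "/artifacts/my-job.jar"]  # assumed interface
          volumeMounts:
            - name: artifacts
              mountPath: /artifacts
      containers:
        - name: flink-main-container   # this container name is expected by the operator
          volumeMounts:
            - name: artifacts
              mountPath: /opt/flink/artifacts
      volumes:
        - name: artifacts
          emptyDir: {}
  job:
    jarURI: local:///opt/flink/artifacts/my-job.jar
    parallelism: 2
```

The shared emptyDir is what lets the main container see the downloaded jar as a local file, satisfying Application mode's local-scheme requirement.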
Hi Flink team!
I'm interested in getting the new Flink Kubernetes Operator to work on AWS
EKS. Following the documentation I got pretty far. However, when trying
to run a job I got the following error:
> Only "local" is supported as schema for application mode. This assumes
> that the jar is
Hi,
Thank you all for your replies. The suggestion “to allow Akka cluster
communication to bypass the Istio sidecar proxy” helped and we were able to
deploy.
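For reference, bypassing the Istio sidecar for Akka cluster communication can be sketched with pod annotations like the following (the port value assumes Flink's default jobmanager.rpc.port; adjust to your RPC configuration — this exact snippet is an assumption, not what the thread used verbatim):

```yaml
# Pod annotations telling the Istio sidecar not to intercept Flink's
# internal Akka/RPC traffic. 6123 is Flink's default jobmanager.rpc.port.
metadata:
  annotations:
    traffic.sidecar.istio.io/excludeInboundPorts: "6123"
    traffic.sidecar.istio.io/excludeOutboundPorts: "6123"
```

This keeps the sidecar for regular traffic while letting JobManager/TaskManager RPC flow directly between pods.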
I understand and agree with the rationale to “follow a standard as Flink and
avoid implementing guidelines from different
Thanks for pinging me!
Yes, this is my main target to finish this feature; however, there are major
code parts which are still missing.
Please have a look at the umbrella Jira to get a better understanding:
https://issues.apache.org/jira/browse/FLINK-21232
In general it's not advised to use it for
Hi,
I checked: with the CDC source, tasks are currently kept alive after the full
snapshot phase completes and do not finish. The finished task here should be
the one you mentioned:
"I used a lookup join with an external MySQL dimension table; when the job
started, the dimension table data was fully loaded once, and the
corresponding task's status then became finished."
Best,
Lincoln Lee
amber_...@qq.com.INVALID wrote on Tue, Jun 21, 2022 at 14:35:
> Thank you very much! Your suggestion was very helpful.
Hi,
For your information, G (cc'd) is actively working on this topic. [1] He is
in the best position to answer your questions, as far as I know. :-)
[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-211%3A+Kerberos+delegation+token+framework
On Tue, Jun 21, 2022 at 8:38 AM vtygoss wrote:
Hi, flink community!
I don't know many details about KDC. Can different TaskManagers hold
different tokens? If so, the driver and each worker can renew their tokens in
their respective DelegationTokenManager individually.
Thanks in advance for any replies.
Best Regards!
On Jun 21, 2022 at 13:30, vtygoss
Thank you very much! Your suggestion was very helpful.
I added the execution.checkpointing.checkpoints-after-tasks-finish.enabled
configuration in my code, which completely solved the problem.
I used a lookup join with an external MySQL dimension table; when the job
started, the dimension table data was fully loaded once, and the
corresponding task's status then became finished.
best wishes!
amber_...@qq.com
From: Lincoln Lee
Sent: 2022-06-21 11:18
To: user-zh
Subject: Re: Re: Checkpoint anomaly when using join + aggregation
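For readers following along, the fix described above is a single Flink configuration entry (introduced around Flink 1.14 and off by default there); in flink-conf.yaml it would look like:

```yaml
# Allow checkpointing to continue after some tasks have finished,
# e.g. a bounded dimension-table load task that completes at job start.
execution.checkpointing.checkpoints-after-tasks-finish.enabled: true
```

Without this, a job whose graph contains any finished task cannot complete checkpoints, which is why the lookup-join job above saw checkpoint failures.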