How to handle excessive Flink sliding-window state and repeated writes of unchanged state downstream?

2024-11-07 Posted by casel.chen
Suppose we need a per-second metric counting each user's logins over the past day. A single record then falls into as many as 86,400 windows, each window keeps its own state, and the state is emitted downstream every second even when it has not changed, so unchanged results are written repeatedly and put heavy write pressure on downstream operators. Is there a good way to avoid this? Please explain for both the Flink Stream API and SQL. Thanks!
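On the DataStream side, one commonly suggested direction (a sketch only, not from this thread; plain Java without Flink dependencies, all class and method names illustrative) is to avoid the 86,400 overlapping windows entirely: keep one count per second in a ring buffer, maintain the day total incrementally, and suppress downstream output when the total has not changed. Inside Flink this bookkeeping would typically live in a KeyedProcessFunction driven by a per-second timer:

```java
// Sketch of a "last N seconds" count with one long of state per second,
// instead of one window state per overlapping sliding window. A per-second
// timer drives advance(); output is suppressed when the total is unchanged.
class SlidingCounter {
    private final long[] buckets;   // buckets[s % N] holds the count for second s
    private long total;             // running sum over the whole window
    private long lastEmitted = Long.MIN_VALUE;

    SlidingCounter(int windowSeconds) {
        buckets = new long[windowSeconds];
    }

    /** Record one event that happened at the given epoch second. */
    void add(long epochSecond) {
        buckets[(int) (epochSecond % buckets.length)]++;
        total++;
    }

    /**
     * Slide the window forward to newSecond (called by the per-second timer).
     * The bucket now falling out of the window is evicted. Returns the updated
     * total, or null when it equals the last emitted value, so the caller can
     * skip writing an unchanged result downstream.
     */
    Long advance(long newSecond) {
        int slot = (int) (newSecond % buckets.length);
        total -= buckets[slot];     // drop the second leaving the window
        buckets[slot] = 0;
        if (total == lastEmitted) {
            return null;            // unchanged: emit nothing
        }
        lastEmitted = total;
        return total;
    }
}
```

For the scenario in the question this is 86,400 longs per key rather than 86,400 overlapping window states per record, and unchanged totals are never re-emitted. On the SQL side there is no direct equivalent of this trick; reducing the emitted volume there generally comes down to coarser window steps or a deduplicating step before the sink.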

Re: Re: Flink SQL dimension-table join data explosion problem

2024-11-06 Posted by Xuyang
Hi, casel.

In general, the planner tries to push predicates down on the dimension-table side as well.

For example, for:

```
SELECT * FROM MyTable AS T
JOIN LookupTable FOR SYSTEM_TIME AS OF T.proctime AS D
ON T.a = D.id AND D.age = 10
WHERE T.c > 1000
```

it is optimized into:

```
Calc(select=[a, b, c, PROCTIME_MATERIALIZE(proctime) AS proctime, rowtime, id,
      name, CAST(10 AS INTEGER) AS age])
+- LookupJoin(table=[default_catalog.default_database.LookupTable],
      joinType=[InnerJoin], lookup=[age=10, id=a], where=[(age = 10)], select=[a, b,
      c, proctime, rowtime, id, name])
   +- Calc(select=[a, b, c, proctime, rowtime], where=[(c > 1000)])
      +- DataStreamScan(table=[[default_catalog, default_database, MyTable]],
            fields=[a, b, c, proctime, rowtime])
```

As you can see, `age = 10` is also used as a lookup-join key when querying the dimension table.

Could you paste your plan so we can take a look?




--

Best!
Xuyang





At 2024-11-06 19:47:21, "Hongshun Wang" wrote:
> Predicate pushdown depends on the connector implementation.
>
>On Tue, Nov 5, 2024 at 2:48 PM casel.chen  wrote:
>
>> The scenario: a Flink SQL stream table is lookup-joined against a dimension
>> table (one-to-many; some records match over a thousand rows) and only one of
>> the results is kept, which puts very heavy query pressure on the dimension
>> table. Is there any way to push the predicate down at the Flink SQL level?
>> On the Flink UI, the lookup join operator's output volume is several hundred
>> times its input.


Flink BigQuery Connector Public Preview

2024-11-06 Posted by Jayant Jain
Greetings,

The Google BigQuery team is reaching out to help customers employ the Flink
BigQuery connector for building better data streaming and analytics
solutions. We recently released a new version (0.4.0), which can be
accessed via GitHub
<https://github.com/GoogleCloudDataproc/flink-bigquery-connector> and Maven
<https://central.sonatype.com/artifact/com.google.cloud.flink/flink-1.17-connector-bigquery/overview>.
With this release, the connector offers all core features that simplify the
integration between Flink and BigQuery.

Highlights:

   - BigQuery sink in Datastream and Table/SQL APIs
      - Offers at-least-once and exactly-once delivery guarantees
      - Maximum parallelism up to 128 (tuned according to BigQuery quota
        limits)
      - Insert only
   - BigQuery source (bounded) in Datastream and Table/SQL APIs
   - In-built client management to optimize and abstract BigQuery client
     complexities in your Flink applications
   - Flink metrics for runtime observability into BigQuery sink’s performance


We look forward to your valuable feedback for improving this product and
achieving its full potential.


Regards,

Jayant


Re: [ANNOUNCE] Apache Flink 2.0 Preview released

2024-11-06 Posted by Zakelly Lan
Hi Benoit,

Please find the result here[1].

The Nexmark repo[2] does not officially support the flink 2.0 preview
version. However, we have made a PR[3] for this and once it is merged, we
will offer a guide to run Nexmark Q20 with disaggregated state management.


[1] https://github.com/ververica/ForSt/releases/tag/v0.1.2-beta
[2] https://github.com/nexmark/nexmark
[3] https://github.com/nexmark/nexmark/pull/62


Best,
Zakelly


On Wed, Nov 6, 2024 at 12:12 AM Benoit Tailhades 
wrote:

> Hello,
>
> Release note is talking about a complete end-to-end trial using Nexmark.
> Where could this be found ?
>
> Thank you.
>
> On Mon, Nov 4, 2024 at 02:48, Enric Ott <243816...@qq.com> wrote:
>
>> Hello, Community:
>>   Is there a complete benchmark for Apache Flink 2.0 Preview?
>>   Thanks.
>>
>>
>> -- Original Message --
>> *From:* "Enric Ott" <243816...@qq.com>;
>> *Date:* Wednesday, October 23, 2024, 6:03 PM
>> *To:* "Xintong Song"; "dev"; "user"; "user-zh"; "announce";
>> *Subject:* Re: [ANNOUNCE] Apache Flink 2.0 Preview released
>>
>> How do I import the source code (from GitHub) into IntelliJ IDEA? It seems
>> that a project descriptor is missing.
>>
>>
>> -- Original Message --
>> *From:* "Xintong Song" ;
>> *Date:* Wednesday, October 23, 2024, 5:26 PM
>> *To:* "dev"; "user"; "user-zh"; "announce";
>> *Subject:* [ANNOUNCE] Apache Flink 2.0 Preview released
>>
>> The Apache Flink community is very happy to announce the release of
>> Apache Flink 2.0 Preview.
>>
>> Apache Flink® is an open-source unified stream and batch data processing
>> framework for distributed, high-performing, always-available, and accurate
>> data applications.
>>
>> This release is a preview of the upcoming Flink 2.0 release. The purpose
>> is to facilitate early adaptation to the breaking changes for our users and
>> partner projects (e.g., connectors), and to offer a sneak peek into the
>> exciting new features while gathering feedback.
>>
>> Note: Flink 2.0 Preview is not a stable release and should not be used in
>> production environments.
>>
>> The release is available for download at:
>> https://flink.apache.org/downloads.html
>>
>> Please check out the release blog post for an overview of this release:
>> https://flink.apache.org/2024/10/23/preview-release-of-apache-flink-2.0/
>>
>> The full release notes are available in jira:
>>
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12355070
>>
>> We would like to thank all contributors of the Apache Flink community who
>> made this release possible!
>>
>> Best,
>>
>> Becket, Jark, Martijn and Xintong
>>
>>


[ANNOUNCE] Apache Flink Kubernetes Operator 1.10.0 released

2024-10-29 Posted by Őrhidi Mátyás
The Apache Flink community is very happy to announce the release of Apache
Flink Kubernetes Operator 1.10.0

The Flink Kubernetes Operator allows users to manage their Apache Flink
applications and their lifecycle through native k8s tooling like kubectl.

Please check out the release blog post for an overview of the release:
https://flink.apache.org/2024/10/25/apache-flink-kubernetes-operator-1.10.0-release-announcement

The release is available for download at:
https://flink.apache.org/downloads.html

Maven artifacts for Flink Kubernetes Operator can be found at:
https://search.maven.org/artifact/org.apache.flink/flink-kubernetes-operator

Official Docker image for Flink Kubernetes Operator applications can be
found at:
https://hub.docker.com/r/apache/flink-kubernetes-operator

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354833

We would like to thank all contributors of the Apache Flink community who
made this release possible!

Regards,
Matyas Orhidi


Re: [ANNOUNCE] Apache Flink 2.0 Preview released

2024-10-23 Posted by weijie guo
Hi Enric,
I cloned the code from the apache/flink repo and imported it into IDEA, and
nothing unexpected happened.

On Wednesday, October 23, 2024, Enric Ott <243816...@qq.com> wrote:

> How do I import the source code (from GitHub) into IntelliJ IDEA? It seems
> that a project descriptor is missing.
>
>
> -- Original Message --
> *From:* "Xintong Song" ;
> *Date:* Wednesday, October 23, 2024, 5:26 PM
> *To:* "dev"; "user"; "user-zh"; "announce";
> *Subject:* [ANNOUNCE] Apache Flink 2.0 Preview released
>
> The Apache Flink community is very happy to announce the release of Apache
> Flink 2.0 Preview.
>
> Apache Flink® is an open-source unified stream and batch data processing
> framework for distributed, high-performing, always-available, and accurate
> data applications.
>
> This release is a preview of the upcoming Flink 2.0 release. The purpose
> is to facilitate early adaptation to the breaking changes for our users and
> partner projects (e.g., connectors), and to offer a sneak peek into the
> exciting new features while gathering feedback.
>
> Note: Flink 2.0 Preview is not a stable release and should not be used in
> production environments.
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Please check out the release blog post for an overview of this release:
> https://flink.apache.org/2024/10/23/preview-release-of-apache-flink-2.0/
>
> The full release notes are available in jira:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12355070
>
> We would like to thank all contributors of the Apache Flink community who
> made this release possible!
>
> Best,
>
> Becket, Jark, Martijn and Xintong
>
>

-- 

Best regards,

Weijie


[ANNOUNCE] Apache Flink 2.0 Preview released

2024-10-23 Posted by Xintong Song
The Apache Flink community is very happy to announce the release of Apache
Flink 2.0 Preview.

Apache Flink® is an open-source unified stream and batch data processing
framework for distributed, high-performing, always-available, and accurate
data applications.

This release is a preview of the upcoming Flink 2.0 release. The purpose is
to facilitate early adaptation to the breaking changes for our users and
partner projects (e.g., connectors), and to offer a sneak peek into the
exciting new features while gathering feedback.

Note: Flink 2.0 Preview is not a stable release and should not be used in
production environments.

The release is available for download at:
https://flink.apache.org/downloads.html

Please check out the release blog post for an overview of this release:
https://flink.apache.org/2024/10/23/preview-release-of-apache-flink-2.0/

The full release notes are available in jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12355070

We would like to thank all contributors of the Apache Flink community who
made this release possible!

Best,

Becket, Jark, Martijn and Xintong


Re: Flink has a large amount of data and appears to be stuck after restarting.

2024-10-07 Posted by rui chen
There are no errors on the TM; the monitoring shows no data being processed, and checkpoints also time out.

Xuyang wrote on Mon, Sep 30, 2024 at 10:18:

> Hi, chen.
>
> Could you paste the error stack and surrounding context from the TM logs?
>
>
>
>
> --
>
> Best!
> Xuyang
>
>
>
>
>
> At 2024-09-29 10:00:44, "rui chen"  wrote:
> >1.A single piece of data is 500kb
> >2.The job restarts after a tm fails
>


Flink has a large amount of data and appears to be stuck after restarting.

2024-09-28 Posted by rui chen
1.A single piece of data is 500kb
2.The job restarts after a tm fails


Join Us for Flink Forward Asia 2024 in Shanghai (Nov 29-30) & Jakarta (Dec 5)!

2024-09-27 Posted by Xintong Song
Dear Flink Community,


We are excited to share some important news with you!


Flink Forward Asia 2024 is coming up with two major events: the first in
Shanghai on November 29-30, and the second in Jakarta on December 5. These
gatherings will focus on the latest developments, future plans, and
production practices within the Apache Flink and Apache Paimon communities.


For the first time since its inception in 2018, Flink Forward Asia is
expanding into Southeast Asia, marking a significant milestone for the
global Apache Flink community.


Additionally, we’re pleased to announce the launch of our new website for
Flink Forward Asia. You can find more information about the events at
asia.flink-forward.org.


We look forward to open discussions on the latest advancements and
innovative applications of Flink technology, and we encourage greater
participation and influence within our community.


Don’t miss out! Register now for Flink Forward Asia Jakarta 2024 or submit
your presentations here: asia.flink-forward.org/jakarta-2024.


You can also register for Flink Forward Asia Shanghai 2024 or submit your
presentations here: asia.flink-forward.org/shanghai-2024.


We hope to see you there!


Best,

Xintong


Re: Does open-source Flink CEP support dynamic rule configuration?

2024-09-12 Posted by Feng Jin
Not supported yet.


https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=195730308


Best,
Feng


On Thu, Sep 12, 2024 at 10:20 AM 王凯 <2813732...@qq.com.invalid> wrote:

> May I ask whether open-source Flink CEP supports dynamic rule configuration?
>
>
>
>
> 王凯
> 2813732...@qq.com
>
>
>
>  


Re: Re: Question about object reuse in Flink

2024-08-29 Posted by Xuyang
Hi,

That should be fine.




--

Best!
Xuyang





At 2024-08-29 15:00:54, "刘仲诺" <2313678...@qq.com.INVALID> wrote:
>Hello, I am currently using the DataStream API; the code is as follows:
>public class BytesProcessor extends ProcessWindowFunction<Tuple2<Long,
>Long>, BytesResult, String, TimeWindow> {
>    private final BytesResult bytesResult = new BytesResult();
>    @Override
>    public void process(String key,
>                        ProcessWindowFunction<Tuple2<Long, Long>, BytesResult, String,
>TimeWindow>.Context context,
>                        Iterable<Tuple2<Long, Long>> accumulator,
>                        Collector<BytesResult> out) {
>        Tuple2<Long, Long> result = accumulator.iterator().next();
>//        BytesResult bytesResult = new BytesResult();
>        bytesResult.setChannelId(key);
>        bytesResult.setLastHopRecvBytes(result.f0);
>        bytesResult.setNextHopSendBytes(result.f1);
>        bytesResult.setCurrentTime(context.window().getStart());
>        if (!Objects.equals(result.f0, result.f1)) {
>            out.collect(bytesResult);
>        }
>    }
>}
>
>Here I use the operator instance's bytesResult field instead of creating a
>new bytesResult for every output. Is this approach safe?
>
>
>
>
>-- Original Message --
>From: "user-zh"
>Date: Thursday, August 29, 2024, 2:56 PM
>To: "user-zh"
>Subject: Re: Question about object reuse in Flink
>
>
>
>Hi,
>
>When an operator emits output, reusing the row is allowed; many of the
>operators that Flink SQL is translated into do something similar. See
>[1][2] for details.
>
>
>
>
>[1] 
>https://github.com/apache/flink/blob/576ec2b9361a3f8d58fb22b998b0ca7c3c8cf10e/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/StreamingJoinOperator.java#L45
>
>[2] 
>https://github.com/apache/flink/blob/576ec2b9361a3f8d58fb22b998b0ca7c3c8cf10e/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/aggregate/GroupAggFunction.java#L66
>
>
>
>
>--
>
>    Best!
>    Xuyang
>
>
>
>
>
At 2024-08-29 12:42:28, "刘仲诺" <2313678...@qq.com.INVALID> wrote:
>>Hello, I am developing a Flink streaming program. Does Flink allow object
>>reuse in operator functions? For example, when building output records,
>>instead of creating a new record every time, I create a single record object
>>as a member of the operator function instance, update only its fields, and
>>then emit it. Is this safe in Flink? My main goal is to reduce object creation.
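One caveat worth spelling out (an illustration added here, not from the thread): reusing a single output object is only safe as long as nothing downstream holds onto the reference after collect() returns. If any downstream step buffers records, every buffered element ends up aliasing the same mutated instance. A plain-Java sketch of that failure mode:

```java
import java.util.ArrayList;
import java.util.List;

// Illustration of the object-reuse hazard: if a downstream step stores the
// reference instead of copying, every stored entry aliases the one mutated
// instance and only the last value survives.
class Result {
    long value;
}

class ReuseDemo {
    static final List<Result> buffered = new ArrayList<>();

    static void emitReused(long[] inputs) {
        Result reused = new Result();    // single shared instance
        for (long v : inputs) {
            reused.value = v;            // mutate in place
            buffered.add(reused);        // "downstream" keeps the reference
        }
    }
}
```

In the windowed case above this pattern is fine as long as each emitted record is fully consumed (serialized or copied) before the next mutation, which is the default behavior between chained Flink operators unless object reuse is explicitly enabled.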


Re: How to convert a retract stream into an append stream in Flink SQL

2024-08-21 Posted by xiaohui zhang
Modify the downstream sink connector to drop the -D and -U records in execute.
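In DataStream terms, the suggestion amounts to keeping only the insert and update-after changes before they reach the sink. A plain-Java sketch of that filter (the RowKind enum below is a local stand-in for Flink's org.apache.flink.types.RowKind; all other names are illustrative):

```java
import java.util.List;
import java.util.stream.Collectors;

// Local stand-in for Flink's org.apache.flink.types.RowKind.
enum RowKind { INSERT, UPDATE_BEFORE, UPDATE_AFTER, DELETE }

class ChangeRecord {
    final RowKind kind;
    final String payload;
    ChangeRecord(RowKind kind, String payload) {
        this.kind = kind;
        this.payload = payload;
    }
}

class AppendOnlyFilter {
    /** Keep only +I and +U records; drop the -D and -U retractions. */
    static List<ChangeRecord> toAppendStream(List<ChangeRecord> changes) {
        return changes.stream()
                .filter(r -> r.kind == RowKind.INSERT || r.kind == RowKind.UPDATE_AFTER)
                .collect(Collectors.toList());
    }
}
```

The usual caveat applies: once retractions are dropped, the sink effectively sees an upsert stream, so it needs a primary key or idempotent writes to stay correct.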


Flink 1.20 MDC with logback has a serious problem - fails to start normally

2024-08-20 Posted by xuhaiLong




Flink 1.20 added MDC [1] support. There is a bug when using logback: the
logback context returns a null value, and downstream handling throws an
NPE.


issue: [flink 1.20 does not support logback]
https://issues.apache.org/jira/browse/FLINK-36104




MDC [1]: 
https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/deployment/advanced/logging/#structured-logging
  

How to convert a retract stream into an append stream in Flink SQL

2024-08-19 Posted by 焦童
In Flink SQL, how can I directly filter out the -D and -U records of a retract
stream and only emit the +I and +U data? Can this be done by writing a UDF, and
if so, how does the UDF tell what kind the current record is (+I or -D)?

Re: In Flink on YARN mode, how to get the rest port inside a jar job

2024-08-05 Posted by xiaohui zhang
When submitting via YARN, after a successful submission the YARN client
returns the application master's address and port; you can read it from that
response.

wjw_bigdata wrote on Thu, Aug 1, 2024 at 14:24:

> Unsubscribe
>
>
>
>
>
>
>  Original Message 
> | From | Lei Wang |
> | Date | Aug 1, 2024 14:08 |
> | To |  |
> | Subject | Re: In Flink on YARN mode, how to get the rest port inside a jar job |
> You can specify rest.port in flink-conf.yaml; a port range may be given.
>
>
> On Wed, Jul 31, 2024 at 8:44 PM melin li  wrote:
>
> In Flink on YARN mode the rest port is random; I need to obtain it. Is there a good way?
>
>


Re: In Flink on YARN mode, how to get the rest port inside a jar job

2024-07-31 Posted by wjw_bigdata
Unsubscribe






 Original Message 
| From | Lei Wang |
| Date | Aug 1, 2024 14:08 |
| To |  |
| Subject | Re: In Flink on YARN mode, how to get the rest port inside a jar job |
You can specify rest.port in flink-conf.yaml; a port range may be given.


On Wed, Jul 31, 2024 at 8:44 PM melin li  wrote:

In Flink on YARN mode the rest port is random; I need to obtain it. Is there a good way?



Re: In Flink on YARN mode, how to get the rest port inside a jar job

2024-07-31 Posted by Lei Wang
You can specify rest.port in flink-conf.yaml; a port range may be given.


On Wed, Jul 31, 2024 at 8:44 PM melin li  wrote:

> In Flink on YARN mode the rest port is random; I need to obtain it. Is there a good way?
>


In Flink on YARN mode, how to get the rest port inside a jar job

2024-07-31 Posted by melin li
In Flink on YARN mode the rest port is random; I need to obtain it. Is there a good way?


Re: Flink job throws ClassNotFoundException at runtime

2024-07-18 Posted by Yanquan Lv
Hi,
Suppose xxx.shade. is the prefix you use for shading.
Do grep -rn 'org.apache.hudi.com.xx.xx.xxx.A' and grep -rn
'xxx.shade.org.apache.hudi.com.xx.xx.xxx.A' return consistent results?

℡小新的蜡笔不见嘞、 <1515827...@qq.com.invalid> wrote on Thu, Jul 18, 2024 at 20:14:

> Hello, thank you for your reply.
> My understanding is that everything was shaded; I checked with your grep -rn
> command and found no problem. Moreover, 'org.apache.hudi.com.xx.xx.xxx.A'
> does indeed exist in my job jar
>
>
>
>
> -- Original Message --
> From: "user-zh" <decq12y...@gmail.com>;
> Date: Thursday, July 18, 2024, 7:55 PM
> To: "user-zh"
> Subject: Re: Flink job throws ClassNotFoundException at runtime
>
>
>
> Hi, this class was shaded, but the other classes that call it may live in
> different jars that were not all shaded. You can run grep -rn
> 'org.apache.hudi.com.xx.xx.xxx.A' to check whether every package that calls
> this class was shaded as well.
>
> ℡小新的蜡笔不见嘞、 <1515827...@qq.com.invalid> wrote on Thu, Jul 18, 2024 at 18:31:
>
> > During runtime, a Flink job occasionally throws a ClassNotFoundException.
> > What usually causes this, and how can it be resolved? Details:
> > * The class does exist in the job jar
> > * The class is shaded, because the Flink cluster bundles this dependency,
> > so the related classes had to be shaded
> > * The problem appears occasionally; it may cause the job to restart, and
> > after restarting the job may recover or keep failing with the same exception
> > * The cluster runs in session standalone mode
> > * Both child-first and parent-first classloading were tried; the problem persists
> >
> >
> > The exception stack is as follows (JM node):
> > Caused by: java.lang.ClassNotFoundException:
> > org.apache.hudi.com.xx.xx.xxx.A
> > at
> java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> > at
> java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> > at
> >
> org.apache.flink.util.FlinkUserCodeClassLoader.loadClassWithoutExceptionHandling(FlinkUserCodeClassLoader.java:97)
> > at
> >
> org.apache.flink.util.ParentFirstClassLoader.loadClassWithoutExceptionHandling(ParentFirstClassLoader.java:65)
> > at
> >
> org.apache.flink.util.FlinkUserCodeClassLoader.loadClass(FlinkUserCodeClassLoader.java:81)
> > at
> java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> > ... 59 more
> >
> >
> >
> > Thanks, everyone


Re: Flink job throws ClassNotFoundException at runtime

2024-07-18 Posted by Yanquan Lv
Hi, this class was shaded, but the other classes that call it may live in
different jars that were not all shaded. You can run grep -rn
'org.apache.hudi.com.xx.xx.xxx.A' to check whether every package that calls
this class was shaded as well.

℡小新的蜡笔不见嘞、 <1515827...@qq.com.invalid> wrote on Thu, Jul 18, 2024 at 18:31:

> During runtime, a Flink job occasionally throws a ClassNotFoundException.
> What usually causes this, and how can it be resolved? Details:
> * The class does exist in the job jar
> * The class is shaded, because the Flink cluster bundles this dependency,
> so the related classes had to be shaded
> * The problem appears occasionally; it may cause the job to restart, and
> after restarting the job may recover or keep failing with the same exception
> * The cluster runs in session standalone mode
> * Both child-first and parent-first classloading were tried; the problem persists
>
>
> The exception stack is as follows (JM node):
> Caused by: java.lang.ClassNotFoundException:
> org.apache.hudi.com.xx.xx.xxx.A
> at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at
> org.apache.flink.util.FlinkUserCodeClassLoader.loadClassWithoutExceptionHandling(FlinkUserCodeClassLoader.java:97)
> at
> org.apache.flink.util.ParentFirstClassLoader.loadClassWithoutExceptionHandling(ParentFirstClassLoader.java:65)
> at
> org.apache.flink.util.FlinkUserCodeClassLoader.loadClass(FlinkUserCodeClassLoader.java:81)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 59 more
>
>
>
> Thanks, everyone


Re: When reading FTP files in real time in Flink via InputFormatSourceFunction, fetching the next split fails

2024-07-16 Posted by YH Zhu
Unsubscribe

Px New <15701181132mr@gmail.com> wrote on Tue, Jul 16, 2024 at 22:52:

> Implemented a version via the old API, i.e., InputFormatSourceFunction and
> InputFormat, but found that the first batch of files (those already present
> at job start) is processed normally, while after I upload new files this
> stays empty. Any ideas on how to solve this?
>
> [image: image.png]
> 
> Or is there another implementation for real-time reading of an FTP directory? It should, as far as possible:
> 1. Read FTP files in real time
> 2. Continuously monitor the directory, recursing into subdirectories and files
> 3. Support parallel reading and splitting of large files
> 4. Support file types such as json, txt, and zip, reading the data inside each
> 5. Support resumption from breakpoints and saving of state
>
>


When reading FTP files in real time in Flink via InputFormatSourceFunction, fetching the next split fails

2024-07-16 Posted by Px New
Implemented a version via the old API, i.e., InputFormatSourceFunction and
InputFormat, but found that the first batch of files (those already present
at job start) is processed normally, while after I upload new files this
stays empty. Any ideas on how to solve this?

[image: image.png]

Or is there another implementation for real-time reading of an FTP directory? It should, as far as possible:
1. Read FTP files in real time
2. Continuously monitor the directory, recursing into subdirectories and files
3. Support parallel reading and splitting of large files
4. Support file types such as json, txt, and zip, reading the data inside each
5. Support resumption from breakpoints and saving of state


Re: CLI job submission under Flink Standalone-ZK-HA mode

2024-07-13 Posted by love_h1...@126.com
My guess is that both JMs were writing their own addresses to ZK's
rest_service_lock node at the same time, so some of the Flink client's jobs
were submitted to one JM and the others to the other JM.

The situation above can be reproduced by manually modifying the ZK node.
The cluster state at the time cannot be fully reproduced just by restarting
ZK; the root cause is unclear. Has a similar bug been reported?



 Original Message 
| From | Zhanghao Chen |
| Date | 2024-07-13 12:41 |
| To | user-zh@flink.apache.org |
| Cc | |
| Subject | Re: CLI job submission under Flink Standalone-ZK-HA mode |
From the logs, a leader switch occurred while the ZK cluster was rolling; both
JMs became leader one after the other, but never at the same time.

Best,
Zhanghao Chen

From: love_h1...@126.com 
Sent: Friday, July 12, 2024 17:17
To: user-zh@flink.apache.org 
Subject: Flink Standalone-ZK-HA模式下,CLi任务提交

Version: Flink 1.11.6, Standalone HA mode, ZooKeeper 3.5.8
Actions:
1. Only cancelled all running jobs; did not stop the Flink cluster
2. Rolling-restarted the ZooKeeper cluster
3. Submitted multiple jobs with the flink run command
Symptoms:
1. Some job submissions failed with: The rpc invocation size 721919700 exceeds
the maximum akka framesize.
2. The logs of two JobManager nodes in the Flink cluster both contain messages
about receiving and executing jobs
Questions:
1. Does submitting jobs with flink run submit to both JobManager nodes in the
Flink cluster?
2. Can restarting the ZooKeeper cluster leave the Flink cluster with two
JobManagers in the leader role? Is this a bug in a special scenario?





Re: CLI job submission under Flink Standalone-ZK-HA mode

2024-07-12 Posted by Zhanghao Chen
From the logs, a leader switch occurred while the ZK cluster was rolling; both
JMs became leader one after the other, but never at the same time.

Best,
Zhanghao Chen

From: love_h1...@126.com 
Sent: Friday, July 12, 2024 17:17
To: user-zh@flink.apache.org 
Subject: Flink Standalone-ZK-HA模式下,CLi任务提交

Version: Flink 1.11.6, Standalone HA mode, ZooKeeper 3.5.8
Actions:
 1. Only cancelled all running jobs; did not stop the Flink cluster
 2. Rolling-restarted the ZooKeeper cluster
 3. Submitted multiple jobs with the flink run command
Symptoms:
1. Some job submissions failed with: The rpc invocation size 721919700 exceeds
the maximum akka framesize.
2. The logs of two JobManager nodes in the Flink cluster both contain messages
about receiving and executing jobs
Questions:
1. Does submitting jobs with flink run submit to both JobManager nodes in the
Flink cluster?
2. Can restarting the ZooKeeper cluster leave the Flink cluster with two
JobManagers in the leader role? Is this a bug in a special scenario?





Re: In Flink HA mode, restarting the ZK cluster causes client job submission exceptions

2024-07-11 Posted by wjw_bigd...@163.com
Unsubscribe



 Original Message 
| From | love_h1...@126.com |
| Date | 2024-07-11 16:10 |
| To | user-zh@flink.apache.org |
| Cc | |
| Subject | In Flink HA mode, restarting the ZK cluster causes client job submission exceptions |
Symptoms:
Flink 1.11.6, Standalone HA mode; the ZK cluster was rolling-restarted; multiple
jobs were submitted with the flink run command on one node of the Flink cluster;
Some submissions failed with the following exception:
[Flink-DispatcherRestEndpoint-thread-2] - [WARN ] - 
[org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.createRpcInvocationMessage(line:290)]
 - Could not create remote rpc invocation message. Failing rpc invocation 
because...
java.io.IOException: The rpc invocation size 12532388 exceeds the maximum akka 
framesize.


Log information:
The JobManager log on node A of the cluster contains messages about being granted leadership
17:19:45,433 - [flink-akka.actor.default-dispatcher-22] - [INFO ] - 
[org.apache.flink.runtime.resourcemanager.ResourceManager.tryAcceptLeadership(line:1118)]
 - ResourceManager 
akka.tcp://flink@10.10.160.57:46746/user/rpc/resourcemanager_0 was granted 
leadership with fencing token ad84d46e902e0cf6da92179447af4e00
17:19:45,434 - [main-EventThread] - [INFO ] - 
[org.apache.flink.runtime.webmonitor.WebMonitorEndpoint.grantLeadership(line:931)]
 - http://XXX:XXX was granted leadership with 
leaderSessionID=f60df688-372d-416b-a965-989a59b37feb
17:19:45,437 - [flink-akka.actor.default-dispatcher-22] - [INFO ] - 
[org.apache.flink.runtime.resourcemanager.slotmanager.SlotManagerImpl.start(line:287)]
 - Starting the SlotManager.
17:19:45,480 - [main-EventThread] - [INFO ] - 
[org.apache.flink.runtime.dispatcher.runner.AbstractDispatcherLeaderProcess.startInternal(line:97)]
 - Start SessionDispatcherLeaderProcess.XXX
17:19:45,489 - [cluster-io-thread-1] - [INFO ] - 
[org.apache.flink.runtime.rpc.akka.AkkaRpcService.startServer(line:232)] - 
Starting RPC endpoint for 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher at 
akka://flink/user/rpc/dispatcher_1 .
17:19:45,495 - [flink-akka.actor.default-dispatcher-23] - [INFO ] - 
[org.apache.flink.runtime.resourcemanager.ResourceManager.registerTaskExecutorInternal(line:891)]
 - Registering TaskManager with ResourceID XX 
(akka.tcp://flink@X:XX/user/rpc/taskmanager_0) at ResourceManager

Two nodes (A and B) in the Flink cluster received job submission requests; both nodes' logs contain the following:
[flink-akka.actor.default-dispatcher-33] - [INFO ] - 
[org.apache.flink.runtime.jobmaster.JobMaster.connectToResourceManager(line:1107)]
 - Connecting to ResourceManager 
akka.tcp://flink@X.X.X.X:46746/user/rpc/resourcemanager_0(ad84d46e902e0cf6da92179447af4e00)
The logs of four JobManager nodes in the cluster show a Start
SessionDispatcherLeaderProcess entry, almost all followed by a Stopping
SessionDispatcherLeaderProcess entry; however, nodes A and B have no Stopping
SessionDispatcherLeaderProcess entry.
[main-EventThread] - [INFO ] - 
[org.apache.flink.runtime.dispatcher.runner.AbstractDispatcherLeaderProcess.startInternal(line:97)]
 - Start SessionDispatcherLeaderProcess.
[Curator-ConnectionStateManager-0] - [INFO ] - 
[org.apache.flink.runtime.dispatcher.runner.AbstractDispatcherLeaderProcess.closeInternal(line:134)]
 - Stopping SessionDispatcherLeaderProcess.






In Flink HA mode, restarting the ZK cluster causes client job submission exceptions

2024-07-11 Posted by love_h1...@126.com
Symptoms:
Flink 1.11.6, Standalone HA mode; the ZK cluster was rolling-restarted; multiple
jobs were submitted with the flink run command on one node of the Flink cluster;
Some submissions failed with the following exception:
[Flink-DispatcherRestEndpoint-thread-2] - [WARN ] - 
[org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.createRpcInvocationMessage(line:290)]
 - Could not create remote rpc invocation message. Failing rpc invocation 
because...
java.io.IOException: The rpc invocation size 12532388 exceeds the maximum akka 
framesize.


Log information:
The JobManager log on node A of the cluster contains messages about being granted leadership
17:19:45,433 - [flink-akka.actor.default-dispatcher-22] - [INFO ] - 
[org.apache.flink.runtime.resourcemanager.ResourceManager.tryAcceptLeadership(line:1118)]
 - ResourceManager 
akka.tcp://flink@10.10.160.57:46746/user/rpc/resourcemanager_0 was granted 
leadership with fencing token ad84d46e902e0cf6da92179447af4e00
17:19:45,434 - [main-EventThread] - [INFO ] - 
[org.apache.flink.runtime.webmonitor.WebMonitorEndpoint.grantLeadership(line:931)]
 - http://XXX:XXX was granted leadership with 
leaderSessionID=f60df688-372d-416b-a965-989a59b37feb
17:19:45,437 - [flink-akka.actor.default-dispatcher-22] - [INFO ] - 
[org.apache.flink.runtime.resourcemanager.slotmanager.SlotManagerImpl.start(line:287)]
 - Starting the SlotManager.
17:19:45,480 - [main-EventThread] - [INFO ] - 
[org.apache.flink.runtime.dispatcher.runner.AbstractDispatcherLeaderProcess.startInternal(line:97)]
 - Start SessionDispatcherLeaderProcess.XXX
17:19:45,489 - [cluster-io-thread-1] - [INFO ] - 
[org.apache.flink.runtime.rpc.akka.AkkaRpcService.startServer(line:232)] - 
Starting RPC endpoint for 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher at 
akka://flink/user/rpc/dispatcher_1 .
17:19:45,495 - [flink-akka.actor.default-dispatcher-23] - [INFO ] - 
[org.apache.flink.runtime.resourcemanager.ResourceManager.registerTaskExecutorInternal(line:891)]
 - Registering TaskManager with ResourceID XX 
(akka.tcp://flink@X:XX/user/rpc/taskmanager_0) at ResourceManager

Two nodes (A and B) in the Flink cluster received job submission requests; both nodes' logs contain the following:
[flink-akka.actor.default-dispatcher-33] - [INFO ] - 
[org.apache.flink.runtime.jobmaster.JobMaster.connectToResourceManager(line:1107)]
 - Connecting to ResourceManager 
akka.tcp://flink@X.X.X.X:46746/user/rpc/resourcemanager_0(ad84d46e902e0cf6da92179447af4e00)
The logs of four JobManager nodes in the cluster show a Start
SessionDispatcherLeaderProcess entry, almost all followed by a Stopping
SessionDispatcherLeaderProcess entry; however, nodes A and B have no Stopping
SessionDispatcherLeaderProcess entry.
 [main-EventThread] - [INFO ] - 
[org.apache.flink.runtime.dispatcher.runner.AbstractDispatcherLeaderProcess.startInternal(line:97)]
 - Start SessionDispatcherLeaderProcess.
 [Curator-ConnectionStateManager-0] - [INFO ] - 
[org.apache.flink.runtime.dispatcher.runner.AbstractDispatcherLeaderProcess.closeInternal(line:134)]
 - Stopping SessionDispatcherLeaderProcess.






Re: [ANNOUNCE] Apache Flink Kubernetes Operator 1.9.0 released

2024-07-03 Posted by Őrhidi Mátyás
Thank you, Gyula! 🥳
Cheers
On Wed, Jul 3, 2024 at 8:00 AM Gyula Fóra  wrote:

> The Apache Flink community is very happy to announce the release of Apache
> Flink Kubernetes Operator 1.9.0.
>
> The Flink Kubernetes Operator allows users to manage their Apache Flink
> applications and their lifecycle through native k8s tooling like kubectl.
>
> Release blogpost:
> https://flink.apache.org/2024/07/02/apache-flink-kubernetes-operator-1.9.0-release-announcement/
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Maven artifacts for Flink Kubernetes Operator can be found at:
> https://search.maven.org/artifact/org.apache.flink/flink-kubernetes-operator
>
> Official Docker image for Flink Kubernetes Operator can be found at:
> https://hub.docker.com/r/apache/flink-kubernetes-operator
>
> The full release notes are available in Jira:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354417
>
> We would like to thank all contributors of the Apache Flink community who
> made this release possible!
>
> Regards,
> Gyula Fora
>


[ANNOUNCE] Apache Flink Kubernetes Operator 1.9.0 released

2024-07-03 Posted by Gyula Fóra
The Apache Flink community is very happy to announce the release of Apache
Flink Kubernetes Operator 1.9.0.

The Flink Kubernetes Operator allows users to manage their Apache Flink
applications and their lifecycle through native k8s tooling like kubectl.

Release blogpost:
https://flink.apache.org/2024/07/02/apache-flink-kubernetes-operator-1.9.0-release-announcement/

The release is available for download at:
https://flink.apache.org/downloads.html

Maven artifacts for Flink Kubernetes Operator can be found at:
https://search.maven.org/artifact/org.apache.flink/flink-kubernetes-operator

Official Docker image for Flink Kubernetes Operator can be found at:
https://hub.docker.com/r/apache/flink-kubernetes-operator

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354417

We would like to thank all contributors of the Apache Flink community who
made this release possible!

Regards,
Gyula Fora


How does Flink AsyncWriter do fixed-rate limiting? There seems to be a bug here

2024-06-24 Posted by jinzhuguang
Flink 1.16.0

I found a related community article; its example is as follows:
https://flink.apache.org/2022/11/25/optimising-the-throughput-of-async-sinks-using-a-custom-ratelimitingstrategy/#rationale-behind-the-ratelimitingstrategy-interface


public class TokenBucketRateLimitingStrategy implements RateLimitingStrategy {

    private final Bucket bucket;

    public TokenBucketRateLimitingStrategy() {
        Refill refill = Refill.intervally(1, Duration.ofSeconds(1));
        Bandwidth limit = Bandwidth.classic(10, refill);
        this.bucket = Bucket4j.builder()
                .addLimit(limit)
                .build();
    }

    // ... (information methods not needed)

    @Override
    public boolean shouldBlock(RequestInfo requestInfo) {
        return bucket.tryConsume(requestInfo.getBatchSize());
    }
}



But the return value of shouldBlock seems inverted. In actual use I found that the async thread pool's queue fills up quickly and a RejectedExecutionException is thrown.
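The observation about the polarity looks plausible: Bucket4j's tryConsume returns true when tokens were acquired, while shouldBlock presumably needs to return true when the request cannot proceed, i.e. the negation. A plain-Java sketch of the intended semantics (no Flink or Bucket4j dependencies; all names illustrative):

```java
import java.time.Duration;

// Minimal token bucket illustrating the intended polarity: capacity 10,
// refilled at 1 token per second. shouldBlock returns true exactly when the
// request CANNOT consume enough tokens, i.e. the negation of tryConsume.
class SimpleTokenBucket {
    private final long capacity;
    private final long refillPeriodNanos;
    private double tokens;
    private long lastRefillNanos;

    SimpleTokenBucket(long capacity, Duration refillPeriod) {
        this.capacity = capacity;
        this.refillPeriodNanos = refillPeriod.toNanos();
        this.tokens = capacity;              // start full
        this.lastRefillNanos = System.nanoTime();
    }

    private void refill(long nowNanos) {
        // Add fractional tokens for the elapsed time, capped at capacity.
        double added = (double) (nowNanos - lastRefillNanos) / refillPeriodNanos;
        tokens = Math.min(capacity, tokens + added);
        lastRefillNanos = nowNanos;
    }

    synchronized boolean tryConsume(long n) {
        refill(System.nanoTime());
        if (tokens >= n) {
            tokens -= n;
            return true;
        }
        return false;
    }

    /** Block exactly when consumption fails - note the negation. */
    synchronized boolean shouldBlock(long batchSize) {
        return !tryConsume(batchSize);
    }
}
```

Whether the blog-post example is actually buggy is worth confirming against the AsyncSinkWriter code of your Flink version, but a full async queue followed by RejectedExecutionException is consistent with the polarity being inverted.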



Re: How can Flink dynamically join n out of multiple dimension tables?

2024-06-19 Posted by xiaohui zhang
A lookup join can join multiple dimension tables, but updates to a dimension table do not trigger a refresh of historical data.
When joining multiple dimension tables, consider the latency added by the multiple lookups and the query-TPS pressure on the dimension-table database.

斗鱼 <1227581...@qq.com.invalid> wrote on Wed, Jun 19, 2024 at 23:12:

> OK, thanks for the reply. I had heard that Flink's lookup join can implement similar logic; I just don't know whether lookup join supports multiple dynamic dimension tables.
>
>
> 斗鱼
> 1227581...@qq.com
>
>
>
>  
>
>
>
>
> -- Original Message --
> From: "user-zh" <xhzhang...@gmail.com>;
> Date: Wednesday, June 19, 2024, 5:55 PM
> To: "user-zh"
> Subject: Re: How can Flink dynamically join n out of multiple dimension tables?
>
>
>
> Do you want to refresh the historical fact table after a dimension table is
> updated? That is nearly impossible with Flink, especially with multiple
> dimension tables: every time a dimension table changes, you would have to
> find the related records in the entire history and rewrite them. Whether in
> state storage or update volume, the resources required are too high to handle.
> In our current real-time wide-table applications, the real-time part is
> generally transactional data, and the dimension values captured should be
> those at the time the business event occurred.
> Refreshing facts after dimension updates is usually done in nightly batches.
> If there is a hard real-time requirement, the only option is to join the
> dimension table at query time to read the latest values.
>
王旭 wrote:
> > Glad to exchange ideas; we are doing a similar refactor.
> > 1. If you are not sure how many dimension tables need joining, could you
> > simply join them all and then use fields in the driving data to decide
> > which dimension tables' data to take, similar to a left join?
> >
> > 2. As for refreshing the result table after a dimension table changes: you
> > mentioned the dimension data is at the hundred-million scale, so the fact
> > data must be even larger. Reverse-joining the full fact table does not seem
> > suited to stream processing; if only part of it needs refreshing, you could
> > stage the last n days of data in external storage.
> >
> >
> >
> >  Original Message 
> > | From | 斗鱼<1227581...@qq.com.INVALID> |
> > | Date | Jun 16, 2024 21:08 |
> > | To | user-zh |
> > | Cc | |
> > | Subject | Re: How can Flink dynamically join n out of multiple dimension tables? |
> >
> > We are still in the research phase; SQL or DataStream are both fine. Our
> > DWD and dimension tables are currently stored in ClickHouse/Doris. We are
> > designing the future architecture and have not implemented anything yet;
> > just seeking advice from you all.
> >
> >
> > 斗鱼
> > 1227581...@qq.com
> >
> >
> >
> >
> > -- Original Message --
> > From: "user-zh" <xwwan...@163.com>;
> > Date: Sunday, June 16, 2024, 9:03 PM
> > To: "user-zh"
> > Subject: Re: How can Flink dynamically join n out of multiple dimension tables?
> >
> >
> >
> > Hi, are you implementing this scenario with the Flink SQL API or the
> > DataStream API?
> >
> >
> >
> >  Original Message 
> > | From | 斗鱼<1227581...@qq.com.INVALID> |
> > | Date | Jun 16, 2024 20:35 |
> > | To | user-zh |
> > | Cc | |
> > | Subject | How can Flink dynamically join n out of multiple dimension tables? |
> > To all: we currently face the following scenario:
> > 1. While writing data into a DWD fact table, we also write the DWD record
> > info to Kafka
> > 2. The Kafka message contains a string array of dimension-table type
> > identifiers
> > 3. Flink consumes the Kafka data in real time and, based on the type array,
> > joins different dimension tables. For example, if the array contains [1, 2],
> > Flink reads the Kafka message, joins the DWD data with dimension tables 1
> > and 2, and writes to the DWS table
> >
> > How can we dynamically join dimension tables based on this array? These
> > dimension tables are all large, at the hundred-million-row scale, and when
> > a dimension table changes, the joined DWS data must change as well. Is
> > there a technical solution for this? If so, please give a simple example
> > or a reference link. Thanks!
> >
> > |
> > |
> > 斗鱼
> > 1227581...@qq.com
> > |
> >


Re: How can Flink dynamically join n out of multiple dimension tables?

2024-06-19 Posted by xiaohui zhang
Do you want to refresh the historical fact table after a dimension table is updated? That is nearly impossible with Flink, especially with multiple dimension tables: every time a dimension table changes, you would have to find the related records in the entire history and rewrite them. Whether in state storage or update volume, the resources required are too high to handle.
In our current real-time wide-table applications, the real-time part is generally transactional data, and the dimension values captured should be those at the time the business event occurred.
Refreshing facts after dimension updates is usually done in nightly batches. If there is a hard real-time requirement, the only option is to join the dimension table at query time to read the latest values.

王旭 wrote on Sun, Jun 16, 2024 at 21:20:

> Glad to exchange ideas; we are doing a similar refactor.
> 1. If you are not sure how many dimension tables need joining, could you simply join them all and then use fields in the driving data to decide which dimension tables' data to take, similar to a left join?
>
> 2. As for refreshing the result table after a dimension table changes: you mentioned the dimension data is at the hundred-million scale, so the fact data must be even larger. Reverse-joining the full fact table does not seem suited to stream processing; if only part of it needs refreshing, you could stage the last n days of data in external storage.
>
>
>
>  回复的原邮件 
> | 发件人 | 斗鱼<1227581...@qq.com.INVALID> |
> | 日期 | 2024年06月16日 21:08 |
> | 收件人 | user-zh |
> | 抄送至 | |
> | 主题 | 回复:Flink如何做到动态关联join多张维度表中的n张表? |
>
> 大佬,目前我们还处在调研阶段,SQL或datastream都可以,目前我们DWD或维度表设计是存在ClickHouse/Doris,目前在设计未来的架构,还没实现,只是想向各位大佬取经,麻烦大佬帮忙指教下
>
>
> 斗鱼
> 1227581...@qq.com
>
>
>
>  
>
>
>
>
> -- 原始邮件 --
> 发件人:
>   "user-zh"
> <
> xwwan...@163.com>;
> 发送时间: 2024年6月16日(星期天) 晚上9:03
> 收件人: "user-zh"
> 主题: 回复:Flink如何做到动态关联join多张维度表中的n张表?
>
>
>
> 你好,请问你们是用flink sql api还是datastream api实现这个场景的
>
>
>
>  回复的原邮件 
> | 发件人 | 斗鱼<1227581...@qq.com.INVALID> |
> | 日期 | 2024年06月16日 20:35 |
> | 收件人 | user-zh |
> | 抄送至 | |
> | 主题 | Flink如何做到动态关联join多张维度表中的n张表? |
> 请教下各位大佬,目前我们遇到一个场景:
> 1、需要往DWD事实表里写入数据的同时,往Kafka里面写该DWD表的记录信息,该信息
> 2、该Kafka信息会包含一个维度表数据类型的字符串数组
>
> 3、Flink在做实时消费Kafka中数据,根据类型数组,关联不同的维度表,如数组包含【1,2】,则Flink读取Kafka消息后,将DWD的数据关联维度表1和维度表2后,写入DWS表
>
>
>
> 想请问大佬如何实现根据该数组信息动态关联维度表,这些维度表数据量都挺大的,亿级别的数据,需要能满足维度表变化后,关联后的DWS数据也能变化,不知道是否有什么技术方案能实现,有的话麻烦大佬帮忙给个简单示例或者参考链接,感谢!
>
>
>
>
>
>
>
> |
> |
> 斗鱼
> 1227581...@qq.com
> |
>  


Re: [ANNOUNCE] Apache Flink CDC 3.1.1 released

2024-06-18 文章 Paul Lam
Well done! Thanks a lot for your hard work!

Best,
Paul Lam

> 2024年6月19日 09:47,Leonard Xu  写道:
> 
> Congratulations! Thanks Qingsheng for the release work and all contributors 
> involved.
> 
> Best,
> Leonard 
> 
>> 2024年6月18日 下午11:50,Qingsheng Ren  写道:
>> 
>> The Apache Flink community is very happy to announce the release of Apache
>> Flink CDC 3.1.1.
>> 
>> Apache Flink CDC is a distributed data integration tool for real time data
>> and batch data, bringing the simplicity and elegance of data integration
>> via YAML to describe the data movement and transformation in a data
>> pipeline.
>> 
>> Please check out the release blog post for an overview of the release:
>> https://flink.apache.org/2024/06/18/apache-flink-cdc-3.1.1-release-announcement/
>> 
>> The release is available for download at:
>> https://flink.apache.org/downloads.html
>> 
>> Maven artifacts for Flink CDC can be found at:
>> https://search.maven.org/search?q=g:org.apache.flink%20cdc
>> 
>> The full release notes are available in Jira:
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354763
>> 
>> We would like to thank all contributors of the Apache Flink community who
>> made this release possible!
>> 
>> Regards,
>> Qingsheng Ren
> 



Re: [ANNOUNCE] Apache Flink CDC 3.1.1 released

2024-06-18 文章 Leonard Xu
Congratulations! Thanks Qingsheng for the release work and all contributors 
involved.

Best,
Leonard 

> 2024年6月18日 下午11:50,Qingsheng Ren  写道:
> 
> The Apache Flink community is very happy to announce the release of Apache
> Flink CDC 3.1.1.
> 
> Apache Flink CDC is a distributed data integration tool for real time data
> and batch data, bringing the simplicity and elegance of data integration
> via YAML to describe the data movement and transformation in a data
> pipeline.
> 
> Please check out the release blog post for an overview of the release:
> https://flink.apache.org/2024/06/18/apache-flink-cdc-3.1.1-release-announcement/
> 
> The release is available for download at:
> https://flink.apache.org/downloads.html
> 
> Maven artifacts for Flink CDC can be found at:
> https://search.maven.org/search?q=g:org.apache.flink%20cdc
> 
> The full release notes are available in Jira:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354763
> 
> We would like to thank all contributors of the Apache Flink community who
> made this release possible!
> 
> Regards,
> Qingsheng Ren



[ANNOUNCE] Apache Flink CDC 3.1.1 released

2024-06-18 文章 Qingsheng Ren
The Apache Flink community is very happy to announce the release of Apache
Flink CDC 3.1.1.

Apache Flink CDC is a distributed data integration tool for real time data
and batch data, bringing the simplicity and elegance of data integration
via YAML to describe the data movement and transformation in a data
pipeline.

Please check out the release blog post for an overview of the release:
https://flink.apache.org/2024/06/18/apache-flink-cdc-3.1.1-release-announcement/

The release is available for download at:
https://flink.apache.org/downloads.html

Maven artifacts for Flink CDC can be found at:
https://search.maven.org/search?q=g:org.apache.flink%20cdc

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354763

We would like to thank all contributors of the Apache Flink community who
made this release possible!

Regards,
Qingsheng Ren


flink checkpoint 延迟的性能问题讨论

2024-06-16 文章 15868861416
各位大佬,
背景:
实际测试flink读Kafka 数据写入hudi, checkpoint的间隔时间是1min, 
state.backend分别为filesystem,测试结果如下:



写hudi的checkpoint的延迟:(原帖附图,此处缺失)

写iceberg的延迟:(原帖附图,此处缺失)

疑问: hudi的checkpoint的文件数据比iceberg要大很多,如何降低flink写hudi的checkpoint的延迟?


| |
博星
|
|
15868861...@163.com
|



回复:Flink如何做到动态关联join多张维度表中的n张表?

2024-06-16 文章 王旭
互相交流哈,我们也在做类似的改造
1.不确定需要关联几张维表的话,是否可以直接都关联了,然后再根据驱动数据中的字段判断要取哪几张维度表的数据,类似left join
2.维表变化后对应的结果表也要刷新这个场景,你有提到维表数据是亿级别,可想而知事实表数据更大,如果要反向关联全量事实表的数据,感觉不太适合用流处理;如果只是刷新部分的话,倒是可以将n天内的数据暂存至外部存储介质中



 回复的原邮件 
| 发件人 | 斗鱼<1227581...@qq.com.INVALID> |
| 日期 | 2024年06月16日 21:08 |
| 收件人 | user-zh |
| 抄送至 | |
| 主题 | 回复:Flink如何做到动态关联join多张维度表中的n张表? |
大佬,目前我们还处在调研阶段,SQL或datastream都可以,目前我们DWD或维度表设计是存在ClickHouse/Doris,目前在设计未来的架构,还没实现,只是想向各位大佬取经,麻烦大佬帮忙指教下


斗鱼
1227581...@qq.com



 

------ 原始邮件 ------
发件人:
   "user-zh"
<
xwwan...@163.com>;

回复:Flink如何做到动态关联join多张维度表中的n张表?

2024-06-16 文章 王旭
你好,请问你们是用flink sql api还是datastream api实现这个场景的



 回复的原邮件 
| 发件人 | 斗鱼<1227581...@qq.com.INVALID> |
| 日期 | 2024年06月16日 20:35 |
| 收件人 | user-zh |
| 抄送至 | |
| 主题 | Flink如何做到动态关联join多张维度表中的n张表? |
请教下各位大佬,目前我们遇到一个场景:
1、需要往DWD事实表里写入数据的同时,往Kafka里面写该DWD表的记录信息,该信息
2、该Kafka信息会包含一个维度表数据类型的字符串数组
3、Flink在做实时消费Kafka中数据,根据类型数组,关联不同的维度表,如数组包含【1,2】,则Flink读取Kafka消息后,将DWD的数据关联维度表1和维度表2后,写入DWS表


想请问大佬如何实现根据该数组信息动态关联维度表,这些维度表数据量都挺大的,亿级别的数据,需要能满足维度表变化后,关联后的DWS数据也能变化,不知道是否有什么技术方案能实现,有的话麻烦大佬帮忙给个简单示例或者参考链接,感谢!


|
|
斗鱼
1227581...@qq.com
|
 

Flink如何做到动态关联join多张维度表中的n张表?

2024-06-16 文章 斗鱼
请教下各位大佬,目前我们遇到一个场景:
1、需要往DWD事实表里写入数据的同时,往Kafka里面写该DWD表的记录信息,该信息
2、该Kafka信息会包含一个维度表数据类型的字符串数组
3、Flink在做实时消费Kafka中数据,根据类型数组,关联不同的维度表,如数组包含【1,2】,则Flink读取Kafka消息后,将DWD的数据关联维度表1和维度表2后,写入DWS表


想请问大佬如何实现根据该数组信息动态关联维度表,这些维度表数据量都挺大的,亿级别的数据,需要能满足维度表变化后,关联后的DWS数据也能变化,不知道是否有什么技术方案能实现,有的话麻烦大佬帮忙给个简单示例或者参考链接,感谢!


斗鱼
1227581...@qq.com

Re: flink cdc 3.0 schema变更问题

2024-06-12 文章 Yanquan Lv
你好,DataStream 的方式需要设置 includeSchemaChanges(true) 参数,并且设置自定义的
deserializer,参考这个链接[1]。
如果不想使用 json 的方式,希望自定义 deserializer,从 SourceRecord 里提取 ddl
的方式可以参考这个链接[2]提供的方案。

[1]
https://nightlies.apache.org/flink/flink-cdc-docs-master/docs/faq/faq/#q6-i-want-to-get-ddl-events-in-the-database-what-should-i-do-is-there-a-demo
[2] https://developer.aliyun.com/article/1093413#slide-2
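顺着上面 [1] 的思路补一个极简示意:开启 includeSchemaChanges(true) 并用 JsonDebeziumDeserializationSchema 输出 JSON 后,可以按记录中是否带有 historyRecord 字段来粗略区分 DDL 事件(该字段名依赖具体 Flink CDC/Debezium 版本的输出格式,属于假设,使用前请按实际记录核对):

```java
// 粗略判断一条 JSON 记录是否为 schema 变更(DDL)事件:
// 假设 schema 变更记录带有 historyRecord 字段(其中携带 ddl 语句),普通数据变更记录没有
class DdlEventFilter {
    static boolean isSchemaChangeEvent(String json) {
        return json != null && json.contains("\"historyRecord\"");
    }
}
```

DataStream 作业里可用该判断把 DDL 事件分流到侧输出单独处理,数据变更事件走原有处理链路。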

zapjone  于2024年6月13日周四 12:29写道:

> 大佬们好:
> 想请教下,在flink
> cdc3.0中支持schema变更,但看到是pipeline方式的,因业务问题需要使用datastream进行特殊处理,所以想请教下,在flink
> cdc 3.0中datastream api中怎么使用schema变更呢?或者相关文档呢?


Re: flink cdc 3.0 schema变更问题

2024-06-12 文章 Xiqian YU
Zapjone 好,

目前的 Schema Evolution 的实现依赖传递 CDC Event 事件的 Pipeline 连接器和框架。如果您希望插入自定义算子逻辑,建议参考 
flink-cdc-composer 模块中的 FlinkPipelineComposer 类构建算子链作业的方式,并在其中插入自定义的 Operator 
以实现您的业务逻辑。

另外,对于一些简单的处理逻辑,如果能够使用 YAML 作业的 Route(路由)、Transform(变换)功能表述的话,直接编写对应的 YAML 
规则会更简单。

祝好!

Regards,
yux

De : zapjone 
Date : jeudi, 13 juin 2024 à 12:29
À : user-zh@flink.apache.org 
Objet : flink cdc 3.0 schema变更问题
大佬们好:
想请教下,在flink 
cdc3.0中支持schema变更,但看到是pipeline方式的,因业务问题需要使用datastream进行特殊处理,所以想请教下,在flink cdc 
3.0中datastream api中怎么使用schema变更呢?或者相关文档呢?


flink cdc 3.0 schema变更问题

2024-06-12 文章 zapjone
大佬们好:
想请教下,在flink 
cdc3.0中支持schema变更,但看到是pipeline方式的,因业务问题需要使用datastream进行特殊处理,所以想请教下,在flink cdc 
3.0中datastream api中怎么使用schema变更呢?或者相关文档呢?

Re:Re:Re: 请问flink sql作业如何给kafka source table消费限速?

2024-06-05 文章 Xuyang
Hi, 现在flink sql还没有办法限流。有需求的话可以建一个jira[1],在社区推进下。




[1] https://issues.apache.org/jira/projects/FLINK/issues




--

Best!
Xuyang





在 2024-06-05 15:33:30,"casel.chen"  写道:
>flink sql作业要如何配置进行限流消费呢?以防止打爆存储系统
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>在 2024-06-05 14:46:23,"Alex Ching"  写道:
>>从代码上看,Flink
>>内部是有限速的组件的。org.apache.flink.api.common.io.ratelimiting.GuavaFlinkConnectorRateLimiter,
>>但是并没有在connector中使用。
>>
>>casel.chen  于2024年6月5日周三 14:36写道:
>>
>>> kafka本身是支持消费限流的[1],但这些消费限流参数在flink kafka sql
>>> connector中不起作用,请问这是为什么?如何才能给flink kafka source table消费限速? 谢谢!
>>>
>>>
>>> [1] https://blog.csdn.net/qq_37774171/article/details/122816246


Re:Re: 请问flink sql作业如何给kafka source table消费限速?

2024-06-05 文章 casel.chen
flink sql作业要如何配置进行限流消费呢?以防止打爆存储系统

















在 2024-06-05 14:46:23,"Alex Ching"  写道:
>从代码上看,Flink
>内部是有限速的组件的。org.apache.flink.api.common.io.ratelimiting.GuavaFlinkConnectorRateLimiter,
>但是并没有在connector中使用。
>
>casel.chen  于2024年6月5日周三 14:36写道:
>
>> kafka本身是支持消费限流的[1],但这些消费限流参数在flink kafka sql
>> connector中不起作用,请问这是为什么?如何才能给flink kafka source table消费限速? 谢谢!
>>
>>
>> [1] https://blog.csdn.net/qq_37774171/article/details/122816246


Re: 请问flink sql作业如何给kafka source table消费限速?

2024-06-04 文章 Alex Ching
从代码上看,Flink
内部是有限速的组件的。org.apache.flink.api.common.io.ratelimiting.GuavaFlinkConnectorRateLimiter,
但是并没有在connector中使用。

casel.chen  于2024年6月5日周三 14:36写道:

> kafka本身是支持消费限流的[1],但这些消费限流参数在flink kafka sql
> connector中不起作用,请问这是为什么?如何才能给flink kafka source table消费限速? 谢谢!
>
>
> [1] https://blog.csdn.net/qq_37774171/article/details/122816246
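上面提到 Flink 内部已有 GuavaFlinkConnectorRateLimiter 但尚未接入 connector。在 DataStream 作业里,一个常见的变通做法是在 source 之后插入一个限速算子。下面用纯 Java 写一个令牌桶示意(TokenBucket 命名为本示例虚构;时钟以纳秒参数注入,便于确定性测试;生产环境也可以直接使用 Guava 的 RateLimiter):

```java
// 简单令牌桶:每秒补充 permitsPerSecond 个令牌,tryAcquire 在令牌不足时返回 false。
// 当前时间以纳秒传入;在 Flink 的 RichMapFunction 里可对每条记录阻塞等待令牌。
class TokenBucket {
    private final double permitsPerSecond;
    private final double capacity;
    private double tokens;
    private long lastRefillNanos;

    TokenBucket(double permitsPerSecond, double capacity, long nowNanos) {
        this.permitsPerSecond = permitsPerSecond;
        this.capacity = capacity;
        this.tokens = capacity;           // 初始桶满
        this.lastRefillNanos = nowNanos;
    }

    boolean tryAcquire(long nowNanos) {
        // 先按流逝时间补充令牌,再尝试消费一个
        double elapsedSec = (nowNanos - lastRefillNanos) / 1_000_000_000.0;
        tokens = Math.min(capacity, tokens + elapsedSec * permitsPerSecond);
        lastRefillNanos = nowNanos;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }
}
```

在 RichMapFunction#map 中对每条记录循环 tryAcquire(System.nanoTime()),未拿到令牌时短暂 sleep,即可把消费/写入速率压到目标值;SQL 作业目前没有等价的限速参数,只能靠这类 DataStream 改造或从 Kafka broker 配额入手。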


请问flink sql作业如何给kafka source table消费限速?

2024-06-04 文章 casel.chen
kafka本身是支持消费限流的[1],但这些消费限流参数在flink kafka sql connector中不起作用,请问这是为什么?如何才能给flink 
kafka source table消费限速? 谢谢!


[1] https://blog.csdn.net/qq_37774171/article/details/122816246

答复: Flink Datastream实现删除操作

2024-06-04 文章 Xiqian YU
您好,

Iceberg 为 Flink 实现的 connector 同时支持 DataStream API 和 Table API[1]。其 DataStream 
API 提供 Append(默认行为)、Overwrite、Upsert 三种可选的模式,您可以使用下面的 Java 代码片段实现:

首先创建对应数据行 Schema 格式的反序列化器,例如,可以使用 RowDataDebeziumDeserializeSchema 的生成器来快速构造一个:


private RowDataDebeziumDeserializeSchema getDeserializer(
DataType dataType) {
LogicalType logicalType = TypeConversions.fromDataToLogicalType(dataType);
InternalTypeInfo<RowData> typeInfo = InternalTypeInfo.of(logicalType);
return RowDataDebeziumDeserializeSchema.newBuilder()
.setPhysicalRowType((RowType) dataType.getLogicalType())
.setResultTypeInfo(typeInfo)
.build();
}

然后,您可以使用该反序列化器创建 MySQL 数据源:

MySqlSource<RowData> mySqlSource =
MySqlSource.<RowData>builder()
// 其他参数配置略
.deserializer(getDeserializer({{ ROW_DATA_TYPE_HERE }}))
.build();

并创建一个 Iceberg Sink(注意 FlinkSink.forRowData 接收的是 DataStream<RowData>,需要先用 env.fromSource 把上面的 Source 转成数据流):

Configuration hadoopConf = new Configuration();
TableLoader tableLoader = 
TableLoader.fromHadoopTable("hdfs://nn:8020/warehouse/path", hadoopConf);

// env 为 StreamExecutionEnvironment
DataStream<RowData> stream =
env.fromSource(mySqlSource, WatermarkStrategy.noWatermarks(), "MySQL CDC Source");

FlinkSink.forRowData(stream)
.tableLoader(tableLoader)
// 此处可以追加 .overwrite(true) 或 .upsert(true)
// 来配置 Overwrite 或 Upsert 行为
.append();

P.S. 在接下来的 Flink CDC 版本中,预计会为 3.0 版本新增的 Pipeline 作业[2]提供写入 Iceberg 
的能力,使用起来更方便快捷。如果能够满足您的需求,也请多多尝试。

祝好!

Regards,
yux

[1] https://iceberg.apache.org/docs/1.5.2/flink-writes/#writing-with-datastream
[2] 
https://nightlies.apache.org/flink/flink-cdc-docs-master/docs/get-started/introduction/



发件人: zapjone 
日期: 星期二, 2024年6月4日 18:34
收件人: user-zh@flink.apache.org 
主题: Flink Datastream实现删除操作
各位大佬好:
想请教下,在使用mysql-cdc到iceberg,通过sql方式可以实现自动更新和删除功能。但在使用datastream 
api进行处理后,注册成临时表,怎么实现类似于sql方式的自动更新和删除呢?


Flink Datastream实现删除操作

2024-06-04 文章 zapjone
各位大佬好:
想请教下,在使用mysql-cdc到iceberg,通过sql方式可以实现自动更新和删除功能。但在使用datastream 
api进行处理后,注册成临时表,怎么实现类似于sql方式的自动更新和删除呢?

Re: 【求助】关于 Flink ML 迭代中使用keyBy算子报错

2024-06-03 文章 Xiqian YU
您好!
看起来这个问题与 FLINK-35066[1] 有关,该问题描述了在 IterationBody 内实现自定义的RichCoProcessFunction 或 
CoFlatMapFunction 算子时遇到的拆包问题,可以追溯到这个[2]邮件列表中的问题报告。看起来这个问题也同样影响您使用的 
RichCoMapFunction 算子。
该问题已被此 Pull Request[3] 解决,并已合入 master 主分支。按照文档[4]尝试在本地编译 2.4-SNAPSHOT 
快照版本并执行您的代码,看起来能够正常工作。
鉴于这是一个 Flink ML 2.3 版本中的已知问题,您可以尝试在本地编译自己的快照版本,或是等待 Flink ML 2.4 的发布并更新依赖版本。
祝好!

Regards,
yux

[1] https://issues.apache.org/jira/browse/FLINK-35066
[2] https://lists.apache.org/thread/bgkw1g2tdgnp1xy1clsqtcfs3h18pkd6
[3] https://github.com/apache/flink-ml/pull/260
[4] https://github.com/apache/flink-ml#building-the-project


De : w...@shanghaitech.edu.cn 
Date : vendredi, 31 mai 2024 à 17:34
À : user-zh@flink.apache.org 
Objet : 【求助】关于 Flink ML 迭代中使用keyBy算子报错

尊敬的Flink开发者您好,

我在使用Flink ML模块的迭代功能时遇到了一个问题,当我在迭代体内使用keyBy算子时,会出现以下错误:

Caused by: java.lang.ClassCastException: 
org.apache.flink.iteration.IterationRecord cannot be cast to java.lang.String
我已经查阅文档,但还是没有头绪,所以希望能得到您的帮助,非常感谢。

我已在下方附上了最小可复现代码、报错信息以及我的运行环境信息。



以下是最小复现代码

~~~java
package myflinkml;

import org.apache.flink.iteration.DataStreamList;
import org.apache.flink.iteration.IterationBody;
import org.apache.flink.iteration.IterationBodyResult;
import org.apache.flink.iteration.Iterations;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.RichCoMapFunction;

public class BugDemo {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);

DataStream<String> textStream = env.fromElements("Hello", "Flink");
DataStream<Integer> intStream = env.fromElements(1, 2, 3);

Iterations.iterateUnboundedStreams(
DataStreamList.of(intStream),
DataStreamList.of(textStream),
new Body()

).get(0).print();

env.execute();
}

private static class Body implements IterationBody {

@Override
public IterationBodyResult process(DataStreamList dsl1, DataStreamList 
dsl2) {
DataStream<Integer> intStream = dsl1.get(0);
DataStream<String> textStream = dsl2.get(0);

// 迭代输出流
DataStream<String> outStream = textStream
.connect(intStream)
.keyBy(x -> 1, x -> 1)  // 添加这行就报错!!
.map(new RichCoMapFunction<String, Integer, String>() {

@Override
public String map1(String value) throws Exception {
return "Strings: " + value;
}

@Override
public String map2(Integer value) throws Exception {
return "Integer: " + value;
}
});

// 迭代反馈流
SingleOutputStreamOperator<Integer> feedBackStream = 
intStream.map(x -> x - 1).filter(x -> x > 0);

return new IterationBodyResult(DataStreamList.of(feedBackStream), 
DataStreamList.of(outStream));
}
}
}

~~~

运行报错输出:

Exception in thread "main" 
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at 
org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
at 
org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
at 
java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at 
org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:267)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1300)
at 
org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
at 
org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
at 
org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at 
java.util.concurrent.Complet

flink sqlgateway 提交sql作业如何设置组账号

2024-05-28 文章 阿华田


flink sqlgateway 提交sql作业,发现sqlgateway服务启动后,默认以当前机器的租户信息将任务提交到yarn集群。由于公司的hadoop集群设置了租户权限,需要指定提交作业的用户信息。请问各位大佬,flink sqlgateway 提交sql作业时如何设置组账号?
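如果 Hadoop 集群未启用 Kerberos,一个常见做法是在启动 sql-gateway 前通过 HADOOP_USER_NAME 指定提交用户(下面的用户名为假设):

```shell
# 以指定的组账号身份向 YARN 提交作业(仅适用于未启用 Kerberos 的集群)
export HADOOP_USER_NAME=etl_group_user   # 假设的组账号
./bin/sql-gateway.sh start -Dsql-gateway.endpoint.rest.address=localhost
```

若集群启用了 Kerberos,则应在 flink-conf.yaml 中配置 security.kerberos.login.keytab 与 security.kerberos.login.principal,用对应组账号的 keytab 做认证。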
| |
阿华田
|
|
a15733178...@163.com
|
签名由网易邮箱大师定制



Flink SQL消费kafka topic有办法限速么?

2024-05-27 文章 casel.chen
Flink SQL消费kafka topic有办法限速么?场景是消费kafka 
topic数据写入下游mongodb,在业务高峰期时下游mongodb写入压力大,希望能够限速消费kafka,请问要如何实现?

Re:咨询Flink 1.19文档中关于iterate操作

2024-05-20 文章 Xuyang
Hi, 

目前Iterate api在1.19版本上废弃了,不再支持,具体可以参考[1][2]。Flip[1]中提供了另一种替代的办法[3]




[1] 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-357%3A+Deprecate+Iteration+API+of+DataStream

[2] https://issues.apache.org/jira/browse/FLINK-33144

[3] https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=184615300




--

Best!
Xuyang





在 2024-05-20 22:39:37,""  写道:
>尊敬的Flink开发团队:
>
>您好!
>
>我目前正在学习如何使用Apache Flink的DataStream API实现迭代算法,例如图的单源最短路径。在Flink 
>1.18版本的文档中,我注意到有关于iterate操作的介绍,具体请见:https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/datastream/overview/#iterations
>
>但是,我发现Flink 
>1.19版本的文档中不再提及iterate操作。这让我有些困惑。不知道在最新版本中,这是否意味着iterate操作不再被支持?如果是这样的话,请问我该如何在数据流上进行迭代计算?
>
>非常感谢您的时间和帮助,期待您的回复。
>
>谢谢!
>
>李智诚


咨询Flink 1.19文档中关于iterate操作

2024-05-20 文章 www
尊敬的Flink开发团队:

您好!

我目前正在学习如何使用Apache Flink的DataStream API实现迭代算法,例如图的单源最短路径。在Flink 
1.18版本的文档中,我注意到有关于iterate操作的介绍,具体请见:https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/datastream/overview/#iterations

但是,我发现Flink 
1.19版本的文档中不再提及iterate操作。这让我有些困惑。不知道在最新版本中,这是否意味着iterate操作不再被支持?如果是这样的话,请问我该如何在数据流上进行迭代计算?

非常感谢您的时间和帮助,期待您的回复。

谢谢!

李智诚

Re: Re: [ANNOUNCE] Apache Flink CDC 3.1.0 released

2024-05-19 文章 Jingsong Li
CC to the Paimon community.

Best,
Jingsong

On Mon, May 20, 2024 at 9:55 AM Jingsong Li  wrote:
>
> Amazing, congrats!
>
> Best,
> Jingsong
>
> On Sat, May 18, 2024 at 3:10 PM 大卫415 <2446566...@qq.com.invalid> wrote:
> >
> > 退订
> >
> >
> >
> >
> >
> >
> >
> > Original Email
> >
> >
> >
> > Sender:"gongzhongqiang"< gongzhongqi...@apache.org >;
> >
> > Sent Time:2024/5/17 23:10
> >
> > To:"Qingsheng Ren"< re...@apache.org >;
> >
> > Cc recipient:"dev"< d...@flink.apache.org >;"user"< 
> > u...@flink.apache.org >;"user-zh"< user-zh@flink.apache.org >;"Apache 
> > Announce List"< annou...@apache.org >;
> >
> > Subject:Re: [ANNOUNCE] Apache Flink CDC 3.1.0 released
> >
> >
> > Congratulations !
> > Thanks for all contributors.
> >
> >
> > Best,
> >
> > Zhongqiang Gong
> >
> > Qingsheng Ren  于 2024年5月17日周五 17:33写道:
> >
> > > The Apache Flink community is very happy to announce the release of
> > > Apache Flink CDC 3.1.0.
> > >
> > > Apache Flink CDC is a distributed data integration tool for real time
> > > data and batch data, bringing the simplicity and elegance of data
> > > integration via YAML to describe the data movement and transformation
> > > in a data pipeline.
> > >
> > > Please check out the release blog post for an overview of the release:
> > >
> > > 
> > https://flink.apache.org/2024/05/17/apache-flink-cdc-3.1.0-release-announcement/
> > >
> > > The release is available for download at:
> > > https://flink.apache.org/downloads.html
> > >
> > > Maven artifacts for Flink CDC can be found at:
> > > https://search.maven.org/search?q=g:org.apache.flink%20cdc
> > >
> > > The full release notes are available in Jira:
> > >
> > > 
> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354387
> > >
> > > We would like to thank all contributors of the Apache Flink community
> > > who made this release possible!
> > >
> > > Regards,
> > > Qingsheng Ren
> > >


Re: Re: [ANNOUNCE] Apache Flink CDC 3.1.0 released

2024-05-19 文章 Jingsong Li
Amazing, congrats!

Best,
Jingsong

On Sat, May 18, 2024 at 3:10 PM 大卫415 <2446566...@qq.com.invalid> wrote:
>
> 退订
>
>
>
>
>
>
>
> Original Email
>
>
>
> Sender:"gongzhongqiang"< gongzhongqi...@apache.org >;
>
> Sent Time:2024/5/17 23:10
>
> To:"Qingsheng Ren"< re...@apache.org >;
>
> Cc recipient:"dev"< d...@flink.apache.org >;"user"< u...@flink.apache.org 
> >;"user-zh"< user-zh@flink.apache.org >;"Apache Announce List"< 
> annou...@apache.org >;
>
> Subject:Re: [ANNOUNCE] Apache Flink CDC 3.1.0 released
>
>
> Congratulations !
> Thanks for all contributors.
>
>
> Best,
>
> Zhongqiang Gong
>
> Qingsheng Ren  于 2024年5月17日周五 17:33写道:
>
> > The Apache Flink community is very happy to announce the release of
> > Apache Flink CDC 3.1.0.
> >
> > Apache Flink CDC is a distributed data integration tool for real time
> > data and batch data, bringing the simplicity and elegance of data
> > integration via YAML to describe the data movement and transformation
> > in a data pipeline.
> >
> > Please check out the release blog post for an overview of the release:
> >
> > 
> https://flink.apache.org/2024/05/17/apache-flink-cdc-3.1.0-release-announcement/
> >
> > The release is available for download at:
> > https://flink.apache.org/downloads.html
> >
> > Maven artifacts for Flink CDC can be found at:
> > https://search.maven.org/search?q=g:org.apache.flink%20cdc
> >
> > The full release notes are available in Jira:
> >
> > 
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354387
> >
> > We would like to thank all contributors of the Apache Flink community
> > who made this release possible!
> >
> > Regards,
> > Qingsheng Ren
> >


Re: [ANNOUNCE] Apache Flink CDC 3.1.0 released

2024-05-17 文章 gongzhongqiang
Congratulations !
Thanks for all contributors.


Best,

Zhongqiang Gong

Qingsheng Ren  于 2024年5月17日周五 17:33写道:

> The Apache Flink community is very happy to announce the release of
> Apache Flink CDC 3.1.0.
>
> Apache Flink CDC is a distributed data integration tool for real time
> data and batch data, bringing the simplicity and elegance of data
> integration via YAML to describe the data movement and transformation
> in a data pipeline.
>
> Please check out the release blog post for an overview of the release:
>
> https://flink.apache.org/2024/05/17/apache-flink-cdc-3.1.0-release-announcement/
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Maven artifacts for Flink CDC can be found at:
> https://search.maven.org/search?q=g:org.apache.flink%20cdc
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354387
>
> We would like to thank all contributors of the Apache Flink community
> who made this release possible!
>
> Regards,
> Qingsheng Ren
>


Re: [ANNOUNCE] Apache Flink CDC 3.1.0 released

2024-05-17 文章 Hang Ruan
Congratulations!

Thanks for the great work.

Best,
Hang

Qingsheng Ren  于2024年5月17日周五 17:33写道:

> The Apache Flink community is very happy to announce the release of
> Apache Flink CDC 3.1.0.
>
> Apache Flink CDC is a distributed data integration tool for real time
> data and batch data, bringing the simplicity and elegance of data
> integration via YAML to describe the data movement and transformation
> in a data pipeline.
>
> Please check out the release blog post for an overview of the release:
>
> https://flink.apache.org/2024/05/17/apache-flink-cdc-3.1.0-release-announcement/
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Maven artifacts for Flink CDC can be found at:
> https://search.maven.org/search?q=g:org.apache.flink%20cdc
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354387
>
> We would like to thank all contributors of the Apache Flink community
> who made this release possible!
>
> Regards,
> Qingsheng Ren
>


Re: [ANNOUNCE] Apache Flink CDC 3.1.0 released

2024-05-17 文章 Leonard Xu
Congratulations !

Thanks Qingsheng for the great work and all contributors involved !!

Best,
Leonard


> 2024年5月17日 下午5:32,Qingsheng Ren  写道:
> 
> The Apache Flink community is very happy to announce the release of
> Apache Flink CDC 3.1.0.
> 
> Apache Flink CDC is a distributed data integration tool for real time
> data and batch data, bringing the simplicity and elegance of data
> integration via YAML to describe the data movement and transformation
> in a data pipeline.
> 
> Please check out the release blog post for an overview of the release:
> https://flink.apache.org/2024/05/17/apache-flink-cdc-3.1.0-release-announcement/
> 
> The release is available for download at:
> https://flink.apache.org/downloads.html
> 
> Maven artifacts for Flink CDC can be found at:
> https://search.maven.org/search?q=g:org.apache.flink%20cdc
> 
> The full release notes are available in Jira:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354387
> 
> We would like to thank all contributors of the Apache Flink community
> who made this release possible!
> 
> Regards,
> Qingsheng Ren



[ANNOUNCE] Apache Flink CDC 3.1.0 released

2024-05-17 文章 Qingsheng Ren
The Apache Flink community is very happy to announce the release of
Apache Flink CDC 3.1.0.

Apache Flink CDC is a distributed data integration tool for real time
data and batch data, bringing the simplicity and elegance of data
integration via YAML to describe the data movement and transformation
in a data pipeline.

Please check out the release blog post for an overview of the release:
https://flink.apache.org/2024/05/17/apache-flink-cdc-3.1.0-release-announcement/

The release is available for download at:
https://flink.apache.org/downloads.html

Maven artifacts for Flink CDC can be found at:
https://search.maven.org/search?q=g:org.apache.flink%20cdc

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354387

We would like to thank all contributors of the Apache Flink community
who made this release possible!

Regards,
Qingsheng Ren


Re: Flink 1.18.1 ,重启状态恢复

2024-05-16 文章 Yanfei Lei
看起来和 FLINK-34063 / FLINK-33863 是同样的问题,您可以升级到1.18.2 试试看。
[1] https://issues.apache.org/jira/browse/FLINK-33863
[2] https://issues.apache.org/jira/browse/FLINK-34063

陈叶超  于2024年5月16日周四 16:38写道:
>
> 升级到 flink 1.18.1 ,任务重启状态恢复的话,遇到如下报错:
> 2024-04-09 13:03:48
> java.lang.Exception: Exception while creating StreamOperatorStateContext.
> at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:258)
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:256)
> at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:753)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:728)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:693)
> at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:953)
> at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:922)
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:746)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
> at java.lang.Thread.run(Thread.java:750)
> Caused by: org.apache.flink.util.FlinkException: Could not restore operator 
> state backend for 
> RowDataStoreWriteOperator_8d96fc510e75de3baf03ef7367db7d42_(2/2) from any of 
> the 1 provided restore options.
> at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:160)
> at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.operatorStateBackend(StreamTaskStateInitializerImpl.java:289)
> at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:176)
> ... 11 more
> Caused by: org.apache.flink.runtime.state.BackendBuildingException: Failed 
> when trying to restore operator state backend
> at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackendBuilder.build(DefaultOperatorStateBackendBuilder.java:88)
> at 
> org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend.createOperatorStateBackend(EmbeddedRocksDBStateBackend.java:533)
> at 
> org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createOperatorStateBackend(RocksDBStateBackend.java:380)
> at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.lambda$operatorStateBackend$0(StreamTaskStateInitializerImpl.java:280)
> at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.attemptCreateAndRestore(BackendRestorerProcedure.java:168)
> at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:135)
> ... 13 more
> Caused by: java.io.IOException: invalid stream header
> at 
> org.xerial.snappy.SnappyFramedInputStream.<init>(SnappyFramedInputStream.java:235)
> at 
> org.xerial.snappy.SnappyFramedInputStream.<init>(SnappyFramedInputStream.java:145)
> at 
> org.xerial.snappy.SnappyFramedInputStream.<init>(SnappyFramedInputStream.java:129)
> at 
> org.apache.flink.runtime.state.SnappyStreamCompressionDecorator.decorateWithCompression(SnappyStreamCompressionDecorator.java:53)
> at 
> org.apache.flink.runtime.state.StreamCompressionDecorator.decorateWithCompression(StreamCompressionDecorator.java:60)
> at 
> org.apache.flink.runtime.state.CompressibleFSDataInputStream.<init>(CompressibleFSDataInputStream.java:39)
> org.apache.flink.runtime.state.OperatorStateRestoreOperation.restore(OperatorStateRestoreOperation.java:185)
> at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackendBuilder.build(DefaultOperatorStateBackendBuilder.java:85)
> ... 18 more
>


-- 
Best,
Yanfei


Get access to unmatching events in Apache Flink Cep

2024-05-16 文章 Anton Sidorov
Hello!

I have a Flink job with a CEP pattern.

Pattern example:

// Strict Contiguity
// a b+ c d e
Pattern.begin("a", AfterMatchSkipStrategy.skipPastLastEvent()).where(...)
.next("b").where(...).oneOrMore()
.next("c").where(...)
.next("d").where(...)
.next("e").where(...);

My input stream contains events in the wrong order:

a b d c e

The output contains no match, but I would like access to the events that
did not match.

Can I access the intermediate NFA state of the CEP pattern, or is there
some other way to view the unmatched events?

Example project with CEP pattern on github
<https://github.com/A-Kinski/apache-flink-cep/tree/main>, and my question
on SO
<https://stackoverflow.com/questions/78483004/get-access-to-unmatching-events-in-apache-flink-cep>

Thanks in advance
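Flink CEP does not expose the intermediate NFA state. Within CEP itself, one option is to add `within(...)` to the pattern and implement `TimedOutPartialMatchHandler` in a `PatternProcessFunction`, which delivers timed-out partial matches to a side output. For a simple strict-contiguity pattern like `a b+ c d e`, another workaround is a lightweight check outside CEP that reports the first event breaking the expected order. The sketch below is plain Java, not a CEP API (all names are illustrative); inside Flink it could run in a ProcessFunction alongside the CEP operator:

```java
// Find the first event that breaks the expected strict order "a b+ c d e".
// Returns -1 when the sequence is a valid match, else the index of the offending event.
class ContiguityChecker {
    static int firstViolation(String[] events) {
        int i = 0;
        if (i >= events.length || !events[i].equals("a")) return i;
        i++;
        // b+ requires at least one "b"
        if (i >= events.length || !events[i].equals("b")) return i;
        while (i < events.length && events[i].equals("b")) i++;
        for (String expected : new String[] {"c", "d", "e"}) {
            if (i >= events.length || !events[i].equals(expected)) return i;
            i++;
        }
        return -1;
    }
}
```

For the sequence `a b d c e` this reports index 2 (the misplaced `d`), which is exactly the event the pattern rejects.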


Flink 1.18.1 ,重启状态恢复

2024-05-16 文章 陈叶超
升级到 flink 1.18.1 ,任务重启状态恢复的话,遇到如下报错:
2024-04-09 13:03:48
java.lang.Exception: Exception while creating StreamOperatorStateContext.
at 
org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:258)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:256)
at 
org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:753)
at 
org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:728)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:693)
at 
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:953)
at 
org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:922)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:746)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.flink.util.FlinkException: Could not restore operator 
state backend for 
RowDataStoreWriteOperator_8d96fc510e75de3baf03ef7367db7d42_(2/2) from any of 
the 1 provided restore options.
at 
org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:160)
at 
org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.operatorStateBackend(StreamTaskStateInitializerImpl.java:289)
at 
org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:176)
... 11 more
Caused by: org.apache.flink.runtime.state.BackendBuildingException: Failed when 
trying to restore operator state backend
at 
org.apache.flink.runtime.state.DefaultOperatorStateBackendBuilder.build(DefaultOperatorStateBackendBuilder.java:88)
at 
org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend.createOperatorStateBackend(EmbeddedRocksDBStateBackend.java:533)
at 
org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createOperatorStateBackend(RocksDBStateBackend.java:380)
at 
org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.lambda$operatorStateBackend$0(StreamTaskStateInitializerImpl.java:280)
at 
org.apache.flink.streaming.api.operators.BackendRestorerProcedure.attemptCreateAndRestore(BackendRestorerProcedure.java:168)
at 
org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:135)
... 13 more
Caused by: java.io.IOException: invalid stream header
at 
org.xerial.snappy.SnappyFramedInputStream.&lt;init&gt;(SnappyFramedInputStream.java:235)
at 
org.xerial.snappy.SnappyFramedInputStream.&lt;init&gt;(SnappyFramedInputStream.java:145)
at 
org.xerial.snappy.SnappyFramedInputStream.&lt;init&gt;(SnappyFramedInputStream.java:129)
at 
org.apache.flink.runtime.state.SnappyStreamCompressionDecorator.decorateWithCompression(SnappyStreamCompressionDecorator.java:53)
at 
org.apache.flink.runtime.state.StreamCompressionDecorator.decorateWithCompression(StreamCompressionDecorator.java:60)
at 
org.apache.flink.runtime.state.CompressibleFSDataInputStream.&lt;init&gt;(CompressibleFSDataInputStream.java:39)
at 
org.apache.flink.runtime.state.OperatorStateRestoreOperation.restore(OperatorStateRestoreOperation.java:185)
at 
org.apache.flink.runtime.state.DefaultOperatorStateBackendBuilder.build(DefaultOperatorStateBackendBuilder.java:85)
... 18 more



Re:Re: use flink 1.19 JDBC Driver can not find jdbc connector

2024-05-15 by Xuyang
Hi, 

> Can we use Chinese now?

I see you sent this to the Chinese-language support mailing list.




> Just edit the Factory file in the gateway jar under the opt directory to register the connector.

Do you mean that the earlier error was something like "cannot find a jdbc 
connector", and that adding the jdbc connector's Factory implementation class to 
the Factory file (the SPI file) under META-INF/services in the gateway jar fixed 
it?




If so, that is a bit odd, because flink-connector-jdbc's own SPI file already 
lists the relevant classes [1]; in theory, placing the jar under the lib 
directory should be enough for SPI discovery.




[1] 
https://github.com/apache/flink-connector-jdbc/blob/bde28e6a92ffa75ae45bc8df6be55d299ff995a2/flink-connector-jdbc/src/main/resources/META-INF/services/org.apache.flink.table.factories.Factory#L16




--

Best!
Xuyang





On 2024-05-15 15:51:49, abc15...@163.com wrote:
>Can we use Chinese now? Just edit the Factory file in the gateway jar under the opt directory to register the connector.
>
>
>> On May 15, 2024, at 15:36, Xuyang  wrote:
>> 
>> Hi, it looks like your earlier problem was that the jdbc driver could not be found. Could you briefly describe how you solved it? "Registering the number of connections" is a bit hard to follow.
>> 
>> 
>> 
>> 
>> If there really is such a problem and this workaround solved it, you could open an improvement jira issue [1] to help the community track and fix it. Thanks!
>> 
>> 
>> 
>> 
>> [1] https://issues.apache.org/jira/projects/FLINK/summary
>> 
>> 
>> 
>> 
>> --
>> 
>>Best!
>>Xuyang
>> 
>> 
>> 
>> 
>> 
>>> On 2024-05-10 12:26:22, abc15...@163.com wrote:
>>> I've solved it. You need to register the number of connections in the jar 
>>> of gateway. But this is inconvenient, and I still hope to improve it.
>>> Sent from my iPhone
>>> 
>>>>> On May 10, 2024, at 11:56, Xuyang  wrote:
>>>> 
>>>> Hi, can you print the classloader and verify if the jdbc connector exists 
>>>> in it?
>>>> 
>>>> 
>>>> 
>>>> 
>>>> --
>>>> 
>>>>   Best!
>>>>   Xuyang
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> At 2024-05-09 17:48:33, "McClone"  wrote:
>>>>> I put flink-connector-jdbc into flink\lib.use flink 1.19 JDBC Driver can 
>>>>> not  find jdbc connector,but use sql-client is normal.


Re: use flink 1.19 JDBC Driver can not find jdbc connector

2024-05-15 by abc15606
Can we use Chinese now? Just edit the Factory file in the gateway jar under the opt directory to register the connector.


> On May 15, 2024, at 15:36, Xuyang  wrote:
> 
> Hi, it looks like your earlier problem was that the jdbc driver could not be found. Could you briefly describe how you solved it? "Registering the number of connections" is a bit hard to follow.
> 
> 
> 
> 
> If there really is such a problem and this workaround solved it, you could open an improvement jira issue [1] to help the community track and fix it. Thanks!
> 
> 
> 
> 
> [1] https://issues.apache.org/jira/projects/FLINK/summary
> 
> 
> 
> 
> --
> 
>Best!
>Xuyang
> 
> 
> 
> 
> 
>> On 2024-05-10 12:26:22, abc15...@163.com wrote:
>> I've solved it. You need to register the number of connections in the jar of 
>> gateway. But this is inconvenient, and I still hope to improve it.
>> Sent from my iPhone
>> 
>>>> On May 10, 2024, at 11:56, Xuyang  wrote:
>>> 
>>> Hi, can you print the classloader and verify if the jdbc connector exists 
>>> in it?
>>> 
>>> 
>>> 
>>> 
>>> --
>>> 
>>>   Best!
>>>   Xuyang
>>> 
>>> 
>>> 
>>> 
>>> 
>>> At 2024-05-09 17:48:33, "McClone"  wrote:
>>>> I put flink-connector-jdbc into flink\lib.use flink 1.19 JDBC Driver can 
>>>> not  find jdbc connector,but use sql-client is normal.



Re: How can I contribute a Flink Hologres connector?

2024-05-15 by Xuyang
Hi, 

From a pure contribution standpoint, I think supporting a flink hologres 
connector is fine. Hologres is a fairly popular database nowadays, so there is 
certainly demand, and the official aliyun github org already provides an 
open-source flink hologres connector [1].





If you want to directly open-source something involving the commercial 
ververica-connector-hologres package from aliyun and other companies, in my view 
you had better confirm the following points first, otherwise there may be hidden 
legal risks:

1. Whether the jar's provider (aliyun etc.) is aware of this and willing to 
open-source it; taking a commercial artifact and publishing it directly is not 
great.

2. Whether the license inside the jar permits open-sourcing, rather than being a 
commercial license.




If you really want to open-source this, I recommend contributing on top of the 
open-source flink hologres connector on github [1] (for example, I see it 
currently supports at most flink 1.17; you could try contributing support for 
1.18, 1.19, and so on).




[1] https://github.com/aliyun/alibabacloud-hologres-connectors




--

Best!
Xuyang





On 2024-05-14 11:24:37, "casel.chen"  wrote:
>We use the commercial Alibaba Cloud Hologres database, and we have our own 
>in-house Flink real-time computing platform. To build a real-time warehouse on 
>Hologres, we developed a hologres connector based on open-source Apache Flink 
>1.17.1, the ververica-connector-hologres package [1] from the aliyun maven 
>repository, and the open-source holo client [2], fixing some jar dependency 
>issues along the way. It has been running in production for a while without 
>problems, and we would now like to contribute it to the community.
>
>
>Questions:
>1. Is contributing a Flink Hologres connector compliant?
>2. If it is, which project code repository should the PR be opened against?
>3. Or should it be linked to our own github repository, as on 
>https://flink-packages.org/categories/connectors ? If so, how do we register on 
>flink-packages.org?
>
>
>[1] 
>https://repo1.maven.org/maven2/com/alibaba/ververica/ververica-connector-hologres/1.17-vvr-8.0.4-1/
>[2] 
>https://github.com/aliyun/alibabacloud-hologres-connectors/tree/master/holo-client


Re:Re: use flink 1.19 JDBC Driver can not find jdbc connector

2024-05-15 by Xuyang
Hi, it looks like your earlier problem was that the jdbc driver could not be found. Could you briefly describe how you solved it? "Registering the number of connections" is a bit hard to follow.




If there really is such a problem and this workaround solved it, you could open an improvement jira issue [1] to help the community track and fix it. Thanks!




[1] https://issues.apache.org/jira/projects/FLINK/summary




--

Best!
Xuyang





On 2024-05-10 12:26:22, abc15...@163.com wrote:
>I've solved it. You need to register the number of connections in the jar of 
>gateway. But this is inconvenient, and I still hope to improve it.
>Sent from my iPhone
>
>> On May 10, 2024, at 11:56, Xuyang  wrote:
>> 
>> Hi, can you print the classloader and verify if the jdbc connector exists 
>> in it?
>> 
>> 
>> 
>> 
>> --
>> 
>>Best!
>>    Xuyang
>> 
>> 
>> 
>> 
>> 
>> At 2024-05-09 17:48:33, "McClone"  wrote:
>>> I put flink-connector-jdbc into flink\lib.use flink 1.19 JDBC Driver can 
>>> not  find jdbc connector,but use sql-client is normal.


How can I contribute a Flink Hologres connector?

2024-05-13 by casel.chen
We use the commercial Alibaba Cloud Hologres database, and we have our own 
in-house Flink real-time computing platform. To build a real-time warehouse on 
Hologres, we developed a hologres connector based on open-source Apache Flink 
1.17.1, the ververica-connector-hologres package [1] from the aliyun maven 
repository, and the open-source holo client [2], fixing some jar dependency 
issues along the way. It has been running in production for a while without 
problems, and we would now like to contribute it to the community.


Questions:
1. Is contributing a Flink Hologres connector compliant?
2. If it is, which project code repository should the PR be opened against?
3. Or should it be linked to our own github repository, as on 
https://flink-packages.org/categories/connectors ? If so, how do we register on 
flink-packages.org?


[1] 
https://repo1.maven.org/maven2/com/alibaba/ververica/ververica-connector-hologres/1.17-vvr-8.0.4-1/
[2] 
https://github.com/aliyun/alibabacloud-hologres-connectors/tree/master/holo-client

Re: use flink 1.19 JDBC Driver can not find jdbc connector

2024-05-13 by kellygeorg...@163.com
Unsubscribe



 Replied Message 
| From | abc15...@163.com |
| Date | 05/10/2024 12:26 |
| To | user-zh@flink.apache.org |
| Cc | |
| Subject | Re: use flink 1.19 JDBC Driver can not find jdbc connector |
I've solved it. You need to register the number of connections in the jar of 
gateway. But this is inconvenient, and I still hope to improve it.
Sent from my iPhone

> On May 10, 2024, at 11:56, Xuyang  wrote:
>
> Hi, can you print the classloader and verify if the jdbc connector exists in 
> it?
>
>
>
>
> --
>
>Best!
>Xuyang
>
>
>
>
>
> At 2024-05-09 17:48:33, "McClone"  wrote:
>> I put flink-connector-jdbc into flink\lib.use flink 1.19 JDBC Driver can not 
>>  find jdbc connector,but use sql-client is normal.


Re: use flink 1.19 JDBC Driver can not find jdbc connector

2024-05-09 by abc15606
I've solved it. You need to register the number of connections in the jar of 
gateway. But this is inconvenient, and I still hope to improve it.
Sent from my iPhone

> On May 10, 2024, at 11:56, Xuyang  wrote:
> 
> Hi, can you print the classloader and verify if the jdbc connector exists in 
> it?
> 
> 
> 
> 
> --
> 
>Best!
>Xuyang
> 
> 
> 
> 
> 
> At 2024-05-09 17:48:33, "McClone"  wrote:
>> I put flink-connector-jdbc into flink\lib.use flink 1.19 JDBC Driver can not 
>>  find jdbc connector,but use sql-client is normal.



Re:use flink 1.19 JDBC Driver can not find jdbc connector

2024-05-09 by Xuyang
Hi, can you print the classloader and verify if the jdbc connector exists in it?




--

Best!
Xuyang





At 2024-05-09 17:48:33, "McClone"  wrote:
>I put flink-connector-jdbc into flink\lib.use flink 1.19 JDBC Driver can not  
>find jdbc connector,but use sql-client is normal.


Is there any company that offers maintenance and support services for open-source Flink?

2024-05-09 by LIU Xiao
As per the subject.


use flink 1.19 JDBC Driver can not find jdbc connector

2024-05-09 by McClone
I put flink-connector-jdbc into flink\lib. Using the flink 1.19 JDBC Driver, the 
jdbc connector can not be found, but sql-client works fine.

Re: Flink sql retract to append

2024-04-30 by Zijun Zhao
If you order ascending by processing time, the result will definitely not produce retractions, because a later time can never be smaller than the current one. You can try deduplicating this way.

On Tue, Apr 30, 2024 at 3:35 PM 焦童  wrote:

> Thanks for the suggestion, but top-1 also produces retraction messages.
>
> > On Apr 30, 2024, at 15:27, ha.fen...@aisino.com wrote:
> >
> > You can refer to this:
> >
> https://nightlies.apache.org/flink/flink-docs-release-1.19/zh/docs/dev/table/sql/queries/deduplication/
> > Not sure whether version 1.11 supports it.
> >
> > From: 焦童
> > Date: 2024-04-30 11:25
> > To: user-zh
> > Subject: Flink sql retract to append
> > Hello ,
> > I am using Flink 1.11 SQL to deduplicate data (via group by), but this
> produces a retract stream; the downstream store only supports append. With the
> DataStream API I can deduplicate using state, but how can I deduplicate in SQL without producing a retract stream? Thanks, everyone.
>
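The pattern suggested above can be sketched in Flink SQL as follows (table and column names are hypothetical). With ROW_NUMBER ordered ascending by processing time and rn = 1, the planner treats the query as append-only deduplication, since a later row can never replace the first one:

```sql
-- "Keep first row" deduplication: emits each id at most once, append-only.
SELECT id, name
FROM (
  SELECT id, name,
         ROW_NUMBER() OVER (PARTITION BY id ORDER BY proctime ASC) AS rn
  FROM src_table
)
WHERE rn = 1;
```

Ordering by proctime DESC ("keep last row") would instead emit retractions whenever a newer row arrives for an id.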


Re: Flink sql retract to append

2024-04-30 by 焦童
Thanks for the suggestion, but top-1 also produces retraction messages.

> On Apr 30, 2024, at 15:27, ha.fen...@aisino.com wrote:
> 
> You can refer to this:
> https://nightlies.apache.org/flink/flink-docs-release-1.19/zh/docs/dev/table/sql/queries/deduplication/
> Not sure whether version 1.11 supports it.
> 
> From: 焦童
> Date: 2024-04-30 11:25
> To: user-zh
> Subject: Flink sql retract to append
> Hello ,
> I am using Flink 1.11 SQL to deduplicate data (via group by), but this 
> produces a retract stream; the downstream store only supports append. With 
> the DataStream API I can deduplicate using state, but how can I deduplicate 
> in SQL without producing a retract stream? Thanks, everyone.



Flink sql retract to append

2024-04-29 by 焦童
Hello ,
I am using Flink 1.11 SQL to deduplicate data (via group by), but this produces 
a retract stream; the downstream store only supports append. With the 
DataStream API I can deduplicate using state, but how can I deduplicate in SQL 
without producing a retract stream? Thanks, everyone.

As of Flink 1.18, is there a way to add uids in the Table API?

2024-04-24 by Guanlin Zhang
Hi Team,

Our business uses Flink MySQL CDC to OpenSearch with the TABLE API: INSERT INTO 
t1 SELECT * FROM t2.

Since we may add extra operators while the job is running, is there a way to 
keep the state of the previous source and sink operators after restoring from a 
snapshot? I see that in the DataStream API this can be done by setting uids. Is 
there an equivalent in the Table API? I found the Flink jira 
https://issues.apache.org/jira/browse/FLINK-28861 which allows setting 
table.exec.uid.generation=PLAN_ONLY. With the default configuration, will the 
previous state be kept when restoring from a snapshot after adding a 
transformation operator or making other changes?
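For reference, a minimal sketch of the option mentioned above, as it could be set in the SQL client or Table API config (the option name and value come from FLINK-28861; verify the behavior against your Flink version):

```sql
-- Generate operator uids from the compiled plan only, so that state can be
-- mapped back on restore as long as the compiled plan's uids stay stable.
SET 'table.exec.uid.generation' = 'PLAN_ONLY';
```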




Re: What should I watch out for when applying Flink's unified stream/batch execution to real-time warehouse data reconciliation?

2024-04-18 by Yunfeng Zhou
Stream mode and batch mode differ in watermarks and the semantics of some 
operators, but I don't see any differences in the Join and Window operators, so 
those should be supported in batch mode. For a detailed comparison of the two 
modes, see this doc:

https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/datastream/execution_mode/

On Thu, Apr 18, 2024 at 9:44 AM casel.chen  wrote:
>
> Has anyone tried this in practice? Any advice? Thanks!
>
>
>
>
> On 2024-04-15 11:15:34, "casel.chen"  wrote:
> >I have recently been looking into data quality assurance for a Flink 
> >real-time warehouse. I need to run periodic (every 10/20/30 minutes) batch 
> >jobs to cross-check the data produced by the real-time warehouse. The 
> >traditional approach is batch Spark jobs, e.g. the data quality module of 
> >Apache DolphinScheduler.
> >The biggest drawback of that approach is that the flink sql business logic 
> >has to be rewritten in spark sql, making consistency hard to guarantee. So I 
> >am considering Flink's unified stream/batch execution to reuse the flink 
> >sql: just switch the source from cdc or kafka to a hologres or starrocks 
> >table, create a new batch result table, and finally compare the real-time 
> >and batch result tables over the same time range. A few questions:
> >1. Can the watermark, process_time and event_time fields in the original 
> >streaming flink sql table definitions be reused in batch mode?
> >2. Can streaming dual-stream joins such as interval join and temporal join 
> >be used in batch mode?
> >3. Can the window functions of the streaming job be reused in batch mode?
> >4. What else should I watch out for?
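As a minimal sketch of the reuse idea discussed above (all table and column names are hypothetical), the same aggregation could be re-run as a batch job by switching the runtime mode and the source table:

```sql
-- Periodic reconciliation job: same logic as the streaming query,
-- but reading a warehouse snapshot instead of the kafka/cdc source.
SET 'execution.runtime-mode' = 'batch';

INSERT INTO batch_login_cnt
SELECT user_id, COUNT(*) AS login_cnt
FROM ods_login_snapshot   -- hologres/starrocks table instead of the stream source
WHERE login_time >= TIMESTAMP '2024-04-15 00:00:00'
  AND login_time <  TIMESTAMP '2024-04-15 00:30:00'
GROUP BY user_id;
```

The rows in batch_login_cnt can then be compared with the streaming result table over the same window.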


What should I watch out for when applying Flink's unified stream/batch execution to real-time warehouse data reconciliation?

2024-04-14 by casel.chen
I have recently been looking into data quality assurance for a Flink real-time 
warehouse. I need to run periodic (every 10/20/30 minutes) batch jobs to 
cross-check the data produced by the real-time warehouse. The traditional 
approach is batch Spark jobs, e.g. the data quality module of Apache 
DolphinScheduler.
The biggest drawback of that approach is that the flink sql business logic has 
to be rewritten in spark sql, making consistency hard to guarantee. So I am 
considering Flink's unified stream/batch execution to reuse the flink sql: just 
switch the source from cdc or kafka to a hologres or starrocks table, create a 
new batch result table, and finally compare the real-time and batch result 
tables over the same time range. A few questions:
1. Can the watermark, process_time and event_time fields in the original 
streaming flink sql table definitions be reused in batch mode?
2. Can streaming dual-stream joins such as interval join and temporal join be 
used in batch mode?
3. Can the window functions of the streaming job be reused in batch mode?
4. What else should I watch out for?

Re:Unable to use Table API in AWS Managed Flink 1.18

2024-04-10 by Xuyang
Hi, Perez.
Flink uses SPI to find the jdbc connector on the classpath; when starting, the 
dir '${FLINK_ROOT}/lib' is added to the classpath. That is why the exception is 
thrown on AWS. IMO there are two ways to solve this:


1. Upload the connector jar to AWS so that the classloader can see it. For how 
to upload connector jars, check the relevant AWS documentation.
2. Package the jdbc connector jar into your job jar and submit it again.
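For context, the factory lookup that fails here is triggered by the 'connector' option in the table DDL; a minimal sketch (all names and the URL are placeholders):

```sql
CREATE TABLE jdbc_sink (
  id BIGINT,
  name STRING
) WITH (
  'connector'  = 'jdbc',  -- resolved via SPI against the jars on the classpath
  'url'        = 'jdbc:mysql://localhost:3306/demo',
  'table-name' = 'jdbc_sink'
);
```

If no jar on the classpath lists a matching factory in META-INF/services/org.apache.flink.table.factories.Factory, the "Could not find any factory for identifier 'jdbc'" error above is thrown.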




--

Best!
Xuyang




At 2024-04-10 17:32:19, "Enrique Alberto Perez Delgado" 
 wrote:

Hi all,


I am using AWS Managed Flink 1.18, where I am getting this error when trying to 
submit my job:


```
Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
connector using option: 'connector'='jdbc'
at 
org.apache.flink.table.factories.FactoryUtil.enrichNoMatchingConnectorError(FactoryUtil.java:798)
at 
org.apache.flink.table.factories.FactoryUtil.discoverTableFactory(FactoryUtil.java:772)
at 
org.apache.flink.table.factories.FactoryUtil.createDynamicTableSink(FactoryUtil.java:317)
... 32 more
Caused by: org.apache.flink.table.api.ValidationException: Could not find any 
factory for identifier 'jdbc' that implements 
'org.apache.flink.table.factories.DynamicTableFactory' in the classpath.
```


I used to get this error when testing locally until I added the 
`flink-connector-jdbc-3.1.2-1.18`.jar to `/opt/flink/lib` in my local docker 
image, which I thought would be provided by AWS. apparently, it isn’t. Has 
anyone encountered this error before?


I highly appreciate any help you could give me,


Best regards, 


Enrique Perez
Data Engineer
HelloFresh SE | Prinzenstraße 89 | 10969 Berlin, Germany
Phone:  +4917625622422

HelloFresh SE, Berlin (Sitz der Gesellschaft) | Vorstände: Dominik S. Richter 
(Vorsitzender), Thomas W. Griesel, Christian Gärtner, Edward Boyes | 
Vorsitzender des Aufsichtsrats: John H. Rittenhouse | Eingetragen beim 
Amtsgericht Charlottenburg, HRB 182382 B | USt-Id Nr.: DE 302210417

CONFIDENTIALITY NOTICE: This message (including any attachments) is 
confidential and may be privileged. It may be read, copied and used only by the 
intended recipient. If you have received it in error please contact the sender 
(by return e-mail) immediately and delete this message. Any unauthorized use or 
dissemination of this message in whole or in parts is strictly prohibited.

Unable to use Table API in AWS Managed Flink 1.18

2024-04-10 by Enrique Alberto Perez Delgado
Hi all,

I am using AWS Managed Flink 1.18, where I am getting this error when trying to 
submit my job:

```
Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
connector using option: 'connector'='jdbc'
at 
org.apache.flink.table.factories.FactoryUtil.enrichNoMatchingConnectorError(FactoryUtil.java:798)
at 
org.apache.flink.table.factories.FactoryUtil.discoverTableFactory(FactoryUtil.java:772)
at 
org.apache.flink.table.factories.FactoryUtil.createDynamicTableSink(FactoryUtil.java:317)
... 32 more
Caused by: org.apache.flink.table.api.ValidationException: Could not find any 
factory for identifier 'jdbc' that implements 
'org.apache.flink.table.factories.DynamicTableFactory' in the classpath.
```

I used to get this error when testing locally until I added the 
`flink-connector-jdbc-3.1.2-1.18`.jar to `/opt/flink/lib` in my local docker 
image, which I thought would be provided by AWS. apparently, it isn’t. Has 
anyone encountered this error before?

I highly appreciate any help you could give me,

Best regards, 

Enrique Perez
Data Engineer
HelloFresh SE | Prinzenstraße 89 | 10969 Berlin, Germany
Phone:  +4917625622422









Re: flink completed jobs disappear after a while

2024-04-09 by gongzhongqiang
Hi:

If you want to keep completed jobs around long-term, the History Server is recommended:
https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/deployment/config/#history-server

Best,

Zhongqiang Gong

ha.fen...@aisino.com wrote on Tue, Apr 9, 2024 at 10:39:

> In the WEB UI, finished jobs are visible under completed jobs, but after a 
> while the entries are gone. Is there a configuration that deletes them 
> automatically?
>


Re: flink completed jobs disappear after a while

2024-04-08 by spoon_lz
There is an expiration-time configuration:
https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/deployment/config/#jobstore-expiration-time



| |
spoon_lz
|
|
spoon...@126.com
|


 Original Message 
| From | ha.fen...@aisino.com |
| Date | 2024-04-09 10:38 |
| To | user-zh |
| Subject | flink completed jobs disappear after a while |
In the WEB UI, finished jobs are visible under completed jobs, but after a while 
the entries are gone. Is there a configuration that deletes them automatically?


Re: flink cdc metrics question

2024-04-07 by Shawn Huang
Hi, flink cdc does not currently expose a metric for the number of unconsumed 
binlog records. You can use the currentFetchEventTimeLag metric (the lag between 
the timestamps of consumed binlog records and the current time) to judge the 
current consumption status.

[1]
https://github.com/apache/flink-cdc/blob/master/flink-cdc-connect/flink-cdc-source-connectors/flink-connector-mysql-cdc/src/main/java/org/apache/flink/cdc/connectors/mysql/source/metrics/MySqlSourceReaderMetrics.java

Best,
Shawn Huang


casel.chen wrote on Mon, Apr 8, 2024 at 12:01:

> Does flink cdc expose any monitoring metrics?
> I would like to monitor the number of binlog records a flink cdc job has not 
> yet consumed, similar to kafka topic consumer-lag monitoring.
> I want this to keep a slow flink cdc job from being lapped by binlog 
> retention (and how can the maximum binlog record count be obtained?)


flink cdc metrics question

2024-04-07 by casel.chen
Does flink cdc expose any monitoring metrics?
I would like to monitor the number of binlog records a flink cdc job has not 
yet consumed, similar to kafka topic consumer-lag monitoring.
I want this to keep a slow flink cdc job from being lapped by binlog retention 
(and how can the maximum binlog record count be obtained?)

Re: [ANNOUNCE] Apache Flink Kubernetes Operator 1.8.0 released

2024-03-25 by Rui Fan
Congratulations! Thanks Max for the release and all involved for the great
work!

A gentle reminder to users: the maven artifacts have just been released and
may take some time to propagate to Maven Central.

Best,
Rui

On Mon, Mar 25, 2024 at 6:35 PM Maximilian Michels  wrote:

> The Apache Flink community is very happy to announce the release of
> the Apache Flink Kubernetes Operator version 1.8.0.
>
> The Flink Kubernetes Operator allows users to manage their Apache
> Flink applications on Kubernetes through all aspects of their
> lifecycle.
>
> Release highlights:
> - Flink Autotuning automatically adjusts TaskManager memory
> - Flink Autoscaling metrics and decision accuracy improved
> - Improve standalone Flink Autoscaling
> - Savepoint trigger nonce for savepoint-based restarts
> - Operator stability improvements for cluster shutdown
>
> Blog post:
> https://flink.apache.org/2024/03/21/apache-flink-kubernetes-operator-1.8.0-release-announcement/
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Maven artifacts for Flink Kubernetes Operator can be found at:
>
> https://search.maven.org/artifact/org.apache.flink/flink-kubernetes-operator
>
> Official Docker image for Flink Kubernetes Operator can be found at:
> https://hub.docker.com/r/apache/flink-kubernetes-operator
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12353866&projectId=12315522
>
> We would like to thank the Apache Flink community and its contributors
> who made this release possible!
>
> Cheers,
> Max
>


[ANNOUNCE] Apache Flink Kubernetes Operator 1.8.0 released

2024-03-25 by Maximilian Michels
The Apache Flink community is very happy to announce the release of
the Apache Flink Kubernetes Operator version 1.8.0.

The Flink Kubernetes Operator allows users to manage their Apache
Flink applications on Kubernetes through all aspects of their
lifecycle.

Release highlights:
- Flink Autotuning automatically adjusts TaskManager memory
- Flink Autoscaling metrics and decision accuracy improved
- Improve standalone Flink Autoscaling
- Savepoint trigger nonce for savepoint-based restarts
- Operator stability improvements for cluster shutdown

Blog post: 
https://flink.apache.org/2024/03/21/apache-flink-kubernetes-operator-1.8.0-release-announcement/

The release is available for download at:
https://flink.apache.org/downloads.html

Maven artifacts for Flink Kubernetes Operator can be found at:
https://search.maven.org/artifact/org.apache.flink/flink-kubernetes-operator

Official Docker image for Flink Kubernetes Operator can be found at:
https://hub.docker.com/r/apache/flink-kubernetes-operator

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12353866&projectId=12315522

We would like to thank the Apache Flink community and its contributors
who made this release possible!

Cheers,
Max


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-21 by gongzhongqiang
Congratulations! Thanks for the great work!


Best,
Zhongqiang Gong

Leonard Xu wrote on Wed, Mar 20, 2024 at 21:36:

> Hi devs and users,
>
> We are thrilled to announce that the donation of Flink CDC as a
> sub-project of Apache Flink has completed. We invite you to explore the new
> resources available:
>
> - GitHub Repository: https://github.com/apache/flink-cdc
> - Flink CDC Documentation:
> https://nightlies.apache.org/flink/flink-cdc-docs-stable
>
> After Flink community accepted this donation[1], we have completed
> software copyright signing, code repo migration, code cleanup, website
> migration, CI migration and github issues migration etc.
> Here I am particularly grateful to Hang Ruan, Zhongqiang Gong, Qingsheng
> Ren, Jiabao Sun, LvYanquan, loserwang1024 and other contributors for their
> contributions and help during this process!
>
>
> For all previous contributors: The contribution process has slightly
> changed to align with the main Flink project. To report bugs or suggest new
> features, please open tickets
> Apache Jira (https://issues.apache.org/jira).  Note that we will no
> longer accept GitHub issues for these purposes.
>
>
> Welcome to explore the new repository and documentation. Your feedback and
> contributions are invaluable as we continue to improve Flink CDC.
>
> Thanks everyone for your support and happy exploring Flink CDC!
>
> Best,
> Leonard
> [1] https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
>
>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 by Zakelly Lan
Congratulations!


Best,
Zakelly

On Thu, Mar 21, 2024 at 12:05 PM weijie guo 
wrote:

> Congratulations! Well done.
>
>
> Best regards,
>
> Weijie
>
>
> Feng Jin wrote on Thu, Mar 21, 2024 at 11:40:
>
>> Congratulations!
>>
>>
>> Best,
>> Feng
>>
>>
>> On Thu, Mar 21, 2024 at 11:37 AM Ron liu  wrote:
>>
>> > Congratulations!
>> >
>> > Best,
>> > Ron
>> >
>> > Jark Wu wrote on Thu, Mar 21, 2024 at 10:46:
>> >
>> > > Congratulations and welcome!
>> > >
>> > > Best,
>> > > Jark
>> > >
>> > > On Thu, 21 Mar 2024 at 10:35, Rui Fan <1996fan...@gmail.com> wrote:
>> > >
>> > > > Congratulations!
>> > > >
>> > > > Best,
>> > > > Rui
>> > > >
>> > > > On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan 
>> > > wrote:
>> > > >
>> > > > > Congratulations!
>> > > > >
>> > > > > Best,
>> > > > > Hang
>> > > > >
>> > > > > Lincoln Lee wrote on Thu, Mar 21, 2024 at 09:54:
>> > > > >
>> > > > >>
>> > > > >> Congrats, thanks for the great work!
>> > > > >>
>> > > > >>
>> > > > >> Best,
>> > > > >> Lincoln Lee
>> > > > >>
>> > > > >>
>> > > > >>> Peter Huang wrote on Wed, Mar 20, 2024 at 22:48:
>> > > > >>
>> > > > >>> Congratulations
>> > > > >>>
>> > > > >>>
>> > > > >>> Best Regards
>> > > > >>> Peter Huang
>> > > > >>>
>> > > > >>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang > >
>> > > > wrote:
>> > > > >>>
>> > > > >>>>
>> > > > >>>> Congratulations
>> > > > >>>>
>> > > > >>>>
>> > > > >>>>
>> > > > >>>> Best,
>> > > > >>>> Huajie Wang
>> > > > >>>>
>> > > > >>>>
>> > > > >>>>
>> > > > >>>> Leonard Xu wrote on Wed, Mar 20, 2024 at 21:36:
>> > > > >>>>
>> > > > >>>>> Hi devs and users,
>> > > > >>>>>
>> > > > >>>>> We are thrilled to announce that the donation of Flink CDC as
>> a
>> > > > >>>>> sub-project of Apache Flink has completed. We invite you to
>> > explore
>> > > > the new
>> > > > >>>>> resources available:
>> > > > >>>>>
>> > > > >>>>> - GitHub Repository: https://github.com/apache/flink-cdc
>> > > > >>>>> - Flink CDC Documentation:
>> > > > >>>>> https://nightlies.apache.org/flink/flink-cdc-docs-stable
>> > > > >>>>>
>> > > > >>>>> After Flink community accepted this donation[1], we have
>> > completed
>> > > > >>>>> software copyright signing, code repo migration, code cleanup,
>> > > > website
>> > > > >>>>> migration, CI migration and github issues migration etc.
>> > > > >>>>> Here I am particularly grateful to Hang Ruan, Zhongqaing Gong,
>> > > > >>>>> Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other
>> > > > contributors
>> > > > >>>>> for their contributions and help during this process!
>> > > > >>>>>
>> > > > >>>>>
>> > > > >>>>> For all previous contributors: The contribution process has
>> > > slightly
>> > > > >>>>> changed to align with the main Flink project. To report bugs
>> or
>> > > > suggest new
>> > > > >>>>> features, please open tickets
>> > > > >>>>> Apache Jira (https://issues.apache.org/jira).  Note that we
>> will
>> > > no
>> > > > >>>>> longer accept GitHub issues for these purposes.
>> > > > >>>>>
>> > > > >>>>>
>> > > > >>>>> Welcome to explore the new repository and documentation. Your
>> > > > feedback
>> > > > >>>>> and contributions are invaluable as we continue to improve
>> Flink
>> > > CDC.
>> > > > >>>>>
>> > > > >>>>> Thanks everyone for your support and happy exploring Flink
>> CDC!
>> > > > >>>>>
>> > > > >>>>> Best,
>> > > > >>>>> Leonard
>> > > > >>>>> [1]
>> > > https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
>> > > > >>>>>
>> > > > >>>>>
>> > > >
>> > >
>> >
>>
>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 by weijie guo
Congratulations! Well done.


Best regards,

Weijie


Feng Jin wrote on Thu, Mar 21, 2024 at 11:40:

> Congratulations!
>
>
> Best,
> Feng
>
>
> On Thu, Mar 21, 2024 at 11:37 AM Ron liu  wrote:
>
> > Congratulations!
> >
> > Best,
> > Ron
> >
> > Jark Wu wrote on Thu, Mar 21, 2024 at 10:46:
> >
> > > Congratulations and welcome!
> > >
> > > Best,
> > > Jark
> > >
> > > On Thu, 21 Mar 2024 at 10:35, Rui Fan <1996fan...@gmail.com> wrote:
> > >
> > > > Congratulations!
> > > >
> > > > Best,
> > > > Rui
> > > >
> > > > On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan 
> > > wrote:
> > > >
> > > > > Congratulations!
> > > > >
> > > > > Best,
> > > > > Hang
> > > > >
> > > > > Lincoln Lee wrote on Thu, Mar 21, 2024 at 09:54:
> > > > >
> > > > >>
> > > > >> Congrats, thanks for the great work!
> > > > >>
> > > > >>
> > > > >> Best,
> > > > >> Lincoln Lee
> > > > >>
> > > > >>
> > > > >>> Peter Huang wrote on Wed, Mar 20, 2024 at 22:48:
> > > > >>
> > > > >>> Congratulations
> > > > >>>
> > > > >>>
> > > > >>> Best Regards
> > > > >>> Peter Huang
> > > > >>>
> > > > >>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang 
> > > > wrote:
> > > > >>>
> > > > >>>>
> > > > >>>> Congratulations
> > > > >>>>
> > > > >>>>
> > > > >>>>
> > > > >>>> Best,
> > > > >>>> Huajie Wang
> > > > >>>>
> > > > >>>>
> > > > >>>>
> > > > >>>> Leonard Xu wrote on Wed, Mar 20, 2024 at 21:36:
> > > > >>>>
> > > > >>>>> Hi devs and users,
> > > > >>>>>
> > > > >>>>> We are thrilled to announce that the donation of Flink CDC as a
> > > > >>>>> sub-project of Apache Flink has completed. We invite you to
> > explore
> > > > the new
> > > > >>>>> resources available:
> > > > >>>>>
> > > > >>>>> - GitHub Repository: https://github.com/apache/flink-cdc
> > > > >>>>> - Flink CDC Documentation:
> > > > >>>>> https://nightlies.apache.org/flink/flink-cdc-docs-stable
> > > > >>>>>
> > > > >>>>> After Flink community accepted this donation[1], we have
> > completed
> > > > >>>>> software copyright signing, code repo migration, code cleanup,
> > > > website
> > > > >>>>> migration, CI migration and github issues migration etc.
> > > > >>>>> Here I am particularly grateful to Hang Ruan, Zhongqaing Gong,
> > > > >>>>> Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other
> > > > contributors
> > > > >>>>> for their contributions and help during this process!
> > > > >>>>>
> > > > >>>>>
> > > > >>>>> For all previous contributors: The contribution process has
> > > slightly
> > > > >>>>> changed to align with the main Flink project. To report bugs or
> > > > suggest new
> > > > >>>>> features, please open tickets
> > > > >>>>> Apache Jira (https://issues.apache.org/jira).  Note that we
> will
> > > no
> > > > >>>>> longer accept GitHub issues for these purposes.
> > > > >>>>>
> > > > >>>>>
> > > > >>>>> Welcome to explore the new repository and documentation. Your
> > > > feedback
> > > > >>>>> and contributions are invaluable as we continue to improve
> Flink
> > > CDC.
> > > > >>>>>
> > > > >>>>> Thanks everyone for your support and happy exploring Flink CDC!
> > > > >>>>>
> > > > >>>>> Best,
> > > > >>>>> Leonard
> > > > >>>>> [1]
> > > https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
> > > > >>>>>
> > > > >>>>>
> > > >
> > >
> >
>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 by Feng Jin
Congratulations!


Best,
Feng


On Thu, Mar 21, 2024 at 11:37 AM Ron liu  wrote:

> Congratulations!
>
> Best,
> Ron
>
> Jark Wu wrote on Thu, Mar 21, 2024 at 10:46:
>
> > Congratulations and welcome!
> >
> > Best,
> > Jark
> >
> > On Thu, 21 Mar 2024 at 10:35, Rui Fan <1996fan...@gmail.com> wrote:
> >
> > > Congratulations!
> > >
> > > Best,
> > > Rui
> > >
> > > On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan 
> > wrote:
> > >
> > > > Congratulations!
> > > >
> > > > Best,
> > > > Hang
> > > >
> > > > Lincoln Lee wrote on Thu, Mar 21, 2024 at 09:54:
> > > >
> > > >>
> > > >> Congrats, thanks for the great work!
> > > >>
> > > >>
> > > >> Best,
> > > >> Lincoln Lee
> > > >>
> > > >>
> > > >>> Peter Huang wrote on Wed, Mar 20, 2024 at 22:48:
> > > >>
> > > >>> Congratulations
> > > >>>
> > > >>>
> > > >>> Best Regards
> > > >>> Peter Huang
> > > >>>
> > > >>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang 
> > > wrote:
> > > >>>
> > > >>>>
> > > >>>> Congratulations
> > > >>>>
> > > >>>>
> > > >>>>
> > > >>>> Best,
> > > >>>> Huajie Wang
> > > >>>>
> > > >>>>
> > > >>>>
> > > >>>> Leonard Xu wrote on Wed, Mar 20, 2024 at 21:36:
> > > >>>>
> > > >>>>> Hi devs and users,
> > > >>>>>
> > > >>>>> We are thrilled to announce that the donation of Flink CDC as a
> > > >>>>> sub-project of Apache Flink has completed. We invite you to
> explore
> > > the new
> > > >>>>> resources available:
> > > >>>>>
> > > >>>>> - GitHub Repository: https://github.com/apache/flink-cdc
> > > >>>>> - Flink CDC Documentation:
> > > >>>>> https://nightlies.apache.org/flink/flink-cdc-docs-stable
> > > >>>>>
> > > >>>>> After Flink community accepted this donation[1], we have
> completed
> > > >>>>> software copyright signing, code repo migration, code cleanup,
> > > website
> > > >>>>> migration, CI migration and github issues migration etc.
> > > >>>>> Here I am particularly grateful to Hang Ruan, Zhongqaing Gong,
> > > >>>>> Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other
> > > contributors
> > > >>>>> for their contributions and help during this process!
> > > >>>>>
> > > >>>>>
> > > >>>>> For all previous contributors: The contribution process has
> > slightly
> > > >>>>> changed to align with the main Flink project. To report bugs or
> > > suggest new
> > > >>>>> features, please open tickets
> > > >>>>> Apache Jira (https://issues.apache.org/jira).  Note that we will
> > no
> > > >>>>> longer accept GitHub issues for these purposes.
> > > >>>>>
> > > >>>>>
> > > >>>>> Welcome to explore the new repository and documentation. Your
> > > feedback
> > > >>>>> and contributions are invaluable as we continue to improve Flink
> > CDC.
> > > >>>>>
> > > >>>>> Thanks everyone for your support and happy exploring Flink CDC!
> > > >>>>>
> > > >>>>> Best,
> > > >>>>> Leonard
> > > >>>>> [1]
> > https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
> > > >>>>>
> > > >>>>>
> > >
> >
>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Ron liu
Congratulations!

Best,
Ron



Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 shuai xu
Congratulations!


Best!
Xushuai




Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Yanquan Lv
Congratulations and looking forward to future versions!



Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Jark Wu
Congratulations and welcome!

Best,
Jark



Re:Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Xuyang
Cheers!




--

Best!
Xuyang



Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Rui Fan
Congratulations!

Best,
Rui



Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Hang Ruan
Congratulations!

Best,
Hang



Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Lincoln Lee
Congrats, thanks for the great work!


Best,
Lincoln Lee




Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Huajie Wang
Congratulations



Best,
Huajie Wang





[ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Leonard Xu
Hi devs and users,

We are thrilled to announce that the donation of Flink CDC as a sub-project of 
Apache Flink has completed. We invite you to explore the new resources 
available:

- GitHub Repository: https://github.com/apache/flink-cdc
- Flink CDC Documentation: 
https://nightlies.apache.org/flink/flink-cdc-docs-stable

After the Flink community accepted this donation [1], we have completed software 
copyright signing, code repo migration, code cleanup, website migration, CI 
migration, GitHub issues migration, etc. 
Here I am particularly grateful to Hang Ruan, Zhongqaing Gong, Qingsheng Ren, 
Jiabao Sun, LvYanquan, loserwang1024 and other contributors for their 
contributions and help during this process!


For all previous contributors: The contribution process has slightly changed to 
align with the main Flink project. To report bugs or suggest new features, 
please open tickets in 
Apache Jira (https://issues.apache.org/jira). Note that we will no longer 
accept GitHub issues for these purposes.


Welcome to explore the new repository and documentation. Your feedback and 
contributions are invaluable as we continue to improve Flink CDC.

Thanks everyone for your support and happy exploring Flink CDC!

Best,
Leonard
[1] https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob


