Re: Restore from checkpoint

2024-05-17 Thread jiadong.lu
Hi Phil, AFAIK, the error indicates your path is incorrect. You should use '/opt/flink/checkpoints/1875588e19b1d8709ee62be1cdcc' or 'file:///opt/flink/checkpoints/1875588e19b1d8709ee62be1cdcc' instead. Best. Jiadong.Lu On 5/18/24 2:37 AM, Phil Stavridis wrote: Hi, I am trying to test how
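
For reference, a retained checkpoint is resumed by passing its path to the -s (--fromSavepoint) flag of the flink CLI, which accepts checkpoint paths as well as savepoint paths. A minimal sketch, using the checkpoint path from this thread and a hypothetical job JAR location:

    flink run -s file:///opt/flink/checkpoints/1875588e19b1d8709ee62be1cdcc /path/to/my-job.jar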

Re: [ANNOUNCE] Apache Flink CDC 3.1.0 released

2024-05-17 Thread Muhammet Orazov via user
Amazing, congrats! Thanks for your efforts! Best, Muhammet On 2024-05-17 09:32, Qingsheng Ren wrote: The Apache Flink community is very happy to announce the release of Apache Flink CDC 3.1.0. Apache Flink CDC is a distributed data integration tool for real-time data and batch data, bringing

Restore from checkpoint

2024-05-17 Thread Phil Stavridis
Hi, I am trying to test how the checkpoints work for restoring state, but not sure how to run a new instance of a flink job, after I have cancelled it, using the checkpoints which I store in the filesystem of the job manager, e.g. /opt/flink/checkpoints. I have tried passing the checkpoint as

Re: problem with the heartbeat interval feature

2024-05-17 Thread Thomas Peyric
Thanks Hongshun for your response! On Fri, May 17, 2024 at 07:51, Hongshun Wang wrote: > Hi Thomas, > > The Debezium docs say: For the connector to detect and process events from > a heartbeat table, you must add the table to the PostgreSQL publication > specified by the publication.name >
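
For context, adding the heartbeat table to an existing publication is a single DDL statement on the PostgreSQL side. A minimal sketch, assuming a hypothetical publication name dbz_publication and heartbeat table public.debezium_heartbeat:

    ALTER PUBLICATION dbz_publication ADD TABLE public.debezium_heartbeat;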

Re: [ANNOUNCE] Apache Flink CDC 3.1.0 released

2024-05-17 Thread gongzhongqiang
Congratulations! Thanks to all contributors. Best, Zhongqiang Gong Qingsheng Ren wrote on Fri, May 17, 2024 at 17:33: > The Apache Flink community is very happy to announce the release of > Apache Flink CDC 3.1.0. > > Apache Flink CDC is a distributed data integration tool for real-time > data and batch

RocksDB - Failed to clip DB after initialization - end key comes before start key

2024-05-17 Thread Francesco Leone
Hi, We are facing a new issue related to RocksDB when deploying a new version of our job, which adds 3 more operators. We are using Flink 1.17.1 with RocksDB on Java 11. We get an exception from another pre-existing operator during its initialization. That operator and the new ones have differ
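
For context, the usual guard against state being remapped to the wrong operator when a job's topology changes is to pin every stateful operator to a stable UID, as the Flink documentation recommends. A minimal Java sketch (operator names and logic are hypothetical, not from this thread):

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class UidExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            DataStream<String> events = env.fromElements("a", "b", "c").uid("event-source");
            events
                .keyBy(v -> v)
                .map(String::toUpperCase).uid("normalize-map") // UID stays stable across job versions
                .print().uid("print-sink");
            env.execute("uid-example");
        }
    }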

Re: [ANNOUNCE] Apache Flink CDC 3.1.0 released

2024-05-17 Thread Hang Ruan
Congratulations! Thanks for the great work. Best, Hang Qingsheng Ren wrote on Fri, May 17, 2024 at 17:33: > The Apache Flink community is very happy to announce the release of > Apache Flink CDC 3.1.0. > > Apache Flink CDC is a distributed data integration tool for real-time > data and batch data, bringing

Re: [ANNOUNCE] Apache Flink CDC 3.1.0 released

2024-05-17 Thread Leonard Xu
Congratulations! Thanks Qingsheng for the great work and all contributors involved! Best, Leonard > On May 17, 2024 at 5:32 PM, Qingsheng Ren wrote: > > The Apache Flink community is very happy to announce the release of > Apache Flink CDC 3.1.0. > > Apache Flink CDC is a distributed data integration

[ANNOUNCE] Apache Flink CDC 3.1.0 released

2024-05-17 Thread Qingsheng Ren
The Apache Flink community is very happy to announce the release of Apache Flink CDC 3.1.0. Apache Flink CDC is a distributed data integration tool for real-time data and batch data, bringing the simplicity and elegance of data integration via YAML to describe the data movement and transformation
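
For readers new to the project, a CDC pipeline is declared in a single YAML file. A minimal sketch of the format, assuming a MySQL source and a Doris sink; all connection values and table names are hypothetical:

    source:
      type: mysql
      hostname: localhost
      port: 3306
      username: admin
      password: secret
      tables: app_db.\.*

    sink:
      type: doris
      fenodes: 127.0.0.1:8030
      username: root
      password: ""

    pipeline:
      name: Sync app_db to Doris
      parallelism: 2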

Re: Get access to unmatching events in Apache Flink Cep

2024-05-17 Thread Anton Sidorov
Ok, thanks for the reply. On Fri, May 17, 2024 at 09:22, Biao Geng wrote: > Hi Anton, > > I am afraid that currently there is no such API to access the middle NFA > state in your case. For patterns that contain a 'within()' condition, the > timeout events could be retrieved via TimedOutPartialMatchHandler
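
For anyone landing on this thread later, the handler Biao mentions is wired in by letting a PatternProcessFunction also implement TimedOutPartialMatchHandler, so partial matches that exceed within() get their own callback. A minimal Java sketch, with a hypothetical Event type and output handling:

    import java.util.List;
    import java.util.Map;
    import org.apache.flink.cep.functions.PatternProcessFunction;
    import org.apache.flink.cep.functions.TimedOutPartialMatchHandler;
    import org.apache.flink.util.Collector;

    public class MatchHandler extends PatternProcessFunction<Event, String>
            implements TimedOutPartialMatchHandler<Event> {

        @Override
        public void processMatch(Map<String, List<Event>> match, Context ctx,
                                 Collector<String> out) throws Exception {
            out.collect("complete match: " + match.keySet());
        }

        @Override
        public void processTimedOutMatch(Map<String, List<Event>> match, Context ctx)
                throws Exception {
            // Partial matches that hit the within() timeout land here; they can be
            // routed to a side output via ctx.output(...) for further inspection.
        }
    }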

Re: SSL Kafka PyFlink

2024-05-17 Thread Evgeniy Lyutikov via user
Hi Phil, you need to specify a keystore with the CA location [1] [1] https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/connectors/table/kafka/#security From: gongzhongqiang Sent: May 17, 2024, 10:44:18 To: Phil Stavridis Cc: user@flink.apache.org
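
For reference, the Kafka table connector forwards any option prefixed with 'properties.' to the underlying Kafka client, which is where the SSL truststore/keystore settings go. A minimal sketch of the relevant DDL options; paths and passwords are hypothetical:

    'properties.security.protocol' = 'SSL',
    'properties.ssl.truststore.location' = '/path/to/ca.truststore.jks',
    'properties.ssl.truststore.password' = 'changeit',
    'properties.ssl.keystore.location' = '/path/to/client.keystore.jks',
    'properties.ssl.keystore.password' = 'changeit'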

Re: What is the best way to aggregate data over a long window

2024-05-17 Thread Sachin Mittal
Hi, I am doing the following: 1. Use the reduce function where the data type of the output after windowing is the same as the input. 2. Where the output data type after windowing differs from the input's, I use the aggregate function. For example: SingleOutputStreamOperator data = reducedPlaye
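
For context, AggregateFunction is the variant that lets the accumulator and output types differ from the input type. A minimal Java sketch computing an average over Long inputs (class and state names hypothetical):

    import org.apache.flink.api.common.functions.AggregateFunction;

    // Input: Long values; accumulator: {sum, count}; output: Double average.
    public class AverageAggregate implements AggregateFunction<Long, long[], Double> {
        @Override
        public long[] createAccumulator() { return new long[] {0L, 0L}; }

        @Override
        public long[] add(Long value, long[] acc) {
            acc[0] += value;
            acc[1] += 1;
            return acc;
        }

        @Override
        public Double getResult(long[] acc) {
            return acc[1] == 0 ? 0.0 : (double) acc[0] / acc[1];
        }

        @Override
        public long[] merge(long[] a, long[] b) {
            a[0] += b[0];
            a[1] += b[1];
            return a;
        }
    }

It would then be applied as stream.keyBy(...).window(...).aggregate(new AverageAggregate()).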

Re: Re: Re: Flink kafka connector for v 1.19.0

2024-05-17 Thread Niklas Wilcke
Hi Hang, thanks for pointing me to the mail thread. That is indeed interesting. Can we maybe ping someone to get this done? Can I do something about it? Becoming a PMC member might be difficult. :) Are there still three PMC votes outstanding? I'm not entirely sure how to properly check who is part of

Re: What is the best way to aggregate data over a long window

2024-05-17 Thread gongzhongqiang
Hi Sachin, We can optimize this problem in the following ways: - use org.apache.flink.streaming.api.datastream.WindowedStream#aggregate(org.apache.flink.api.common.functions.AggregateFunction) to reduce the amount of data - use TTL to clean up state that is no longer needed (see the sketch after this message) - enable incremental checkpoints - us
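
For the TTL point above, state TTL is configured on the state descriptor. A minimal Java sketch, with hypothetical names and a one-day TTL chosen only for illustration:

    import org.apache.flink.api.common.state.StateTtlConfig;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.time.Time;

    StateTtlConfig ttlConfig = StateTtlConfig
        .newBuilder(Time.days(1))  // expire entries one day after the last write
        .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
        .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
        .build();

    ValueStateDescriptor<Long> descriptor =
        new ValueStateDescriptor<>("window-agg-state", Long.class);
    descriptor.enableTimeToLive(ttlConfig);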