Hi Phil,
AFAIK, the error indicates that your path is incorrect.
You should use '/opt/flink/checkpoints/1875588e19b1d8709ee62be1cdcc' or
'file:///opt/flink/checkpoints/1875588e19b1d8709ee62be1cdcc' instead.
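In case it helps, a minimal sketch of wiring that up (the checkpoint interval and retention policy are assumptions, not from your setup); a cancelled job can then be resubmitted from a retained checkpoint with `flink run -s file:///opt/flink/checkpoints/<job-id>/chk-<n> ...`:

import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RetainedCheckpointExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint every 10 seconds to the job manager's filesystem;
        // note the explicit file:// scheme discussed above.
        env.enableCheckpointing(10_000);
        env.getCheckpointConfig().setCheckpointStorage("file:///opt/flink/checkpoints");

        // Keep the latest checkpoint when the job is cancelled, so a new
        // instance of the job can be started from it with flink run -s.
        env.getCheckpointConfig().setExternalizedCheckpointCleanup(
                CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);

        // ... build the job topology here ...
        env.execute("retained-checkpoint-example");
    }
}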
Best.
Jiadong.Lu
On 5/18/24 2:37 AM, Phil Stavridis wrote:
Hi,
I am trying to test how
Amazing, congrats!
Thanks for your efforts!
Best,
Muhammet
On 2024-05-17 09:32, Qingsheng Ren wrote:
The Apache Flink community is very happy to announce the release of
Apache Flink CDC 3.1.0.
Apache Flink CDC is a distributed data integration tool for real-time
data and batch data, bringing
Hi,
I am trying to test how checkpoints work for restoring state, but I am not sure
how to run a new instance of a Flink job, after I have cancelled it, using the
checkpoints that I store in the filesystem of the job manager, e.g.
/opt/flink/checkpoints.
I have tried passing the checkpoint as
Thanks Hongshun for your response!
On Fri, May 17, 2024 at 07:51, Hongshun Wang wrote:
> Hi Thomas,
>
> The Debezium docs say: For the connector to detect and process events from
> a heartbeat table, you must add the table to the PostgreSQL publication
> specified by publication.name
>
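To illustrate the advice above, a minimal sketch of passing those Debezium properties to the Flink CDC Postgres source (host, database, publication, and table names are assumptions; the imports follow the older com.ververica.cdc artifacts and may differ under the org.apache.flink.cdc packages in CDC 3.x):

import java.util.Properties;

import com.ververica.cdc.connectors.postgres.PostgreSQLSource;
import com.ververica.cdc.debezium.JsonDebeziumDeserializationSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

public class HeartbeatExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Emit a heartbeat event every 30 seconds.
        props.setProperty("heartbeat.interval.ms", "30000");
        // Per the docs quoted above, this publication must also
        // contain the heartbeat table (hypothetical name).
        props.setProperty("publication.name", "my_publication");

        SourceFunction<String> source = PostgreSQLSource.<String>builder()
                .hostname("localhost")
                .port(5432)
                .database("mydb")
                .schemaList("public")
                .tableList("public.orders")
                .username("flink")
                .password("secret")
                .decodingPluginName("pgoutput")
                .debeziumProperties(props)
                .deserializer(new JsonDebeziumDeserializationSchema())
                .build();

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.addSource(source).print();
        env.execute("postgres-cdc-heartbeat-example");
    }
}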
Congratulations!
Thanks for all contributors.
Best,
Zhongqiang Gong
On Fri, May 17, 2024 at 17:33, Qingsheng Ren wrote:
> The Apache Flink community is very happy to announce the release of
> Apache Flink CDC 3.1.0.
>
> Apache Flink CDC is a distributed data integration tool for real-time
> data and batch
Hi,
We are facing a new issue related to RocksDB when deploying a new version of
our job, which adds 3 more operators. We are using Flink 1.17.1 with
RocksDB on Java 11.
We get an exception from another pre-existing operator during its
initialization. That operator and the new ones have differ
Congratulations!
Thanks for the great work.
Best,
Hang
On Fri, May 17, 2024 at 17:33, Qingsheng Ren wrote:
> The Apache Flink community is very happy to announce the release of
> Apache Flink CDC 3.1.0.
>
> Apache Flink CDC is a distributed data integration tool for real-time
> data and batch data, bringing
Congratulations!
Thanks, Qingsheng, for the great work, and thanks to all contributors involved!
Best,
Leonard
> On May 17, 2024, at 5:32 PM, Qingsheng Ren wrote:
>
> The Apache Flink community is very happy to announce the release of
> Apache Flink CDC 3.1.0.
>
> Apache Flink CDC is a distributed data integration
The Apache Flink community is very happy to announce the release of
Apache Flink CDC 3.1.0.
Apache Flink CDC is a distributed data integration tool for real-time
data and batch data, bringing the simplicity and elegance of data
integration via YAML to describe the data movement and transformation
Ok, thanks for the reply.
On Fri, May 17, 2024 at 09:22, Biao Geng wrote:
> Hi Anton,
>
> I am afraid that there is currently no such API to access the intermediate NFA
> state in your case. For patterns that contain a 'within()' condition, the
> timeout events can be retrieved via TimedOutPartialMatchHandler
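For context, a minimal sketch of the TimedOutPartialMatchHandler approach mentioned above (the event type, conditions, and names are illustrative, not from the original thread):

import java.util.List;
import java.util.Map;

import org.apache.flink.cep.CEP;
import org.apache.flink.cep.functions.PatternProcessFunction;
import org.apache.flink.cep.functions.TimedOutPartialMatchHandler;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class CepTimeoutExample {
    // Side output that receives partial matches which exceeded within().
    static final OutputTag<String> TIMED_OUT = new OutputTag<String>("timed-out") {};

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> input = env.fromElements("start", "other");

        // "start" must be followed by "end" within 10 seconds; otherwise the
        // partial match times out and is handed to the handler below.
        Pattern<String, ?> pattern = Pattern.<String>begin("start")
                .where(new SimpleCondition<String>() {
                    @Override
                    public boolean filter(String value) {
                        return value.equals("start");
                    }
                })
                .next("end")
                .where(new SimpleCondition<String>() {
                    @Override
                    public boolean filter(String value) {
                        return value.equals("end");
                    }
                })
                .within(Time.seconds(10));

        SingleOutputStreamOperator<String> matches =
                CEP.pattern(input, pattern).process(new TimeoutAwareFunction());

        matches.print("matched");
        matches.getSideOutput(TIMED_OUT).print("timed-out");
        env.execute("cep-timeout-example");
    }

    static class TimeoutAwareFunction extends PatternProcessFunction<String, String>
            implements TimedOutPartialMatchHandler<String> {
        @Override
        public void processMatch(Map<String, List<String>> match, Context ctx, Collector<String> out) {
            out.collect("matched: " + match);
        }

        @Override
        public void processTimedOutMatch(Map<String, List<String>> match, Context ctx) {
            // Partial matches that exceeded within() are emitted to the side output.
            ctx.output(TIMED_OUT, "timed out: " + match);
        }
    }
}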
Hi Phil,
You need to specify a keystore with the CA location [1].
[1]
https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/connectors/table/kafka/#security
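For example, a minimal sketch with the KafkaSource builder (the broker address, topic, and truststore path are assumptions; see [1] for the full option list):

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaSslExample {
    public static void main(String[] args) throws Exception {
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("broker:9093")
                .setTopics("my-topic")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                // SSL settings: point the truststore at the keystore
                // that holds your CA certificate.
                .setProperty("security.protocol", "SSL")
                .setProperty("ssl.truststore.location", "/path/to/truststore.jks")
                .setProperty("ssl.truststore.password", "changeit")
                .build();

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-ssl-source").print();
        env.execute("kafka-ssl-example");
    }
}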
From: gongzhongqiang
Sent: May 17, 2024 10:44:18
To: Phil Stavridis
Cc: user@flink.apache.org
Hi,
I am doing the following:
1. I use the reduce function where the data type of the output after windowing is
the same as that of the input.
2. Where the data type of the output after windowing is different from that of the
input, I use the aggregate function. For example:
SingleOutputStreamOperator data =
reducedPlaye
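For reference, a minimal sketch of case 2 (the input type, key selector, and window size are assumptions, not from the original job), where an AggregateFunction produces an output type different from the input:

import org.apache.flink.api.common.functions.AggregateFunction;

// Averages Integer inputs into a Double output; the accumulator holds {sum, count}.
public class AverageAggregate implements AggregateFunction<Integer, long[], Double> {
    @Override
    public long[] createAccumulator() {
        return new long[] {0L, 0L}; // {sum, count}
    }

    @Override
    public long[] add(Integer value, long[] acc) {
        acc[0] += value;
        acc[1] += 1;
        return acc;
    }

    @Override
    public Double getResult(long[] acc) {
        return acc[1] == 0 ? 0.0 : (double) acc[0] / acc[1];
    }

    @Override
    public long[] merge(long[] a, long[] b) {
        a[0] += b[0];
        a[1] += b[1];
        return a;
    }
}

// Usage (hypothetical stream and window):
// SingleOutputStreamOperator<Double> averaged = input
//         .keyBy(v -> v % 10)
//         .window(TumblingProcessingTimeWindows.of(Time.minutes(1)))
//         .aggregate(new AverageAggregate());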
Hi Hang,
thanks for pointing me to the mail thread. That is indeed interesting. Can we
maybe ping someone to get this done? Can I do something about it? Becoming a
PMC member might be difficult. :)
Are there still three PMC votes outstanding? I'm not entirely sure how to properly
check who is part of
Hi Sachin,
We can optimize this problem in the following ways:
- use
org.apache.flink.streaming.api.datastream.WindowedStream#aggregate(org.apache.flink.api.common.functions.AggregateFunction)
to reduce the amount of data held in state
- use TTL to clean up data that is no longer needed (see the sketch below)
- enable incremental checkpoints
- us
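For illustration, a minimal sketch of the TTL and incremental-checkpoint items (the state name, TTL duration, and checkpoint interval are assumed values):

import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;
import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StateTuningExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Incremental checkpoints: only upload RocksDB files that changed
        // since the last checkpoint.
        env.setStateBackend(new EmbeddedRocksDBStateBackend(true));
        env.enableCheckpointing(60_000);

        // TTL: drop state entries that have not been written for one day.
        StateTtlConfig ttl = StateTtlConfig.newBuilder(Time.days(1))
                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
                .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
                .cleanupInRocksdbCompactFilter(1000)
                .build();

        ValueStateDescriptor<Long> descriptor = new ValueStateDescriptor<>("lastSeen", Long.class);
        descriptor.enableTimeToLive(ttl);
        // ... register the descriptor inside a KeyedProcessFunction's open() ...
    }
}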