As noted in the SO question, it's a bit confusing to me how the `checkpointing.mode`
delivery guarantees interact with the ones for the different sinks, and in
particular with the Kafka one.
Based on the error I had, I understand that if I use `EXACTLY_ONCE` for the
checkpoints and I indicate nothing in the Kafka
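For concreteness, here is a minimal sketch (not from the thread; broker address, topic, and id prefix are placeholders) of pairing `EXACTLY_ONCE` checkpointing with a matching delivery guarantee on the `KafkaSink`:

```java
// Sketch: the checkpointing mode and the sink's delivery guarantee are
// configured independently, so they have to be aligned by hand.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(60_000L, CheckpointingMode.EXACTLY_ONCE);

KafkaSink<String> sink = KafkaSink.<String>builder()
    .setBootstrapServers("broker:9092")              // placeholder address
    .setRecordSerializer(
        KafkaRecordSerializationSchema.builder()
            .setTopic("output-topic")                // placeholder topic
            .setValueSerializationSchema(new SimpleStringSchema())
            .build())
    .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
    // EXACTLY_ONCE on the Kafka sink requires a transactional id prefix
    .setTransactionalIdPrefix("my-app")
    .build();
```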
If the dob_dim_account dimension table uses the JDBC connector, Flink reads all of
its data once at initialization; subsequent updates in the database will not
trigger any Flink computation.
To solve this, dob_dim_account needs to become a stream table.
Zhiwen Sun
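One common alternative to the one-shot JDBC scan (a sketch, not from the thread; all table and column names are placeholders) is a processing-time lookup join, which re-queries the JDBC table per incoming row, optionally with a cache:

```sql
-- Sketch: lookup join against a JDBC dimension table.
-- The probe side needs a processing-time attribute.
CREATE TABLE orders (
  account_id STRING,
  amount     DECIMAL(10, 2),
  proc_time  AS PROCTIME()
) WITH (...);

SELECT o.account_id, o.amount, d.account_name
FROM orders AS o
JOIN dob_dim_account FOR SYSTEM_TIME AS OF o.proc_time AS d
  ON o.account_id = d.account_id;
```

With the JDBC connector, `'lookup.cache.max-rows'` and `'lookup.cache.ttl'` on the dimension table bound how stale the looked-up values can be.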
On Thu, Nov 17, 2022 at 1:56 PM Jason_H wrote:
> Hi,
> This approach requires CDC, but in our current plan the leadership won't
> consider CDC; they only want to solve this with Flink SQL.
>
> Jason_H
>
Hi Yogi,
I think the problem comes from poetry depending on the metadata in PyPI.
This problem has been reported in
https://issues.apache.org/jira/browse/FLINK-29817 and I will fix it in
1.16.1.
Best,
Xingbo
Yogi Devendra wrote on Thu, Nov 17, 2022 at 06:21:
> Dear community/maintainers,
>
> Thanks for
Hi,
This approach requires CDC, but in our current plan the leadership won't
consider CDC; they only want to solve this with Flink SQL.
Jason_H
hyb_he...@163.com
Replied Message
| From | 任召金 |
| Date | 11/15/2022 09:52 |
| To | user-zh |
| Subject | Re: flinksql join |
Hello, you could try turning the MySQL data into a stream via CDC and then inner
joining it with the main stream; pay attention to the state TTL.
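The suggestion above can be sketched roughly as follows (hypothetical table names and connection details; the `mysql-cdc` connector is a separate Flink CDC dependency):

```sql
-- Sketch: CDC source joined to the main stream, with state TTL
-- bounding how long join state is retained.
SET 'table.exec.state.ttl' = '1 h';

CREATE TABLE dim_account (
  account_id   STRING,
  account_name STRING,
  PRIMARY KEY (account_id) NOT ENFORCED
) WITH (
  'connector'     = 'mysql-cdc',
  'hostname'      = '...',
  'database-name' = '...',
  'table-name'    = '...'
);

SELECT m.*, d.account_name
FROM main_stream AS m
INNER JOIN dim_account AS d
  ON m.account_id = d.account_id;
```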
Dear community/maintainers,
Thanks for the lovely platform of Apache Flink.
I got the following error when adding the apache-flink 1.16.0 dependency in my
Python project.
Given below is the stack trace for further investigation.
When I tried using a lower version (1.15.2) for the same, I was able to move
I gave a talk about that setup:
https://www.youtube.com/watch?v=tiGxEGPyqCg_channel=FlinkForward
The documentation suggests using unaligned checkpoints in case of
backpressure (
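For reference, enabling unaligned checkpoints is a flink-conf.yaml change (a sketch; the timeout value is an arbitrary example):

```yaml
# Sketch: unaligned checkpoints let barriers overtake buffered records
# under backpressure, at the cost of larger checkpoint state.
execution.checkpointing.unaligned: true
# Optional: stay aligned, and only switch to unaligned once barrier
# alignment has stalled for this long.
execution.checkpointing.aligned-checkpoint-timeout: 30 s
```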
Hello,
I am doing some tests with the operator and, if I'm not mistaken, using
last-state upgrade means that, when something is changed in the CR, no
savepoint is taken and the pods are simply terminated. Is that a
requirement from Flink HA? I would have thought last-state would still use
Ah I see, cool, thanks.
Regards,
Alexis.
On Wed, Nov 16, 2022 at 15:50, Gyula Fóra wrote:
> This has been changed in the current snapshot release:
> https://issues.apache.org/jira/browse/FLINK-28979
>
> It will be part of the 1.3.0 version.
>
> On Wed, Nov 16, 2022 at 3:32 PM Alexis
This has been changed in the current snapshot release:
https://issues.apache.org/jira/browse/FLINK-28979
It will be part of the 1.3.0 version.
On Wed, Nov 16, 2022 at 3:32 PM Alexis Sarda-Espinosa <
sarda.espin...@gmail.com> wrote:
> Hello,
>
> Is there a particular reason the operator doesn't
Hello,
Is there a recommended configuration for the restore mode of jobs managed
by the operator?
Since the documentation states that the operator keeps a savepoint history
to perform cleanup, I imagine restore mode should always be NO_CLAIM, but I
just want to confirm.
Regards,
Alexis.
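For what it's worth, the restore mode can be set through the CR's Flink configuration (a sketch; whether the operator requires NO_CLAIM is exactly the question being asked above):

```yaml
# Sketch: fragment of a FlinkDeployment CR setting the restore mode.
spec:
  flinkConfiguration:
    execution.savepoint-restore-mode: NO_CLAIM
```

With NO_CLAIM, Flink does not take ownership of the restored snapshot's files, so an external party (here, the operator's savepoint-history cleanup) can safely delete them later.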
Hello,
Is there a particular reason the operator doesn't set owner references for
the Deployments it creates as a result of a FlinkDeployment CR? This makes
tracking in the Argo CD UI impossible. (To be clear, I mean a reference
from the Deployment to the FlinkDeployment).
Regards,
Alexis.
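For clarity, this is the shape of the ownerReference being asked about (a sketch; name and uid are placeholders, and the operator does not currently set this):

```yaml
# Sketch: an ownerReference on the JobManager Deployment pointing back
# at the FlinkDeployment CR, which is what tools like Argo CD use to
# build their resource tree.
metadata:
  ownerReferences:
    - apiVersion: flink.apache.org/v1beta1
      kind: FlinkDeployment
      name: my-flink-job
      uid: <uid-of-the-CR>
      controller: true
```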
Yes. I do use RocksDB for (incremental) checkpointing. During each checkpoint,
15-20 GB of state gets created (new state added, some expired). I make use of
FIFO compaction.
I’m a bit surprised you were able to run with 10+TB state without unaligned
checkpoints because the performance in my
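The setup described above corresponds roughly to this configuration (a sketch of the relevant flink-conf.yaml keys, not the poster's actual config):

```yaml
# Sketch: RocksDB backend with incremental checkpoints and FIFO compaction.
state.backend: rocksdb
state.backend.incremental: true
state.backend.rocksdb.compaction.style: FIFO
```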
Hi Piotr,
the option you mention is applicable only for the deprecated
KafkaProducer, is there an equivalent to the modern KafkaSink? I found
this article comparing the behavior of the two:
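The specific option referred to is truncated here, but as a general sketch, `KafkaSink` accepts raw Kafka producer properties via `setProperty`, which is where most legacy `FlinkKafkaProducer` producer settings map to (broker, topic, and the property value below are placeholders):

```java
// Sketch: passing a raw producer property to the modern KafkaSink.
KafkaSink<String> sink = KafkaSink.<String>builder()
    .setBootstrapServers("broker:9092")
    .setRecordSerializer(
        KafkaRecordSerializationSchema.builder()
            .setTopic("output-topic")
            .setValueSerializationSchema(new SimpleStringSchema())
            .build())
    // forwarded verbatim to the underlying KafkaProducer
    .setProperty("transaction.timeout.ms", "900000")
    .build();
```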
Hi Michael,
yeah, it will be addressed in FLINK-28867.
Best regards,
Jing
On Wed, Nov 16, 2022 at 2:58 AM liu ron wrote:
> It will be addressed in FLINK-28867.
>
> Best,
> Ron
>
> Benenson, Michael via user wrote on Wed, Nov 16, 2022 at 08:47:
>
>> Thanks, Jing
>>
>>
>>
>> Do you know, if this problem