Hi Harish,
Jiabao helped troubleshoot the issue[1] and fixed it very efficiently, in less
than 24 hours. Thanks, Jiabao!
You can build the MongoDB connector based on the latest main branch, or you can
wait for the next connector release.
Best,
Leonard
[1] https://issues.apache.org/jira/browse/FLINK-32446
Hi Harish,
Thanks for reporting this issue. There are currently 5 ways to write:
1. Flush only on checkpoint
'sink.buffer-flush.interval' = '-1' and 'sink.buffer-flush.max-rows' = '-1'
2. Flush for every single element
'sink.buffer-flush.interval' = '0' or 'sink.buffer-flush.max-rows' = '1'
3.
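As a sketch of option 1, a MongoDB sink could be declared as below (the table name, schema, URI, database, and collection here are hypothetical, not from the original thread):

```sql
-- Hypothetical MongoDB sink table: flush only on checkpoint (option 1)
CREATE TABLE mongo_sink (
  id STRING,
  cnt BIGINT,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'mongodb',
  'uri' = 'mongodb://localhost:27017',
  'database' = 'test_db',
  'collection' = 'test_collection',
  'sink.buffer-flush.interval' = '-1',
  'sink.buffer-flush.max-rows' = '-1'
);
```

Setting both options to '-1' disables interval-based and size-based flushing, so buffered writes are only flushed when a checkpoint completes.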
Hi,
I am using Flink version 1.17.1 and flink-mongodb-sql-connector version
1.0.1-1.17.
Below is the data pipeline flow.
Source 1 (Kafka topic using Kafka connector) -> Window Aggregation (legacy
grouped window aggregation) -> Sink (Kafka topic using upsert-kafka connector)
-> Sink (MongoDB