When I enable exactly_once on my Kafka sink, I am seeing extremely long
initialization times (over an hour), especially after restoring from a
savepoint. In the logs I see the job constantly initializing thousands of
Kafka producers, like this:
2023-01-31 14:39:58,150 INFO org.apache.kafka.clients.p
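For context, a minimal sketch of the kind of sink configuration in question,
assuming Flink's KafkaSink builder (broker, topic, and transactional-id
prefix are placeholders, not values from this thread):

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

// EXACTLY_ONCE makes the sink use Kafka transactions; on restore the sink
// has to abort transactions left over from previous executions, which is
// when producers like the ones in the log above get created.
KafkaSink<String> sink = KafkaSink.<String>builder()
        .setBootstrapServers("broker:9092")           // placeholder broker
        .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                .setTopic("output-topic")             // placeholder topic
                .setValueSerializationSchema(new SimpleStringSchema())
                .build())
        .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
        .setTransactionalIdPrefix("my-app")           // placeholder prefix
        .build();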
Is it possible to use the Flink Debezium format with the changelogs
generated by the Debezium MongoDB Connector? I have tried multiple
configurations (with and without the JSON schema included), and I always
receive a java.io.IOException: Corrupt Debezium JSON message. Could it be
related to the fact t
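For reference, a debezium-json source table is typically declared like this
via the Table API (table name, topic, broker, and columns are illustrative
only; shown with executeSql so the snippet stays in Java):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

TableEnvironment tEnv =
        TableEnvironment.create(EnvironmentSettings.inStreamingMode());

// 'debezium-json.schema-include' must match how the connector's JSON
// converter is configured: 'true' only if the messages embed the schema.
tEnv.executeSql(
        "CREATE TABLE users (\n"
      + "  id STRING,\n"
      + "  name STRING\n"
      + ") WITH (\n"
      + "  'connector' = 'kafka',\n"
      + "  'topic' = 'dbserver1.inventory.users',\n"
      + "  'properties.bootstrap.servers' = 'broker:9092',\n"
      + "  'scan.startup.mode' = 'earliest-offset',\n"
      + "  'format' = 'debezium-json',\n"
      + "  'debezium-json.schema-include' = 'false'\n"
      + ")");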
/ need to recover.
>
> Best regards,
>
> Martijn
>
> [1]
> https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/filesystems/gcs/
>
> On Mon, 14 Mar 2022 at 16:00, Bobby Richard
> wrote:
>
>> Thanks Martijn, are there any alternatives to write to
> https://twitter.com/MartijnVisser82
>
> [1] https://issues.apache.org/jira/browse/FLINK-11838
> [2]
> https://nightlies.apache.org/flink/flink-docs-release-1.14/api/java/org/apache/flink/core/fs/RecoverableWriter.html
>
>
> On Mon, 14 Mar 2022 at 15:21, Bobby Richard
> wrote:
I am receiving the following exception when attempting to write to GCS with
the FileSink in Flink 1.14.3, using flink-shaded-hadoop-2 2.8.3-10.0 and the
GCS connector hadoop2-2.1.1.
java.lang.UnsupportedOperationException: Recoverable writers on Hadoop are
only supported for HDFS
I am able to write chec
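For context, a minimal sketch of the kind of job that triggers this,
assuming the FileSink from flink-connector-files (the bucket path is a
placeholder, and stream stands for an existing DataStream<String>):

import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;

// The FileSink needs a RecoverableWriter for the target filesystem to
// commit files on checkpoints; the shaded-Hadoop filesystem only provides
// one for HDFS, which is where the UnsupportedOperationException above
// comes from.
FileSink<String> sink = FileSink
        .forRowFormat(new Path("gs://my-bucket/output"),   // placeholder bucket
                new SimpleStringEncoder<String>("UTF-8"))
        .build();

stream.sinkTo(sink);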
> reporting the issue
> and analyzing it. I have created an issue for tracking it [1].
>
> cc Becket.
>
> [1] https://issues.apache.org/jira/browse/FLINK-21691
>
> Cheers,
> Till
>
> On Mon, Mar 8, 2021 at 3:35 PM Bobby Richard
> wrote:
>
>> I'
I'm receiving the following exception when trying to use a KafkaSource from
the new Data Source API.
Exception in thread "main" java.lang.NullPointerException
	at org.apache.flink.connector.kafka.source.reader.deserializer.ValueDeserializerWrapper.getProducedType(ValueDeserializerWrapper.java:79)
	at
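For reference, a hedged sketch of a KafkaSource built with a value-only
deserializer (broker, topic, and group id are placeholders). Passing a
DeserializationSchema instance avoids the class-based
ValueDeserializerWrapper path that throws the NPE above, which the thread
tracks as FLINK-21691:

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;

// setValueOnlyDeserializer takes a DeserializationSchema instance, so
// getProducedType() can be answered without lazily instantiating a
// Kafka Deserializer from a class.
KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("broker:9092")    // placeholder broker
        .setTopics("input-topic")              // placeholder topic
        .setGroupId("my-group")                // placeholder group id
        .setStartingOffsets(OffsetsInitializer.earliest())
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .build();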