Hi Community,
I am reading CSV data using the DataStream file source connector and need to
convert it into Avro-generated specific records.
I am using CsvReaderFormat with CsvSchema, but this supports only the
primitive types of Avro (and even those excluding null and bytes).
Is there any support provided f
I have a Flink SQL task (savepoints enabled).
I want to change it, so I stopped the sink and source, added a column to the Oracle table, and modified the SQL in Flink. When I committed it, I got an error:
org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStra
My job reads data from MySQL and writes to Doris. It crashes between 20
minutes and an hour after starting.
org.apache.flink.runtime.JobException: Recovery is suppressed by
FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=10,
backoffTimeMS=1)
at org.apache.flink.runtime.executiongraph.failo
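The stack trace shows that the job exhausted its restart strategy (10 attempts with a 1 ms backoff) rather than the root cause itself, which will be in the nested exception further down. Independent of that root cause, the restart strategy can be made more forgiving in flink-conf.yaml; a sketch using the real configuration keys, with example values that are assumptions:

```yaml
# Fixed-delay restart strategy: allow more attempts with a longer pause,
# so transient failures (e.g. brief connector outages) do not kill the job.
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 20   # example value
restart-strategy.fixed-delay.delay: 30 s    # example value
```

This only buys time for transient failures; the underlying exception still needs to be diagnosed.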
Hi, Anuj,
I searched Jira and found the related issue [1], but it is still open.
Best,
Hang
[1] https://issues.apache.org/jira/browse/FLINK-6185
Anuj Jain wrote on Fri, Apr 14, 2023 at 14:58:
> Hi Community,
>
> Does Flink File Sink support compression of output files, to reduce the
> file s
Hi, Kirti,
I think you need a custom converter for your CSV files. The converters
provided by Flink only define how to translate the data into a Flink type.
Best,
Hang
Kirti Dhar Upadhyay K via user wrote on Fri, Apr 14, 2023 at 15:27:
> Hi Community,
>
>
>
> I am reading CSV data using data stream file sour
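A custom converter of the kind suggested above can be sketched as a plain mapping function from a CSV line to the target record. The class and field names below (`UserRecord` with a nullable `id` and a `name`) are assumptions for illustration; in a real job this logic would sit in a `MapFunction<String, UserRecord>` (or a custom `DeserializationSchema`) and would build the Avro-generated specific class instead of the stand-in POJO used here:

```java
// Sketch: parse one CSV line into a record, handling a nullable field.
// UserRecord is a stand-in POJO; a real job would construct the
// Avro-generated specific record here instead.
public class CsvToRecordSketch {

    public static class UserRecord {
        public Long id;      // nullable: an empty CSV field becomes null
        public String name;
    }

    public static UserRecord parse(String csvLine) {
        // limit -1 keeps trailing empty fields instead of dropping them
        String[] fields = csvLine.split(",", -1);
        UserRecord record = new UserRecord();
        record.id = fields[0].isEmpty() ? null : Long.parseLong(fields[0]);
        record.name = fields[1];
        return record;
    }
}
```

This keeps the null handling that CsvSchema cannot express in one place, and the same function can be unit-tested without starting a Flink job.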
Hi, igyu,
It seems the state of the join SQL cannot be recovered correctly. Did you
change the columns in the join SQL? If so, that may be why recovery from
the checkpoint fails.
Best,
Hang
igyu wrote on Fri, Apr 14, 2023 at 16:13:
> I have a flink-SQL task. (enable savepoint)
> I want change
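If the state of the changed join is not actually needed after the schema change, one possible workaround (a sketch, not a guaranteed fix for this case) is to resubmit the job from the savepoint while allowing unmatched state to be skipped, via Flink's `--allowNonRestoredState` CLI flag:

```shell
# Resume from a savepoint, skipping state that cannot be mapped to the
# modified job graph. The savepoint path and jar name are placeholders.
./bin/flink run -s /path/to/savepoint --allowNonRestoredState my-job.jar
```

Note that any state skipped this way is lost, so the join would rebuild its state from scratch.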
Hi community,
I am pretty new to the Flink ecosystem, and I am trying to create a Flink
consumer that consumes messages from an Azure Event Hub via its Apache
Kafka endpoint. My Flink job runs on an Ubuntu VM.
When I run the job via the mvn exec command, the consumer receives messages.
Following a different proc
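For reference, the usual client settings for the Event Hubs Kafka endpoint look like the following; the namespace and connection string are placeholders, and the real values come from the Event Hub's shared access policy:

```properties
# Azure Event Hubs Kafka endpoint: clients connect on port 9093 with
# SASL_SSL, using the connection string as the SASL password.
bootstrap.servers=YOUR_NAMESPACE.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="YOUR_EVENTHUB_CONNECTION_STRING";
```

When a job works under `mvn exec` but not when submitted another way, a common cause is that these properties, or the Kafka connector jar, are on the local classpath but missing from the other submission path.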
Unsubscribe
--
Sent from Sina Mail client
Thanks David and others.
Our DAG has multiple sources which are not connected to each other. If one
of them fails, I believe Flink can restart a single region with the default
scheduler, but that is not the case for the adaptive scheduler. Do you
think the adaptive scheduler somehow supports region pipel
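For context, the scheduler and the failover strategy are selected via configuration; a minimal sketch of the keys involved, noting that whether region failover actually takes effect under the adaptive scheduler is exactly the open question here:

```yaml
# Default scheduler with region failover: only the failed region restarts.
jobmanager.execution.failover-strategy: region
# Switching to the adaptive scheduler changes restart behavior.
jobmanager.scheduler: adaptive
```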