hat it takes to move beyond Scala 2.12.7… This has been
> a big pain point for us.
>
> I'm curious what target Scala versions people are currently interested in.
> I would've expected that everyone wants to migrate to Scala 3, for which
> several wrapper projects around Flink already exist.
…) to be available.
Thanks for all the work that went into making Flink what it is!
Gaël Renoux - Lead R Engineer
E - gael.ren...@datadome.co
W - www.datadome.co
On Wed, Oct 5, 2022 at 9:30 AM Maciek Próchniak wrote:
> Hi Martin,
>
> Could you please remind what was the conclusion of discussion on
> the program will always
> fail and it will influence the normal procedure, so it has to be stopped.
> If you don't need to recover from the checkpoint, maybe you could disable
> it, but that's not recommended for a streaming job.
>
> Best,
> Hangxiang.
>
> On Tue, May 24, 2022 at 12:
Got it, thank you. I misread the documentation and thought the async
referred to the task itself, not the process of taking a checkpoint.
I guess there is currently no way to make a job never fail on a failed
checkpoint?
Gaël Renoux - Lead R Engineer
E - gael.ren...@datadome.co
W
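For what it's worth, recent Flink releases do expose a knob for this: a job can be configured to tolerate a number of checkpoint failures instead of failing outright. A minimal config sketch, assuming a recent Flink 1.x (the exact YAML key name is an assumption from later releases, so check it against your version's docs):

```yaml
# flink-conf.yaml sketch (key name assumed from recent Flink 1.x docs):
# tolerate up to 3 checkpoint failures before failing the job.
execution.checkpointing.tolerable-failed-checkpoints: 3
```

The same setting is also reachable programmatically via `CheckpointConfig#setTolerableCheckpointFailureNumber` (available since Flink 1.9).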
s.StreamTask.performCheckpoint(StreamTask.java:1315)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:1258)
> ... 16 more
> Caused by:
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaException: Failed to
> send data to Kafka: Expiring 1 record(s) for result-31:120003 ms has passed
> since batch creation
>
Any help is appreciated.
Gaël Renoux - Lead R Engineer
E - gael.ren...@datadome.co
W - www.datadome.co
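The "Expiring 1 record(s)" error above is the Kafka producer giving up on a batch once `delivery.timeout.ms` elapses; its default is 120000 ms, which lines up with the ~120003 ms in the trace. One common mitigation is raising the producer's delivery timeout in the properties handed to the Kafka sink. A minimal, self-contained sketch; the class and helper names are illustrative, not from the thread, and the chosen values are assumptions to tune for your setup:

```java
import java.util.Properties;

public class KafkaProducerTuning {
    // Hypothetical helper: builds producer properties that raise the
    // delivery timeout beyond the 120 s default behind the
    // "Expiring N record(s) ... ms has passed since batch creation" error.
    static Properties tunedProducerProps() {
        Properties props = new Properties();
        // delivery.timeout.ms defaults to 120000 (2 min); raise it so
        // transient broker slowness does not expire whole batches.
        props.setProperty("delivery.timeout.ms", "600000");
        // request.timeout.ms must stay below delivery.timeout.ms.
        props.setProperty("request.timeout.ms", "60000");
        return props;
    }

    public static void main(String[] args) {
        Properties p = tunedProducerProps();
        System.out.println(p.getProperty("delivery.timeout.ms")); // prints 600000
    }
}
```

These properties would be passed to the FlinkKafkaProducer constructor in place of (or merged into) the existing producer config.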
could visit your
> restored broadcast state.
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/state.html#checkpointedfunction
>
> Best
> Yun Tang
>
> ------
> *From:* Gaël Renoux
> *Sent:* Tuesday, December 17, 20
is overkill;
- it could only update the metric on the current subtask, not the others,
so one subtask could lag behind.
Am I missing something here? Is there any way to trigger a reset of the
value when the broadcast state is reconstructed?
Thanks for any help,
Gaël Renoux
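Yun Tang's suggestion from [1] can be sketched roughly as follows. This is a non-runnable outline against Flink's streaming API, assuming the broadcast function also implements CheckpointedFunction so the restored state is visible at initialization; every name besides the Flink interfaces is made up for illustration:

```java
// Sketch only: depends on Flink's APIs, not compilable stand-alone.
public class RuleCounter
        extends BroadcastProcessFunction<Event, Rule, Event>
        implements CheckpointedFunction {

    // Assumed descriptor; must match the one used when writing the state.
    private static final MapStateDescriptor<String, Rule> ruleStateDescriptor =
            new MapStateDescriptor<>("rules", String.class, Rule.class);

    private transient int ruleCount; // value exposed via a Gauge metric

    @Override
    public void initializeState(FunctionInitializationContext ctx) throws Exception {
        // Called both on a fresh start and on restore: recompute the metric
        // from the restored broadcast state so every subtask starts consistent.
        int n = 0;
        for (Map.Entry<String, Rule> ignored :
                ctx.getOperatorStateStore()
                   .getBroadcastState(ruleStateDescriptor).entries()) {
            n++;
        }
        ruleCount = n;
    }

    @Override
    public void snapshotState(FunctionSnapshotContext ctx) { /* nothing extra */ }

    // processElement / processBroadcastElement elided; the broadcast side
    // would update ruleCount as rules arrive.
}
```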
> already had a rules source
> and you can send rules in the #open function at startup if your rules source
> inherits from RichParallelSourceFunction.
>
>
> Best,
>
> Jiayi Liao
>
> Original Message
> *Sender:* Gaël Renoux
> *Recipient:* user
> *Date:* Monday, Nov
. On startup, I need to request all rules to be sent at once
(the emitter normally sends updated rules only), in case the rule state has
been lost (happens when we evolve the rule model, for instance), and this
is done through a Kafka message.
Thanks in advance!
Gaël Renoux
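Jiayi Liao's suggestion above can be sketched like this: fetch the full rule set once when the source opens, emit it first, then stream incremental updates. A non-runnable outline against Flink's (legacy) SourceFunction API; every name except the Flink interfaces is hypothetical:

```java
// Sketch only: not compilable without Flink on the classpath.
public class RuleSource extends RichParallelSourceFunction<Rule> {

    private transient List<Rule> initialRules;
    private volatile boolean running = true;

    @Override
    public void open(Configuration parameters) throws Exception {
        // Fetch the full rule set once at startup (e.g. from an external
        // store), so a fresh job never starts with an empty rule state.
        initialRules = fetchAllRules(); // hypothetical helper
    }

    @Override
    public void run(SourceContext<Rule> ctx) throws Exception {
        for (Rule r : initialRules) {
            ctx.collect(r);           // replay the snapshot first
        }
        while (running) {
            // ...then emit incremental rule updates as they arrive.
        }
    }

    @Override
    public void cancel() { running = false; }
}
```

This avoids the separate "send everything" Kafka request, at the cost of the source needing direct access to the rule store.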
; *M O T A W O R D*
>>>> The World's Fastest Human Translation Platform.
>>>> oy...@motaword.com — www.motaword.com
>>>>
>>>>
>>>> On Fri, Sep 27, 2019 at 2:42 PM John Smith
>>>> wrote:
>>>>
>>>
ing-job-name-or-job-id>,
but it's two years old and the proposed solution implies a ton of
boilerplate. We're using Flink 1.8.1 (and logback and logz.io, if that
matters).
Thank you for your help!
Gaël Renoux
.lang.reflect.Method.invoke(Method.java:498)
>
> at
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
>
> at
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)