When a Flink job fails, how can we notify the relevant parties immediately? Existing approaches:
Approach 1: Flink's own REST API. The client has to keep polling it, which is not real-time and wastes resources; requests are also affected by network jitter and sometimes time out, even though that does not mean the job has a problem.
Approach 2: Expose job metrics to Prometheus. Since Prometheus pulls metrics periodically (every 10s~20s), this cannot meet the real-time requirement either.
Could a Flink job call some hook right before it fails to notify the relevant parties? If I have to change Flink myself, which class should I modify? Thanks!
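As far as I know, the closest built-in client-side hook is registering a JobListener on the execution environment (org.apache.flink.core.execution.JobListener, via StreamExecutionEnvironment#registerJobListener); its onJobExecuted(JobExecutionResult, Throwable) callback receives the failure cause when the job finishes or fails. Below is a minimal, self-contained sketch of that callback pattern; the FailureHook interface and names are hypothetical stand-ins, not Flink APIs:

```java
import java.util.ArrayList;
import java.util.List;

public class FailureHookSketch {

    /** Hypothetical hook mirroring JobListener#onJobExecuted(result, throwable). */
    interface FailureHook {
        void onJobExecuted(String jobName, Throwable failure);
    }

    static final List<String> notifications = new ArrayList<>();

    public static void main(String[] args) {
        // Replace the body with a call to your alerting channel (IM, SMS, webhook, ...).
        FailureHook alert = (jobName, failure) -> {
            if (failure != null) {
                notifications.add("ALERT: " + jobName + " failed: " + failure.getMessage());
            }
        };

        // Simulate the client observing a failed execution.
        alert.onJobExecuted("my-job", new RuntimeException("task lost"));
        System.out.println(notifications.get(0));
    }
}
```

Note that JobListener runs on the client, so it only helps if the client stays attached to the job; it is not a hook inside the JobManager itself.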
Generally, if this message only shows up briefly during a failover or restart, it is fine; Flink will recover on its own. If it keeps appearing and the job cannot recover, check in the Web UI which tasks failed to deploy.
On Thu, Sep 29, 2022 at 10:23 AM yidan zhao wrote:
> OK, thanks for the suggestion; I'll try it the next time the problem shows up. Things are fine after the restart for now. It seems to only happen on clusters that have been running for a long time.
>
> Lijie Wang wrote on Thu, Sep 29, 2022 at 10:17:
> >
> > Hi,
> >
> > You can try increasing taskmanager.network.request-backoff.max a bit
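For reference, that option is set in flink-conf.yaml. A sketch follows; the value is an illustrative example, not a recommendation, and as far as I know the default is 10000 ms:

```yaml
# flink-conf.yaml
# Maximum backoff (in ms) for partition requests of input channels.
# The default is 10000; increasing it gives downstream tasks more time
# to locate their upstream result partitions during deployment.
taskmanager.network.request-backoff.max: 30000
```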
When Kafka Connect runs with a schema registry, it registers the table's schema in the schema registry and publishes messages to the Kafka topic in Avro format. How can Flink CDC be set up to achieve the same effect? I ask because I want to use the Debezium-based ingestion tool described in the following Hudi blog post to load the change data into the lake:
https://hudi.apache.org/blog/2022/01/14/change-data-capture-with-debezium-and-apache-hudi/
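As far as I know, the Flink SQL side of this is the kafka connector combined with the debezium-avro-confluent changelog format, which serializes rows as Debezium-style Avro and registers the schema with a Confluent Schema Registry. A hedged sketch; the table, columns, topic, and URLs are placeholders:

```sql
-- Sketch only: column list, topic, and endpoints are placeholders.
CREATE TABLE orders_sink (
  id BIGINT,
  amount DECIMAL(10, 2)
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'kafka:9092',
  'format' = 'debezium-avro-confluent',
  'debezium-avro-confluent.url' = 'http://schema-registry:8081'
);
```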
My Flink app uses Logback for logging; when submitting it from the operator
I get this error:
ERROR in ch.qos.logback.classic.joran.JoranConfigurator@7364985f - Could
not open URL [file:/opt/flink/conf/logback-console.xml].
java.io.FileNotFoundException: /opt/flink/conf/logback-console.xml (No
Hi all,
I need to configure a keyed global window that triggers a reduce function
for all the events in each key group before processing finishes and the job
closes.
I have something similar for the real-time (streaming) version of the job,
configured with a processing-time gap:
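The bounded/batch semantics described above (reduce all events per key, emitting only once the input ends) can be sketched in plain Java as follows; this is an illustration of the desired behavior, not Flink API, which as far as I know would instead involve GlobalWindows plus a trigger that fires at end of input:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BinaryOperator;

public class GlobalWindowSketch {

    /** Consume the whole bounded input, then emit one reduced value per key. */
    static Map<String, Integer> reducePerKey(
            List<Map.Entry<String, Integer>> events, BinaryOperator<Integer> reduce) {
        Map<String, Integer> state = new HashMap<>();
        for (Map.Entry<String, Integer> e : events) {
            state.merge(e.getKey(), e.getValue(), reduce); // accumulate per key
        }
        return state; // "fires" only after the input is exhausted
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> events = List.of(
                Map.entry("a", 1), Map.entry("b", 10), Map.entry("a", 2));
        System.out.println(reducePerKey(events, Integer::sum));
    }
}
```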
Hello Edwin,
Would you mind sharing a simple Flink SQL DDL for the table you are creating
with the kafka connector and the debezium-avro-confluent format?
Also, can you elaborate on the mechanism that initially publishes to the
schema registry, and share the corresponding schema?
In a nutshell,
Sorry, I meant the 180 seconds. Where does Flink decide that 180 seconds is
the cutoff point, and can I increase it?
On Thu., Sep. 29, 2022, 7:02 a.m. John Smith,
wrote:
> Is there a way to increase the 30 seconds to 60? Where is that 30 second
> timeout set?
>
> I have jdbc query timeout but
Is there a way to increase the 30 seconds to 60? Where is that 30 second
timeout set?
I have a JDBC query timeout set, but at some point at night the insert takes a bit
longer because of index rebuilding.
On Wed., Sep. 28, 2022, 5:02 a.m. Congxian Qiu,
wrote:
> Hi John
>
> Yes, the whole TaskManager
I see. As mentioned, Flink uses Dynamic Tables, which are a logical
concept. They don't necessarily (fully) materialize during query execution.
On Tue, Sep 27, 2022 at 1:35 PM wrote:
>
> I was thinking if I can use Flink to process large files and save result
> to another file or database
Hi Edwin,
I suspect that's because those fields are considered metadata, which is
handled separately. There's
https://issues.apache.org/jira/browse/FLINK-20454 for adding metadata
support to the Debezium format, with a PR provided but not yet reviewed.
If you could have a look at the PR