@Yun Tang<mailto:myas...@live.com>, thanks.
From: Yun Tang
Sent: Monday, June 15, 2020 11:30
To: Thomas Huang ; Flink
Subject: Re: The Flink job recovered with wrong checkpoint state.
Hi Thomas
The answer is yes. Without high availability, once the job m
Hi Flink Community,
Currently, I'm using yarn-cluster mode to submit Flink jobs on YARN. I
haven't set the high-availability configuration (ZooKeeper), but I have set a
restart strategy:
env.getConfig.setRestartStrategy(RestartStrategies.fixedDelayRestart(10, 3000))
The restart attempt count is 10 and the wait
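For context, `fixedDelayRestart(10, 3000)` means the job may be restarted up to 10 times, with a 3000 ms pause between attempts; once the budget is exhausted the job transitions to a terminal FAILED state. A plain-Java sketch of those semantics (this models the behavior only; it is not Flink's actual scheduler code, and all names here are illustrative):

```java
import java.util.function.IntPredicate;

// Illustrative model of a fixed-delay restart strategy:
// one initial run plus at most `maxRestarts` restarts,
// with a fixed `delayMillis` pause between attempts.
public class FixedDelayRestartDemo {

    /** Returns true if the task succeeds within the attempt budget. */
    static boolean runWithRestarts(IntPredicate task, int maxRestarts, long delayMillis) {
        for (int attempt = 0; attempt <= maxRestarts; attempt++) {
            if (task.test(attempt)) {
                return true; // job reached a terminal SUCCESS state
            }
            if (attempt < maxRestarts) {
                try {
                    Thread.sleep(delayMillis); // fixed delay before the next restart
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false; // restart budget exhausted: job is FAILED
    }

    public static void main(String[] args) {
        // A task that fails on its first two attempts, then succeeds.
        boolean ok = runWithRestarts(attempt -> attempt >= 2, 10, 1L);
        System.out.println(ok ? "RESTORED" : "FAILED");
    }
}
```

Note that without HA, this budget only covers task failures inside a live cluster; it does not help when the JobManager itself is lost, which is the point of Yun Tang's reply above.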
Hi Jingsong,
Cool, Thanks for your reply.
Best wishes.
From: Jingsong Li
Sent: Tuesday, May 19, 2020 10:46
To: Thomas Huang
Cc: Flink
Subject: Re: Is it possible to change 'connector.startup-mode' option in the
flink job
Hi Thomas,
Good to hea
Hi guys,
I'm using Hive to store Kafka topic metadata as follows:
CREATE TABLE orders (
  user_id    BIGINT,
  product    STRING,
  order_time TIMESTAMP(3),
  WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND
) WITH (
  'connector.type' = 'kafka',
Hi,
Actually, it seems like Spark's dynamic allocation saves more resources in
that case.
From: Arvid Heise
Sent: Monday, May 18, 2020 11:15:09 PM
To: Congxian Qiu
Cc: Sergii Mikhtoniuk ; user
Subject: Re: Process available data and stop with savepoint
Hi Sergii,
I'm wondering why you use a beta feature in production. Why not push the
latest state into a downstream sink like Redis, or HBase with Apache Phoenix?
From: Annemarie Burger
Sent: Monday, May 18, 2020 11:19:23 PM
To: user@flink.apache.org
Subject: Re: Incremental
I met this issue three months ago. We eventually concluded that the
Prometheus push gateway cannot handle high-throughput metric data. We
solved the issue via service discovery instead: we changed the Prometheus
metric reporter code to add registration logic, so the job can expose the ho