posted in dev-mail-list.

Best
Yun Tang
--
From: Gagan Agrawal
Sent: Thursday, November 1, 2018 13:38
To: myas...@live.com
Cc: happydexu...@gmail.com; user@flink.apache.org
Subject: Re: Savepoint failed with error "Checkpoint expired before completing"

Thanks Yun for your inputs. Yes, increasing the checkpoint timeout helps and
we are able to take savepoints now. In our case we wanted to increase pa
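For readers who land on this thread with the same error: the fix Gagan describes can be set programmatically on the execution environment. A minimal sketch, assuming the DataStream API; the 30-minute value is an illustrative assumption, not a number from the thread:

```java
// Sketch only, not the poster's actual job. A savepoint runs through the same
// machinery as a checkpoint, so the checkpoint timeout also bounds how long a
// savepoint may take before failing with "Checkpoint expired before completing".
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointTimeoutSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.enableCheckpointing(5 * 60 * 1000L);  // 5-minute interval, as in the original post
        env.getCheckpointConfig().setCheckpointTimeout(30 * 60 * 1000L);  // default is 10 minutes

        // ... build the job and call env.execute() ...
    }
}
```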
--
From: Gagan Agrawal
Sent: Wednesday, October 31, 2018 19:03
To: happydexu...@gmail.com
Cc: user@flink.apache.org
Subject: Re: Savepoint failed with error "Checkpoint expired before completing"

Hi Henry,
Thanks for your response. However we don't face this issue during normal
run as we have incremental checkpoints. Only when we try to take savepoint
(which tries to save entire state in one go), we face this problem.
Gagan
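Gagan's distinction matters for sizing the timeout: an incremental checkpoint uploads only newly created RocksDB SST files, while a savepoint writes the full state in one go. A back-of-envelope estimate makes the timeout failure plausible; only the ~100GB state size comes from the thread, and the 50 MB/s aggregate upload rate to S3 is an assumption:

```java
// Rough duration estimate for a full-state savepoint. Only the 100 GB state
// size is from the thread; the upload rate is an illustrative assumption.
public class SavepointDurationEstimate {
    public static void main(String[] args) {
        double stateGb = 100.0;        // state size reported in the original post
        double uploadMbPerSec = 50.0;  // assumed aggregate throughput to S3
        double seconds = stateGb * 1024 / uploadMbPerSec;
        System.out.printf("estimated savepoint duration: ~%.0f minutes%n", seconds / 60);
        // With these numbers the savepoint needs roughly 34 minutes, far beyond
        // a 10-minute default checkpoint timeout.
    }
}
```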
On Wed, Oct 31, 2018 at 11:41 AM 徐涛 wrote:
> Hi Gagan,
> I have met with the checkpoint timeout error too.
> In my case, it was not due to a big checkpoint size, but due to a slow sink
> causing high backpressure on the upstream operators. Then the barrier may
> take a long time to arrive at the sink.
> Please check if it is the
Hi,
We have a Flink job (Flink version 1.6.1) which unions 2 streams to pass
through a custom KeyedProcessFunction with RocksDB state store, which finally
creates another stream into Kafka. Current size of checkpoint is around
~100GB and checkpoints are saved to s3 with 5 mins interval and incremental
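The setup described above (the post is truncated) can be sketched roughly as follows. All concrete names here are invented placeholders: the S3 bucket path, hosts, key selector, and processing logic are assumptions, and the real job's Kafka sink is replaced with a print sink:

```java
// Rough sketch of the described topology: two streams unioned, keyed
// processing backed by RocksDB, incremental checkpoints to S3 every 5 minutes.
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class UnionJobSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // RocksDB backend with incremental checkpointing enabled (second argument),
        // writing to S3 as in the original post; the bucket path is hypothetical.
        env.setStateBackend(new RocksDBStateBackend("s3://my-bucket/checkpoints", true));
        env.enableCheckpointing(5 * 60 * 1000L); // 5-minute interval

        DataStream<String> first = env.socketTextStream("host-a", 9000);  // placeholder source
        DataStream<String> second = env.socketTextStream("host-b", 9001); // placeholder source

        first.union(second)
            .keyBy(new KeySelector<String, String>() {
                @Override
                public String getKey(String value) {
                    return value; // placeholder key logic
                }
            })
            .process(new KeyedProcessFunction<String, String, String>() {
                @Override
                public void processElement(String value, Context ctx, Collector<String> out) {
                    out.collect(value); // placeholder processing
                }
            })
            .print(); // the real job writes to Kafka instead

        env.execute("union-job-sketch");
    }
}
```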