Since the state size is small, you can try the FsStateBackend rather than
RocksDB and compare once. The rule of thumb: if the FsStateBackend performs
worse, RocksDB is the right choice.
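For reference, the switch is roughly a one-line change on the Flink 1.11 API
(a hedged sketch only; the `checkpointDir` value below is a placeholder for
your actual OSS checkpoint path, and this requires the Flink dependencies on
the classpath):

```java
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FsBackendSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder path: substitute your real checkpoint storage URI.
        String checkpointDir = "oss://your-bucket/checkpoints";

        // Heap-based state backend that checkpoints to the file system;
        // state lives on the JVM heap, so it suits small state sizes.
        env.setStateBackend(new FsStateBackend(checkpointDir));
    }
}
```

Note that FsStateBackend keeps working state on the heap, so it avoids
RocksDB's serialization and compaction overhead but does not support
incremental checkpoints.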

Regards
Bhasakar

On Tue, Oct 12, 2021 at 1:47 PM Yun Tang <myas...@live.com> wrote:

> Hi Lei,
>
> A RocksDB state-backend checkpoint is composed of RocksDB's own files
> (unmodified, compressed SST-format files), and incremental checkpointing
> means Flink does not re-upload files that were uploaded before. As you can
> see, incremental checkpoints depend heavily on RocksDB's own mechanism for
> removing useless files, which is triggered by internal compaction. You
> should not worry too much about the checkpointed data size as your job
> consumes more and more records; moreover, the increase is actually quite
> small (from 1.32 GB to 1.34 GB).
>
> Best
> Yun Tang
>
>
>
> ------------------------------
> *From:* Lei Wang <leiwang...@gmail.com>
> *Sent:* Monday, October 11, 2021 16:16
> *To:* user <user@flink.apache.org>
> *Subject:* Checkpoint size increasing even though I enabled incremental
> checkpoints
>
>
> [image attachment: checkpoint size chart]
>
> The checkpointed data size becomes bigger and bigger, and the node CPU is
> very high while the job is checkpointing. But I have enabled incremental
> checkpointing:
>
> env.setStateBackend(new RocksDBStateBackend(checkpointDir, true));
>
> I am using flink-1.11.2 and aliyun oss as checkpoint storage.
>
>
> Any insight on this?
>
> Thanks,
>
> Lei
>
>
>
