Hi Neha,

Which Flink version are you using? We have also encountered continuous
growth of off-heap memory in the TaskManagers of a session cluster. The
cause was that the default glibc allocator could not reuse freed memory
fragments, as described in issue [1]. You can check which memory allocator
your TaskManager process is using and try switching to jemalloc; see docs
[2] and [3].

[1] https://issues.apache.org/jira/browse/FLINK-19125
[2]
https://nightlies.apache.org/flink/flink-docs-release-1.15/release-notes/flink-1.12/#deployment
[3]
https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/deployment/resource-providers/standalone/docker/#switching-the-memory-allocator
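For reference, a minimal sketch of switching and verifying the allocator,
based on docs [2] and [3] (the library path and the process name grep are
assumptions; adjust for your distro and setup):

```shell
# Docker: the official Flink images since 1.12 use jemalloc by default;
# setting DISABLE_JEMALLOC=true falls back to glibc malloc (doc [3]).
docker run --env DISABLE_JEMALLOC=false flink:1.15 taskmanager

# Standalone: preload jemalloc before starting the TaskManager.
# The library path is an assumption; on Debian/Ubuntu it is provided
# by the libjemalloc2 package.
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2
./bin/taskmanager.sh start

# Verify which allocator the running TaskManager has mapped:
pmap $(pgrep -f TaskManagerRunner) | grep -i jemalloc
```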

Best,
Shammon FY

On Sat, Jul 1, 2023 at 2:58 PM neha goyal <nehagoy...@gmail.com> wrote:

> Hello,
>
> I am trying to debug unbounded memory consumption by the Flink
> process. The heap size of the process remains the same, while the RSS
> of the process keeps increasing. I suspect it might be because of
> RocksDB.
>
> We have the default value for state.backend.rocksdb.memory.managed as
> true. Can anyone confirm whether, with this config, RocksDB can still
> consume unbounded native memory?
>
> If yes, what metrics can I check to confirm the issue? Any help would be
> appreciated.
>