Hi neha,

Due to a limitation of RocksDB, we cannot create a
strict-capacity-limit LRUCache that is shared among RocksDB instances;
FLINK-15532 [1] was created to track this.
BTW, have you set a TTL for this job [2]? TTL can help control the state
size; a short sketch follows the links below.

[1] https://issues.apache.org/jira/browse/FLINK-15532
[2] https://issues.apache.org/jira/browse/FLINK-31089
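
For reference, a minimal sketch of enabling state TTL on a keyed state
(the one-day retention, state name, and cleanup settings below are
placeholder assumptions; adjust them for your job):

    import org.apache.flink.api.common.state.StateTtlConfig;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.time.Time;

    StateTtlConfig ttlConfig = StateTtlConfig
        .newBuilder(Time.days(1))  // placeholder retention period
        .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
        .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
        // drop expired entries during RocksDB compaction, which also helps
        // bound the state size on disk and in the block cache
        .cleanupInRocksdbCompactFilter(1000)
        .build();

    ValueStateDescriptor<String> descriptor =
        new ValueStateDescriptor<>("my-state", String.class); // placeholder name/type
    descriptor.enableTimeToLive(ttlConfig);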

On Tue, Jul 4, 2023 at 09:08, Shammon FY <zjur...@gmail.com> wrote:
>
> Hi neha,
>
> Which Flink version are you using? We have also encountered continuous 
> growth of off-heap memory in the TMs of a session cluster before; the cause 
> was that memory fragments could not be reused, as in issue [1]. You can 
> check which memory allocator is in use and try jemalloc instead; refer to 
> docs [2] and [3], and the example after the links below.
>
> [1] https://issues.apache.org/jira/browse/FLINK-19125
> [2] 
> https://nightlies.apache.org/flink/flink-docs-release-1.15/release-notes/flink-1.12/#deployment
> [3] 
> https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/deployment/resource-providers/standalone/docker/#switching-the-memory-allocator
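>
> As a concrete illustration (the DISABLE_JEMALLOC variable and the library 
> path below are assumptions based on [2] and [3]; verify them for your image 
> and version): the official Docker image since 1.12 uses jemalloc by default, 
> and setting the environment variable DISABLE_JEMALLOC=true switches back to 
> glibc; on a standalone deployment, a common approach is to preload jemalloc 
> before starting the TaskManager, e.g. 
> export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.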
>
> Best,
> Shammon FY
>
> On Sat, Jul 1, 2023 at 2:58 PM neha goyal <nehagoy...@gmail.com> wrote:
>>
>> Hello,
>>
>> I am trying to debug unbounded memory consumption by a Flink process. 
>> The heap size of the process remains the same, but its RSS keeps 
>> increasing. I suspect RocksDB might be the cause.
>>
>> We have the default value for state.backend.rocksdb.memory.managed, 
>> which is true. Can anyone confirm whether, with this config, RocksDB 
>> will still be able to consume unbounded native memory?
>>
>> If yes, what metrics can I check to confirm the issue? Any help would be 
>> appreciated.
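>>
>> For reference, a minimal sketch of turning on RocksDB's native memory 
>> metrics so the block cache / memtable footprint can be compared against 
>> the RSS growth (the keys are from Flink's state.backend.rocksdb.metrics.* 
>> group; setting them via the job Configuration is an assumption -- they are 
>> usually set in flink-conf.yaml):
>>
>>     import org.apache.flink.configuration.Configuration;
>>     import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
>>
>>     Configuration conf = new Configuration();
>>     // expose native RocksDB memory statistics as Flink metrics
>>     conf.setString("state.backend.rocksdb.metrics.block-cache-usage", "true");
>>     conf.setString("state.backend.rocksdb.metrics.block-cache-pinned-usage", "true");
>>     conf.setString("state.backend.rocksdb.metrics.cur-size-all-mem-tables", "true");
>>     conf.setString("state.backend.rocksdb.metrics.estimate-table-readers-mem", "true");
>>     StreamExecutionEnvironment env =
>>         StreamExecutionEnvironment.getExecutionEnvironment(conf);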



-- 
Best,
Yanfei
