[ https://issues.apache.org/jira/browse/FLINK-18712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17186505#comment-17186505 ]
Julius Michaelis edited comment on FLINK-18712 at 8/28/20, 12:14 PM:
---------------------------------------------------------------------

@Farnight, have you tried setting {{state.backend.rocksdb.memory.managed: false}} and checked whether it stops that behavior? (I think I'm seeing something quite similar; I'll try to build a small reproducer next week. I'm running into this on bare metal, outside of Kubernetes.)

was (Author: caesar):
@Farnight, have you tried setting {{state.backend.rocksdb.memory.managed: false}} and checked whether it stops that behavior? (I think I'm seeing something quite similar; I'll try to build a small reproducer next week.)

> Flink RocksDB statebackend memory leak issue
> ---------------------------------------------
>
>                 Key: FLINK-18712
>                 URL: https://issues.apache.org/jira/browse/FLINK-18712
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / State Backends
>    Affects Versions: 1.10.0
>            Reporter: Farnight
>            Priority: Critical
>
> When using RocksDB as our state backend, we found that it leads to a memory leak when restarting a job (manually or in a recovery case).
>
> How to reproduce:
> # Increase the RocksDB block cache size (e.g. 1 GB); a larger cache makes the behavior easier to monitor and reproduce.
> # Start a job using the RocksDB state backend.
> # When the RocksDB block cache reaches its maximum size, restart the job, and monitor the memory usage (k8s pod working set) of the TM.
> # Go through steps 2-3 a few more times; memory will keep rising.
>
> Any solution or suggestion for this? Thanks!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
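For readers landing on this thread: the suggested workaround toggles whether RocksDB draws its block cache and write buffers from Flink's managed memory budget. A minimal {{flink-conf.yaml}} sketch is below; the explicit block-cache cap is optional and its value is illustrative, not a recommendation.

```yaml
# flink-conf.yaml (sketch)

# Use RocksDB for keyed state.
state.backend: rocksdb

# Workaround under discussion: stop RocksDB from sizing its caches
# out of Flink's managed memory. With this off, RocksDB falls back to
# its own defaults unless bounded explicitly.
state.backend.rocksdb.memory.managed: false

# Optional: bound the block cache yourself when managed memory is off,
# so repeated restarts are easier to compare. 256mb is an arbitrary
# example value for monitoring, not a tuning recommendation.
state.backend.rocksdb.block.cache-size: 256mb
```

With managed memory disabled, any continued growth of the pod working set across restarts points away from the managed-memory accounting and toward native allocations that survive the restart.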