[ https://issues.apache.org/jira/browse/FLINK-9693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16556325#comment-16556325 ]
Steven Zhen Wu commented on FLINK-9693:
---------------------------------------

One more observation: we are seeing this issue right after the jobmanager node gets killed and replaced. However, it is not reproducible when I try to kill the jobmanager while the job is healthy.

> Possible memory leak in jobmanager retaining archived checkpoints
> -----------------------------------------------------------------
>
>                 Key: FLINK-9693
>                 URL: https://issues.apache.org/jira/browse/FLINK-9693
>             Project: Flink
>          Issue Type: Bug
>          Components: JobManager, State Backends, Checkpointing
>    Affects Versions: 1.5.0, 1.6.0
>         Environment: !image.png!!image (1).png!
>            Reporter: Steven Zhen Wu
>            Assignee: Till Rohrmann
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.4.3, 1.5.1, 1.6.0
>
>         Attachments: 20180725_jm_mem_leak.png, 41K_ExecutionVertex_objs_retained_9GB.png, ExecutionVertexZoomIn.png
>
>
> First, some context about the job:
> * Flink 1.4.1
> * stand-alone deployment mode
> * embarrassingly parallel: all operators are chained together
> * parallelism is over 1,000
> * stateless except for Kafka source operators; checkpoint size is 8.4 MB
> * set "state.backend.fs.memory-threshold" so that only the jobmanager writes checkpoints to S3
> * internal checkpoints with 10 checkpoints retained in history
>
> Summary of the observations:
> * 41,567 ExecutionVertex objects retained 9+ GB of memory
> * Expanded one ExecutionVertex; it seems to be storing the Kafka offsets for the source operator

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
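For readers unfamiliar with the setup described in the issue, the checkpoint-related configuration could look roughly like the following flink-conf.yaml fragment. This is a sketch under stated assumptions: the S3 path and the exact threshold value are illustrative, not the reporter's actual settings, and only `state.backend.fs.memory-threshold` and `state.checkpoints.num-retained` are named in the issue itself.

```yaml
# Hypothetical flink-conf.yaml fragment approximating the setup in this issue.
# The bucket path and threshold value below are illustrative assumptions.

state.backend: filesystem
state.checkpoints.dir: s3://example-bucket/flink/checkpoints   # assumed path

# Raising this threshold keeps small per-task state handles inline in the
# checkpoint metadata, so only the jobmanager writes to S3 (as described
# in the issue). The value here is an assumption.
state.backend.fs.memory-threshold: 1048576

# Retain the 10 most recent completed checkpoints, matching the issue.
state.checkpoints.num-retained: 10
```

With `num-retained: 10` the jobmanager keeps an archive of the last ten completed checkpoints in memory, which is why leaked references from archived checkpoints (here, `ExecutionVertex` objects holding Kafka offsets) can accumulate into gigabytes at parallelism over 1,000.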