Hi Ori,

The error message suggests that there is not enough physical memory on
the machine to satisfy the allocation. That does not necessarily mean a
managed memory leak; a leak is only one of the possibilities. Other
potential causes include another process/container on the machine using
more memory than expected, or the Yarn NM not being configured with
enough memory reserved for the system processes.
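
A quick sanity check for the latter (a rough sketch; the yarn-site.xml
path below is just a common Hadoop layout, not necessarily yours):

# total physical memory on the machine
grep MemTotal /proc/meminfo
# how much memory the NodeManager is allowed to hand out to containers
grep -A1 'yarn.nodemanager.resource.memory-mb' /etc/hadoop/conf/yarn-site.xml

If the gap between the two numbers is small, there is little headroom
left for the OS and other daemons on the host.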

I would suggest first looking into the machine's memory usage, to see
whether the Flink process indeed uses more memory than expected. This
could be achieved via the following (a rough sketch follows the list):
- Run the `top` command
- Look into the `/proc/meminfo` file
- Any container memory usage metrics that are available to your Yarn cluster
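
For example, on the affected machine (a minimal sketch, assuming a
Linux host with procps; adjust to your environment):

# processes sorted by resident memory
ps aux --sort=-rss | head -n 15
# overall picture of physical memory
grep -E 'MemTotal|MemFree|MemAvailable' /proc/meminfo
# resident set size (KB) of the Java processes, including the TaskManager
ps -o pid,rss,cmd -C java

Comparing the TaskManager's RSS against its configured total process
size should tell you whether the Flink process itself is the one
over-consuming.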

Thank you~

Xintong Song



On Tue, Oct 27, 2020 at 6:21 PM Ori Popowski <ori....@gmail.com> wrote:

> After the job has been running for 10 days in production, TaskManagers
> start failing with:
>
> Connection unexpectedly closed by remote task manager
>
> Looking in the machine logs, I can see the following error:
>
> ============= Java processes for user hadoop =============
> OpenJDK 64-Bit Server VM warning: INFO:
> os::commit_memory(0x00007fb4f4010000, 1006567424, 0) failed; error='Cannot
> allocate memory' (err
> #
> # There is insufficient memory for the Java Runtime Environment to
> continue.
> # Native memory allocation (mmap) failed to map 1006567424 bytes for
> committing reserved memory.
> # An error report file with more information is saved as:
> # /mnt/tmp/hsperfdata_hadoop/hs_err_pid6585.log
> =========== End java processes for user hadoop ===========
>
> In addition, the metrics for the TaskManager show very low Heap memory
> consumption (20% of Xmx).
>
> Hence, I suspect there is a memory leak in the TaskManager's Managed
> Memory.
>
> This is my TaskManager's memory detail:
> flink process 112g
> framework.heap.size 0.2g
> task.heap.size 50g
> managed.size 54g
> framework.off-heap.size 0.5g
> task.off-heap.size 1g
> network 2g
> XX:MaxMetaspaceSize 1g
>
> As you can see, the managed memory is 54g, so it's already high (my
> managed.fraction is set to 0.5).
>
> I'm running Flink 1.10. Full job details attached.
>
> Can someone advise what would cause a managed memory leak?
>
>
>
