Hi!

Why does this ~30% memory reduction happen?


I don't know how memory was calculated in Flink 1.9, but this 1.11 memory
allocation result is reasonable. Managed memory, network memory and JVM
overhead memory in 1.11 all have default sizes or fractions (managed memory
40%, network memory 10% capped at 1g, JVM overhead memory 10% capped at 1g;
see [1]), while heap memory doesn't. So a ~5.8G heap (roughly 12G minus ~2G
of overhead/network/metaspace minus the 40% managed fraction) and ~4.3G of
managed memory (about 40%) is explainable.
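
To make that concrete, here is a rough breakdown for -ytm 12288, assuming the
1.11 defaults (256m JVM metaspace, 128m framework heap, 128m framework
off-heap, no task off-heap); the figures the ApplicationMaster reports will
differ slightly from these:

    total process memory = 12288m
    JVM overhead         = min(10% of 12288m, 1g)        = 1024m
    JVM metaspace        = 256m
    total Flink memory   = 12288m - 1024m - 256m         = 11008m
    network memory       = min(10% of 11008m, 1g)        = 1024m
    managed memory       = 40% of 11008m                 = ~4403m (~4.3G)
    JVM heap             = 11008m - 1024m - 4403m - 128m = ~5453m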

How would you suggest discerning what properties we should have a look at?


Network shuffle memory now has its own configuration keys:
taskmanager.memory.network.fraction (along with ...network.min and
...network.max). Also see [1] and [2] for more keys related to the task
manager's memory.

[1]
https://github.com/apache/flink/blob/release-1.11/flink-core/src/main/java/org/apache/flink/configuration/TaskManagerOptions.java
[2]
https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/memory/mem_setup_tm.html#detailed-memory-model
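
If you do end up tuning these, the relevant keys look like this in
flink-conf.yaml (the network and framework values below are just the 1.11
defaults; the task off-heap value is only an illustrative override, not a
recommendation):

    # fraction of total Flink memory used for network buffers, bounded by min/max
    taskmanager.memory.network.fraction: 0.1
    taskmanager.memory.network.min: 64mb
    taskmanager.memory.network.max: 1gb

    # direct memory budgets; task.off-heap.size is the key mentioned in the
    # "Direct buffer memory" error from the original mail
    taskmanager.memory.framework.off-heap.size: 128mb
    taskmanager.memory.task.off-heap.size: 256mb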

Hailu, Andreas [Engineering] <andreas.ha...@gs.com> wrote on Thu, Aug 26, 2021 at 9:07 AM:

> Hi folks,
>
>
>
> We’re about half way complete in migrating our YARN batch processing
> applications from Flink 1.9 to 1.11, and are currently tackling the memory
> configuration migrations.
>
>
>
> Our test application’s sink failed with the following exception while
> writing to HDFS:
>
>
>
> *Caused by: java.lang.OutOfMemoryError: Direct buffer memory. The direct
> out-of-memory error has occurred. This can mean two things: either job(s)
> require(s) a larger size of JVM direct memory or there is a direct memory
> leak. The direct memory can be allocated by user code or some of its
> dependencies. In this case 'taskmanager.memory.task.off-heap.size'
> configuration option should be increased. Flink framework and its
> dependencies also consume the direct memory, mostly for network
> communication. The most of network memory is managed by Flink and should
> not result in out-of-memory error. In certain special cases, in particular
> for jobs with high parallelism, the framework may require more direct
> memory which is not managed by Flink. In this case
> 'taskmanager.memory.framework.off-heap.size' configuration option should be
> increased. If the error persists then there is probably a direct memory
> leak in user code or some of its dependencies which has to be investigated
> and fixed. The task executor has to be shutdown...*
>
>
>
> We submit our applications through a Flink YARN session with -ytm, -yjm,
> etc. We don’t have any memory configuration options set aside from
> ‘taskmanager.network.bounded-blocking-subpartition-type: file’, which I see
> is now deprecated and replaced with a new option defaulted to ‘file’ (which
> works for us!), so nearly everything else is at its default.
>
>
>
> We haven’t made any configuration changes thus far as we’re still combing
> through the migration instructions, but I did have some questions about
> what I observed.
>
> 1.     I observed that an application run with “-ytm 12288” on 1.9
> receives 8.47GB of JVM Heap space and 5.95GB of Flink Managed Memory (as
> reported by the ApplicationMaster), whereas on 1.11 it receives 5.79GB of
> JVM Heap space and 4.30GB of Flink Managed Memory. Why does this ~30%
> memory reduction happen?
>
> 2.     Piggybacking off point 1, on 1.9 we were not explicitly setting
> off-heap memory parameters. How would you suggest discerning which
> properties we should have a look at?
>
>
>
> Best,
>
> Andreas
>
