Hi folks,
We're about halfway through migrating our YARN batch processing
applications from Flink 1.9 to 1.11, and are currently tackling the memory
configuration migration.
Our test application's sink failed with the following exception while writing
to HDFS:
Caused by: java.lang.OutOfMemoryError
From: Caizhi Weng
Sent: Wednesday, August 25, 2021 10:47 PM
To: Hailu, Andreas [Engineering]
Cc: user@flink.apache.org
Subject: Re: 1.9 to 1.11 Managed Memory Migration Questions
Hi!

Why does this ~30% memory reduction happen?

I don't know how memory is calculated in Flink 1.9, but this 1.11 memory
allocation result is reasonable. This is because managed memory, network
memory and JVM overhead memory in 1.11 all have their default sizes or
fractions (managed memory 40%, network memory 10%, JVM overhead 10%).
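To make that concrete, here is a rough sketch of how 1.11 carves up a total process size under the default settings. This is my own simplification for illustration (default fractions and min/max bounds from the 1.11 docs; real Flink also resolves explicit sizes and many interacting options that this ignores):

```python
# Rough sketch of the Flink 1.11 TaskManager memory breakdown under DEFAULT
# settings only. Assumed defaults (not from this thread): JVM overhead
# fraction 0.1 clamped to [192m, 1g], metaspace 256m, network fraction 0.1
# clamped to [64m, 1g], managed fraction 0.4, framework heap/off-heap 128m
# each, task off-heap 0.

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def breakdown_mb(process_mb):
    jvm_overhead = clamp(0.1 * process_mb, 192, 1024)
    metaspace = 256
    total_flink = process_mb - jvm_overhead - metaspace
    network = clamp(0.1 * total_flink, 64, 1024)
    managed = 0.4 * total_flink
    framework_heap = 128
    framework_off_heap = 128
    task_off_heap = 0
    task_heap = (total_flink - network - managed
                 - framework_heap - framework_off_heap - task_off_heap)
    return {
        "jvm_overhead": jvm_overhead,
        "metaspace": metaspace,
        "network": network,
        "managed": managed,
        # roughly what ends up behind the JVM heap (-Xmx)
        "jvm_heap": framework_heap + task_heap,
    }

if __name__ == "__main__":
    for name, mb in breakdown_mb(4096).items():
        print(f"{name}: {mb:.1f} MB")
```

For a 4 GB process size this leaves only about 1.6 GB of JVM heap, since managed, network, overhead and metaspace are all taken off the top, which is why the heap looks substantially smaller than a 1.9 setup where most of the container went to heap.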
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/config.html#taskmanager-memory-task-off-heap-size
>
> [2]
> https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/memory/mem_setup.html#configure-off-heap-memory-direct-or-native
>
> // ah
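As a concrete illustration of the options those two links describe, a 1.11 flink-conf.yaml could reserve direct memory for task code like this (the sizes below are placeholders I made up for illustration, not recommendations from this thread):

```yaml
# Illustrative only: sizes are made-up placeholders.
taskmanager.memory.process.size: 4096m
# Direct/native memory reserved for task code (e.g. HDFS client buffers),
# per the taskmanager.memory.task.off-heap-size option referenced in [1]:
taskmanager.memory.task.off-heap-size: 256m
```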
Thanks Caizhi, this was very helpful.
// ah
From: Caizhi Weng
Sent: Thursday, August 26, 2021 10:41 PM
To: Hailu, Andreas [Engineering]
Cc: user@flink.apache.org
Subject: Re: 1.9 to 1.11 Managed Memory Migration Questions
Hi!

I've read the first mail again and discovered that the direct memory