Hi John,
the memory configuration depends entirely on your use case; it's hard to
judge from a distance. You should monitor your memory usage and act
accordingly. Steadily increasing memory usage does indicate a memory leak,
though, and you will run into issues the longer the job and the cluster run.
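To monitor the Metaspace pool from inside a JVM, something like the following sketch could be used (this is plain `java.lang.management`, not a Flink API; the class name and output format are illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// Prints the current usage of every Metaspace-related memory pool of this JVM.
public class MetaspaceMonitor {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // On HotSpot this matches "Metaspace" and "Compressed Class Space".
            if (pool.getName().contains("Metaspace")) {
                long usedMiB = pool.getUsage().getUsed() / (1024 * 1024);
                System.out.println(pool.getName() + " used: " + usedMiB + " MiB");
            }
        }
    }
}
```

Sampling this periodically (or scraping the equivalent JVM metrics Flink exposes) makes it easy to see whether Metaspace usage grows with every job restart.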
Well, the hosts have 16GB.
If there is a "bug" with classloading, then for now I can only hope to
increase the Metaspace size.
If the host has 16GB, can I set the Java heap to, say, 12GB and the
Metaspace to 2GB and leave 2GB for the OS?
Or maybe 10GB for heap and 2GB for Metaspace, which leaves 4GB?
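Expressed in Flink 1.10 configuration keys, the first split above might look roughly like this (illustrative values only; note the task heap is just one component of the total Flink memory, which also needs room for managed memory and network buffers):

```yaml
# Hypothetical sketch of a 12 GB heap / 2 GB Metaspace split on a 16 GB host.
taskmanager.memory.task.heap.size: 12g
taskmanager.memory.jvm-metaspace.size: 2g
# The remaining ~2 GB has to cover managed memory, network buffers,
# JVM overhead, and the OS - so this split may be too tight in practice.
```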
In general, running out of memory in the Metaspace pool indicates some bug
related to the classloaders. Have you considered upgrading to newer versions
of Flink and the other parts of your pipeline? Otherwise, you might want to
create a heap dump and analyze it [1]. This analysis might reveal some clues
about what is filling the Metaspace.
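A heap dump can also be triggered programmatically rather than via `jmap`; here is a minimal sketch using the JDK's standard `HotSpotDiagnosticMXBean` (the output path and the `live` flag are illustrative):

```java
import java.lang.management.ManagementFactory;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

// Writes an .hprof heap dump of the current JVM, suitable for analysis
// in tools such as Eclipse MAT or VisualVM.
public class HeapDumper {
    public static void main(String[] args) throws Exception {
        // dumpHeap refuses to overwrite, so dump into a fresh temp directory.
        Path out = Files.createTempDirectory("dumps").resolve("heap.hprof");
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // 'true' = dump only live objects (forces a full GC first).
        bean.dumpHeap(out.toString(), true);
        System.out.println("dump exists: " + Files.exists(out));
    }
}
```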
Hi, thanks. I know; I already mentioned that I set it to 1024m, see the
config above. But my question is: how much? I still get the error once in a
while. It also seems to happen when a job restarts a few times... My jobs
aren't complicated: they use Kafka, and some of them use JDBC and the JDBC
driver to push to the DB.
Hi John,
have you had a look at the memory model for Flink 1.10? [1] Based on the
documentation, you could try increasing the Metaspace size independently of
the overall Flink memory usage (i.e. flink.size). The heap size is a part of
the overall Flink memory. I hope that helps.
Best,
Matthias
[1]
https:
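For illustration, raising the Metaspace independently of the Flink memory could look something like this in flink-conf.yaml (the 2048m value is a placeholder, not a recommendation):

```yaml
# Total Flink memory (heap + managed + network + framework overhead).
taskmanager.memory.flink.size: 10240m
# Metaspace sits outside flink.size and can be raised on its own;
# the total JVM process footprint grows accordingly.
taskmanager.memory.jvm-metaspace.size: 2048m
```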
Hi, has anyone seen this?
On Tue, 16 Nov 2021 at 14:14, John Smith wrote:
> Hi running Flink 1.10
>
> I have
> - 3 job nodes 8GB memory total
> - jobmanager.heap.size: 6144m
>
> - 3 task nodes 16GB memory total
> - taskmanager.memory.flink.size: 10240m
> - taskmanager.memory.jvm-metaspace.size: 1024m
Hi running Flink 1.10
I have
- 3 job nodes 8GB memory total
- jobmanager.heap.size: 6144m
- 3 task nodes 16GB memory total
- taskmanager.memory.flink.size: 10240m
- taskmanager.memory.jvm-metaspace.size: 1024m <--- This still causes
metaspace errors once in a while; can I go higher?