Hi, thanks. I know, I already mentioned that I set 1024m; see the config above.
But my question is: how much higher can I go? I still get the message once in a
while. It also seems to happen when a job restarts a few times. My jobs aren't
complicated. They use Kafka, and some of them use JDBC with the JDBC driver to
push to the database. Right now I use Flink for ETL:

Kafka -> JSON validation (Jackson) -> filter -> JDBC to database.
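For reference, my understanding of the 1.10 memory model is that `taskmanager.memory.flink.size` does not include the JVM metaspace or JVM overhead, so raising metaspace grows the total process footprint rather than taking memory away from Flink. A sketch of what I believe the relevant flink-conf.yaml would look like (the 2048m value is just a guess on my part, not a recommendation):

```yaml
# Task manager host: 16 GB physical memory total.
# In Flink 1.10, total process memory ≈ flink.size + jvm-metaspace + jvm-overhead,
# so metaspace can be raised without lowering flink.size as long as the sum
# stays under the machine's physical memory.
taskmanager.memory.flink.size: 10240m         # heap + managed + network + framework
taskmanager.memory.jvm-metaspace.size: 2048m  # hypothetical bump from 1024m
# jvm-overhead is derived by default (a fraction of process size, bounded by
# taskmanager.memory.jvm-overhead.min/max), so 10240m + 2048m + overhead
# still fits comfortably within 16 GB.
```

With these numbers, the process would use roughly 12–13 GB, leaving headroom on a 16 GB box, so lowering the 10 GB of Flink memory shouldn't be necessary just to grow metaspace.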

On Mon, 22 Nov 2021 at 10:24, Matthias Pohl <matth...@ververica.com> wrote:

> Hi John,
> Have you had a look at the memory model for Flink 1.10? [1] Based on the
> documentation, you could try increasing the metaspace size independently of
> the Flink memory usage (i.e. flink.size). The heap size is a part of the
> overall Flink memory. I hope that helps.
>
> Best,
> Matthias
>
> [1]
> https://nightlies.apache.org/flink/flink-docs-release-1.10/ops/memory/mem_detail.html
>
> On Mon, Nov 22, 2021 at 3:58 PM John Smith <java.dev....@gmail.com> wrote:
>
>> Hi, has anyone seen this?
>>
>> On Tue, 16 Nov 2021 at 14:14, John Smith <java.dev....@gmail.com> wrote:
>>
>>> Hi, I'm running Flink 1.10.
>>>
>>> I have
>>> - 3 job nodes 8GB memory total
>>>     - jobmanager.heap.size: 6144m
>>>
>>> - 3 task nodes 16GB memory total
>>>     - taskmanager.memory.flink.size: 10240m
>>>     - taskmanager.memory.jvm-metaspace.size: 1024m <--- This still causes
>>> metaspace errors once in a while. Can I go higher, or do I need to lower
>>> the 10GB above?
>>>
>>> The task nodes on the UI are reporting:
>>> - Physical Memory: 15.7 GB
>>> - JVM Heap Size: 4.88 GB <------- I'm guessing this is the currently used
>>> heap size and not the max of 10GB set above?
>>> - Flink Managed Memory: 4.00 GB
>>>
>>
