Bruce GAO created FLINK-20945:
---------------------------------
Summary: Flink Hive insert: heap out of memory
Key: FLINK-20945
URL: https://issues.apache.org/jira/browse/FLINK-20945
Project: Flink
Issue Type: Improvement
Environment: flink 1.12.0
hive-exec 2.3.5
Reporter: Bruce GAO
When using Flink SQL to insert into Hive from Kafka, a heap out-of-memory error occurs
randomly.
The Hive table uses year/month/day/hour as partition columns. The maximum heap space
needed appears to be proportional to the number of active partitions (because Kafka
messages can arrive out of order or delayed, several partitions stay open at once).
This means that as the number of active partitions grows, the required heap space
grows with it, which can cause the heap out-of-memory error.
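
For a rough illustration (assuming Parquet output with its default 128 MB row-group
buffer per open file, which is an assumption about the table's storage format, not
something stated above): with hourly partitions and messages delayed by up to a day,
roughly 24 partitions can be open at once, buffering about 24 x 128 MB = 3 GB of heap
before any single writer's own block-size check fires.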
When writing records, would it be possible to take the total heap usage across all
open writers into account in checkBlockSizeReached, or to use some other mechanism to
avoid the OOM?
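
A minimal sketch of that idea, assuming a shared byte counter that every open
partition writer consults in addition to its own per-file block-size check
(SharedWriteBudget and the 512 MB figure are hypothetical, not existing Flink or
Hive APIs):

{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical shared budget across all partition writers: the aggregate
// buffered size, not just each writer's own block size, decides when to flush.
public class SharedWriteBudget {
    private final long maxBytes;
    private final AtomicLong buffered = new AtomicLong();

    public SharedWriteBudget(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    /** Record newly buffered bytes; returns true if writers should flush early. */
    public boolean add(long bytes) {
        return buffered.addAndGet(bytes) > maxBytes;
    }

    /** Called after a writer flushes its buffer to disk. */
    public void release(long bytes) {
        buffered.addAndGet(-bytes);
    }

    public static void main(String[] args) {
        // 512 MB aggregate budget; simulate three partition writers
        // each buffering 200 MB.
        SharedWriteBudget budget = new SharedWriteBudget(512L * 1024 * 1024);
        for (int i = 0; i < 3; i++) {
            boolean flushNeeded = budget.add(200L * 1024 * 1024);
            System.out.println("writer " + i + " flush needed: " + flushNeeded);
        }
    }
}
{code}

A writer that sees true from add() would flush its current buffer early and then
call release(), keeping the aggregate buffered bytes bounded no matter how many
partitions are active.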