bhupixb commented on issue #9193:
URL: https://github.com/apache/iceberg/issues/9193#issuecomment-1842146242
Thank you, this helped give us some direction. We disabled the equality
field columns and the upsert property. After that it is working correctly.
Though our job does not upsert more
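For context, the change described above roughly corresponds to building the Flink sink without the upsert flag and without equality field columns. The snippet below is only a minimal sketch of that append-only setup, assuming the `DataStream<RowData>` API and a hypothetical Hadoop table path; none of these names come from the issue.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.table.data.RowData;
import org.apache.iceberg.flink.TableLoader;
import org.apache.iceberg.flink.sink.FlinkSink;

public class AppendOnlySinkSketch {

  // Attaches an Iceberg sink in plain append mode: no upsert flag and no
  // equality field columns, so the writer emits only data files and never
  // produces equality delete records. The table path is a placeholder.
  public static void attachSink(DataStream<RowData> rows) {
    TableLoader tableLoader =
        TableLoader.fromHadoopTable("hdfs://namenode:8020/warehouse/db/events");

    FlinkSink.forRowData(rows)
        .tableLoader(tableLoader)
        // .upsert(true) and .equalityFieldColumns(...) are intentionally omitted.
        .append();
  }
}
```

With this configuration the writer only produces data files, so no equality deletes accumulate, at the cost of losing upsert semantics.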
Zhangg7723 commented on issue #9193:
URL: https://github.com/apache/iceberg/issues/9193#issuecomment-1840130345
Upsert mode caused too many equality delete records in the table; these delete
records will be loaded into an in-memory hash set.
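To make the memory impact concrete, here is a back-of-the-envelope estimate of what holding that many equality delete keys in a hash set can cost. The figures are purely illustrative assumptions, not numbers from this issue.

```java
public class EqualityDeleteHeapEstimate {

  // Equality deletes have to be applied at read time, which typically means
  // materializing the delete keys in an in-memory set. The figures below are
  // assumptions chosen for the arithmetic, not measurements from this issue.
  public static void main(String[] args) {
    long deleteRecords = 200_000_000L; // assumed accumulated equality delete records
    long bytesPerEntry = 100L;         // assumed key payload plus per-entry hash-set overhead

    double gib = (double) deleteRecords * bytesPerEntry / (1024.0 * 1024.0 * 1024.0);
    System.out.printf("~%.1f GiB of heap just to hold the equality delete set%n", gib);
  }
}
```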
bhupixb commented on issue #9193:
URL: https://github.com/apache/iceberg/issues/9193#issuecomment-1837959214
@pvary We have already tried increasing the memory, but it didn't help. We
are also unable to understand where it is consuming this much memory.
Also, there are a total of ~350
pvary commented on issue #9193:
URL: https://github.com/apache/iceberg/issues/9193#issuecomment-1837926494
I would move forward in 2 directions:
- Check which named (Iceberg/Flink) class references those big HashMaps
- Check how many files are in the tables (see the sketch after this list)
If there are many
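For the second check, one lightweight way to see how many data and delete files the table currently carries is to read the latest snapshot summary through the Iceberg Java API. The sketch below assumes a Hadoop catalog and placeholder warehouse/table names, since the issue does not say which catalog is in use; counters such as `total-data-files`, `total-delete-files` and `total-equality-deletes` are standard snapshot summary fields.

```java
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.hadoop.HadoopCatalog;

public class FileCountCheck {

  // Prints the current snapshot summary, which includes running totals for
  // data files, delete files and equality deletes. Catalog type, warehouse
  // path and table name are placeholders, not taken from the issue.
  public static void main(String[] args) {
    HadoopCatalog catalog =
        new HadoopCatalog(new Configuration(), "hdfs://namenode:8020/warehouse");
    Table table = catalog.loadTable(TableIdentifier.of("db", "events"));

    // Assumes the table already has at least one snapshot.
    Map<String, String> summary = table.currentSnapshot().summary();
    System.out.println("total-data-files       = " + summary.get("total-data-files"));
    System.out.println("total-delete-files     = " + summary.get("total-delete-files"));
    System.out.println("total-equality-deletes = " + summary.get("total-equality-deletes"));
  }
}
```

A steadily growing `total-equality-deletes` counter is a good hint that upsert mode is producing the delete records discussed above.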
bhupixb opened a new issue, #9193:
URL: https://github.com/apache/iceberg/issues/9193
### Apache Iceberg version
1.4.1
### Query engine
Flink
### Please describe the bug
# Background:
We are using the Flink Iceberg sinks to write data to an Iceberg