pvary commented on issue #10147:
URL: https://github.com/apache/iceberg/issues/10147#issuecomment-2061388525
Could it be that the table is partitioned and all of the new data is
targeting a single partition?
If you start the sink with higher writer parallelism, how does the data
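To illustrate the skew pvary is describing, here is a small self-contained sketch (the class and method names are ours, not from Iceberg or the issue). With HASH distribution, records are routed to writer subtasks by the hash of their partition key, so if every record carries the same key, a single subtask receives all the traffic no matter how high the writer parallelism is:

```java
public class HashSkewDemo {
    // Mirrors the modulo-style routing a hash-keyed shuffle performs;
    // the real Flink key-group mapping differs in detail but has the
    // same property: equal keys always land on the same subtask.
    static int subtaskFor(String partitionKey, int parallelism) {
        return Math.floorMod(partitionKey.hashCode(), parallelism);
    }

    public static void main(String[] args) {
        int parallelism = 4;
        // All records target the same (hypothetical) partition value.
        String[] keys = {"2024-04-17", "2024-04-17", "2024-04-17"};
        int[] load = new int[parallelism];
        for (String k : keys) {
            load[subtaskFor(k, parallelism)]++;
        }
        int busy = 0;
        for (int n : load) {
            if (n > 0) busy++;
        }
        // Only one subtask does any work; raising parallelism does not
        // help when every record targets one partition.
        System.out.println("busy subtasks: " + busy); // prints 1
    }
}
```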
sannaroby commented on issue #10147:
URL: https://github.com/apache/iceberg/issues/10147#issuecomment-2058495395
Hi @pvary, thanks for your reply.
We're using the HASH distribution mode, and this is an extract from our Flink
job:
```java
SingleOutputStreamOperator mainFunction =
```
pvary commented on issue #10147:
URL: https://github.com/apache/iceberg/issues/10147#issuecomment-2057528514
@sannaroby: can you share the Sink code?
What distribution mode do you use? Maybe we need a rebalance step before the
writer?
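For reference, the distribution mode and writer parallelism can both be set on the sink builder. This is a sketch only, assuming the standard iceberg-flink `FlinkSink` API; `stream` and `tableLoader` are placeholder names, and the values shown are examples, not a recommendation for this issue:

```java
// Sketch: configure distribution mode and writer parallelism explicitly.
FlinkSink.forRowData(stream)
    .tableLoader(tableLoader)
    // NONE skips the keyed shuffle entirely; HASH clusters by partition key.
    .distributionMode(DistributionMode.NONE)
    // Decouple the writer parallelism from the upstream operator's.
    .writeParallelism(4)
    .append();
```

With HASH and a single hot partition, all records still converge on one writer subtask, which is why the distribution mode choice matters here.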
sannaroby opened a new issue, #10147:
URL: https://github.com/apache/iceberg/issues/10147
### Query engine
Flink
### Question
Hello, I'm using iceberg-flink-1.18-1.5.0.
I've configured the [flink-operator autoscaler