nbali opened a new issue, #22951:
URL: https://github.com/apache/beam/issues/22951

   ### What happened?
   
   BigQueryIO.Write.withAutoSharding() uses GroupIntoBatches.withShardedKey(), 
which shards on 'workerUuid' and 'threadId'. As I understand it, the problem is 
that the Kafka consumer in KafkaIO.Read most likely reads a single partition 
without any parallelism, on the same worker and the same thread, because the 
offset forces a FIFO read order. This effectively means that .withShardedKey() 
has no effect whatsoever.
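
   For context, a minimal sketch of the pipeline shape in question (the topic, 
table, and TableRow conversion are illustrative placeholders, not taken from 
the report; FILE_LOADS with a triggering frequency is assumed here, since the 
FILE_TRIGGERING_* constants mentioned below belong to that path):

```java
import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TypeDescriptor;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.joda.time.Duration;

public class AutoShardingRepro {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    p.apply("ReadKafka", KafkaIO.<String, String>read()
            .withBootstrapServers("broker:9092")            // placeholder
            .withTopic("events")                            // few partitions => few reader threads
            .withKeyDeserializer(StringDeserializer.class)
            .withValueDeserializer(StringDeserializer.class)
            .withoutMetadata())
        // Each partition is consumed in FIFO order on one worker thread, so all
        // downstream elements of that partition share one (workerUuid, threadId).
        .apply("ToTableRow", MapElements.into(TypeDescriptor.of(TableRow.class))
            .via((KV<String, String> kv) ->
                new TableRow().set("key", kv.getKey()).set("value", kv.getValue())))
        .apply("WriteBQ", BigQueryIO.writeTableRows()
            .to("project:dataset.table")                    // placeholder
            .withMethod(BigQueryIO.Write.Method.FILE_LOADS)
            .withTriggeringFrequency(Duration.standardMinutes(1))
            .withAutoSharding()                             // delegates to GroupIntoBatches.withShardedKey()
            .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER)
            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));

    p.run();
  }
}
```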
   
   Although there is a 'FILE_TRIGGERING_BATCHING_DURATION' of '1s' and a 
'FILE_TRIGGERING_RECORD_COUNT' of '500000', and both trigger the grouping, 
anything under 500k elements and under 1s is still fired at once. With 
sufficiently high throughput, or with 'outputWithTimestamp', it is entirely 
possible to accumulate 500k elements within a single second; at an average 
element size of 2 KB, for example, that is roughly 1 GB buffered by a single 
thread. This could result in an OOME.
   
   We should also have a byte-size limit, not only time and count limits, as 
sketched below.
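
   To illustrate, GroupIntoBatches in the core SDK already exposes a byte-size 
cap; below is a sketch of a batching configuration that limits on count, bytes, 
and time at once, assuming a Beam release where ofSize, withByteSize, and 
withMaxBufferingDuration are all available (the 100 MB cap and the String 
weigher are illustrative, not a concrete proposal for the constant's value):

```java
import java.nio.charset.StandardCharsets;
import org.apache.beam.sdk.transforms.GroupIntoBatches;
import org.apache.beam.sdk.transforms.SerializableFunction;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

class SizeCappedBatching {
  // Hypothetical weigher: the cost of one element in bytes.
  private static final SerializableFunction<String, Long> WEIGHER =
      (String s) -> (long) s.getBytes(StandardCharsets.UTF_8).length;

  static PCollection<KV<String, Iterable<String>>> batch(
      PCollection<KV<String, String>> input) {
    return input.apply("SizeCappedBatches",
        GroupIntoBatches.<String, String>ofSize(500_000)           // existing count trigger
            .withByteSize(100L * 1024 * 1024, WEIGHER)             // proposed byte cap
            .withMaxBufferingDuration(Duration.standardSeconds(1)) // existing 1s trigger
    );
  }
}
```

   A batch fires as soon as any one of the three limits is reached, so the byte 
cap would bound worst-case memory even when the count and duration triggers 
have not fired yet.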
   
   ### Issue Priority
   
   Priority: 2
   
   ### Issue Component
   
   Component: io-java-gcp

