Hi all, we currently have a Flink job that retrieves JSONL data from GCS and
writes it to Iceberg tables. We are using Flink 1.13.2 and things are working fine.

We now have to fan out that same data into 100 different sinks: Iceberg
tables on S3. There will be 100 buckets, and the data needs to be written to
each of these 100 buckets.

We are planning to add a new job that writes to one sink at a time, launched
once per target bucket. Is there a more optimal approach possible in Flink to
support this use case of 100 different sinks?
