I solved this problem by removing the Hadoop classpath from the Flink cluster deployment, e.g. by making sure HADOOP_CLASSPATH is not exported when the cluster starts. With Hadoop also on the cluster classpath, the S3A filesystem classes can be loaded twice, once by the cluster classloader and once by the flink-s3-fs-hadoop plugin's classloader, which would explain both the ClassCastException and the duplicate S3AMetrics registration you are seeing.
On 2022/04/28 09:04:50 Terry Heathcote wrote:
> Hi
>
> We are running a Flink job that delivers Kafka data to an Iceberg table.
> The job uses the org.apache.iceberg.flink.CatalogLoader and
> org.apache.iceberg.flink.TableLoader interfaces in combination with
> org.apache.iceberg.flink.sink.FlinkSink, where the catalog type is Hive.
>
> We have had success running multiple jobs that write to tables stored in
> the same s3 bucket, but recently, when attempting to write to tables
> stored in separate s3 buckets, we have run into issues. The first jobs
> submitted to the cluster run fine; however, when submitting further jobs
> for sink tables with the same name (in separate database schemas and s3
> buckets), we run into a ClassCastException as well as an
> org.apache.hadoop.metrics2.MetricsException stating: Metrics source
> S3AMetrics{bucket-name} already exists!
>
> Attached are both the error logs and the main code snippets and pom files
> for better context. Any help would be greatly appreciated.
>
> The Flink cluster version is 1.12.7 and we have enabled the
> flink-s3-fs-hadoop jar plugin so as to be able to write to s3 files.
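For reference, below is a minimal sketch of the kind of job the quoted message describes: a Hive-backed CatalogLoader and a TableLoader driving FlinkSink. This is not the poster's attached code; the catalog name, metastore URI, warehouse bucket, and table identifier are hypothetical placeholders, a trivial in-memory source stands in for the Kafka source, and it assumes an Iceberg Flink runtime recent enough to offer FlinkSink.Builder#append().

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.data.GenericRowData;
    import org.apache.flink.table.data.RowData;
    import org.apache.flink.table.data.StringData;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.iceberg.catalog.TableIdentifier;
    import org.apache.iceberg.flink.CatalogLoader;
    import org.apache.iceberg.flink.TableLoader;
    import org.apache.iceberg.flink.sink.FlinkSink;

    public class IcebergHiveSinkSketch {

      public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Iceberg commits data files on Flink checkpoints, so checkpointing must be enabled.
        env.enableCheckpointing(30_000);

        // Stand-in for the Kafka source in the original job.
        DataStream<RowData> rows =
            env.fromElements((RowData) GenericRowData.of(StringData.fromString("example")));

        // Hive catalog; the metastore URI and warehouse bucket are hypothetical.
        Map<String, String> props = new HashMap<>();
        props.put("uri", "thrift://hive-metastore:9083");
        props.put("warehouse", "s3a://bucket-a/warehouse");
        CatalogLoader catalogLoader =
            CatalogLoader.hive("hive_catalog", new Configuration(), props);

        // Each job points its TableLoader at its own database schema and bucket.
        TableLoader tableLoader =
            TableLoader.fromCatalog(catalogLoader, TableIdentifier.of("db_a", "sink_table"));

        FlinkSink.forRowData(rows)
            .tableLoader(tableLoader)
            .append();

        env.execute("iceberg-hive-sink-sketch");
      }
    }

Nothing in this per-job code conflicts across jobs by itself; the clash described above comes from how the S3A filesystem classes are loaded on the cluster, which is why the classpath change resolves it.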