[ https://issues.apache.org/jira/browse/HUDI-4970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sagar Sumit updated HUDI-4970:
------------------------------
    Epic Link: HUDI-3529

> hudi-kafka-connect-bundle: Could not initialize class org.apache.hadoop.security.UserGroupInformation
> -----------------------------------------------------------------------------------------------------
>
>                 Key: HUDI-4970
>                 URL: https://issues.apache.org/jira/browse/HUDI-4970
>             Project: Apache Hudi
>          Issue Type: Bug
>            Reporter: Sagar Sumit
>            Priority: Critical
>             Fix For: 0.13.0
>
>
> The Kafka Connect sink loads successfully but fails to sync the Hudi table due to a NoClassDefFoundError: Could not initialize class org.apache.hadoop.security.UserGroupInformation
> {code:java}
> [2022-10-03 14:31:49,872] INFO The value of hoodie.datasource.write.keygenerator.type is empty, using SIMPLE (org.apache.hudi.keygen.factory.HoodieAvroKeyGeneratorFactory:63)
> [2022-10-03 14:31:49,872] INFO Setting record key volume and partition fields date for table file:///tmp/hoodie/hudi-test-topichudi-test-topic (org.apache.hudi.connect.writers.KafkaConnectTransactionServices:93)
> [2022-10-03 14:31:49,872] INFO Initializing file:///tmp/hoodie/hudi-test-topic as hoodie table file:///tmp/hoodie/hudi-test-topic (org.apache.hudi.common.table.HoodieTableMetaClient:424)
> [2022-10-03 14:31:49,872] INFO Existing partitions deleted [hudi-test-topic-0] (org.apache.hudi.connect.HoodieSinkTask:156)
> [2022-10-03 14:31:49,872] ERROR WorkerSinkTask{id=hudi-sink-3} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:184)
> java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.security.UserGroupInformation
> 	at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:3431)
> 	at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:3421)
> 	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3263)
> 	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:475)
> 	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
> 	at org.apache.hudi.common.fs.FSUtils.getFs(FSUtils.java:110)
> 	at org.apache.hudi.common.fs.FSUtils.getFs(FSUtils.java:103)
> 	at org.apache.hudi.common.table.HoodieTableMetaClient.initTableAndGetMetaClient(HoodieTableMetaClient.java:426)
> 	at org.apache.hudi.common.table.HoodieTableMetaClient$PropertyBuilder.initTable(HoodieTableMetaClient.java:1110)
> 	at org.apache.hudi.connect.writers.KafkaConnectTransactionServices.<init>(KafkaConnectTransactionServices.java:104)
> 	at org.apache.hudi.connect.transaction.ConnectTransactionCoordinator.<init>(ConnectTransactionCoordinator.java:88)
> 	at org.apache.hudi.connect.HoodieSinkTask.bootstrap(HoodieSinkTask.java:191)
> 	at org.apache.hudi.connect.HoodieSinkTask.open(HoodieSinkTask.java:151)
> 	at org.apache.kafka.connect.runtime.WorkerSinkTask.openPartitions(WorkerSinkTask.java:635)
> 	at org.apache.kafka.connect.runtime.WorkerSinkTask.access$1000(WorkerSinkTask.java:71)
> {code}
> Follow [https://github.com/apache/hudi/tree/master/hudi-kafka-connect#readme] to reproduce.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
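For context on the failing path: the stack trace shows the sink task bootstrapping its transaction coordinator, which resolves the table's Hadoop `FileSystem` via `FSUtils.getFs`; the `FileSystem` cache key constructor is what statically initializes `org.apache.hadoop.security.UserGroupInformation`, so any sink config pointing at a table path triggers the error. A minimal sink properties sketch that exercises this path is below; the property keys and the `HoodieSinkConnector` class name are assumptions reconstructed from the README quickstart and the log above, not taken verbatim from this issue:

```properties
# Hypothetical hudi-sink.properties (keys assumed from the hudi-kafka-connect
# README quickstart; values taken from the log output in this issue).
name=hudi-sink
connector.class=org.apache.hudi.connect.HoodieSinkConnector
tasks.max=4
topics=hudi-test-topic

# Local table path seen in the log. Resolving this path calls
# Path.getFileSystem -> FileSystem.get, whose cache-key construction
# statically initializes org.apache.hadoop.security.UserGroupInformation --
# the point where the NoClassDefFoundError surfaces.
target.base.path=file:///tmp/hoodie/hudi-test-topic
target.table.name=hudi-test-topic

# Record key and partition field reported by KafkaConnectTransactionServices
# ("Setting record key volume and partition fields date").
hoodie.datasource.write.recordkey.field=volume
hoodie.datasource.write.partitionpath.field=date
```

Since `UserGroupInformation` fails in a static initializer rather than with a plain `ClassNotFoundException`, the class is on the classpath but its own dependencies are not, which points at how the hudi-kafka-connect-bundle shades or excludes Hadoop jars.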