[jira] [Updated] (HUDI-4369) Hudi Kafka Connect Sink writing to GCS bucket

2022-08-20 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-4369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-4369:
-
Sprint: 2022/09/05  (was: 2022/08/22)


[jira] [Updated] (HUDI-4369) Hudi Kafka Connect Sink writing to GCS bucket

2022-09-07 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-4369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-4369:
-
Sprint:   (was: 2022/09/05)


[jira] [Updated] (HUDI-4369) Hudi Kafka Connect Sink writing to GCS bucket

2022-10-01 Thread Zhaojing Yu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-4369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhaojing Yu updated HUDI-4369:
--
Fix Version/s: 0.13.0
   (was: 0.12.1)


[jira] [Updated] (HUDI-4369) Hudi Kafka Connect Sink writing to GCS bucket

2022-08-18 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-4369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-4369:
-
Fix Version/s: 0.12.1


[jira] [Updated] (HUDI-4369) Hudi Kafka Connect Sink writing to GCS bucket

2022-07-07 Thread Vishal Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-4369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishal Agarwal updated HUDI-4369:
-
Description: 
Hi team,

I am trying to use the Hudi sink connector with Kafka Connect to write to a GCS bucket, but I am getting an error about the "gs" file scheme. I have added all the GCS-related properties to core-site.xml and put the corresponding gcs-connector jar on the plugin path, but I am still facing the issue.
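For reference, the scheme registration that the gcs-connector documentation describes for core-site.xml looks roughly like the snippet below (property names are from the connector's docs; whether the Connect worker actually loads this file, and whether the jar is visible to the connector's classloader, are separate questions and likely the real issue here):

```xml
<!-- GCS connector registration in core-site.xml. Property names follow the
     gcs-connector docs; adjust paths/credentials for your deployment. -->
<property>
  <name>fs.gs.impl</name>
  <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem</value>
</property>
<property>
  <name>fs.AbstractFileSystem.gs.impl</name>
  <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS</value>
</property>
```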

A similar issue was already reported for S3 in 
https://issues.apache.org/jira/browse/HUDI-3610, but I have not been able to 
find a resolution there.

Happy to discuss this!

Thanks


*StackTrace-*

%d [%thread] %-5level %logger - %msg%n 
org.apache.hudi.exception.HoodieException: Fatal error instantiating Hudi Write Provider
 at org.apache.hudi.connect.writers.KafkaConnectWriterProvider.<init>(KafkaConnectWriterProvider.java:103) ~[connectors-uber.jar:?]
 at org.apache.hudi.connect.transaction.ConnectTransactionParticipant.<init>(ConnectTransactionParticipant.java:65) ~[connectors-uber.jar:?]
 at org.apache.hudi.connect.HoodieSinkTask.bootstrap(HoodieSinkTask.java:198) [connectors-uber.jar:?]
 at org.apache.hudi.connect.HoodieSinkTask.open(HoodieSinkTask.java:151) [connectors-uber.jar:?]
 at org.apache.kafka.connect.runtime.WorkerSinkTask.openPartitions(WorkerSinkTask.java:587) [connect-runtime-2.4.1.jar:?]
 at org.apache.kafka.connect.runtime.WorkerSinkTask.access$1100(WorkerSinkTask.java:67) [connect-runtime-2.4.1.jar:?]
 at org.apache.kafka.connect.runtime.WorkerSinkTask$HandleRebalance.onPartitionsAssigned(WorkerSinkTask.java:652) [connect-runtime-2.4.1.jar:?]
 at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.invokePartitionsAssigned(ConsumerCoordinator.java:272) [kafka-clients-2.4.1.jar:?]
 at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:400) [kafka-clients-2.4.1.jar:?]
 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:421) [kafka-clients-2.4.1.jar:?]
 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:340) [kafka-clients-2.4.1.jar:?]
 at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:471) [kafka-clients-2.4.1.jar:?]
 at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1267) [kafka-clients-2.4.1.jar:?]
 at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1231) [kafka-clients-2.4.1.jar:?]
 at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211) [kafka-clients-2.4.1.jar:?]
 at org.apache.kafka.connect.runtime.WorkerSinkTask.pollConsumer(WorkerSinkTask.java:444) [connect-runtime-2.4.1.jar:?]
 at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:317) [connect-runtime-2.4.1.jar:?]
 at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224) [connect-runtime-2.4.1.jar:?]
 at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192) [connect-runtime-2.4.1.jar:?]
 at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177) [connect-runtime-2.4.1.jar:?]
 at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227) [connect-runtime-2.4.1.jar:?]
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_331]
 at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_331]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_331]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_331]
 at java.lang.Thread.run(Thread.java:750) [?:1.8.0_331]
Caused by: org.apache.hudi.exception.HoodieIOException: Failed to get instance of org.apache.hadoop.fs.FileSystem
 at org.apache.hudi.common.fs.FSUtils.getFs(FSUtils.java:109) ~[connectors-uber.jar:?]
 at org.apache.hudi.common.fs.FSUtils.getFs(FSUtils.java:100) ~[connectors-uber.jar:?]
 at org.apache.hudi.client.BaseHoodieClient.<init>(BaseHoodieClient.java:69) ~[connectors-uber.jar:?]
 at org.apache.hudi.client.BaseHoodieWriteClient.<init>(BaseHoodieWriteClient.java:175) ~[connectors-uber.jar:?]
 at org.apache.hudi.client.BaseHoodieWriteClient.<init>(BaseHoodieWriteClient.java:160) ~[connectors-uber.jar:?]
 at org.apache.hudi.client.HoodieJavaWriteClient.<init>(HoodieJavaWriteClient.java:55) ~[connectors-uber.jar:?]
 at org.apache.hudi.connect.writers.KafkaConnectWriterProvider.<init>(KafkaConnectWriterProvider.java:101) ~[connectors-uber.jar:?]
 ... 25 more
Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "gs"
 at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3225) ~[connectors-uber.jar:?]
 at org.apache.hadoop.fs.FileSys
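The innermost cause, `No FileSystem for scheme "gs"`, comes from Hadoop's scheme-to-implementation lookup: the URI scheme is mapped to a FileSystem class either via an `fs.SCHEME.impl` property or via ServiceLoader discovery, and the exception is thrown when no mapping is found. As a rough illustration only (plain Python, not Hadoop's actual code; all names are made up for the sketch), the failure mode is essentially a missing registry entry:

```python
# Illustrative sketch of scheme-based FileSystem resolution (NOT Hadoop's code).
# A registry maps URI schemes to implementation class names, the way properties
# like fs.gs.impl in core-site.xml would; an unregistered scheme fails lookup.
from urllib.parse import urlparse

FS_REGISTRY = {
    "hdfs": "org.apache.hadoop.hdfs.DistributedFileSystem",
    "file": "org.apache.hadoop.fs.LocalFileSystem",
}

def get_file_system(uri: str) -> str:
    """Resolve the implementation class for a URI's scheme, or fail."""
    scheme = urlparse(uri).scheme
    impl = FS_REGISTRY.get(scheme)
    if impl is None:
        raise RuntimeError(f'No FileSystem for scheme "{scheme}"')
    return impl

# Registering the GCS implementation (the effect of fs.gs.impl) makes the
# lookup succeed; without it, gs:// URIs raise the error above.
FS_REGISTRY["gs"] = "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem"
print(get_file_system("gs://my-bucket/hudi-table"))
```

In the real deployment, the registration can still fail even with core-site.xml in place if the gcs-connector jar is not visible to the classloader that instantiates the Hudi write client inside the Connect plugin, which may be why the reporter sees the error despite configuring everything.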

[jira] [Updated] (HUDI-4369) Hudi Kafka Connect Sink writing to GCS bucket

2023-05-22 Thread Yue Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-4369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yue Zhang updated HUDI-4369:

Fix Version/s: 0.14.0
   (was: 0.13.1)


[jira] [Updated] (HUDI-4369) Hudi Kafka Connect Sink writing to GCS bucket

2023-10-04 Thread Prashant Wason (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-4369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prashant Wason updated HUDI-4369:
-
Fix Version/s: 0.14.1
   (was: 0.14.0)
