Machos65 opened a new issue, #13245:
URL: https://github.com/apache/hudi/issues/13245

   **_Tips before filing an issue_**
   
   - Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
   
   - Join the mailing list to engage in conversations and get faster support at 
[email protected].
   
   - If you have triaged this as a bug, then file an 
[issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
   
   **Describe the problem you faced**
   
   I am ingesting data from Kafka topics produced by my Postgres Debezium connector. Ingesting a single table works, but when I try to ingest multiple tables with HoodieMultiTableStreamer it fails with a "no table config found" error:
   
   Exception in thread "main" java.lang.IllegalArgumentException: Please 
provide valid table config file path!
           at 
org.apache.hudi.utilities.streamer.HoodieMultiTableStreamer.checkIfTableConfigFileExists(HoodieMultiTableStreamer.java:109)
           at 
org.apache.hudi.utilities.streamer.HoodieMultiTableStreamer.populateTableExecutionContextList(HoodieMultiTableStreamer.java:132)
           at 
org.apache.hudi.utilities.streamer.HoodieMultiTableStreamer.<init>(HoodieMultiTableStreamer.java:94)
           at 
org.apache.hudi.utilities.streamer.HoodieMultiTableStreamer.main(HoodieMultiTableStreamer.java:281)
           at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
           at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
           at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
           at java.base/java.lang.reflect.Method.invoke(Method.java:569)
           at 
org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
           at 
org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:1034)
           at 
org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:199)
           at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:222)
           at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:91)
           at 
org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1125)
           at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1134)
           at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   25/05/01 21:08:39 INFO ShutdownHookManager: Shutdown hook called
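   From the stack trace, the exception is thrown by `HoodieMultiTableStreamer.checkIfTableConfigFileExists` while the per-table execution contexts are being built. A rough Python sketch of what that resolution plausibly looks like (the property-key format and the default file name below are assumptions for illustration, not Hudi's verified behavior):

   ```python
   import os

   def resolve_table_config_path(props, config_folder, table):
       """Sketch of how a per-table config path might be resolved.

       Assumed behavior (not verified against the Hudi source): an explicit
       'hoodie.streamer.ingestion.<table>.configFile' entry wins; otherwise a
       default file name under --config-folder is tried.
       """
       explicit = props.get(f"hoodie.streamer.ingestion.{table}.configFile")
       if explicit:
           return explicit
       # hypothetical default naming convention under the config folder
       return os.path.join(config_folder, f"{table}_config.properties")

   def check_table_config_exists(path):
       # mirrors the failing guard: raise if the resolved path is not a local file
       if not os.path.isfile(path):
           raise ValueError("Please provide valid table config file path!")
       return path
   ```

   If the explicit `configFile` keys are not being picked up for some reason (for example, a key-format mismatch), the streamer would fall back to a path that doesn't exist and fail exactly like this.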
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   1. My Debezium connector is running and Kafka is producing the topics as expected:
   /opt/spark#  kafka-topics --bootstrap-server localhost:9092 --list
   SLF4J: Class path contains multiple SLF4J bindings.
   SLF4J: Found binding in 
[jar:file:/opt/confluent/share/java/kafka-rest-bin/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
   SLF4J: Found binding in 
[jar:file:/opt/confluent/share/java/kafka/slf4j-simple-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
   SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation.
   SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]
   __consumer_offsets
   _confluent-command
   _confluent-metrics
   _confluent-telemetry-metrics
   _confluent_balancer_api_state
   _dek_registry_keys
   _schema_encoders
   _schemas
   connect-configs
   connect-offsets
   connect-status
   dbserver1.public.categories
   dbserver1.public.customers
   dbserver1.public.employees
   dbserver1.public.orders
   dbserver1.public.products
   dbserver1.public.suppliers
   
   2. I created a config file with the Spark, Hive, and MinIO settings:
   
   # cat spark-config.properties
   spark.serializer=org.apache.spark.serializer.KryoSerializer
   
spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog
   spark.sql.hive.convertMetastoreParquet=false
   
   spark.hadoop.fs.s3a.access.key=minios
   spark.hadoop.fs.s3a.secret.key=xxxxxxxx
   spark.hadoop.fs.s3a.endpoint=http://127.0.0.1:9000
   spark.hadoop.fs.s3a.path.style.access=true
   fs.s3a.signing-algorithm=S3SignerType
   
   spark.sql.catalogImplementation=hive
   spark.hadoop.hive.metastore.uris=thrift://localhost:9083
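   One detail worth noting in this file: `fs.s3a.signing-algorithm` has no `spark.` prefix, so spark-submit drops it when reading `--properties-file` (the run further below logs `Warning: Ignoring non-Spark config property: fs.s3a.signing-algorithm`). A small sketch mimicking that observed filtering behavior:

   ```python
   def split_spark_properties(lines):
       """Partition a properties file the way spark-submit appears to treat
       --properties-file: only keys starting with 'spark.' are applied,
       everything else is ignored with a warning. Sketch of observed
       behavior, not Spark's actual implementation."""
       applied, ignored = {}, []
       for raw in lines:
           line = raw.strip()
           if not line or line.startswith("#"):
               continue
           key, _, value = line.partition("=")
           if key.startswith("spark."):
               applied[key] = value
           else:
               ignored.append(key)
       return applied, ignored

   applied, ignored = split_spark_properties([
       "spark.hadoop.fs.s3a.path.style.access=true",
       "fs.s3a.signing-algorithm=S3SignerType",  # missing spark.hadoop. prefix
   ])
   ```

   Renaming the key to `spark.hadoop.fs.s3a.signing-algorithm=S3SignerType` should let it reach the Hadoop configuration.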
   
   3. I created the source properties file listing the two tables I want to ingest:
   cat configfolder/source.properties
   # Ingest Multiple Tables
   hoodie.streamer.ingestion.tablesToBeIngested=customers,employees
   
hoodie.streamer.ingestion.customers.configFile=/opt/spark/configfolder/customers_hudi_tbl.properties
   
hoodie.streamer.ingestion.employees.configFile=/opt/spark/configfolder/employees_hudi_tbl.properties
   
   # Common configs
   bootstrap.servers=localhost:9092
   schema.registry.url=http://localhost:8082
   
hoodie.streamer.source.kafka.value.deserializer.class=io.confluent.kafka.serializers.KafkaAvroDeserializer
   
   # hive sync
   
hoodie.datasource.hive_sync.partition_extractor_class=org.apache.hudi.hive.MultiPartKeysValueExtractor
   hoodie.datasource.hive_sync.metastore.uris=thrift://localhost:9083
   hoodie.datasource.hive_sync.mode=hms
   hoodie.datasource.hive_sync.enable=true
   hoodie.datasource.write.hive_style_partitioning=true
   
   ---------------------------------------
   cat configfolder/employees_hudi_tbl.properties
   hoodie.datasource.write.recordkey.field=employee_id
   hoodie.datasource.write.partitionpath.field=""
   hoodie.streamer.write.precombine.field=ts_ms
   hoodie.streamer.source.kafka.topic=dbserver1.public.employees
   
hoodie.streamer.schemaprovider.registry.url=http://localhost:8082/subjects/dbserver1.public.employees-value/versions/latest
   auto.offset.reset=earliest
   hoodie.metadata.enable=true
   hoodie.metadata.index.async=true
   
   # Hive sync conf
   hoodie.datasource.hive_sync.database=default
   hoodie.datasource.hive_sync.table=employees
   
   --------------------------
   
   cat configfolder/customers_hudi_tbl.properties
   hoodie.datasource.write.recordkey.field=customer_id
   hoodie.datasource.write.partitionpath.field=""
   hoodie.streamer.write.precombine.field=ts_ms
   hoodie.streamer.source.kafka.topic=dbserver1.public.customers
   
hoodie.streamer.schemaprovider.registry.url=http://localhost:8082/subjects/dbserver1.public.customers-value/versions/latest
   
hoodie.datasource.write.keygenerator.class=org.apache.hudi.keygen.SimpleKeyGenerator
   auto.offset.reset=earliest
   hoodie.metadata.enable=true
   hoodie.metadata.index.async=true
   
   # Hive sync conf
   hoodie.datasource.hive_sync.database=default
   hoodie.datasource.hive_sync.table=customers
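   Before submitting, a quick local pre-flight check that every `*.configFile` entry in source.properties points at an existing file can save a round-trip through spark-submit (plain Python, no Hudi involved; the demo paths are made up):

   ```python
   import os
   import tempfile

   def preflight_check(source_props_path):
       """Return the '*.configFile' values in a java-style properties file
       that do not exist as local files (simple pre-flight, no Hudi)."""
       missing = []
       with open(source_props_path) as f:
           for raw in f:
               line = raw.strip()
               if not line or line.startswith("#"):
                   continue
               key, _, value = line.partition("=")
               if key.endswith(".configFile") and not os.path.isfile(value):
                   missing.append(value)
       return missing

   # Tiny self-contained demo: one referenced file exists, one does not.
   tmpdir = tempfile.mkdtemp()
   existing = os.path.join(tmpdir, "customers_hudi_tbl.properties")
   open(existing, "w").close()
   source = os.path.join(tmpdir, "source.properties")
   with open(source, "w") as f:
       f.write(f"hoodie.streamer.ingestion.customers.configFile={existing}\n")
       f.write("hoodie.streamer.ingestion.employees.configFile=/nope/employees.properties\n")

   missing = preflight_check(source)
   ```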
   
   
   4. Then I run my submit command:
   spark-submit \
       --class  
org.apache.hudi.utilities.deltastreamer.HoodieMultiTableDeltaStreamer \
       --packages 
'org.apache.hudi:hudi-spark3.5-bundle_2.13:0.15.0,org.apache.hadoop:hadoop-aws:3.3.2'
 \
       --properties-file /opt/spark/spark-config.properties \
       --master 'local[*]' \
       --executor-memory 1g \
       /opt/spark/jars/hudi-utilities-bundle_2.13-0.15.0.jar \
       --op UPSERT \
       --enable-hive-sync \
       --table-type COPY_ON_WRITE \
       --source-ordering-field ts_ms \
       --source-class 
org.apache.hudi.utilities.sources.debezium.PostgresDebeziumSource \
       --payload-class 
org.apache.hudi.common.model.debezium.PostgresDebeziumAvroPayload \
       --base-path-prefix s3a://warehouse \
       --target-table customers,employees \
       --config-folder file:///opt/spark/configfolder/ \
       --props file:///opt/spark/configfolder/source.properties
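   One easy-to-miss detail with `--config-folder` and `--props`: a `file` URI needs three slashes before an absolute path (scheme, empty authority, then the path). With only two slashes, the first path segment is parsed as a host name and the path loses its leading directory, as this small check shows:

   ```python
   from urllib.parse import urlparse

   # Malformed: only two slashes, so 'opt' becomes the authority (host).
   bad = urlparse("file://opt/spark/configfolder/")

   # Correct: three slashes give an empty authority and the full absolute path.
   good = urlparse("file:///opt/spark/configfolder/")
   ```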
   
   **Expected behavior**
   
   Both tables (customers and employees) are ingested into Hudi tables under s3a://warehouse and synced to the Hive metastore, just as the single-table run was.
   
   **Environment Description**
   
   * Hudi version: 0.15.0
   
   * Spark version: 3.5.5
   
   * Hive version: 3.1.3
   
   * Hadoop version: 3.3.6
   
   * Storage (HDFS/S3/GCS..): S3
   
   * Running on Docker? (yes/no): no
   
   
   
   **Stacktrace**
   
   /opt/spark# spark-submit     --class 
org.apache.hudi.utilities.deltastreamer.HoodieMultiTableDeltaStreamer     
--packages 
'org.apache.hudi:hudi-spark3.5-bundle_2.13:0.15.0,org.apache.hadoop:hadoop-aws:3.3.2'
     --properties-file /opt/spark/spark-config.properties     --master 
'local[*]'     --executor-memory 1g     
/opt/spark/jars/hudi-utilities-bundle_2.13-0.15.0.jar     --op UPSERT     
--enable-hive-sync     --table-type COPY_ON_WRITE     --source-ordering-field 
ts_ms     --source-class 
org.apache.hudi.utilities.sources.debezium.PostgresDebeziumSource     
--payload-class 
org.apache.hudi.common.model.debezium.PostgresDebeziumAvroPayload     
--base-path-prefix s3a://warehouse     --target-table customers,employees     
--config-folder file:///opt/spark/configfolder     --props 
file:///opt/spark/configfolder/source.properties
   Warning: Ignoring non-Spark config property: fs.s3a.signing-algorithm
   25/05/01 21:22:53 WARN Utils: Your hostname, ICTWKS-FP002 resolves to a 
loopback address: 127.0.1.1; using 10.255.255.254 instead (on interface lo)
   25/05/01 21:22:53 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to 
another address
   :: loading settings :: url = 
jar:file:/opt/spark/jars/ivy-2.5.1.jar!/org/apache/ivy/core/settings/ivysettings.xml
   Ivy Default Cache set to: /root/.ivy2/cache
   The jars for the packages stored in: /root/.ivy2/jars
   org.apache.hudi#hudi-spark3.5-bundle_2.13 added as a dependency
   org.apache.hadoop#hadoop-aws added as a dependency
   :: resolving dependencies :: 
org.apache.spark#spark-submit-parent-12501a23-4f84-4307-a7c5-3ab16c9852c0;1.0
           confs: [default]
           found org.apache.hudi#hudi-spark3.5-bundle_2.13;0.15.0 in central
           found org.apache.hive#hive-storage-api;2.8.1 in central
           found org.slf4j#slf4j-api;1.7.36 in central
           found org.apache.hadoop#hadoop-aws;3.3.2 in central
           found com.amazonaws#aws-java-sdk-bundle;1.11.1026 in central
           found org.wildfly.openssl#wildfly-openssl;1.0.7.Final in central
   :: resolution report :: resolve 383ms :: artifacts dl 41ms
           :: modules in use:
           com.amazonaws#aws-java-sdk-bundle;1.11.1026 from central in [default]
           org.apache.hadoop#hadoop-aws;3.3.2 from central in [default]
           org.apache.hive#hive-storage-api;2.8.1 from central in [default]
           org.apache.hudi#hudi-spark3.5-bundle_2.13;0.15.0 from central in 
[default]
           org.slf4j#slf4j-api;1.7.36 from central in [default]
           org.wildfly.openssl#wildfly-openssl;1.0.7.Final from central in 
[default]
           ---------------------------------------------------------------------
           |                  |            modules            ||   artifacts   |
           |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
           ---------------------------------------------------------------------
           |      default     |   6   |   0   |   0   |   0   ||   6   |   0   |
           ---------------------------------------------------------------------
   :: retrieving :: 
org.apache.spark#spark-submit-parent-12501a23-4f84-4307-a7c5-3ab16c9852c0
           confs: [default]
           0 artifacts copied, 6 already retrieved (0kB/10ms)
   25/05/01 21:22:54 WARN NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
   25/05/01 21:22:54 WARN HoodieMultiTableStreamer: --enable-hive-sync will be 
deprecated in a future release; please use --enable-sync instead for Hive 
syncing
   25/05/01 21:22:54 WARN HoodieMultiTableStreamer: --target-table is 
deprecated and will be removed in a future release due to it's useless; please 
use hoodie.streamer.ingestion.tablesToBeIngested to configure multiple target 
tables
   25/05/01 21:22:55 INFO SparkContext: Running Spark version 3.5.5
   25/05/01 21:22:55 INFO SparkContext: OS info Linux, 
5.15.167.4-microsoft-standard-WSL2, amd64
   25/05/01 21:22:55 INFO SparkContext: Java version 17.0.14
   25/05/01 21:22:55 INFO ResourceUtils: 
==============================================================
   25/05/01 21:22:55 INFO ResourceUtils: No custom resources configured for 
spark.driver.
   25/05/01 21:22:55 INFO ResourceUtils: 
==============================================================
   25/05/01 21:22:55 INFO SparkContext: Submitted application: 
multi-table-streamer
   25/05/01 21:22:55 INFO ResourceProfile: Default ResourceProfile created, 
executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , 
memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: 
offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: 
cpus, amount: 1.0)
   25/05/01 21:22:55 INFO ResourceProfile: Limiting resource is cpu
   25/05/01 21:22:55 INFO ResourceProfileManager: Added ResourceProfile id: 0
   25/05/01 21:22:55 INFO SecurityManager: Changing view acls to: root
   25/05/01 21:22:55 INFO SecurityManager: Changing modify acls to: root
   25/05/01 21:22:55 INFO SecurityManager: Changing view acls groups to:
   25/05/01 21:22:55 INFO SecurityManager: Changing modify acls groups to:
   25/05/01 21:22:55 INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users with view permissions: root; groups with view 
permissions: EMPTY; users with modify permissions: root; groups with modify 
permissions: EMPTY
   25/05/01 21:22:55 INFO deprecation: mapred.output.compression.type is 
deprecated. Instead, use mapreduce.output.fileoutputformat.compress.type
   25/05/01 21:22:55 INFO deprecation: mapred.output.compress is deprecated. 
Instead, use mapreduce.output.fileoutputformat.compress
   25/05/01 21:22:55 INFO deprecation: mapred.output.compression.codec is 
deprecated. Instead, use mapreduce.output.fileoutputformat.compress.codec
   25/05/01 21:22:55 INFO Utils: Successfully started service 'sparkDriver' on 
port 33105.
   25/05/01 21:22:55 INFO SparkEnv: Registering MapOutputTracker
   25/05/01 21:22:55 INFO SparkEnv: Registering BlockManagerMaster
   25/05/01 21:22:55 INFO BlockManagerMasterEndpoint: Using 
org.apache.spark.storage.DefaultTopologyMapper for getting topology information
   25/05/01 21:22:55 INFO BlockManagerMasterEndpoint: 
BlockManagerMasterEndpoint up
   25/05/01 21:22:55 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
   25/05/01 21:22:55 INFO DiskBlockManager: Created local directory at 
/tmp/blockmgr-93c89a1b-7638-4989-9bc6-b7227e5321b9
   25/05/01 21:22:55 INFO MemoryStore: MemoryStore started with capacity 434.4 
MiB
   25/05/01 21:22:56 INFO SparkEnv: Registering OutputCommitCoordinator
   25/05/01 21:22:56 INFO JettyUtils: Start Jetty 0.0.0.0:8090 for SparkUI
   25/05/01 21:22:56 WARN Utils: Service 'SparkUI' could not bind on port 8090. 
Attempting port 8091.
   25/05/01 21:22:56 INFO Utils: Successfully started service 'SparkUI' on port 
8091.
   25/05/01 21:22:56 INFO SparkContext: Added JAR 
file:///root/.ivy2/jars/org.apache.hudi_hudi-spark3.5-bundle_2.13-0.15.0.jar at 
spark://10.255.255.254:33105/jars/org.apache.hudi_hudi-spark3.5-bundle_2.13-0.15.0.jar
 with timestamp 1746123774994
   25/05/01 21:22:56 INFO SparkContext: Added JAR 
file:///root/.ivy2/jars/org.apache.hadoop_hadoop-aws-3.3.2.jar at 
spark://10.255.255.254:33105/jars/org.apache.hadoop_hadoop-aws-3.3.2.jar with 
timestamp 1746123774994
   25/05/01 21:22:56 INFO SparkContext: Added JAR 
file:///root/.ivy2/jars/org.apache.hive_hive-storage-api-2.8.1.jar at 
spark://10.255.255.254:33105/jars/org.apache.hive_hive-storage-api-2.8.1.jar 
with timestamp 1746123774994
   25/05/01 21:22:56 INFO SparkContext: Added JAR 
file:///root/.ivy2/jars/org.slf4j_slf4j-api-1.7.36.jar at 
spark://10.255.255.254:33105/jars/org.slf4j_slf4j-api-1.7.36.jar with timestamp 
1746123774994
   25/05/01 21:22:56 INFO SparkContext: Added JAR 
file:///root/.ivy2/jars/com.amazonaws_aws-java-sdk-bundle-1.11.1026.jar at 
spark://10.255.255.254:33105/jars/com.amazonaws_aws-java-sdk-bundle-1.11.1026.jar
 with timestamp 1746123774994
   25/05/01 21:22:56 INFO SparkContext: Added JAR 
file:///root/.ivy2/jars/org.wildfly.openssl_wildfly-openssl-1.0.7.Final.jar at 
spark://10.255.255.254:33105/jars/org.wildfly.openssl_wildfly-openssl-1.0.7.Final.jar
 with timestamp 1746123774994
   25/05/01 21:22:56 INFO SparkContext: Added JAR 
file:/opt/spark/jars/hudi-utilities-bundle_2.13-0.15.0.jar at 
spark://10.255.255.254:33105/jars/hudi-utilities-bundle_2.13-0.15.0.jar with 
timestamp 1746123774994
   25/05/01 21:22:56 INFO Executor: Starting executor ID driver on host 
10.255.255.254
   25/05/01 21:22:56 INFO Executor: OS info Linux, 
5.15.167.4-microsoft-standard-WSL2, amd64
   25/05/01 21:22:56 INFO Executor: Java version 17.0.14
   25/05/01 21:22:56 INFO Executor: Starting executor with user classpath 
(userClassPathFirst = false): ''
   25/05/01 21:22:56 INFO Executor: Created or updated repl class loader 
org.apache.spark.util.MutableURLClassLoader@6f31df32 for default.
   25/05/01 21:22:56 INFO Executor: Fetching 
spark://10.255.255.254:33105/jars/org.apache.hadoop_hadoop-aws-3.3.2.jar with 
timestamp 1746123774994
   25/05/01 21:22:56 INFO TransportClientFactory: Successfully created 
connection to /10.255.255.254:33105 after 29 ms (0 ms spent in bootstraps)
   25/05/01 21:22:56 INFO Utils: Fetching 
spark://10.255.255.254:33105/jars/org.apache.hadoop_hadoop-aws-3.3.2.jar to 
/tmp/spark-75ccc320-b7d0-4a0a-a73b-383822b5706d/userFiles-4109e468-ce0e-46bd-89f9-045e51332557/fetchFileTemp3385563798097687100.tmp
   25/05/01 21:22:56 INFO Executor: Adding 
file:/tmp/spark-75ccc320-b7d0-4a0a-a73b-383822b5706d/userFiles-4109e468-ce0e-46bd-89f9-045e51332557/org.apache.hadoop_hadoop-aws-3.3.2.jar
 to class loader default
   25/05/01 21:22:56 INFO Executor: Fetching 
spark://10.255.255.254:33105/jars/org.wildfly.openssl_wildfly-openssl-1.0.7.Final.jar
 with timestamp 1746123774994
   25/05/01 21:22:56 INFO Utils: Fetching 
spark://10.255.255.254:33105/jars/org.wildfly.openssl_wildfly-openssl-1.0.7.Final.jar
 to 
/tmp/spark-75ccc320-b7d0-4a0a-a73b-383822b5706d/userFiles-4109e468-ce0e-46bd-89f9-045e51332557/fetchFileTemp15736072333126868795.tmp
   25/05/01 21:22:56 INFO Executor: Adding 
file:/tmp/spark-75ccc320-b7d0-4a0a-a73b-383822b5706d/userFiles-4109e468-ce0e-46bd-89f9-045e51332557/org.wildfly.openssl_wildfly-openssl-1.0.7.Final.jar
 to class loader default
   25/05/01 21:22:56 INFO Executor: Fetching 
spark://10.255.255.254:33105/jars/org.apache.hive_hive-storage-api-2.8.1.jar 
with timestamp 1746123774994
   25/05/01 21:22:56 INFO Utils: Fetching 
spark://10.255.255.254:33105/jars/org.apache.hive_hive-storage-api-2.8.1.jar to 
/tmp/spark-75ccc320-b7d0-4a0a-a73b-383822b5706d/userFiles-4109e468-ce0e-46bd-89f9-045e51332557/fetchFileTemp16953752506511186288.tmp
   25/05/01 21:22:56 INFO Executor: Adding 
file:/tmp/spark-75ccc320-b7d0-4a0a-a73b-383822b5706d/userFiles-4109e468-ce0e-46bd-89f9-045e51332557/org.apache.hive_hive-storage-api-2.8.1.jar
 to class loader default
   25/05/01 21:22:56 INFO Executor: Fetching 
spark://10.255.255.254:33105/jars/hudi-utilities-bundle_2.13-0.15.0.jar with 
timestamp 1746123774994
   25/05/01 21:22:56 INFO Utils: Fetching 
spark://10.255.255.254:33105/jars/hudi-utilities-bundle_2.13-0.15.0.jar to 
/tmp/spark-75ccc320-b7d0-4a0a-a73b-383822b5706d/userFiles-4109e468-ce0e-46bd-89f9-045e51332557/fetchFileTemp17213494354151454577.tmp
   25/05/01 21:22:57 INFO Executor: Adding 
file:/tmp/spark-75ccc320-b7d0-4a0a-a73b-383822b5706d/userFiles-4109e468-ce0e-46bd-89f9-045e51332557/hudi-utilities-bundle_2.13-0.15.0.jar
 to class loader default
   25/05/01 21:22:57 INFO Executor: Fetching 
spark://10.255.255.254:33105/jars/org.apache.hudi_hudi-spark3.5-bundle_2.13-0.15.0.jar
 with timestamp 1746123774994
   25/05/01 21:22:57 INFO Utils: Fetching 
spark://10.255.255.254:33105/jars/org.apache.hudi_hudi-spark3.5-bundle_2.13-0.15.0.jar
 to 
/tmp/spark-75ccc320-b7d0-4a0a-a73b-383822b5706d/userFiles-4109e468-ce0e-46bd-89f9-045e51332557/fetchFileTemp1653984423957972224.tmp
   25/05/01 21:22:57 INFO Executor: Adding 
file:/tmp/spark-75ccc320-b7d0-4a0a-a73b-383822b5706d/userFiles-4109e468-ce0e-46bd-89f9-045e51332557/org.apache.hudi_hudi-spark3.5-bundle_2.13-0.15.0.jar
 to class loader default
   25/05/01 21:22:57 INFO Executor: Fetching 
spark://10.255.255.254:33105/jars/com.amazonaws_aws-java-sdk-bundle-1.11.1026.jar
 with timestamp 1746123774994
   25/05/01 21:22:57 INFO Utils: Fetching 
spark://10.255.255.254:33105/jars/com.amazonaws_aws-java-sdk-bundle-1.11.1026.jar
 to 
/tmp/spark-75ccc320-b7d0-4a0a-a73b-383822b5706d/userFiles-4109e468-ce0e-46bd-89f9-045e51332557/fetchFileTemp8299358163528827261.tmp
   25/05/01 21:22:58 INFO Executor: Adding 
file:/tmp/spark-75ccc320-b7d0-4a0a-a73b-383822b5706d/userFiles-4109e468-ce0e-46bd-89f9-045e51332557/com.amazonaws_aws-java-sdk-bundle-1.11.1026.jar
 to class loader default
   25/05/01 21:22:58 INFO Executor: Fetching 
spark://10.255.255.254:33105/jars/org.slf4j_slf4j-api-1.7.36.jar with timestamp 
1746123774994
   25/05/01 21:22:58 INFO Utils: Fetching 
spark://10.255.255.254:33105/jars/org.slf4j_slf4j-api-1.7.36.jar to 
/tmp/spark-75ccc320-b7d0-4a0a-a73b-383822b5706d/userFiles-4109e468-ce0e-46bd-89f9-045e51332557/fetchFileTemp9113282514998080245.tmp
   25/05/01 21:22:58 INFO Executor: Adding 
file:/tmp/spark-75ccc320-b7d0-4a0a-a73b-383822b5706d/userFiles-4109e468-ce0e-46bd-89f9-045e51332557/org.slf4j_slf4j-api-1.7.36.jar
 to class loader default
   25/05/01 21:22:58 INFO Utils: Successfully started service 
'org.apache.spark.network.netty.NettyBlockTransferService' on port 39543.
   25/05/01 21:22:58 INFO NettyBlockTransferService: Server created on 
10.255.255.254:39543
   25/05/01 21:22:58 INFO BlockManager: Using 
org.apache.spark.storage.RandomBlockReplicationPolicy for block replication 
policy
   25/05/01 21:22:58 INFO BlockManagerMaster: Registering BlockManager 
BlockManagerId(driver, 10.255.255.254, 39543, None)
   25/05/01 21:22:58 INFO BlockManagerMasterEndpoint: Registering block manager 
10.255.255.254:39543 with 434.4 MiB RAM, BlockManagerId(driver, 10.255.255.254, 
39543, None)
   25/05/01 21:22:58 INFO BlockManagerMaster: Registered BlockManager 
BlockManagerId(driver, 10.255.255.254, 39543, None)
   25/05/01 21:22:58 INFO BlockManager: Initialized BlockManager: 
BlockManagerId(driver, 10.255.255.254, 39543, None)
   25/05/01 21:22:58 INFO HoodieMultiTableStreamer: tables to be ingested via 
MultiTableDeltaStreamer : [customers, employees]
   25/05/01 21:22:58 INFO SparkContext: SparkContext is stopping with exitCode 
0.
   25/05/01 21:22:58 INFO SparkUI: Stopped Spark web UI at 
http://10.255.255.254:8091
   25/05/01 21:22:58 INFO MapOutputTrackerMasterEndpoint: 
MapOutputTrackerMasterEndpoint stopped!
   25/05/01 21:22:58 INFO MemoryStore: MemoryStore cleared
   25/05/01 21:22:58 INFO BlockManager: BlockManager stopped
   25/05/01 21:22:58 INFO BlockManagerMaster: BlockManagerMaster stopped
   25/05/01 21:22:58 INFO 
OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: 
OutputCommitCoordinator stopped!
   25/05/01 21:22:58 INFO SparkContext: Successfully stopped SparkContext
   Exception in thread "main" java.lang.IllegalArgumentException: Please 
provide valid table config file path!
           at 
org.apache.hudi.utilities.streamer.HoodieMultiTableStreamer.checkIfTableConfigFileExists(HoodieMultiTableStreamer.java:109)
           at 
org.apache.hudi.utilities.streamer.HoodieMultiTableStreamer.populateTableExecutionContextList(HoodieMultiTableStreamer.java:132)
           at 
org.apache.hudi.utilities.streamer.HoodieMultiTableStreamer.<init>(HoodieMultiTableStreamer.java:94)
           at 
org.apache.hudi.utilities.streamer.HoodieMultiTableStreamer.main(HoodieMultiTableStreamer.java:281)
           at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
           at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
           at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
           at java.base/java.lang.reflect.Method.invoke(Method.java:569)
           at 
org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
           at 
org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:1034)
           at 
org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:199)
           at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:222)
           at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:91)
           at 
org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1125)
           at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1134)
           at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   25/05/01 21:22:58 INFO ShutdownHookManager: Shutdown hook called
   25/05/01 21:22:58 INFO ShutdownHookManager: Deleting directory 
/tmp/spark-06ce81ae-01e9-4b74-92d5-ce2243c77487
   25/05/01 21:22:58 INFO ShutdownHookManager: Deleting directory 
/tmp/spark-75ccc320-b7d0-4a0a-a73b-383822b5706d
   
   

