wosow opened a new issue #2282:
URL: https://github.com/apache/hudi/issues/2282


   **_Tips before filing an issue_**
   
   - Have you gone through our 
[FAQs](https://cwiki.apache.org/confluence/display/HUDI/FAQ)?
   
   - Join the mailing list to engage in conversations and get faster support at 
dev-subscr...@hudi.apache.org.
   
   - If you have triaged this as a bug, then file an 
[issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
   
   An error occurred when I used Hudi 0.6.0 with Spark 2.4.4 to write data to Hudi and sync the table to Hive. The full driver log is below:
   
--------------------------------------------------------------------------------------------
   20/11/26 14:29:56 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 76.6 KB, free 366.2 MB)
   20/11/26 14:29:56 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 36.4 KB, free 366.2 MB)
   20/11/26 14:29:56 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on lake03:38531 (size: 36.4 KB, free: 366.3 MB)
   20/11/26 14:29:56 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1161
   20/11/26 14:29:56 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at resolveRelation at DefaultSource.scala:78) (first 15 tasks are for partitions Vector(0))
   20/11/26 14:29:56 INFO cluster.YarnScheduler: Adding task set 0.0 with 1 tasks
   20/11/26 14:29:56 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, lake03, executor 1, partition 0, PROCESS_LOCAL, 7854 bytes)
   20/11/26 14:29:57 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on lake03:43398 (size: 36.4 KB, free: 8.4 GB)
   20/11/26 14:29:59 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 2564 ms on lake03 (executor 1) (1/1)
   20/11/26 14:29:59 INFO cluster.YarnScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
   20/11/26 14:29:59 INFO scheduler.DAGScheduler: ResultStage 0 (resolveRelation at DefaultSource.scala:78) finished in 3.154 s
   20/11/26 14:29:59 INFO scheduler.DAGScheduler: Job 0 finished: resolveRelation at DefaultSource.scala:78, took 3.249613 s
   Exception in thread "main" org.apache.hudi.exception.HoodieException: 'hoodie.table.name', 'path' must be set.
        at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:56)
        at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:108)
        at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
        at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
        at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
        at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
        at com.ws.hudi.wdt.cow.StockOutOrder$.stockOutOrderIncUpdate(StockOutOrder.scala:142)
        at com.ws.hudi.wdt.cow.StockOutOrder$.main(StockOutOrder.scala:41)
        at com.ws.hudi.wdt.cow.StockOutOrder.main(StockOutOrder.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   20/11/26 14:30:00 INFO spark.SparkContext: Invoking stop() from shutdown hook
   20/11/26 14:30:00 INFO server.AbstractConnector: Stopped Spark@6dd93a21{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
   20/11/26 14:30:00 INFO ui.SparkUI: Stopped Spark web UI at http://lake03:4040
   20/11/26 14:30:00 INFO cluster.YarnClientSchedulerBackend: Interrupting monitor thread
   20/11/26 14:30:00 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
   20/11/26 14:30:00 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
   20/11/26 14:30:00 INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices(serviceOption=None, services=List(), started=false)
   20/11/26 14:30:00 INFO cluster.YarnClientSchedulerBackend: Stopped
   20/11/26 14:30:01 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
   20/11/26 14:30:01 INFO memory.MemoryStore: MemoryStore cleared
   20/11/26 14:30:01 INFO storage.BlockManager: BlockManager stopped
   20/11/26 14:30:01 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
   20/11/26 14:30:01 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
   20/11/26 14:30:01 INFO spark.SparkContext: Successfully stopped SparkContext
   20/11/26 14:30:01 INFO util.ShutdownHookManager: Shutdown hook called
   20/11/26 14:30:01 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-0d8a4c81-88e0-473a-ac9a-5edb79ced447
   20/11/26 14:30:01 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-348aed06-ced0-4e82-bd1f-3347584b0dc7
   
   
---------------------------------------------------------------------------------------------
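
   For reference, judging by the message and the first stack frame, the check at HoodieSparkSqlWriter.scala:56 fails fast when either the `hoodie.table.name` writer option or the `path` entry is missing from the options that reach the writer. Below is a minimal sketch of a write that sets both (plus the usual Hive-sync options, since the failing job also syncs to Hive); the table name, field names, JDBC URL, and HDFS path are placeholders, not the real StockOutOrder job:

   ```scala
   import org.apache.hudi.DataSourceWriteOptions._
   import org.apache.hudi.config.HoodieWriteConfig
   import org.apache.spark.sql.{SaveMode, SparkSession}

   object HudiWriteSketch {
     def main(args: Array[String]): Unit = {
       val spark = SparkSession.builder()
         .appName("hudi-write-sketch")
         // Hudi's Spark writer requires Kryo serialization.
         .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
         .getOrCreate()
       import spark.implicits._

       // Tiny stand-in DataFrame; the real job reads incremental order data.
       val df = Seq((1, "2020-11-26 14:00:00", "2020-11-26"))
         .toDF("order_id", "modified", "dt")

       df.write
         .format("hudi") // "org.apache.hudi" also works on 0.6.0
         // First required setting: 'hoodie.table.name'.
         .option(HoodieWriteConfig.TABLE_NAME, "stock_out_order")
         .option(RECORDKEY_FIELD_OPT_KEY, "order_id")
         .option(PRECOMBINE_FIELD_OPT_KEY, "modified")
         .option(PARTITIONPATH_FIELD_OPT_KEY, "dt")
         // Hive-sync options (placeholder values).
         .option(HIVE_SYNC_ENABLED_OPT_KEY, "true")
         .option(HIVE_TABLE_OPT_KEY, "stock_out_order")
         .option(HIVE_URL_OPT_KEY, "jdbc:hive2://lake03:10000")
         .option(HIVE_PARTITION_FIELDS_OPT_KEY, "dt")
         .mode(SaveMode.Append)
         // Second required setting: save(...) supplies 'path'.
         .save("hdfs:///hudi/stock_out_order")

       spark.stop()
     }
   }
   ```

   Note that in Spark 2.4, `save(path)` simply adds a `"path"` key to the writer's options map, so passing the target directory via `.option("path", ...)` and calling `save()` with no argument is equivalent; either way, both keys must be present by the time `HoodieSparkSqlWriter.write` runs.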
   
   
   
   **Environment Description**

   * Hudi version : 0.6.0
   * Spark version : 2.4.4
   * Hive version : 2.3.1
   * Hadoop version : 2.7.5
   * Storage (HDFS/S3/GCS..) : HDFS
   * Running on Docker? (yes/no) : no
   
   
   
   

