dengd1937 opened a new issue, #9288:
URL: https://github.com/apache/hudi/issues/9288

   **_Tips before filing an issue_**
   
   - Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
   
   - Join the mailing list to engage in conversations and get faster support at 
dev-subscr...@hudi.apache.org.
   
   - If you have triaged this as a bug, then file an 
[issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
   
   **Describe the problem you faced**
   
   When I execute the following SQL statements in the Spark SQL CLI:

   ```sql
   create table hudi_ctas_cow_pt_tbl
   using hudi
   tblproperties (type = 'cow', primaryKey = 'id', preCombineField = 'ts')
   partitioned by (dt)
   as
   select 1 as id, 'a1' as name, 10 as price, 1000 as ts, '2021-12-01' as dt;

   insert into hudi_cow_nonpcf_tbl select 1, 'a1', 20;

   ALTER TABLE hudi_cow_nonpcf_tbl2 add columns(remark string);
   ```
   
   An error occurs:

   ```
   java.lang.NoSuchMethodError: org.apache.hudi.org.apache.jetty.util.thread.ScheduledExecutorScheduler.<init>(Ljava/lang/String;ZI)V
           at org.apache.hudi.timeline.service.TimelineService.startService(TimelineService.java:348)
           at org.apache.hudi.client.embedded.EmbeddedTimelineService.startServer(EmbeddedTimelineService.java:105)
           at org.apache.hudi.client.embedded.EmbeddedTimelineServerHelper.startTimelineService(EmbeddedTimelineServerHelper.java:71)
           at org.apache.hudi.client.embedded.EmbeddedTimelineServerHelper.createEmbeddedTimelineService(EmbeddedTimelineServerHelper.java:58)
           at org.apache.hudi.client.BaseHoodieClient.startEmbeddedServerView(BaseHoodieClient.java:127)
           at org.apache.hudi.client.BaseHoodieClient.<init>(BaseHoodieClient.java:93)
           at org.apache.hudi.client.BaseHoodieWriteClient.<init>(BaseHoodieWriteClient.java:161)
           at org.apache.hudi.client.SparkRDDWriteClient.<init>(SparkRDDWriteClient.java:84)
           at org.apache.hudi.client.SparkRDDWriteClient.<init>(SparkRDDWriteClient.java:68)
           at org.apache.hudi.DataSourceUtils.createHoodieClient(DataSourceUtils.java:201)
           at org.apache.spark.sql.hudi.command.AlterHoodieTableAddColumnsCommand$.commitWithSchema(AlterHoodieTableAddColumnsCommand.scala:111)
           at org.apache.spark.sql.hudi.command.AlterHoodieTableAddColumnsCommand.run(AlterHoodieTableAddColumnsCommand.scala:66)
           at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75)
           at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73)
           at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84)
           at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
           at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:109)
           at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:169)
           at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95)
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
           at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
           at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
           at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)
           at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:584)
           at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:176)
           at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:584)
   ```
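
   For context, the method descriptor in the `NoSuchMethodError` can be decoded mechanically: `(Ljava/lang/String;ZI)V` denotes a constructor taking `(String, boolean, int)`, i.e. the constructor overload the compiled Hudi code expects but the jetty-util version on the classpath does not provide. A minimal sketch that decodes such descriptors (the helper name is hypothetical, plain Python for illustration):

```python
def decode_descriptor(desc: str) -> list[str]:
    """Translate a JVM method descriptor's parameter list into Java type names."""
    primitives = {"Z": "boolean", "B": "byte", "C": "char", "S": "short",
                  "I": "int", "J": "long", "F": "float", "D": "double"}
    params = desc[desc.index("(") + 1:desc.index(")")]
    types, i = [], 0
    while i < len(params):
        dims = 0
        while params[i] == "[":          # leading '[' marks an array dimension
            dims += 1
            i += 1
        if params[i] == "L":             # object type: L<class/name>;
            end = params.index(";", i)
            types.append(params[i + 1:end].replace("/", ".") + "[]" * dims)
            i = end + 1
        else:                            # single-letter primitive type
            types.append(primitives[params[i]] + "[]" * dims)
            i += 1
    return types

print(decode_descriptor("(Ljava/lang/String;ZI)V"))
# → ['java.lang.String', 'boolean', 'int']
```

   Seeing `(String, boolean, int)` missing from the shaded `ScheduledExecutorScheduler` points at a jetty-util release that predates that constructor overload.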
   
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   1. Modify `jetty-util.version` to 9.4.43.v20210629 in the parent pom and compile Hudi: `mvn clean package -DskipTests -Prelease -Pflink-bundle-shade-hive3 -Pscala-2.12 -Pspark3.3 -Pflink1.15 -Dspark33.version=3.3.2 -Dflink1.15.version=1.15.4 -Dhive.version=3.1.3 -Dhadoop.version=3.3.2`
   2. Launch the Spark SQL CLI:
   ```
   bin/spark-sql --master yarn --deploy-mode client --jars /home/hadoop/lib/hudi-spark3.3-bundle_2.12-0.13.1.jar \
     --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
     --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension' \
     --conf 'spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog'
   ```
   3. Execute the SQL statements listed above.
   
   **Expected behavior**
   
   The SQL statements should execute successfully, without a `NoSuchMethodError`.
   
   **Environment Description**
   
   * Hudi version : 0.13.1
   
   * Spark version : 3.3.2
   
   * Hive version : 3.1.3
   
   * Hadoop version : 3.3.2
   
   * Storage (HDFS/S3/GCS..) : no
   
   * Running on Docker? (yes/no) : no
   
   
   **Additional context**
   
   None.
   
   **Stacktrace**
   
   See the full stacktrace in the problem description above.
   
   

