daihw commented on issue #6588:
URL: https://github.com/apache/hudi/issues/6588#issuecomment-1347627637

   > I solved this problem by adding the following configuration in `packaging/hudi-spark-bundle/pom.xml`:
   > 
   > ```
   > ...
   >  <include>org.apache.avro:avro</include>
   > ...
   > ...
   > <relocation>
   >   <pattern>org.apache.avro.</pattern>
   >   <shadedPattern>org.apache.hudi.org.apache.avro.</shadedPattern>
   > </relocation>
   > ...
   > ...
   >     <dependency>
   >       <groupId>org.apache.avro</groupId>
   >       <artifactId>avro</artifactId>
   >       <version>1.8.2</version>
   >       <scope>compile</scope>
   >     </dependency>
   > ...
   > ```
   
   Hi, I ran into the same problem. After applying your fix, I hit a new one: I can no longer use Spark to insert data. The error message is:
   
   ```
   java.lang.ClassCastException: org.apache.hudi.org.apache.avro.Schema$RecordSchema cannot be cast to org.apache.avro.Schema
           at org.apache.spark.SparkConf$$anonfun$registerAvroSchemas$1.apply(SparkConf.scala:221)
           at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
           at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
           at org.apache.spark.SparkConf.registerAvroSchemas(SparkConf.scala:221)
           at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:280)
           at org.apache.spark.sql.hudi.command.InsertIntoHoodieTableCommand$.run(InsertIntoHoodieTableCommand.scala:101)
           at org.apache.spark.sql.hudi.command.InsertIntoHoodieTableCommand.run(InsertIntoHoodieTableCommand.scala:60)
           at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
           at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
           at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
           at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
           at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
           at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3370)
           at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80)
           at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
           at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
           at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3369)
           at org.apache.spark.sql.Dataset.<init>(Dataset.scala:194)
           at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:79)
           at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
           at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694)
           at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:232)
           at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:175)
           at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:171)
           at java.security.AccessController.doPrivileged(Native Method)
           at javax.security.auth.Subject.doAs(Subject.java:422)
           at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
           at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:185)
           at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
           at java.util.concurrent.FutureTask.run(FutureTask.java:266)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:748)
   165739 [HiveServer2-Background-Pool: Thread-133] ERROR org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation  - Error running hive query:
   org.apache.hive.service.cli.HiveSQLException: java.lang.ClassCastException: org.apache.hudi.org.apache.avro.Schema$RecordSchema cannot be cast to org.apache.avro.Schema
           at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:269)
           at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:175)
           at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:171)
           at java.security.AccessController.doPrivileged(Native Method)
           at javax.security.auth.Subject.doAs(Subject.java:422)
           at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
           at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:185)
           at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
           at java.util.concurrent.FutureTask.run(FutureTask.java:266)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:748)
   Error: java.lang.ClassCastException: org.apache.hudi.org.apache.avro.Schema$RecordSchema cannot be cast to org.apache.avro.Schema (state=,code=0)
   ```
   
   Can you help me?
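   For context, here is a minimal sketch (hypothetical class names, not actual Hudi or Spark code) of why relocating Avro produces this ClassCastException: after shading, the relocated `org.apache.hudi.org.apache.avro.Schema` and the original `org.apache.avro.Schema` are unrelated types to the JVM, so a schema object created by the shaded Hudi code cannot be cast to the `Schema` type that `SparkConf.registerAvroSchemas` expects.

   ```java
   // Stand-ins for the two identically-shaped but unrelated classes:
   class Schema {}        // plays the role of the original org.apache.avro.Schema
   class ShadedSchema {}  // plays the role of the relocated (shaded) copy

   public class CastDemo {
       public static void main(String[] args) {
           // A schema object produced inside the shaded bundle:
           Object fromShadedHudi = new ShadedSchema();

           // The JVM sees two distinct classes, regardless of their source code:
           System.out.println(fromShadedHudi instanceof Schema); // prints "false"

           try {
               // What SparkConf.registerAvroSchemas effectively attempts:
               Schema s = (Schema) fromShadedHudi;
           } catch (ClassCastException e) {
               System.out.println("ClassCastException, as in the stack trace above");
           }
       }
   }
   ```

   So the relocation fixes the original Avro version conflict inside Hudi, but any shaded `Schema` instance that crosses back into unshaded Spark code (as in `registerAvroSchemas`) will fail this way; one workaround, if it applies to your setup, is to avoid relocating Avro and instead align the Avro version on the Spark classpath.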


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
