HuangFru opened a new issue, #7325:
URL: https://github.com/apache/hudi/issues/7325

   
   **Describe the problem you faced**
   
   https://issues.apache.org/jira/browse/HUDI-3972. I think I have hit the same problem as this issue, but that issue has already been fixed, right?
   
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   1. I have some Hudi tables in Hive that were written by Flink.
   2. I try to read them in spark-sql or the Spark Thrift Server, with hive-site.xml in my ${SPARK_HOME}/conf.
   3. I hit the same error as in the issue above (a minimal sketch of the failing query follows these steps).
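
   For reference, the kind of query that fails (a minimal sketch; the table names are placeholders, not my real ones):

   ```sql
   -- Run in spark-sql or over the Spark Thrift Server via JDBC.
   -- The read-optimized view works:
   SELECT * FROM my_table_ro LIMIT 10;
   -- The real-time view fails with "key not found: ts":
   SELECT * FROM my_table_rt LIMIT 10;
   ```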
   
   
   **Environment Description**
   
   * Hudi version : 0.11.1 and 0.12.1 (I tried both versions)
   
   * Spark version : 3.1
   
   * Hive version : 2.1.1
   
   * Hadoop version : 2.7.5
   
   * Storage (HDFS/S3/GCS..) : HDFS
   
   * Running on Docker? (yes/no) : I tried both Docker and a physical machine.
   
   **Additional context**
   
   I can read the tables with the '_ro' suffix, but I can't read the tables with the '_rt' suffix.
   
   In fact, the table in Hive was not written by me, and I am not familiar with Flink, but I found some configurations that may be related:
   <img width="608" alt="image" 
src="https://user-images.githubusercontent.com/68625618/204521404-4ee569d8-2cf9-4dbf-87bb-ae070afdc0ac.png";>
   
   
   **Stacktrace**
   
   ```
   java.sql.SQLException: org.apache.hive.service.cli.HiveSQLException: Error running query: java.util.NoSuchElementException: key not found: ts
           at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:362)
           at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$2(SparkExecuteStatementOperation.scala:264)
           at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
           at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties(SparkOperation.scala:78)
           at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties$(SparkOperation.scala:62)
           at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withLocalProperties(SparkExecuteStatementOperation.scala:43)
           at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:264)
           at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:259)
           at java.security.AccessController.doPrivileged(Native Method)
           at javax.security.auth.Subject.doAs(Subject.java:422)
           at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
           at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2.run(SparkExecuteStatementOperation.scala:273)
           at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
           at java.util.concurrent.FutureTask.run(FutureTask.java:266)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:750)
   Caused by: java.util.NoSuchElementException: key not found: ts
           at scala.collection.MapLike.default(MapLike.scala:235)
           at scala.collection.MapLike.default$(MapLike.scala:234)
           at scala.collection.AbstractMap.default(Map.scala:63)
           at scala.collection.MapLike.apply(MapLike.scala:144)
           at scala.collection.MapLike.apply$(MapLike.scala:143)
           at scala.collection.AbstractMap.apply(Map.scala:63)
           at org.apache.hudi.HoodieBaseRelation$.$anonfun$projectSchema$2(HoodieBaseRelation.scala:709)
           at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
           at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
           at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
           at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
           at scala.collection.TraversableLike.map(TraversableLike.scala:238)
           at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
           at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:198)
           at org.apache.hudi.HoodieBaseRelation$.projectSchema(HoodieBaseRelation.scala:708)
           at org.apache.hudi.HoodieBaseRelation.buildScan(HoodieBaseRelation.scala:331)
           at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.$anonfun$apply$4(DataSourceStrategy.scala:332)
           at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.$anonfun$pruneFilterProject$1(DataSourceStrategy.scala:365)
           at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.pruneFilterProjectRaw(DataSourceStrategy.scala:420)
           at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.pruneFilterProject(DataSourceStrategy.scala:364)
           at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.apply(DataSourceStrategy.scala:332)
           at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$1(QueryPlanner.scala:63)
           at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:484)
           at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:490)
           at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:489)
           at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
           at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:67)
           at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$3(QueryPlanner.scala:78)
           at scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:162)
           at scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:162)
           at scala.collection.Iterator.foreach(Iterator.scala:941)
           at scala.collection.Iterator.foreach$(Iterator.scala:941)
           at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
           at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:162)
           at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:160)
           at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1429)
           at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$2(QueryPlanner.scala:75)
           at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:484)
           at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:490)
           at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
           at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:67)
           at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$3(QueryPlanner.scala:78)
           at scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:162)
           at scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:162)
           at scala.collection.Iterator.foreach(Iterator.scala:941)
           at scala.collection.Iterator.foreach$(Iterator.scala:941)
           at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
           at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:162)
           at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:160)
           at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1429)
           at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$2(QueryPlanner.scala:75)
           at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:484)
           at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:490)
           at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
           at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:67)
           at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$3(QueryPlanner.scala:78)
           at scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:162)
           at scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:162)
           at scala.collection.Iterator.foreach(Iterator.scala:941)
           at scala.collection.Iterator.foreach$(Iterator.scala:941)
           at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
           at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:162)
           at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:160)
           at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1429)
           at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$2(QueryPlanner.scala:75)
           at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:484)
           at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:490)
           at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
           at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:67)
           at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$3(QueryPlanner.scala:78)
           at scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:162)
           at scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:162)
           at scala.collection.Iterator.foreach(Iterator.scala:941)
           at scala.collection.Iterator.foreach$(Iterator.scala:941)
           at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
           at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:162)
           at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:160)
           at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1429)
           at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$2(QueryPlanner.scala:75)
           at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:484)
           at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:490)
           at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
           at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:67)
           at org.apache.spark.sql.execution.QueryExecution$.createSparkPlan(QueryExecution.scala:391)
           at org.apache.spark.sql.execution.QueryExecution.$anonfun$sparkPlan$1(QueryExecution.scala:104)
           at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
           at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:143)
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
           at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:143)
           at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:104)
           at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:97)
           at org.apache.spark.sql.execution.QueryExecution.$anonfun$executedPlan$1(QueryExecution.scala:117)
           at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
           at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:143)
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
           at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:143)
           at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:117)
           at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:110)
           at org.apache.spark.sql.execution.QueryExecution.$anonfun$writePlans$5(QueryExecution.scala:225)
           at org.apache.spark.sql.catalyst.plans.QueryPlan$.append(QueryPlan.scala:487)
           at org.apache.spark.sql.execution.QueryExecution.writePlans(QueryExecution.scala:225)
           at org.apache.spark.sql.execution.QueryExecution.toString(QueryExecution.scala:240)
           at org.apache.spark.sql.execution.QueryExecution.toString(QueryExecution.scala:233)
           at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:329)
           ... 16 more
   
           at org.apache.hive.jdbc.HiveStatement.waitForOperationToComplete(HiveStatement.java:349)
           at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:251)
           at org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:434)
           at org.apache.hive.jdbc.HivePreparedStatement.executeQuery(HivePreparedStatement.java:109)
           at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeQuery(ProxyPreparedStatement.java:52)
           at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeQuery(HikariProxyPreparedStatement.java)
           at com.oltpbenchmark.benchmarks.chbenchmarkForSpark.chbenchmark.queries.GenericQuery.run(GenericQuery.java:35)
           at com.oltpbenchmark.benchmarks.chbenchmarkForSpark.chbenchmark.CHBenCHmarkWorker.executeWork(CHBenCHmarkWorker.java:41)
           at com.oltpbenchmark.api.Worker.doWork(Worker.java:363)
           at com.oltpbenchmark.api.Worker.run(Worker.java:259)
           at java.base/java.lang.Thread.run(Thread.java:833)
   ```
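
   Since the failure is a lookup of the field 'ts' during Hudi's schema projection (HoodieBaseRelation.projectSchema in the trace above), one way to check whether the synced Hive schema actually contains such a column is the following (a hypothetical diagnostic sketch; the table name is a placeholder):

   ```sql
   -- Inspect the synced Hive table from spark-sql: does a 'ts' column
   -- exist, and what table properties were recorded at sync time?
   DESCRIBE FORMATTED my_table_rt;
   SHOW TBLPROPERTIES my_table_rt;
   ```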
   
   

