zhouyuan commented on issue #11865:
URL: https://github.com/apache/gluten/issues/11865#issuecomment-4175772493

The `SPARK-36803` test failed in this CI run for PR #11860:
https://github.com/apache/gluten/actions/runs/23871267879/job/69658793595?pr=11860
   
```
2026-04-02T08:13:59.4886836Z - SPARK-36803: parquet files with legacy mode and schema evolution *** FAILED ***
2026-04-02T08:13:59.4889116Z   org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 17.0 failed 1 times, most recent failure: Lost task 1.0 in stage 17.0 (TID 28) (549bc3e6a6e0 executor driver): org.apache.spark.sql.execution.datasources.SchemaColumnConvertNotSupportedException
2026-04-02T08:13:59.4891977Z         at org.apache.gluten.vectorized.ColumnarBatchOutIterator.translateToSchemaException(ColumnarBatchOutIterator.java:141)
2026-04-02T08:13:59.4893699Z         at org.apache.gluten.vectorized.ColumnarBatchOutIterator.translateException(ColumnarBatchOutIterator.java:150)
2026-04-02T08:13:59.4894632Z         at org.apache.gluten.iterator.ClosableIterator.hasNext(ClosableIterator.java:38)
2026-04-02T08:13:59.4895216Z         at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:45)
2026-04-02T08:13:59.4895843Z         at org.apache.gluten.iterator.IteratorsV1$InvocationFlowProtection.hasNext(IteratorsV1.scala:154)
2026-04-02T08:13:59.4896482Z         at org.apache.gluten.iterator.IteratorsV1$IteratorCompleter.hasNext(IteratorsV1.scala:66)
2026-04-02T08:13:59.4897074Z         at org.apache.gluten.iterator.IteratorsV1$PayloadCloser.hasNext(IteratorsV1.scala:38)
2026-04-02T08:13:59.4897673Z         at org.apache.gluten.iterator.IteratorsV1$LifeTimeAccumulator.hasNext(IteratorsV1.scala:95)
2026-04-02T08:13:59.4898267Z         at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
2026-04-02T08:13:59.4898748Z         at scala.collection.Iterator.isEmpty(Iterator.scala:387)
2026-04-02T08:13:59.4899267Z         at scala.collection.Iterator.isEmpty$(Iterator.scala:387)
2026-04-02T08:13:59.4900136Z         at org.apache.spark.InterruptibleIterator.isEmpty(InterruptibleIterator.scala:28)
2026-04-02T08:13:59.4900799Z         at org.apache.gluten.execution.VeloxColumnarToRowExec$.toRowIterator(VeloxColumnarToRowExec.scala:127)
2026-04-02T08:13:59.4901616Z         at org.apache.gluten.execution.VeloxColumnarToRowExec.$anonfun$doExecuteInternal$1(VeloxColumnarToRowExec.scala:77)
2026-04-02T08:13:59.4902263Z         at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:855)
2026-04-02T08:13:59.4902718Z         at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:855)
2026-04-02T08:13:59.4903209Z         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
2026-04-02T08:13:59.4903700Z         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
2026-04-02T08:13:59.4904110Z         at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
2026-04-02T08:13:59.4904533Z         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
2026-04-02T08:13:59.4905020Z         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
2026-04-02T08:13:59.4905416Z         at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
2026-04-02T08:13:59.4905996Z         at org.apache.spark.sql.execution.SQLExecutionRDD.$anonfun$compute$1(SQLExecutionRDD.scala:52)
2026-04-02T08:13:59.4906574Z         at org.apache.spark.sql.internal.SQLConf$.withExistingConf(SQLConf.scala:158)
2026-04-02T08:13:59.4907115Z         at org.apache.spark.sql.execution.SQLExecutionRDD.compute(SQLExecutionRDD.scala:52)
2026-04-02T08:13:59.4907631Z         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
2026-04-02T08:13:59.4908025Z         at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
2026-04-02T08:13:59.4908443Z         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
2026-04-02T08:13:59.4908919Z         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
2026-04-02T08:13:59.4909367Z         at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
2026-04-02T08:13:59.4910119Z         at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
2026-04-02T08:13:59.4910537Z         at org.apache.spark.scheduler.Task.run(Task.scala:136)
2026-04-02T08:13:59.4910979Z         at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
2026-04-02T08:13:59.4911474Z         at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
2026-04-02T08:13:59.4911930Z         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
2026-04-02T08:13:59.4912491Z         at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
2026-04-02T08:13:59.4913121Z         at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
2026-04-02T08:13:59.4913592Z         at java.base/java.lang.Thread.run(Thread.java:833)
2026-04-02T08:13:59.4914270Z Caused by: org.apache.gluten.exception.GlutenException: Exception: VeloxRuntimeError
2026-04-02T08:13:59.4915012Z Error Source: RUNTIME
2026-04-02T08:13:59.4915378Z Error Code: INVALID_STATE
2026-04-02T08:13:59.4916050Z Reason: Converted type INTEGER is not allowed for requested type ROW<"col-0":INTEGER,"col-1":INTEGER>
2026-04-02T08:13:59.4916845Z Retriable: False
2026-04-02T08:13:59.4917996Z Expression: !requestedType || isCompatible( requestedType, isRepeated, [&](const TypePtr& type) { return isInt32Compatible( type, TypeKind::INTEGER, allowNarrowing); })
2026-04-02T08:13:59.4920613Z Context: Split Hive: file:///tmp/spark-2c148b69-1823-4656-95d6-8d3e2b2e3b2f/part-00000-8074c66f-9a7c-4027-a8a9-9840ba36eac7-c000.snappy.parquet 0 - 891 Task Gluten_Stage_17_TID_28_VTID_82748
2026-04-02T08:13:59.4921975Z Function: convertType
2026-04-02T08:13:59.4922624Z File: /work/ep/build-velox/build/velox_ep/velox/dwio/parquet/reader/ParquetReader.cpp
2026-04-02T08:13:59.4923324Z Line: 1078
2026-04-02T08:13:59.4923616Z Stack trace:
```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

