[ https://issues.apache.org/jira/browse/SPARK-48792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kent Yao resolved SPARK-48792.
------------------------------
    Fix Version/s: 4.0.0
       Resolution: Fixed

> INSERT with partial column list to table with char/varchar crashes
> ------------------------------------------------------------------
>
>                 Key: SPARK-48792
>                 URL: https://issues.apache.org/jira/browse/SPARK-48792
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.5.1
>            Reporter: Kent Yao
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0
>
>
> ```
> 24/07/03 16:29:01 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
> org.apache.spark.SparkException: [INTERNAL_ERROR] Unsupported data type VarcharType(64). SQLSTATE: XX000
>       at org.apache.spark.SparkException$.internalError(SparkException.scala:92)
>       at org.apache.spark.SparkException$.internalError(SparkException.scala:96)
>       at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.makeWriter(ParquetWriteSupport.scala:266)
>       at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.$anonfun$init$2(ParquetWriteSupport.scala:111)
>       at scala.collection.immutable.List.map(List.scala:247)
>       at scala.collection.immutable.List.map(List.scala:79)
>       at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.init(ParquetWriteSupport.scala:111)
>       at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:478)
>       at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:422)
>       at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:411)
>       at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:36)
>       at org.apache.spark.sql.execution.datasources.parquet.ParquetUtils$$anon$1.newInstance(ParquetUtils.scala:500)
>       at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.newOutputWriter(FileFormatDataWriter.scala:180)
>       at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.<init>(FileFormatDataWriter.scala:165)
>       at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:391)
>       at org.apache.spark.sql.execution.datasources.WriteFilesExec.$anonfun$doExecuteWrite$1(WriteFiles.scala:107)
>       at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:896)
>       at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:896)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:369)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:333)
>       at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
>       at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:171)
>       at org.apache.spark.scheduler.Task.run(Task.scala:146)
>       at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$5(Executor.scala:640)
>       at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
>       at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
>       at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:99)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:643)
>       at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
>       at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
>       at java.base/java.lang.Thread.run(Thread.java:840)
> ```
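>
> A minimal reproduction sketch (table and column names here are hypothetical; assumes the default Parquet data source, e.g. in spark-shell):
> ```
> // Hypothetical names: any table with a char/varchar column hits the same path.
> spark.sql("CREATE TABLE t (c1 INT, c2 VARCHAR(64)) USING parquet")
> // INSERT with a partial column list: the omitted varchar column is filled
> // with its default value, and the write task fails as in the trace above.
> spark.sql("INSERT INTO t (c1) VALUES (1)")
> ```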


