xiangyuf opened a new issue, #5271:
URL: https://github.com/apache/paimon/issues/5271

   ### Search before asking
   
   - [x] I searched in the [issues](https://github.com/apache/paimon/issues) and found nothing similar.
   
   
   ### Paimon version
   
   0.8.2
   
   ### Compute Engine
   
   Spark
   
   ### Minimal reproduce step
   
   A Spark job reading a Paimon table fails while decoding a Parquet binary column:

   ```
   org.apache.spark.util.TaskCompletionListenerException: java.lang.NegativeArraySizeException: -1689105028

   Previous exception in task: -1689105028
       org.apache.paimon.data.columnar.heap.HeapBytesVector.reserve(HeapBytesVector.java:100)
       org.apache.paimon.data.columnar.heap.HeapBytesVector.appendBytes(HeapBytesVector.java:77)
       org.apache.paimon.format.parquet.reader.BytesColumnReader.readBinary(BytesColumnReader.java:89)
       org.apache.paimon.format.parquet.reader.BytesColumnReader.readBatch(BytesColumnReader.java:51)
       org.apache.paimon.format.parquet.reader.BytesColumnReader.readBatch(BytesColumnReader.java:32)
       org.apache.paimon.format.parquet.reader.AbstractColumnReader.readToVector(AbstractColumnReader.java:189)
       org.apache.paimon.format.parquet.ParquetReaderFactory$ParquetReader.nextBatch(ParquetReaderFactory.java:318)
       org.apache.paimon.format.parquet.ParquetReaderFactory$ParquetReader.readBatch(ParquetReaderFactory.java:291)
       org.apache.paimon.io.FileRecordReader.readBatch(FileRecordReader.java:47)
       org.apache.paimon.spark.PaimonRecordReaderIterator.readBatch(PaimonRecordReaderIterator.scala:69)
       org.apache.paimon.spark.PaimonRecordReaderIterator.<init>(PaimonRecordReaderIterator.scala:34)
       org.apache.paimon.spark.PaimonPartitionReader.iterator$lzycompute(PaimonPartitionReader.scala:43)
       org.apache.paimon.spark.PaimonPartitionReader.iterator(PaimonPartitionReader.scala:41)
       org.apache.paimon.spark.PaimonPartitionReader.next(PaimonPartitionReader.scala:47)
       org.apache.spark.sql.execution.datasources.v2.PartitionIterator.hasNext(DataSourceRDD.scala:93)
       org.apache.spark.sql.execution.datasources.v2.MetricsIterator.hasNext(DataSourceRDD.scala:138)
       org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
       scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
       org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage4.processNext(Unknown Source)
       org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
       org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:773)
       scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
       org.apache.spark.shuffle.sort.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:179)
       org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:61)
       org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
       org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
       org.apache.spark.scheduler.Task.run(Task.scala:134)
       org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:538)
       org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1618)
       org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:541)
       java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
       java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
       java.base/java.lang.Thread.run(Thread.java:840)
       at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:205)
       at org.apache.spark.TaskContextImpl.invokeTaskCompletionListeners(TaskContextImpl.scala:142)
       at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:135)
       at org.apache.spark.scheduler.Task.run(Task.scala:144)
       at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:538)
       at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1618)
       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:541)
       at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
       at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
       at java.base/java.lang.Thread.run(Thread.java:840)
   ```
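   The negative size in `NegativeArraySizeException: -1689105028` looks like a 32-bit `int` overflow while the bytes vector grows its backing buffer for a large binary batch. Below is a minimal, self-contained sketch of that suspected failure mode; it is hypothetical illustration code, not the actual `HeapBytesVector.reserve` implementation:

   ```java
   // Hypothetical demo: doubling a capacity with plain int arithmetic can
   // wrap to a negative value, and `new byte[negative]` then throws
   // NegativeArraySizeException. This is NOT Paimon's code, only the
   // suspected pattern behind the trace above.
   public class CapacityOverflowDemo {
       public static void main(String[] args) {
           int used = (1 << 30) + 512;   // just over 1 GiB of bytes already buffered
           int newCapacity = used * 2;   // int overflow: wraps to -2147482624
           System.out.println(newCapacity); // prints -2147482624
           byte[] grown = new byte[newCapacity]; // throws NegativeArraySizeException
       }
   }
   ```

   If this is the cause, computing the new capacity with `long` arithmetic and capping it at the maximum array size would avoid the wrap-around.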
   
   ### What doesn't meet your expectations?
   
   The Spark job fails with the exception above.
   
   ### Anything else?
   
   _No response_
   
   ### Are you willing to submit a PR?
   
   - [x] I'm willing to submit a PR!

