zhztheplayer commented on PR #11461:
URL: https://github.com/apache/incubator-gluten/pull/11461#issuecomment-3779933656

   PR status:
   
   When running the test case https://github.com/apache/incubator-gluten/blob/4cd6440d23665d8807fbf1fc0d470a148accf020/backends-velox/src-delta33/test/scala/org/apache/spark/sql/delta/perf/OptimizedWritesSuite.scala#L124, the shuffle reader raises a "Stream is corrupted" error:
   
   ```
E20260121 18:29:26.731045 420393 Exceptions.h:53] Line: /opt/code/incubator-gluten/ep/build-velox/build/velox_ep/velox/exec/Driver.cpp:582, Function:operator(), Expression:  Operator::getOutput failed for [operator: ValueStream, plan node ID: 0]: Error during calling Java code from native code: org.apache.gluten.exception.GlutenException: org.apache.gluten.exception.GlutenException: Error during calling Java code from native code: org.apache.spark.shuffle.FetchFailedException: Block shuffle_0_0_1805 is corrupted but the cause is unknown
        at org.apache.spark.errors.SparkCoreErrors$.fetchFailedError(SparkCoreErrors.scala:437)
        at org.apache.spark.storage.ShuffleBlockFetcherIterator.throwFetchFailedException(ShuffleBlockFetcherIterator.scala:1239)
        at org.apache.spark.storage.BufferReleasingInputStream.tryOrFetchFailedException(ShuffleBlockFetcherIterator.scala:1398)
        at org.apache.spark.storage.BufferReleasingInputStream.read(ShuffleBlockFetcherIterator.scala:1374)
        at org.apache.gluten.vectorized.OnHeapJniByteInputStream.read(OnHeapJniByteInputStream.java:39)
        at org.apache.gluten.vectorized.ColumnarBatchOutIterator.nativeNext(Native Method)
        at org.apache.gluten.vectorized.ColumnarBatchOutIterator.next0(ColumnarBatchOutIterator.java:63)
        at org.apache.gluten.vectorized.ColumnarBatchOutIterator.next0(ColumnarBatchOutIterator.java:28)
        at org.apache.gluten.iterator.ClosableIterator.next(ClosableIterator.java:48)
        at org.apache.gluten.vectorized.ColumnarBatchSerializerInstanceImpl$TaskDeserializationStream.liftedTree1$1(ColumnarBatchSerializer.scala:188)
        at org.apache.gluten.vectorized.ColumnarBatchSerializerInstanceImpl$TaskDeserializationStream.readValue(ColumnarBatchSerializer.scala:187)
        at org.apache.spark.serializer.DeserializationStream$$anon$2.getNext(Serializer.scala:188)
        at org.apache.spark.serializer.DeserializationStream$$anon$2.getNext(Serializer.scala:185)
        at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
        at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
        at scala.collection.convert.Wrappers$IteratorWrapper.hasNext(Wrappers.scala:32)
        at org.apache.gluten.vectorized.ColumnarBatchInIterator.hasNext(ColumnarBatchInIterator.java:36)
        at org.apache.gluten.vectorized.ColumnarBatchOutIterator.nativeHasNext(Native Method)
        at org.apache.gluten.vectorized.ColumnarBatchOutIterator.hasNext0(ColumnarBatchOutIterator.java:58)
        at org.apache.gluten.iterator.ClosableIterator.hasNext(ClosableIterator.java:36)
        at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:45)
        at org.apache.gluten.iterator.IteratorsV1$InvocationFlowProtection.hasNext(IteratorsV1.scala:154)
        at org.apache.gluten.iterator.IteratorsV1$IteratorCompleter.hasNext(IteratorsV1.scala:66)
        at org.apache.gluten.iterator.IteratorsV1$PayloadCloser.hasNext(IteratorsV1.scala:38)
        at org.apache.gluten.iterator.IteratorsV1$LifeTimeAccumulator.hasNext(IteratorsV1.scala:95)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:491)
        at org.apache.spark.sql.delta.files.GlutenDeltaFileFormatWriter$.executeTask(GlutenDeltaFileFormatWriter.scala:452)
        at org.apache.spark.sql.delta.files.GlutenDeltaFileFormatWriter$.$anonfun$executeWrite$4(GlutenDeltaFileFormatWriter.scala:311)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
        at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
        at org.apache.spark.scheduler.Task.run(Task.scala:141)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
        at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
        at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at java.base/java.lang.Thread.run(Thread.java:840)
   Caused by: java.io.IOException: Stream is corrupted
        at net.jpountz.lz4.LZ4BlockInputStream.refill(LZ4BlockInputStream.java:202)
        at net.jpountz.lz4.LZ4BlockInputStream.read(LZ4BlockInputStream.java:159)
        at net.jpountz.lz4.LZ4BlockInputStream.read(LZ4BlockInputStream.java:172)
        at org.apache.spark.storage.BufferReleasingInputStream.$anonfun$read$2(ShuffleBlockFetcherIterator.scala:1374)
        at scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:23)
        at org.apache.spark.storage.BufferReleasingInputStream.tryOrFetchFailedException(ShuffleBlockFetcherIterator.scala:1389)
        ... 37 more
   ```
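
   For context on where the message originates: lz4-java's `LZ4BlockOutputStream` prefixes every compressed block with an ASCII magic header (`"LZ4Block"`), and `LZ4BlockInputStream.refill` throws `IOException("Stream is corrupted")` when that header (or the block's length/checksum fields) fails validation. So the error tends to surface when the reader is decoding bytes that were not produced by a matching LZ4 block writer, or when the underlying stream delivers bytes at the wrong offsets (e.g. a short/duplicated read through the JNI-backed input stream above). Below is a simplified illustrative sketch of the magic-header check, not the actual lz4-java code; the class and method names are made up for illustration:

   ```java
   import java.io.ByteArrayInputStream;
   import java.io.DataInputStream;
   import java.io.IOException;
   import java.nio.charset.StandardCharsets;

   // Illustrative sketch of lz4-java's block-header validation. The real
   // check lives in net.jpountz.lz4.LZ4BlockInputStream.refill.
   public class Lz4MagicCheckSketch {
       // lz4-java's block format starts each block with this 8-byte magic.
       static final byte[] MAGIC = "LZ4Block".getBytes(StandardCharsets.US_ASCII);

       // Throws the same "Stream is corrupted" message the stack trace shows
       // when the incoming bytes do not start with the expected magic.
       static void checkMagic(DataInputStream in) throws IOException {
           byte[] header = new byte[MAGIC.length];
           in.readFully(header);
           for (int i = 0; i < MAGIC.length; i++) {
               if (header[i] != MAGIC[i]) {
                   throw new IOException("Stream is corrupted");
               }
           }
       }

       public static void main(String[] args) throws IOException {
           // Bytes not written by an LZ4 block writer, standing in for a
           // mismatched codec or a mis-positioned read of the shuffle block.
           byte[] bogus = "NOTLZ4!!rest-of-block".getBytes(StandardCharsets.US_ASCII);
           try {
               checkMagic(new DataInputStream(new ByteArrayInputStream(bogus)));
           } catch (IOException e) {
               System.out.println("caught: " + e.getMessage());
           }
       }
   }
   ```

   Given that, the first things I would rule out are a compression-codec mismatch between the shuffle writer and this reader, and the byte accounting in the JNI read path (`OnHeapJniByteInputStream.read`) consuming the fetched block at wrong offsets.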
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
