[ https://issues.apache.org/jira/browse/PARQUET-831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17684187#comment-17684187 ]
ASF GitHub Bot commented on PARQUET-831:
----------------------------------------

wgtmac commented on PR #1022:
URL: https://github.com/apache/parquet-mr/pull/1022#issuecomment-1416791622

   Thanks for your contribution @jianchun. Could you please fix the CI check first?

> Corrupt Parquet Files
> ---------------------
>
>                 Key: PARQUET-831
>                 URL: https://issues.apache.org/jira/browse/PARQUET-831
>             Project: Parquet
>          Issue Type: Bug
>          Components: parquet-mr
>    Affects Versions: 1.7.0
>         Environment: HDP-2.5.3.0 Spark-2.0.2
>            Reporter: Steve Severance
>            Priority: Major
>
> I am getting corrupt parquet files as the result of a Spark job. The write job completes with no errors, but when I read the data again I get the following error:
>
> org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file hdfs://MYPATH/part-r-00004-b5c93a19-2f75-4c04-b798-de9cb463f02f.gz.parquet
> at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:228)
> at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:201)
> at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:91)
> at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:128)
> at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:91)
> at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithoutKey$(Unknown Source)
> at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
> at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
> at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
> at org.apache.spark.scheduler.Task.run(Task.scala:86)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NegativeArraySizeException
> at org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:755)
> at org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:494)
> at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127)
> at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:208)
>
> The job that generates this data partitions and sorts the data in a particular way to achieve better compression. If I don't partition and sort, I have not been able to reproduce this behavior. It also occurs on only about 25% of the data.
> Most of the time, simply rerunning the write job would make the read error go away, but I have now run across cases where it did not. I am happy to share what data I can, or to work with someone to run this down.
> I know this is a sub-optimal report, but I have not been able to randomly generate data that reproduces this issue. The data that trips this bug typically sits in files of 5GB or more after compression.
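For context on the trace above: a java.lang.NegativeArraySizeException coming out of ParquetFileReader$ConsecutiveChunkList.readAll is the classic symptom of a byte-length sum overflowing a signed 32-bit int, which would fit the 5GB+ file sizes noted in the report. The following is a minimal, standalone Java sketch of that failure mode, not parquet-mr's actual code; the chunk sizes are illustrative values, not taken from the reported file:

```java
public class NegativeArraySizeDemo {
    public static void main(String[] args) {
        // Hypothetical byte lengths of consecutive column chunks;
        // together they exceed Integer.MAX_VALUE (~2.1 GB).
        long[] chunkSizes = {1_500_000_000L, 1_500_000_000L};

        int totalLength = 0;                 // BUG: 32-bit accumulator
        for (long size : chunkSizes) {
            totalLength += (int) size;       // silently wraps past Integer.MAX_VALUE
        }
        System.out.println("accumulated length = " + totalLength); // prints a negative value

        byte[] buffer = new byte[totalLength]; // throws NegativeArraySizeException
        System.out.println(buffer.length);     // never reached
    }
}
```

If something like this is the cause, the overflow happens silently at write- or read-planning time, which would explain why the write job itself completes without errors.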
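The write pattern the reporter describes (partition, then sort within partitions, to improve compression) has roughly the following shape in Spark 2.0's Java API. This is only an approximation of the described job, not the reporter's actual code; the input path, output path, and column names are hypothetical stand-ins:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;

public class PartitionSortWriteJob {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("partition-sort-write")
                .getOrCreate();

        // Hypothetical input; the ticket does not show the actual schema.
        Dataset<Row> df = spark.read().parquet("hdfs://MYPATH/input");

        df.repartition(col("partitionKey"))           // co-locate similar rows
          .sortWithinPartitions("partitionKey", "ts") // sorted runs compress better
          .write()
          .option("compression", "gzip")              // matches the .gz.parquet file name
          .parquet("hdfs://MYPATH/output");

        spark.stop();
    }
}
```

Sorting within partitions clusters similar values, which shrinks encoded pages and can concentrate far more compressed data into each row group, consistent with the report that the problem only appears when the data is partitioned and sorted.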