[ https://issues.apache.org/jira/browse/SPARK-34516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17289680#comment-17289680 ]

angerszhu edited comment on SPARK-34516 at 2/24/21, 6:34 AM:
-------------------------------------------------------------

For this error, I found some related issues:

[https://github.com/trinodb/trino/issues/2256] (not very clear)

https://issues.apache.org/jira/browse/DRILL-3871 (seems to be the same issue, 
caused by the parquet reader's logic)

https://issues.apache.org/jira/browse/PARQUET-400 (looks like it has been fixed 
in the parquet version used by Spark 3.0.1)

 

Checking the parquet code for this part, it just decodes a PageHeader from a 
data stream.
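As a rough, self-contained illustration of the failure mode (this is NOT Parquet's actual decoder, which deserializes the Thrift struct via the compact protocol in org.apache.parquet.format.Util.readPageHeader): the header is read directly from the column chunk's byte stream, so if the reader's offset is wrong or the bytes are truncated/corrupted, a required field such as uncompressed_page_size is never seen and decoding aborts with an IOException. The class and field layout below are invented for the sketch:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

// Toy stand-in for PageHeader decoding: two required big-endian int fields
// read straight from the stream, like the real reader does via Thrift.
public class PageHeaderDemo {
    static int[] readHeader(byte[] bytes) throws IOException {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes))) {
            int uncompressedPageSize = in.readInt(); // required field
            int compressedPageSize = in.readInt();   // required field
            return new int[] {uncompressedPageSize, compressedPageSize};
        } catch (EOFException e) {
            // Mirrors "Required field 'uncompressed_page_size' was not found"
            throw new IOException("can not read PageHeader: required field missing", e);
        }
    }

    public static void main(String[] args) throws IOException {
        int[] ok = readHeader(new byte[] {0, 0, 0, 8, 0, 0, 0, 4});
        System.out.println("ok: " + ok[0] + " " + ok[1]);
        try {
            readHeader(new byte[] {0, 0, 0, 8}); // truncated stream, second field missing
        } catch (IOException e) {
            System.out.println("error: " + e.getMessage());
        }
    }
}
```

The point is that the decoder has no way to resynchronize: a bad start offset or a short read surfaces as a "required field not found" error rather than a clearer corruption message.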

Gentle ping [~lian cheng] [~viirya] [~maxgekk] [~dongjoon]. I am not sure 
whether this is related to Spark's vectorized parquet reader. Can you take a 
look and give some advice?



> Spark 3.0.1 encounter parquet PageHerder IO issue
> -------------------------------------------------
>
>                 Key: SPARK-34516
>                 URL: https://issues.apache.org/jira/browse/SPARK-34516
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.0.1
>            Reporter: angerszhu
>            Priority: Major
>
> {code:java}
> Caused by: java.io.IOException: can not read class 
> org.apache.parquet.format.PageHeader: Required field 'uncompressed_page_size' 
> was not found in serialized data! Struct: 
> org.apache.parquet.format.PageHeader$PageHeaderStandardScheme@42a9002d
>       at org.apache.parquet.format.Util.read(Util.java:216)
>       at org.apache.parquet.format.Util.readPageHeader(Util.java:65)
>       at 
> org.apache.parquet.hadoop.ParquetFileReader$WorkaroundChunk.readPageHeader(ParquetFileReader.java:1064)
>       at 
> org.apache.parquet.hadoop.ParquetFileReader$Chunk.readAllPages(ParquetFileReader.java:950)
>       at 
> org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:807)
>       at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:313)
>       at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:268)
>       at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextKeyValue(VectorizedParquetRecordReader.java:171)
>       at 
> org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
>       at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
>       at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
>       at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
>       at 
> org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:491)
>       at 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
