Github user viirya commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16578#discussion_r139331198
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetReadSupport.scala ---
    @@ -63,9 +74,22 @@ private[parquet] class ParquetReadSupport extends ReadSupport[UnsafeRow] with Lo
           StructType.fromString(schemaString)
         }
     
    -    val parquetRequestedSchema =
    +    val clippedParquetSchema =
           ParquetReadSupport.clipParquetSchema(context.getFileSchema, catalystRequestedSchema)
     
    +    val parquetRequestedSchema = if (parquetMrCompatibility) {
    +      // Parquet-mr will throw an exception if we try to read a superset of the file's schema.
    +      // Therefore, we intersect our clipped schema with the underlying file's schema
    +      ParquetReadSupport.intersectParquetGroups(clippedParquetSchema, context.getFileSchema)
    +        .map(intersectionGroup =>
    +          new MessageType(intersectionGroup.getName, intersectionGroup.getFields))
    +        .getOrElse(ParquetSchemaConverter.EMPTY_MESSAGE)
    +    } else {
    +      // Spark's built-in Parquet reader will throw an exception in some cases if the requested
    +      // schema is not the same as the clipped schema
    --- End diff --
    
    I think the built-in Parquet reader here means the vectorized reader. But I don't think we use `ParquetReadSupport` in the vectorized reader?
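    
    For context, a minimal sketch of what an intersection helper like `intersectParquetGroups` could look like, assuming parquet-mr's `GroupType` API (`containsField`, `getType`, `withNewFields`). This is an illustration under those assumptions, not necessarily the PR's actual implementation:
    
    ```scala
    import scala.collection.JavaConverters._
    
    import org.apache.parquet.schema.{GroupType, Type}
    
    // Sketch: keep only the fields of `clipped` that also exist (recursively)
    // in `file`, returning None when no fields overlap. Assumes a field name
    // shared by both sides refers to the same kind of type (group vs. primitive).
    def intersectParquetGroups(clipped: GroupType, file: GroupType): Option[GroupType] = {
      val sharedFields: Seq[Type] = clipped.getFields.asScala.flatMap { field =>
        if (!file.containsField(field.getName)) {
          None
        } else if (field.isPrimitive) {
          Some(field)
        } else {
          // Recurse into the corresponding group in the file's schema.
          intersectParquetGroups(field.asGroupType, file.getType(field.getName).asGroupType)
        }
      }
      if (sharedFields.nonEmpty) Some(clipped.withNewFields(sharedFields.asJava)) else None
    }
    ```
    
    The caller in the diff then rewraps the surviving root group as a `MessageType` and falls back to `ParquetSchemaConverter.EMPTY_MESSAGE` when the intersection is empty.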

