HyukjinKwon commented on a change in pull request #31921: URL: https://github.com/apache/spark/pull/31921#discussion_r598455484
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaConverter.scala
##########

@@ -130,13 +130,11 @@ class ParquetToSparkSchemaConverter(
       case INT32 =>
         originalType match {
           case INT_8 => ByteType
-          case INT_16 => ShortType
-          case INT_32 | null => IntegerType
+          case INT_16 | UINT_8 => ShortType
+          case INT_32 | UINT_16 | null => IntegerType
           case DATE => DateType
           case DECIMAL => makeDecimalType(Decimal.MAX_INT_DIGITS)
-          case UINT_8 => typeNotSupported()
-          case UINT_16 => typeNotSupported()
-          case UINT_32 => typeNotSupported()
+          case UINT_32 => LongType

Review comment:
These were explicitly left unsupported in https://github.com/apache/spark/pull/9646, per @liancheng's advice (he is also a Parquet committer). So I'm not sure this is something we should support.
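For context on what the diff proposes: each Parquet unsigned type is mapped to the next-wider signed Spark type, so every possible unsigned value still fits. The sketch below is not Spark code; it is a minimal, hypothetical illustration of that widening, using plain JVM bit-masking (the names `uint8ToShort` etc. are made up for this example).

```scala
// Hypothetical sketch: widening Parquet unsigned ints into the next-larger
// signed type so no values are lost, mirroring the mapping in the diff
// (UINT_8 -> ShortType, UINT_16 -> IntegerType, UINT_32 -> LongType).
object UnsignedWidening {
  // UINT_8 max (255) exceeds Byte.MaxValue (127), but fits in a Short.
  def uint8ToShort(raw: Byte): Short = (raw & 0xFF).toShort

  // UINT_16 max (65535) exceeds Short.MaxValue (32767), but fits in an Int.
  def uint16ToInt(raw: Short): Int = raw & 0xFFFF

  // UINT_32 max (4294967295) exceeds Int.MaxValue, but fits in a Long.
  def uint32ToLong(raw: Int): Long = raw & 0xFFFFFFFFL

  def main(args: Array[String]): Unit = {
    // Raw bits of all-ones reinterpret as the unsigned maximum of each width.
    println(uint8ToShort(0xFF.toByte))  // 255
    println(uint16ToInt(0xFFFF.toShort)) // 65535
    println(uint32ToLong(-1))            // 4294967295
  }
}
```

This is also why UINT_32 needs LongType rather than IntegerType: its upper half of the value range cannot be represented in a signed 32-bit int without reinterpreting the sign bit.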