[ https://issues.apache.org/jira/browse/SPARK-25206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
yucai updated SPARK-25206:
--------------------------
Description:

In current Spark 2.3.1, the query below silently returns wrong data.
{code:java}
spark.range(10).write.parquet("/tmp/data")
sql("DROP TABLE t")
sql("CREATE TABLE t (ID LONG) USING parquet LOCATION '/tmp/data'")

scala> sql("select * from t where id > 0").show
+---+
| ID|
+---+
+---+{code}
*Root Cause*
Spark pushes down FilterApi.gt(intColumn("{color:#ff0000}ID{color}"), 0: Integer) into Parquet, but {color:#ff0000}ID{color} does not exist in /tmp/data (Parquet is case sensitive; the file actually has {color:#ff0000}id{color}), so no records are returned.

In Spark 2.1, the user gets an exception:
{code:java}
Caused by: java.lang.IllegalArgumentException: Column [ID] was not found in schema!{code}
But in Spark 2.3, they silently get wrong results.

Since SPARK-24716, Spark uses the Parquet schema instead of the Hive metastore schema to do the pushdown, which addresses exactly this issue. [~yumwang], [~cloud_fan], [~smilegator], any thoughts? Should we backport it?

was:

In current Spark 2.3.1, the query below silently returns wrong data.
{code:java}
spark.range(10).write.parquet("/tmp/data")
sql("DROP TABLE t")
sql("CREATE TABLE t (ID LONG) USING parquet LOCATION '/tmp/data'")

scala> sql("select * from t where id > 0").show
+---+
| ID|
+---+
+---+{code}
*Root Cause*
Spark pushes down FilterApi.gt(intColumn("{color:#ff0000}ID{color}"), 0: Integer) into Parquet, but {color:#ff0000}ID{color} does not exist in /tmp/data (Parquet is case sensitive; the file actually has {color:#ff0000}id{color}), so no records are returned.

In Spark 2.1, the user gets an exception:
{code:java}
Caused by: java.lang.IllegalArgumentException: Column [ID] was not found in schema!{code}
But in Spark 2.3, they silently get wrong results.

Since SPARK-24716, Spark uses the Parquet schema instead of the Hive metastore schema to do the pushdown, which addresses exactly this issue. [~yuming], [~cloud_fan], [~smilegator], any thoughts? Should we backport it?


> Wrong data may be returned when pushdown is enabled
> ----------------------------------------------------
>
>                 Key: SPARK-25206
>                 URL: https://issues.apache.org/jira/browse/SPARK-25206
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.3.1
>            Reporter: yucai
>            Priority: Major
>
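To illustrate the fix direction described in the description (building the pushdown predicate against the Parquet file schema rather than the metastore schema), here is a minimal Scala sketch, not Spark's actual implementation: it resolves the catalog column name against the physical Parquet field names and only builds the predicate when an unambiguous match exists. The names pushedGtLong, parquetFieldNames, and caseSensitive are hypothetical inputs assumed to be available from the scan; only the Parquet FilterApi calls are real library API, and longColumn is used here because the column is a LONG.

{code:scala}
import org.apache.parquet.filter2.predicate.{FilterApi, FilterPredicate}

// Sketch only: build a "> value" predicate on a bigint column, but only after
// resolving the catalog-facing name (e.g. "ID") to the physical Parquet field
// name (e.g. "id"). If nothing matches, or the case-insensitive match is
// ambiguous, return None so the filter is simply not pushed down and Spark's
// own Filter operator still evaluates it on the scanned rows.
def pushedGtLong(
    catalogName: String,
    value: java.lang.Long,
    parquetFieldNames: Seq[String],   // assumed: field names read from the file footer
    caseSensitive: Boolean            // assumed: the session's spark.sql.caseSensitive
): Option[FilterPredicate] = {
  val resolved =
    if (caseSensitive) {
      parquetFieldNames.find(_ == catalogName)
    } else {
      val matches = parquetFieldNames.filter(_.equalsIgnoreCase(catalogName))
      if (matches.size == 1) matches.headOption else None
    }
  resolved.map(field => FilterApi.gt(FilterApi.longColumn(field), value))
}

// For the table above, /tmp/data only contains a field named "id":
// pushedGtLong("ID", 0L, Seq("id"), caseSensitive = false)  => Some(gt(id, 0))
// pushedGtLong("ID", 0L, Seq("id"), caseSensitive = true)   => None (no pushdown)
{code}

Returning None on a missing or ambiguous match is the conservative choice: skipping the pushdown only costs performance, whereas pushing a filter against the wrong physical column is exactly the silent row-dropping shown in the reproduction.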