Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/9327#discussion_r43846677

--- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala ---
@@ -314,4 +314,24 @@ class ParquetFilterSuite extends QueryTest with ParquetTest with SharedSQLContex
       }
     }
   }
+
+  test("SPARK-11103: Filter applied on merged Parquet schema with new column fails") {
+    import testImplicits._
+
+    withSQLConf(SQLConf.PARQUET_FILTER_PUSHDOWN_ENABLED.key -> "true",
+      SQLConf.PARQUET_SCHEMA_MERGING_ENABLED.key -> "true") {
+      withTempPath { dir =>
+        var pathOne = s"${dir.getCanonicalPath}/table1"
+        (1 to 3).map(i => (i, i.toString)).toDF("a", "b").write.parquet(pathOne)
+        var pathTwo = s"${dir.getCanonicalPath}/table2"
+        (1 to 3).map(i => (i, i.toString)).toDF("c", "b").write.parquet(pathTwo)
+
+        // If the "c = 1" filter gets pushed down, this query will throw an exception which
+        // Parquet emits. This is a Parquet issue (PARQUET-389).
+        checkAnswer(
+          sqlContext.read.parquet(pathOne, pathTwo).filter("c = 1"),
--- End diff --

I investigated that; the order is not guaranteed. This is because of `FileStatusCache` in `HadoopFsRelation` (which `ParquetRelation` extends, as you know). `FileStatusCache.listLeafFiles()` returns a `Set[FileStatus]`, which loses the ordering of the original `Array[FileStatus]`. So, after retrieving the list of leaf files including `_metadata` and `_common_metadata`, `ParquetRelation.mergeSchemasInParallel()` merges (separately if necessary) the `Set`s of `_metadata`, `_common_metadata`, and part-files. I think this can be resolved by using `LinkedHashSet`. I will open an issue for this.
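The ordering issue above can be sketched with plain Scala collections. The file names below are illustrative only, not taken from the actual `FileStatusCache` listing:

```scala
import scala.collection.mutable

object OrderDemo {
  def main(args: Array[String]): Unit = {
    // Illustrative leaf-file names, in the order a listing might return them.
    val paths = Seq("part-00002", "_metadata", "part-00001", "_common_metadata")

    // A plain mutable.HashSet iterates in hash order, so the original
    // listing order of the elements is generally not preserved.
    val unordered = mutable.HashSet[String]() ++= paths

    // A LinkedHashSet de-duplicates like any Set but also keeps
    // insertion order, which is the fix suggested above.
    val ordered = mutable.LinkedHashSet[String]() ++= paths

    assert(ordered.toSeq == paths)
    println(ordered.toSeq)
  }
}
```

Iterating `unordered` may happen to match the input for small examples, but there is no guarantee; `ordered.toSeq` always matches `paths`.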