Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18000#discussion_r116950919
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilters.scala ---
    @@ -166,7 +166,14 @@ private[parquet] object ParquetFilters {
        * Converts data sources filters to Parquet filter predicates.
        */
       def createFilter(schema: StructType, predicate: sources.Filter): Option[FilterPredicate] = {
    -    val dataTypeOf = getFieldMap(schema)
    +    val nameTypeMap = getFieldMap(schema)
    +
    +    // Parquet does not allow dots in the column name because dots are used as a column path
    +    // delimiter. Since Parquet 1.8.2 (PARQUET-389), Parquet accepts the filter predicates
    +    // with missing columns. The incorrect results could be got from Parquet when we push down
    +    // filters for the column having dots in the names. Thus, we do not push down such filters.
    --- End diff --
    
    Yes, but the problem is that it (almost) always evaluates the predicate against NULL when the column name contains dots, because the column path becomes a nested one (the path `a` -> `b`, rather than a single column named `` `a.b` ``) in the Parquet filter predicate.
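    
    To be concrete about where the dot splitting happens, here is a minimal standalone sketch against parquet-mr's `FilterApi` (my illustration, not code from this PR):
    
    ```scala
    import org.apache.parquet.filter2.predicate.FilterApi
    
    // "col.dots" is split on the dots here, so this column refers to the nested
    // path col -> dots rather than to a single flat column named "col.dots".
    val column = FilterApi.intColumn("col.dots")
    
    // A predicate built on it therefore targets a path that does not exist in
    // the file Spark wrote, which is why the pushed-down filter drops every row.
    val predicate = FilterApi.gt(column, java.lang.Integer.valueOf(0))
    ```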
    
    You are right about `IsNull`. I pointed this out in https://github.com/apache/spark/pull/17680#discussion_r112285883: Parquet (almost) always evaluates it to `true`, but the rows are then filtered again on the Spark side. So, in terms of input/output it is not an issue for that case, but I believe we should disable the push-down for it too.
    
    I think this example (runnable in `spark-shell`, where `spark.implicits._` is in scope for `toDF`) explains the case:
    
    ```scala
    val dfs = Seq(
      Seq(Some(1), None).toDF("col.dots"),
      Seq(Some(1L), None).toDF("col.dots"),
      Seq(Some(1.0F), None).toDF("col.dots"),
      Seq(Some(1.0D), None).toDF("col.dots"),
      Seq(true, false).toDF("col.dots"),
      Seq("apple", null).toDF("col.dots"),
      Seq("apple", null).toDF("col.dots")
    )
     
    val predicates = Seq(
      "`col.dots` > 0",
      "`col.dots` >= 1L",
      "`col.dots` < 2.0",
      "`col.dots` <= 1.0D",
      "`col.dots` == true",
      "`col.dots` IS NOT NULL",
      "`col.dots` IS NULL"
    )
    
    dfs.zip(predicates).zipWithIndex.foreach { case ((df, predicate), i) =>
      val path = s"/tmp/abcd$i"
      df.write.mode("overwrite").parquet(path)
      spark.read.parquet(path).where(predicate).show()  
    }
    ```
    
    ```
    +--------+
    |col.dots|
    +--------+
    +--------+
    
    +--------+
    |col.dots|
    +--------+
    +--------+
    
    +--------+
    |col.dots|
    +--------+
    +--------+
    
    +--------+
    |col.dots|
    +--------+
    +--------+
    
    +--------+
    |col.dots|
    +--------+
    +--------+
    
    +--------+
    |col.dots|
    +--------+
    +--------+
    
    +--------+
    |col.dots|
    +--------+
    |    null|
    +--------+
    ```
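    
    That is, every pushed-down predicate above except `IS NULL` incorrectly filters out all rows (for example, `` `col.dots` > 0 `` should return the row containing `1`), and `IS NULL` only stays correct because Spark evaluates the filter again. As a rough sketch of the kind of guard I mean (the helper name here is just illustrative, not necessarily what this PR ends up with):
    
    ```scala
    import org.apache.spark.sql.types.StructType
    
    // Illustrative only: consider push-down just for columns that exist in the
    // schema and contain no dot, since Parquet would otherwise interpret the
    // name as a nested column path.
    def canMakeFilterOn(schema: StructType, name: String): Boolean =
      schema.fieldNames.contains(name) && !name.contains(".")
    
    // Inside createFilter, each case would then additionally require this guard,
    // e.g. `case sources.GreaterThan(name, value) if canMakeFilterOn(schema, name) => ...`
    ```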

