GitHub user rdblue commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21556#discussion_r198904779
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilters.scala ---
    @@ -62,6 +98,30 @@ private[parquet] class ParquetFilters(pushDownDate: Boolean) {
           (n: String, v: Any) => FilterApi.eq(
             intColumn(n),
             Option(v).map(date => dateToDays(date.asInstanceOf[Date]).asInstanceOf[Integer]).orNull)
    +    case decimal: DecimalType
    +      if pushDownDecimal && (DecimalType.is32BitDecimalType(decimal) && !readLegacyFormat) =>
    +      (n: String, v: Any) => FilterApi.eq(
    +        intColumn(n),
    +        Option(v).map(_.asInstanceOf[JBigDecimal].unscaledValue().intValue()
    --- End diff ---
    
    Does this need to validate the scale of the decimal, or is the scale adjusted in the analyzer?
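    
    For context, here is a minimal standalone sketch of why the scale matters (this is not code from the PR; the object name and the column scale of 2 are made up for illustration). Two numerically equal java.math.BigDecimal values with different scales have different unscaled values, which is exactly what a pushed-down INT32 comparison would see:
    
        import java.math.{BigDecimal => JBigDecimal}
        
        object UnscaledValueScaleDemo {
          def main(args: Array[String]): Unit = {
            val a = new JBigDecimal("1.5")  // scale 1, unscaled value 15
            val b = new JBigDecimal("1.50") // scale 2, unscaled value 150
        
            // Numerically equal...
            assert(a.compareTo(b) == 0)
        
            // ...but the unscaled ints differ, so a predicate literal converted
            // with the wrong scale would compare against the wrong stored value.
            println(a.unscaledValue().intValue()) // 15
            println(b.unscaledValue().intValue()) // 150
        
            // Rescaling the literal to the column's declared scale (a hypothetical
            // scale of 2 here) before taking the unscaled value lines it up with
            // what the file stores.
            println(a.setScale(2).unscaledValue().intValue()) // 150
          }
        }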


---
