Github user squito commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19769#discussion_r151556907
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala ---
    @@ -355,9 +361,31 @@ class ParquetFileFormat
               fileSplit.getLocations,
               null)
     
    +      // PARQUET_INT96_TIMESTAMP_CONVERSION says to apply timezone conversions to int96 timestamps
    +      // *only* if the file was created by something other than "parquet-mr", so check the actual
    +      // writer here for this file.  We have to do this per-file, as each file in the table may
    +      // have different writers.  Sadly, this also means we have to clone the hadoopConf, as
    +      // different threads may want different values.  We have to use the hadoopConf as it's
    +      // our only way to pass values to ParquetReadSupport.init
    +      val localHadoopConf =
    --- End diff --
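    
    (For context, a rough sketch of the per-file copy the quoted comment describes; `sharedConf`, `convertInt96ForThisFile`, and the config key are illustrative placeholders, not the PR's actual code.)
    
    ```scala
    import org.apache.hadoop.conf.Configuration
    
    // Build a per-file Configuration so each task can carry its own int96-conversion
    // decision down to ParquetReadSupport.init without mutating the conf shared by
    // other threads.
    def perFileConf(sharedConf: Configuration, convertInt96ForThisFile: Boolean): Configuration = {
      val localHadoopConf = new Configuration(sharedConf)  // copy, since other files may need a different value
      // Illustrative key name only; the real flag is defined elsewhere in Spark.
      localHadoopConf.setBoolean("spark.sql.parquet.int96TimestampConversion", convertInt96ForThisFile)
      localHadoopConf
    }
    ```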
    
    yeah that works for `VectorizedParquetRecordReader`, but not for [`ParquetRecordReader`](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala#L381), which we don't control; that one does this dance with the hadoopConf.  I could put this whole copy behind `!enableVectorizedReader` though.
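    
    (Likewise, a sketch of what hiding the copy behind `!enableVectorizedReader` could look like; `enableVectorizedReader`, `sharedConf`, `convertInt96ForThisFile`, and the key name are assumed, and this is not the code that was ultimately merged.)
    
    ```scala
    import org.apache.hadoop.conf.Configuration
    
    // Only pay for the Configuration copy on the non-vectorized path, where
    // ParquetRecordReader's ReadSupport can only be configured through the Hadoop
    // conf it is initialized with.
    def readerConf(
        sharedConf: Configuration,
        enableVectorizedReader: Boolean,
        convertInt96ForThisFile: Boolean): Configuration = {
      if (enableVectorizedReader) {
        sharedConf  // VectorizedParquetRecordReader can be handed the value directly
      } else {
        val copied = new Configuration(sharedConf)
        // Assumed key name, for illustration only.
        copied.setBoolean("spark.sql.parquet.int96TimestampConversion", convertInt96ForThisFile)
        copied
      }
    }
    ```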


---
