Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19769#discussion_r151549968
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala ---
    @@ -355,9 +361,31 @@ class ParquetFileFormat
               fileSplit.getLocations,
               null)
     
    +      // PARQUET_INT96_TIMESTAMP_CONVERSION says to apply timezone conversions to int96 timestamps
    +      // *only* if the file was created by something other than "parquet-mr", so check the actual
    +      // writer here for this file.  We have to do this per-file, as each file in the table may
    +      // have a different writer.  Sadly, this also means we have to clone the hadoopConf, as
    +      // different threads may want different values.  We have to use the hadoopConf as it's
    +      // our only way to pass values to ParquetReadSupport.init
    +      val localHadoopConf =
    --- End diff --
    
    This code runs on the executor side, so we don't need to use `hadoopConf` to 
    carry the config; we can just pass a flag when creating the parquet 
    reader, e.g. `new VectorizedParquetRecordReader(timeZoneToAdjust)`
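    A minimal sketch of the suggested approach. The class and parameter names below are simplified stand-ins, not Spark's actual API: the point is that the per-file timezone decision travels as a constructor argument, so no per-file clone of the shared `hadoopConf` is needed.

    ```scala
    // Hypothetical, simplified stand-in for VectorizedParquetRecordReader.
    // Instead of mutating a cloned Hadoop Configuration so that
    // ParquetReadSupport.init can read the setting, the timezone to adjust
    // by (if any) is passed straight into the reader's constructor.
    class VectorizedParquetRecordReader(val convertTz: Option[String]) {
      // The reader consults its own field; no shared, mutable conf involved,
      // so concurrent tasks reading different files cannot interfere.
      def needsConversion: Boolean = convertTz.isDefined
    }

    object Demo {
      def main(args: Array[String]): Unit = {
        // File written by parquet-mr: no int96 timezone adjustment.
        val parquetMrReader = new VectorizedParquetRecordReader(None)
        // File written by another tool (e.g. Impala): adjust timestamps.
        val otherReader =
          new VectorizedParquetRecordReader(Some("America/Los_Angeles"))

        println(parquetMrReader.needsConversion) // false
        println(otherReader.needsConversion)     // true
      }
    }
    ```
    
    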


---
