jcamachor commented on a change in pull request #2282:
URL: https://github.com/apache/hive/pull/2282#discussion_r638140142



##########
File path: common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
##########
@@ -2197,9 +2197,14 @@ private static void populateLlapDaemonVarsSet(Set<String> llapDaemonVarsSetLocal
     HIVE_PARQUET_DATE_PROLEPTIC_GREGORIAN_DEFAULT("hive.parquet.date.proleptic.gregorian.default", false,
       "This value controls whether date type in Parquet files was written using the hybrid or proleptic\n" +
       "calendar. Hybrid is the default."),
-    HIVE_PARQUET_TIMESTAMP_LEGACY_CONVERSION_ENABLED("hive.parquet.timestamp.legacy.conversion.enabled", true,

Review comment:
       Although this was marked as being used for debugging purposes only, if we are deprecating this property, we should try to fail when it is set explicitly. Otherwise, we may be silently changing results for users.
   
   If capturing the set command for a specific property proves complicated, another option could be to use the same name for one of the properties (even if they would then not contain .*read.*/.*write.*).
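A minimal sketch of the fail-fast idea, assuming we can see which keys the user set explicitly (the class and constant names here are illustrative, not Hive's actual API):

```java
import java.util.Map;

public class DeprecatedConfCheck {
  // Illustrative constant; the real one lives in HiveConf.ConfVars.
  static final String LEGACY_CONVERSION_KEY =
      "hive.parquet.timestamp.legacy.conversion.enabled";

  /**
   * Throws if the user explicitly set the deprecated key, instead of
   * silently ignoring it and possibly changing query results.
   */
  static void rejectDeprecated(Map<String, String> userSetProps) {
    if (userSetProps.containsKey(LEGACY_CONVERSION_KEY)) {
      throw new IllegalArgumentException(
          "Config " + LEGACY_CONVERSION_KEY + " is deprecated; "
              + "it has no effect and must not be set explicitly.");
    }
  }
}
```

The point of the sketch is only that an explicit set should surface an error rather than be dropped on the floor.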

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
##########
@@ -523,10 +532,30 @@ private static MessageType getRequestedPrunedSchema(
           configuration, HiveConf.ConfVars.HIVE_PARQUET_DATE_PROLEPTIC_GREGORIAN_DEFAULT)));
     }
 
-    String legacyConversion = ConfVars.HIVE_PARQUET_TIMESTAMP_LEGACY_CONVERSION_ENABLED.varname;
-    if (!metadata.containsKey(legacyConversion)) {
-      metadata.put(legacyConversion, String.valueOf(HiveConf.getBoolVar(
-          configuration, HiveConf.ConfVars.HIVE_PARQUET_TIMESTAMP_LEGACY_CONVERSION_ENABLED)));
+    if (!metadata.containsKey(DataWritableWriteSupport.WRITER_ZONE_CONVERSION_LEGACY)) {
+      final String legacyConversion;
+      if (keyValueMetaData.containsKey(DataWritableWriteSupport.WRITER_ZONE_CONVERSION_LEGACY)) {
+        // If there is meta about the legacy conversion then the file should be read in the same way it was written.
+        legacyConversion = keyValueMetaData.get(DataWritableWriteSupport.WRITER_ZONE_CONVERSION_LEGACY);
+      } else if (keyValueMetaData.containsKey(DataWritableWriteSupport.WRITER_TIMEZONE)) {
+        // If there is no meta about the legacy conversion but there is meta about the timezone then we can infer the
+        // file was written with the new rules.
+        legacyConversion = "false";
+      } else {

Review comment:
       @zabetak, I guess you mean this block?
   `else if (keyValueMetaData.containsKey(DataWritableWriteSupport.WRITER_TIMEZONE))`
   
   The question is whether the default value for the config property is going to give the desired results for those users. I believe that is not the case: if the value is not there, we assume the file was written applying conversions with the old APIs. Many users would fall into that bucket (3.1.2, 3.2.0), so it makes sense to have this clause to preserve behavior, as you have done.
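The fallback chain under discussion can be sketched as a standalone method; the metadata key values below are placeholders standing in for the real `DataWritableWriteSupport` constants, and the method itself is illustrative rather than Hive's code:

```java
import java.util.Map;

public class LegacyConversionInference {
  // Placeholder values; Hive defines the real keys in DataWritableWriteSupport.
  static final String WRITER_ZONE_CONVERSION_LEGACY = "writer.zone.conversion.legacy";
  static final String WRITER_TIMEZONE = "writer.time.zone";

  /**
   * Decide whether legacy time-zone conversion should be applied when reading,
   * based on the writer's file metadata and a configured default.
   */
  static boolean useLegacyConversion(Map<String, String> fileMeta, boolean configDefault) {
    if (fileMeta.containsKey(WRITER_ZONE_CONVERSION_LEGACY)) {
      // The writer recorded how it converted: read the file the same way.
      return Boolean.parseBoolean(fileMeta.get(WRITER_ZONE_CONVERSION_LEGACY));
    }
    if (fileMeta.containsKey(WRITER_TIMEZONE)) {
      // A writer time zone without the legacy marker implies the new rules.
      return false;
    }
    // No metadata at all (e.g. files written by 3.1.2 / 3.2.0): config default decides.
    return configDefault;
  }
}
```

The last branch is the one the review is about: with no metadata, only the config default preserves behavior for files written by older releases.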

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java
##########
@@ -536,7 +542,8 @@ public void write(Object value) {
         Long int64value = ParquetTimestampUtils.getInt64(ts, timeUnit);
         recordConsumer.addLong(int64value);

Review comment:
       Isn't the INT64 Parquet timestamp already stored as UTC? I think we tried to keep all these conversions away from the new type.
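For context on why no zone conversion should be needed: an INT64 Parquet timestamp is just a count of units since the UTC epoch, which is zone-independent by construction. A rough illustration (this helper only approximates what a conversion like `ParquetTimestampUtils.getInt64` produces for microsecond precision; it is not the real implementation):

```java
import java.time.Instant;

public class Int64TimestampSketch {
  /** Microseconds since 1970-01-01T00:00:00Z; no time zone is involved. */
  static long toEpochMicros(Instant ts) {
    return Math.multiplyExact(ts.getEpochSecond(), 1_000_000L) + ts.getNano() / 1_000L;
  }
}
```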

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
##########
@@ -304,6 +305,14 @@ public static Boolean getWriterDateProleptic(Map<String, String> metadata) {
     return null;
   }
 
+  public static Boolean getWriterLegacyConversion(Map<String, String> metadata) {

Review comment:
       `getWriterLegacyConversion` -> `getWriterTimeZoneLegacyConversion`?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
