Github user twalthr commented on a diff in the pull request:

    https://github.com/apache/flink/pull/6218#discussion_r199746426
  
    --- Diff: flink-formats/flink-avro/src/main/java/org/apache/flink/formats/avro/AvroRowSerializationSchema.java ---
    @@ -37,18 +43,42 @@
     import java.io.IOException;
     import java.io.ObjectInputStream;
     import java.io.ObjectOutputStream;
    +import java.math.BigDecimal;
    +import java.nio.ByteBuffer;
    +import java.sql.Date;
    +import java.sql.Time;
    +import java.sql.Timestamp;
    +import java.util.HashMap;
     import java.util.List;
    +import java.util.Map;
    +import java.util.TimeZone;
     
     /**
    - * Serialization schema that serializes {@link Row} over {@link SpecificRecord} into a Avro bytes.
    + * Serialization schema that serializes {@link Row} into Avro bytes.
    + *
    + * <p>Serializes objects that are represented in (nested) Flink rows. It supports types that
    + * are compatible with Flink's Table & SQL API.
    + *
    + * <p>Note: Changes in this class need to be kept in sync with the corresponding runtime
    + * class {@link AvroRowDeserializationSchema} and schema converter {@link AvroSchemaConverter}.
      */
     public class AvroRowSerializationSchema implements SerializationSchema<Row> {
     
        /**
    -    * Avro record class.
    +    * Used for time conversions into SQL types.
    +    */
    +   private static final TimeZone LOCAL_TZ = TimeZone.getDefault();
    --- End diff ---
    
    We are using this pattern in different places, e.g.
    `org.apache.flink.orc.OrcBatchReader`. The problem is that Java's SQL
    time/date/timestamp classes are a complete design fail: they are
    timezone-specific. This constant adds/removes the local time zone offset
    to/from the timestamp, so that the string representation of the produced
    `Timestamp` object is always correct.
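
A minimal sketch of the conversion pattern described above, in the same spirit as `org.apache.flink.orc.OrcBatchReader` and the Avro row schemas. The class and helper names here are hypothetical; the actual methods in `AvroRowSerializationSchema` and `AvroRowDeserializationSchema` may be shaped differently.

    import java.sql.Timestamp;
    import java.util.TimeZone;

    class TimestampConversionSketch {

        // Capture the JVM's default time zone once, as the schema classes do.
        private static final TimeZone LOCAL_TZ = TimeZone.getDefault();

        // Avro's timestamp-millis is defined as milliseconds since the epoch in UTC,
        // while java.sql.Timestamp renders its epoch value in the local time zone.
        // Subtracting the local offset when reading keeps the rendered string equal
        // to the value that was written into Avro ...
        static Timestamp toSqlTimestamp(long avroMillis) {
            return new Timestamp(avroMillis - LOCAL_TZ.getOffset(avroMillis));
        }

        // ... and adding it back when writing restores the original Avro value.
        static long toAvroMillis(Timestamp timestamp) {
            long millis = timestamp.getTime();
            return millis + LOCAL_TZ.getOffset(millis);
        }
    }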

