[ https://issues.apache.org/jira/browse/HIVE-6784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965702#comment-13965702 ]

Tongjie Chen commented on HIVE-6784:
------------------------------------

Not allowing column type changes would be a negative for adopting parquet 
--- at the very least, this is different behavior from the other file formats.

Regarding the performance penalty:

1) If you look at the current implementation of LazyInteger, LazyLong, etc. 
(from LazySimpleSerDe, in package org.apache.hadoop.hive.serde2.lazy), they 
call parseInt, parseLong, etc. for every column (all initially represented as 
strings; the parsing overhead occurs even when the value already has the 
expected type). This is how hive achieves column type changes ("schema on 
read"); in other words, a similar performance penalty is already there in the 
other SerDes in order to achieve "schema on read".

  /**
   * Parses the string argument as if it was an int value and returns the
   * result. Throws NumberFormatException if the string does not represent an
   * int quantity.
   * ...
   */
  public static int parseInt(byte[] bytes, int start, int length, int radix) {


  /**
   * Parses the string argument as if it was a long value and returns the
   * result. Throws NumberFormatException if the string does not represent a
   * long quantity.
   * ...
   */
  public static long parseLong(byte[] bytes, int start, int length, int radix) {
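To make the point concrete, here is a minimal sketch (a hypothetical class, 
not the actual Hive LazyInteger) of the lazy-parse pattern those methods 
support: every access re-parses the raw bytes, so the per-column parsing cost 
is paid whether or not the declared type matches the stored one.

    import org.apache.hadoop.io.IntWritable;

    // Hypothetical simplification of LazyInteger: parse on every init call.
    public class LazyIntSketch {
      private final IntWritable data = new IntWritable();

      // Invoked once per row/column; bytes hold the textual field value.
      public IntWritable init(byte[] bytes, int start, int length) {
        // Parsing happens unconditionally; real Hive catches
        // NumberFormatException and produces NULL instead.
        data.set(parseInt(bytes, start, length));
        return data;
      }

      // Simplified radix-10 parser (no overflow handling).
      private static int parseInt(byte[] bytes, int start, int length) {
        boolean negative = bytes[start] == '-';
        int i = negative ? start + 1 : start;
        int result = 0;
        for (; i < start + length; i++) {
          if (bytes[i] < '0' || bytes[i] > '9') {
            throw new NumberFormatException();
          }
          result = result * 10 + (bytes[i] - '0');
        }
        return negative ? -result : result;
      }
    }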


2) In the patch for this jira, the extra overhead is listing the top-level 
elements of the ArrayWritable to inspect whether there is a type change or 
not. A new converted object is created ONLY IF the type has changed.

    I agree that there is some overhead in listing the elements of the 
ArrayWritable, but it is a tradeoff.
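As an illustration, a sketch of that inspection (hypothetical names, not the 
patch itself, assuming a column declared bigint while the file stores int) 
might look like:

    import org.apache.hadoop.io.ArrayWritable;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Writable;

    public final class TypeChangeSketch {
      // Allocate a converted Writable only when the stored type differs
      // from the declared one; otherwise reuse the element untouched.
      static Writable convertIfNeeded(Writable value) {
        if (value instanceof IntWritable) {
          // int stored, bigint declared: widen to long
          return new LongWritable(((IntWritable) value).get());
        }
        return value; // types already agree: no new object
      }

      // The extra overhead: one pass over the top-level elements.
      static ArrayWritable inspectTopLevel(ArrayWritable record) {
        Writable[] fields = record.get();
        for (int i = 0; i < fields.length; i++) {
          fields[i] = convertIfNeeded(fields[i]);
        }
        return record;
      }
    }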

3) The patch in this jira should actually help performance when serializing 
(at write time) an ArrayWritable. The old approach creates a new object for 
every single writable element; with this patch, a new object is created only 
when there is a type change.
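A sketch of that serialize-side optimization (again hypothetical names, not 
the actual patch):

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Writable;

    public final class CreatePrimitiveSketch {
      // Old behavior: a new Writable per element, even when the input
      // already has the expected type.
      static Writable createPrimitiveOld(LongWritable w) {
        return new LongWritable(w.get());
      }

      // Patched behavior: hand the object back when the type already
      // matches; allocate only on an actual type change.
      static Writable createPrimitiveNew(Writable w) {
        if (w instanceof LongWritable) {
          return w; // expected type already: reuse, no allocation
        }
        // type change (here assumed int -> bigint): allocation is unavoidable
        return new LongWritable(((IntWritable) w).get());
      }
    }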



> parquet-hive should allow column type change
> --------------------------------------------
>
>                 Key: HIVE-6784
>                 URL: https://issues.apache.org/jira/browse/HIVE-6784
>             Project: Hive
>          Issue Type: Bug
>          Components: File Formats, Serializers/Deserializers
>    Affects Versions: 0.13.0
>            Reporter: Tongjie Chen
>             Fix For: 0.14.0
>
>         Attachments: HIVE-6784.1.patch.txt, HIVE-6784.2.patch.txt
>
>
> see also in the following parquet issue:
> https://github.com/Parquet/parquet-mr/issues/323
> Currently, if we change a parquet format hive table using "alter table 
> parquet_table change c1 c1 bigint" (assuming the original type of c1 is int), 
> it will result in an exception thrown from the SerDe: 
> "org.apache.hadoop.io.IntWritable cannot be cast to 
> org.apache.hadoop.io.LongWritable" at query runtime.
> This is different behavior from hive with other file formats, where it will 
> try to perform a cast (yielding a null value in case of an incompatible type).
> Parquet Hive's RecordReader returns an ArrayWritable (based on the schema 
> stored in the footers of parquet files); ParquetHiveSerDe also creates a 
> corresponding ArrayWritableObjectInspector (but using column type info from 
> the metastore). Whenever there is a column type change, the object inspector 
> will throw an exception, since WritableLongObjectInspector cannot inspect an 
> IntWritable, etc.
> Conversion has to happen somewhere if we want to allow type changes. SerDe's 
> deserialize method seems a natural place for it.
> Currently, the serialize method calls createStruct (then createPrimitive) for 
> every record, but it creates a new object regardless, which seems expensive. 
> I think that could be optimized a bit by just returning the object passed in 
> if it is already of the right type. deserialize also reuses this method; if 
> there is a type change, a new object will be created, which I think is 
> inevitable.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
