Sergey,

Is your table partitioned or non-partitioned? I have usually seen this
problem manifest itself for partitioned tables, and that is mostly where
the truncation bites. If you now try to add a partition to this table,
you might see an exception like:

java.sql.BatchUpdateException: Data truncation: Data too long for column
'TYPE_NAME' at row 1
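
For what it's worth, here is a minimal sketch of how this tends to surface;
the table name, schema path, and partition column below are hypothetical:

  -- Hypothetical repro: an Avro table whose schema expands to a Hive type
  -- string well past the varchar(4000) limit of COLUMNS_V2.TYPE_NAME.
  CREATE TABLE big_avro
    PARTITIONED BY (ds string)
    ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
    STORED AS INPUTFORMAT
      'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
    OUTPUTFORMAT
      'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
    TBLPROPERTIES ('avro.schema.url'='hdfs:///schemas/very_large_record.avsc');

  -- Creating the table may succeed (MySQL silently truncates the row), but
  -- adding a partition writes a fresh column descriptor and can fail with
  -- the BatchUpdateException above.
  ALTER TABLE big_avro ADD PARTITION (ds='2015-09-28');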

The "TYPE_NAME" is not actually a definition of the Avro schema.  Instead,
it is a definition of the type structure in Hive terms.  I assume it is
used for things such as validating the query before it is executed, etc.
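
If you want to check whether your rows were clipped, here is a quick sanity
check against the backing MySQL database (standard metastore schema assumed):

  -- Any TYPE_NAME sitting at exactly the varchar(4000) cap has almost
  -- certainly been truncated on the way in.
  SELECT COLUMN_NAME, LENGTH(TYPE_NAME) AS len
  FROM COLUMNS_V2
  WHERE LENGTH(TYPE_NAME) >= 4000;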

On Mon, Sep 28, 2015 at 7:38 PM, Chaoyu Tang <ctang...@gmail.com> wrote:

> Yes, when you described the Avro table, what you got back actually came
> from your Avro schema rather than from the database table. The Avro table
> is NOT considered a metastore-backed SerDe. The fact that its columns are
> populated to the DB (e.g. HIVE-6308
> <https://issues.apache.org/jira/browse/HIVE-6308>) is mainly for column
> statistics purposes, which obviously is not applicable to your case, where
> the type name is > 100KB.
>
> Chaoyu
>
> On Mon, Sep 28, 2015 at 8:12 PM, Sergey Shelukhin <ser...@hortonworks.com>
> wrote:
>
> > Hi.
> > I noticed that when I create an Avro table using a very large schema
> > file, the MySQL metastore silently truncates the TYPE_NAME in the
> > COLUMNS_V2 table to the varchar size (4000); however, when I run
> > DESCRIBE on the table, it still displays the whole type name (around
> > 100KB long), which I presume it gets from the deserializer.
> > Is the value in TYPE_NAME used for anything for Avro tables?
> >
> >
>
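
Coming back to the original question: one way to see the divergence Sergey
describes is to compare Hive's DESCRIBE output (served by the AvroSerDe)
with what actually landed in the metastore. A rough sketch against a stock
MySQL metastore schema, reusing the hypothetical table from the repro above:

  -- In Hive, DESCRIBE is answered from the deserializer, so the full
  -- (~100KB) type text shows up even though the DB row was truncated:
  --   hive> DESCRIBE big_avro;

  -- In the MySQL metastore, follow TBLS -> SDS -> COLUMNS_V2 to see the
  -- stored (at most 4000-character) value:
  SELECT c.COLUMN_NAME, LENGTH(c.TYPE_NAME) AS stored_len
  FROM TBLS t
  JOIN SDS s ON s.SD_ID = t.SD_ID
  JOIN COLUMNS_V2 c ON c.CD_ID = s.CD_ID
  WHERE t.TBL_NAME = 'big_avro';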



-- 
Swarnim
