[ https://issues.apache.org/jira/browse/PARQUET-76?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pratik Khadloya updated PARQUET-76:
-----------------------------------

    Description: 
Today we are not able to create a Parquet-based Hive table without specifying 
the column names and types. When we try to define it the following way, we get 
the error:
"14/08/20 17:27:46 ERROR ql.Driver: FAILED: SemanticException [Error 10043]: 
Either list of columns or a custom serializer should be specified"

{code:sql}
CREATE TABLE parquet_test
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
  '/user/pratik/campaigns';
{code}
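
For reference, the same statement only goes through today if the columns are 
spelled out explicitly, as the error message says. A minimal sketch of that 
workaround (the column names below are placeholders, not the real schema of the 
files) looks like this:

{code:sql}
-- Current workaround: list the columns by hand (names here are placeholders).
CREATE EXTERNAL TABLE parquet_test_explicit (
  campaign_id   BIGINT,
  campaign_name STRING
)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
  '/user/pratik/campaigns';
{code}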

Whereas if we create a Hive table on top of Avro-based files, we do not need to 
specify the column names; Hive automatically figures out the schema through the 
SerDe.

{code:sql}
CREATE EXTERNAL TABLE campaigns
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 
'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/user/pratik/campaigns'
TBLPROPERTIES ('avro.schema.url'='hdfs:///user/pratik/campaigns.avsc');
{code}
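
The same limitation applies to the newer {{STORED AS PARQUET}} shorthand 
(Hive 0.13+), which, as far as I can tell, still requires an explicit column 
list:

{code:sql}
-- Hive 0.13+ shorthand; the column list is still mandatory (placeholder names again).
CREATE EXTERNAL TABLE parquet_test_native (
  campaign_id   BIGINT,
  campaign_name STRING
)
STORED AS PARQUET
LOCATION '/user/pratik/campaigns';
{code}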

> Hive cannot determine the list of columns automatically based on Parquet serde
> ------------------------------------------------------------------------------
>
>                 Key: PARQUET-76
>                 URL: https://issues.apache.org/jira/browse/PARQUET-76
>             Project: Parquet
>          Issue Type: New Feature
>            Reporter: Pratik Khadloya
>            Priority: Critical
>


