[jira] [Commented] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError

2014-08-19 Thread Raymond Lau (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14103025#comment-14103025
 ] 

Raymond Lau commented on HIVE-7787:
---

This issue does not occur in Hive 0.12 on CDH 5.0.0: ETypeConverter.getNewConverter in that version has no check involving the DECIMAL type.
CDH 5.0.0
{code}
public static Converter getNewConverter(final Class<?> type, final int index,
    final HiveGroupConverter parent) {
  for (final ETypeConverter eConverter : values()) {
    if (eConverter.getType() == type) {
      return eConverter.getConverter(type, index, parent);
    }
  }
  throw new IllegalArgumentException("Converter not found ... for type : " + type);
}
{code}

CDH 5.1.0
{code}
public static Converter getNewConverter(final PrimitiveType type, final int index,
    final HiveGroupConverter parent) {
  if (type.isPrimitive() &&
      (type.asPrimitiveType().getPrimitiveTypeName().equals(PrimitiveType.PrimitiveTypeName.INT96))) {
    //TODO- cleanup once parquet support Timestamp type annotation.
    return ETypeConverter.ETIMESTAMP_CONVERTER.getConverter(type, index, parent);
  }
  if (OriginalType.DECIMAL == type.getOriginalType()) {
    return EDECIMAL_CONVERTER.getConverter(type, index, parent);
  } else if (OriginalType.UTF8 == type.getOriginalType()) {
    return ESTRING_CONVERTER.getConverter(type, index, parent);
  }

  Class<?> javaType = type.getPrimitiveTypeName().javaType;
  for (final ETypeConverter eConverter : values()) {
    if (eConverter.getType() == javaType) {
      return eConverter.getConverter(type, index, parent);
    }
  }

  throw new IllegalArgumentException("Converter not found ... for type : " + type);
}
{code}
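
A NoSuchFieldError on an enum constant usually means the referencing class was compiled against a newer version of the enum than the one loaded at run time. A plausible reading of this report, then, is that the CDH 5.1.0 converter was compiled against a parquet jar whose OriginalType enum includes DECIMAL, while an older parquet jar that lacks the constant wins on the classpath at run time. The sketch below uses hypothetical names to reproduce the mechanism; it is an illustration of the failure mode, not Hive or Parquet code.

{code}
// Step 1: compile both types together and run -- this prints DECIMAL.
enum OriginalType { UTF8, MAP, LIST, ENUM, DECIMAL }

public class NoSuchFieldErrorDemo {
  public static void main(String[] args) {
    // javac compiles this reference to a getstatic of OriginalType.DECIMAL;
    // the constant is looked up in the OriginalType class at link time.
    System.out.println(OriginalType.DECIMAL);
  }
}

// Step 2: delete DECIMAL from the enum, recompile only OriginalType, and
// rerun the stale NoSuchFieldErrorDemo.class. Field resolution now fails
// with "java.lang.NoSuchFieldError: DECIMAL" -- the same error reported at
// ETypeConverter.getNewConverter above.
{code}

Under that reading, the CDH 5.0.0 version never references OriginalType.DECIMAL at all, which would explain why the error only appears after the upgrade.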

> Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError
> -
>
> Key: HIVE-7787
> URL: https://issues.apache.org/jira/browse/HIVE-7787
> Project: Hive
>  Issue Type: Bug
>  Components: Database/Schema, Thrift API
>Affects Versions: 0.12.0, 0.13.0, 0.12.1, 0.14.0, 0.13.1
> Environment: Hive 0.12 CDH 5.1.0, Hadoop 2.3.0 CDH 5.1.0
>Reporter: Raymond Lau
>Priority: Minor
>

[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError

2014-08-19 Thread Raymond Lau (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Lau updated HIVE-7787:
--

Affects Version/s: 0.14.0
                   0.13.0
                   0.13.1


[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError

2014-08-19 Thread Raymond Lau (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Lau updated HIVE-7787:
--

Description: 
When reading a Parquet file where the original Thrift schema contains a struct
with an enum, the following error is thrown (full stack trace below):
{code}
 java.lang.NoSuchFieldError: DECIMAL.
{code} 

Example Thrift Schema:
{code}
enum MyEnumType {
EnumOne,
EnumTwo,
EnumThree
}

struct MyStruct {
1: optional MyEnumType myEnumType;
2: optional string field2;
3: optional string field3;
}

struct outerStruct {
1: optional list<MyStruct> myStructs
}
{code}

Hive Table:
{code}
CREATE EXTERNAL TABLE mytable (
  mystructs array<struct<myenumtype:string, field2:string, field3:string>>
)
ROW FORMAT SERDE 'parquet.hive.serde.ParquetHiveSerDe'
STORED AS
INPUTFORMAT 'parquet.hive.DeprecatedParquetInputFormat'
OUTPUTFORMAT 'parquet.hive.DeprecatedParquetOutputFormat'
; 
{code}

Error Stack trace:
{code}
Java stack trace for Hive 0.12:
Caused by: java.lang.NoSuchFieldError: DECIMAL
  at org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter.getNewConverter(ETypeConverter.java:146)
  at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:31)
  at org.apache.hadoop.hive.ql.io.parquet.convert.ArrayWritableGroupConverter.<init>(ArrayWritableGroupConverter.java:45)
  at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:34)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:64)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:47)
  at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:36)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:64)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:40)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableRecordConverter.<init>(DataWritableRecordConverter.java:32)
  at org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport.prepareForRead(DataWritableReadSupport.java:128)
  at parquet.hadoop.InternalParquetRecordReader.initialize(InternalParquetRecordReader.java:142)
  at parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:118)
  at parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:107)
  at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:92)
  at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:66)
  at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:51)
  at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
  ... 16 more
{code}


[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError

2014-08-19 Thread Raymond Lau (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Lau updated HIVE-7787:
--

Environment: Hive 0.12 CDH 5.1.0, Hadoop 2.3.0 CDH 5.1.0  (was: Hive 0.12 CDH 5.1.0, Hadoop 0.23)



[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError

2014-08-19 Thread Raymond Lau (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Lau updated HIVE-7787:
--

Affects Version/s: 0.12.1



[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError

2014-08-19 Thread Raymond Lau (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Lau updated HIVE-7787:
--

Description: 
When reading a Parquet file where the original Thrift schema contains a struct
with an enum, the following error is thrown (full stack trace below):
{code}
 java.lang.NoSuchFieldError: DECIMAL.
{code} 

Example Thrift Schema:
{code}
enum MyEnumType {
EnumOne,
EnumTwo,
EnumThree
}

struct MyStruct {
1: optional MyEnumType myEnumType;
2: optional string field2;
3: optional string field3;
}

struct outerStruct {
1: optional list<MyStruct> myStructs
}
{code}

Error Stack trace:
{code}
Java stack trace for Hive 0.12:
Caused by: java.lang.NoSuchFieldError: DECIMAL
  at org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter.getNewConverter(ETypeConverter.java:146)
  at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:31)
  at org.apache.hadoop.hive.ql.io.parquet.convert.ArrayWritableGroupConverter.<init>(ArrayWritableGroupConverter.java:45)
  at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:34)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:64)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:47)
  at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:36)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:64)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:40)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableRecordConverter.<init>(DataWritableRecordConverter.java:32)
  at org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport.prepareForRead(DataWritableReadSupport.java:128)
  at parquet.hadoop.InternalParquetRecordReader.initialize(InternalParquetRecordReader.java:142)
  at parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:118)
  at parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:107)
  at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:92)
  at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:66)
  at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:51)
  at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
  ... 16 more
{code}


[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError

2014-08-19 Thread Raymond Lau (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Lau updated HIVE-7787:
--

Affects Version/s: (was: 0.14.0)
                   (was: 0.13.0)



[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError

2014-08-19 Thread Raymond Lau (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Lau updated HIVE-7787:
--

Environment: Hive 0.12 CDH 5.1.0, Hadoop 0.23  (was: Hive 0.12 CDH 5.1.0)



[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError

2014-08-19 Thread Raymond Lau (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Lau updated HIVE-7787:
--

Description: 
When reading a Parquet file where the original Thrift schema contains an enum,
the following error is thrown (full stack trace below):
{code}
 java.lang.NoSuchFieldError: DECIMAL.
{code} 

Example Thrift Schema:
{code}
enum MyEnumType {
EnumOne,
EnumTwo,
EnumThree
}

struct MyStruct {
1: optional MyEnumType myEnumType;
2: optional string field2;
3: optional string field3;
}

struct outerStruct {
1: optional list<MyStruct> myStructs
}
{code}

Error Stack trace:
{code}
Java stack trace for Hive 0.12:
Caused by: java.lang.NoSuchFieldError: DECIMAL
  at org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter.getNewConverter(ETypeConverter.java:146)
  at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:31)
  at org.apache.hadoop.hive.ql.io.parquet.convert.ArrayWritableGroupConverter.<init>(ArrayWritableGroupConverter.java:45)
  at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:34)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:64)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:47)
  at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:36)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:64)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:40)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableRecordConverter.<init>(DataWritableRecordConverter.java:32)
  at org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport.prepareForRead(DataWritableReadSupport.java:128)
  at parquet.hadoop.InternalParquetRecordReader.initialize(InternalParquetRecordReader.java:142)
  at parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:118)
  at parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:107)
  at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:92)
  at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:66)
  at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:51)
  at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
  ... 16 more
{code}


[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError

2014-08-19 Thread Raymond Lau (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Lau updated HIVE-7787:
--

Description: 
When reading a Parquet file where the original Thrift schema contains an enum,
the following error is thrown:
{code}
 java.lang.NoSuchFieldError: DECIMAL.
{code} 

Full Stack trace below:

Example Thrift Schema:
{code}
enum MyEnumType {
EnumOne,
EnumTwo,
EnumThree
}

struct MyStruct {
1: optional MyEnumType myEnumType;
2: optional string field2;
3: optional string field3;
}
{code}

Error Stack trace:
{code}
Java stack trace for Hive 0.12:
Caused by: java.lang.NoSuchFieldError: DECIMAL
  at org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter.getNewConverter(ETypeConverter.java:146)
  at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:31)
  at org.apache.hadoop.hive.ql.io.parquet.convert.ArrayWritableGroupConverter.<init>(ArrayWritableGroupConverter.java:45)
  at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:34)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:64)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:47)
  at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:36)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:64)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:40)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableRecordConverter.<init>(DataWritableRecordConverter.java:32)
  at org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport.prepareForRead(DataWritableReadSupport.java:128)
  at parquet.hadoop.InternalParquetRecordReader.initialize(InternalParquetRecordReader.java:142)
  at parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:118)
  at parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:107)
  at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:92)
  at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:66)
  at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:51)
  at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
  ... 16 more
{code}


[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding

2014-08-19 Thread Raymond Lau (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Lau updated HIVE-7787:
--

Description: 
When reading a Parquet file where the original Thrift schema contains an enum,
the following error is thrown:

 java.lang.NoSuchFieldError: DECIMAL

Java stack trace for Hive 0.12:
Caused by: java.lang.NoSuchFieldError: DECIMAL
  at org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter.getNewConverter(ETypeConverter.java:146)
  at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:31)
  at org.apache.hadoop.hive.ql.io.parquet.convert.ArrayWritableGroupConverter.<init>(ArrayWritableGroupConverter.java:45)
  at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:34)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:64)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:47)
  at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:36)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:64)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:40)
  at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableRecordConverter.<init>(DataWritableRecordConverter.java:32)
  at org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport.prepareForRead(DataWritableReadSupport.java:128)
  at parquet.hadoop.InternalParquetRecordReader.initialize(InternalParquetRecordReader.java:142)
  at parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:118)
  at parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:107)
  at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:92)
  at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:66)
  at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:51)
  at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
  ... 16 more


[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError

2014-08-19 Thread Raymond Lau (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Lau updated HIVE-7787:
--

Summary: Reading Parquet file with enum in Thrift Encoding throws 
NoSuchFieldError  (was: Reading Parquet file with enum in Thrift Encoding)



[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding

2014-08-19 Thread Raymond Lau (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Lau updated HIVE-7787:
--

Description: 
When reading a Parquet file where the original Thrift schema contains an enum,
this causes a

 java.lang.NoSuchFieldError: DECIMAL



[jira] [Created] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding

2014-08-19 Thread Raymond Lau (JIRA)
Raymond Lau created HIVE-7787:
-

 Summary: Reading Parquet file with enum in Thrift Encoding
 Key: HIVE-7787
 URL: https://issues.apache.org/jira/browse/HIVE-7787
 Project: Hive
  Issue Type: Bug
  Components: Database/Schema, Thrift API
Affects Versions: 0.13.0, 0.12.0, 0.14.0
 Environment: Hive 0.12 CDH 5.1.0
Reporter: Raymond Lau
Priority: Minor








[jira] [Commented] (HIVE-7554) Parquet Hive should resolve column names in case insensitive manner

2014-07-31 Thread Raymond Lau (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14081615#comment-14081615
 ] 

Raymond Lau commented on HIVE-7554:
---

Sorry, this is my first time using attachments on JIRA.
The files are: "part-0.parquet.0" and "Test.thrift"

> Parquet Hive should resolve column names in case insensitive manner
> ---
>
> Key: HIVE-7554
> URL: https://issues.apache.org/jira/browse/HIVE-7554
> Project: Hive
>  Issue Type: Improvement
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-7554.patch, Test.thrift, part-0.parquet.0
>
>






[jira] [Updated] (HIVE-7554) Parquet Hive should resolve column names in case insensitive manner

2014-07-31 Thread Raymond Lau (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Lau updated HIVE-7554:
--

Attachment: Test.thrift
            part-0.parquet.0

Here's a test Parquet file that should demonstrate how Hive fails to read
columns that have upper-case letters in their names.
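
For illustration only (this is not the HIVE-7554 patch): a minimal sketch of case-insensitive column resolution, assuming a helper that maps a Hive column name onto the Parquet file's field names. All names here are hypothetical.

{code}
import java.util.Arrays;
import java.util.List;

// Hypothetical helper: Hive lower-cases column names in the metastore,
// while Parquet files written by other tools may preserve mixed case, so
// the lookup must ignore case instead of requiring an exact match.
public final class ColumnResolver {

  /** Returns the file's spelling of the column, or null if absent. */
  public static String resolve(String hiveColumn, List<String> parquetFields) {
    for (String field : parquetFields) {
      if (field.equalsIgnoreCase(hiveColumn)) {
        return field; // hand back the file's spelling for the actual lookup
      }
    }
    return null; // column absent from this file's schema
  }

  public static void main(String[] args) {
    List<String> fields = Arrays.asList("myEnumType", "field2", "field3");
    // Hive asks for "myenumtype"; a case-sensitive lookup would miss it.
    System.out.println(resolve("myenumtype", fields)); // prints myEnumType
  }
}
{code}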
