[jira] [Created] (HIVE-10192) insert into table failed for partitioned table.

2015-04-02 Thread Ganesh Sathish (JIRA)
Ganesh Sathish created HIVE-10192:
-

 Summary: insert into table  failed for partitioned table.
 Key: HIVE-10192
 URL: https://issues.apache.org/jira/browse/HIVE-10192
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Affects Versions: 0.12.0
 Environment: os-Unix
Distribution-Pivotal
Reporter: Ganesh Sathish


When I try to load data from a partitioned table in RC format into a partitioned 
table in ORC format, I use the command below:

{code}
create table ORC_Table stored as ORC as select * from RC_Table;
{code}

This fails with:

{code}
ArrayIndexOutOfBoundsException: 26
{code}
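As an aside for anyone hitting the same wall: a CTAS statement in Hive cannot create a partitioned table, so a partitioned-to-partitioned copy is usually done with an explicit table definition plus a dynamic-partition insert. A minimal sketch, with hypothetical column names (col1, col2) and partition column (part_col):

{code}
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;

create table ORC_Table (col1 string, col2 int)
  partitioned by (part_col string)
  stored as ORC;

-- the partition column goes last in the select list
insert into table ORC_Table partition (part_col)
  select col1, col2, part_col from RC_Table;
{code}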



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-7850) Hive Query failed if the data type is array<string> with parquet files

2014-08-27 Thread Sathish (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14112528#comment-14112528
 ] 

Sathish commented on HIVE-7850:
---

Thanks Ryan,
Based on your comments, it looks like no particular change is needed on the Hive 
SerDe side to handle non-nullable arrays; the parquet-avro library needs to 
be fixed to convert the schema format properly.
I plan to work on fixing the parquet-avro library so that it generates 
Parquet files with a schema Hive understands, and I will update my 
findings once my changes are done.
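To make the schema mismatch concrete, the two shapes in question look roughly like this (a sketch based on this thread, not taken from any patch; field names are illustrative):

{code}
// What parquet-avro emitted for a non-nullable array (element directly repeated):
optional group action (LIST) {
  repeated binary array (UTF8);
}

// What the Hive ParquetHiveSerDe expected (element wrapped in an inner group):
optional group action (LIST) {
  repeated group bag {
    optional binary array_element (UTF8);
  }
}
{code}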

 Hive Query failed if the data type is array<string> with parquet files
 --

 Key: HIVE-7850
 URL: https://issues.apache.org/jira/browse/HIVE-7850
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.14.0, 0.13.1
Reporter: Sathish
Assignee: Sathish
  Labels: parquet, serde
 Fix For: 0.14.0

 Attachments: HIVE-7850.1.patch, HIVE-7850.2.patch, HIVE-7850.patch


 * Created a Parquet file from an Avro file that has one array data type; the 
 rest are primitive types. Avro schema of the array data type, e.g.: 
 {code}
 { "name": "action", "type": [ { "type": "array", "items": "string" }, 
 "null" ] }
 {code}
 * Created an external Hive table with the array type as below: 
 {code}
 create external table paraArray (action Array<String>) partitioned by (partitionid 
 int) row format serde 'parquet.hive.serde.ParquetHiveSerDe' stored as 
 inputformat 'parquet.hive.MapredParquetInputFormat' outputformat 
 'parquet.hive.MapredParquetOutputFormat' location '/testPara'; 
 alter table paraArray add partition(partitionid=1) location '/testPara';
 {code}
 * Ran the following query (select action from paraArray limit 10); the map-reduce 
 jobs fail with the following exception.
 {code}
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
 Error while processing row [Error getting row data with exception 
 java.lang.ClassCastException: 
 parquet.hive.writable.BinaryWritable$DicBinaryWritable cannot be cast to 
 org.apache.hadoop.io.ArrayWritable
 at 
 parquet.hive.serde.ParquetHiveArrayInspector.getList(ParquetHiveArrayInspector.java:125)
 at 
 org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:315)
 at 
 org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:371)
 at org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:236)
 at org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:222)
 at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:665)
 at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:144)
 at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:405)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:336)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1126)
 at org.apache.hadoop.mapred.Child.main(Child.java:264)
 ]
 at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:671)
 at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:144)
 ... 8 more
 {code}
 This issue was posted on the Parquet issues list long ago; since it is 
 related to the Parquet Hive SerDe, I have created the Hive issue here. The 
 details and history are at 
 https://github.com/Parquet/parquet-mr/issues/281.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7850) Hive Query failed if the data type is array<string> with parquet files

2014-08-26 Thread Sathish (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14110435#comment-14110435
 ] 

Sathish commented on HIVE-7850:
---

Hi Ryan,
I agree that Hive should support lists with null elements, but can you give 
some idea of the cases where no-null lists are generated? Whenever 
Parquet files are generated from Avro files, most of them have 
the array schema below:
{code}
optional group name (LIST) {
  repeated string array_element;
}
{code}
Do you have any suggestions on how best we can support both kinds of 
arrays? This patch only fixes arrays with no null entries.
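For comparison, a three-level wrapper can express both cases in one shape (sketched here for illustration, not taken from the patch): element nullability lives on the inner field, so a no-null list simply marks it required.

{code}
optional group name (LIST) {
  repeated group bag {
    // 'optional' here allows null elements; use 'required' for no-null lists
    optional binary array_element (UTF8);
  }
}
{code}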



[jira] [Commented] (HIVE-7850) Hive Query failed if the data type is array<string> with parquet files

2014-08-26 Thread Sathish (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14110489#comment-14110489
 ] 

Sathish commented on HIVE-7850:
---

Used Types.primitive(type, repetition) as suggested by Ryan, and I am also working on 
separating the Map and Array group converters into two separate classes. I will 
update my patch once my changes are done.

Regarding the LIST structure, can you give your suggestions on how we can 
support both null-element lists and normal non-null-element lists in Hive?
I am of the opinion that we should build a separate structure for null-element lists, like 
(NULL_LIST), as shown below:
{code}
// array<string> name
optional group name (NULL_LIST) {
  repeated group bag {
    optional string array_element;
  }
}
{code}
Can you provide your suggestions on this?



[jira] [Updated] (HIVE-7850) Hive Query failed if the data type is array<string> with parquet files

2014-08-26 Thread Sathish (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sathish updated HIVE-7850:
--

Attachment: HIVE-7850.2.patch

New patch submitted based on comments and suggestions from Ryan.



[jira] [Updated] (HIVE-7850) Hive Query failed if the data type is array<string> with parquet files

2014-08-25 Thread Sathish (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sathish updated HIVE-7850:
--

Attachment: HIVE-7850.1.patch

New patch file submitted with corrected indentation.



[jira] [Created] (HIVE-7850) Hive Query failed if the data type is array<string> with parquet files

2014-08-22 Thread Sathish (JIRA)
Sathish created HIVE-7850:
-

 Summary: Hive Query failed if the data type is array<string> with parquet files
 Key: HIVE-7850
 URL: https://issues.apache.org/jira/browse/HIVE-7850
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.13.1, 0.14.0
Reporter: Sathish




[jira] [Updated] (HIVE-7850) Hive Query failed if the data type is array<string> with parquet files

2014-08-22 Thread Sathish (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sathish updated HIVE-7850:
--


[jira] [Updated] (HIVE-7850) Hive Query failed if the data type is array<string> with parquet files

2014-08-22 Thread Sathish (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sathish updated HIVE-7850:
--

Fix Version/s: 0.14.0
   Status: Patch Available  (was: Open)



[jira] [Updated] (HIVE-7850) Hive Query failed if the data type is array<string> with parquet files

2014-08-22 Thread Sathish (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sathish updated HIVE-7850:
--

Status: Open  (was: Patch Available)

 Hive Query failed if the data type is arraystring with parquet files
 --

 Key: HIVE-7850
 URL: https://issues.apache.org/jira/browse/HIVE-7850
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.13.1, 0.14.0
Reporter: Sathish
  Labels: parquet, serde
 Fix For: 0.14.0


 * Created a parquet file from the Avro file which have 1 array data type and 
 rest are primitive types. Avro Schema of the array data type. Eg: 
 {code}
 { name : action, type : [ { type : array, items : string }, 
 null ] }
 {code}
 * Created External Hive table with the Array type as below, 
 {code}
 create external table paraArray (action Array) partitioned by (partitionid 
 int) row format serde 'parquet.hive.serde.ParquetHiveSerDe' stored as 
 inputformat 'parquet.hive.MapredParquetInputFormat' outputformat 
 'parquet.hive.MapredParquetOutputFormat' location '/testPara'; 
 alter table paraArray add partition(partitionid=1) location '/testPara';
 {code}
 * Ran the following query: {{select action from paraArray limit 10}}. The map 
 reduce jobs fail with the following exception.
 {code}
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row [Error getting row data with exception java.lang.ClassCastException: parquet.hive.writable.BinaryWritable$DicBinaryWritable cannot be cast to org.apache.hadoop.io.ArrayWritable
 	at parquet.hive.serde.ParquetHiveArrayInspector.getList(ParquetHiveArrayInspector.java:125)
 	at org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:315)
 	at org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:371)
 	at org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:236)
 	at org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:222)
 	at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:665)
 	at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:144)
 	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
 	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:405)
 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:336)
 	at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
 	at java.security.AccessController.doPrivileged(Native Method)
 	at javax.security.auth.Subject.doAs(Subject.java:415)
 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1126)
 	at org.apache.hadoop.mapred.Child.main(Child.java:264)
 ]
 	at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:671)
 	at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:144)
 	... 8 more
 {code}
 This issue was posted on the Parquet issues list some time ago. Since it 
 relates to the Parquet Hive serde, I have filed this Hive issue; the 
 details and history are available at 
 https://github.com/Parquet/parquet-mr/issues/281.
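
The ClassCastException above arises because the array inspector receives a single binary cell where it expects an array of cells. The following is a minimal, hypothetical model of that mismatch (the classes here are stand-ins, not the real `parquet.hive` writables, and this is not the actual HIVE-7850 patch): a defensive `get_list` that tolerates a lone value instead of failing outright.

```python
class BinaryWritable:
    """Stand-in for a single binary cell (models the value the serde received)."""
    def __init__(self, value):
        self.value = value

class ArrayWritable:
    """Stand-in for a writable holding a list of cells (what the serde expected)."""
    def __init__(self, items):
        self.items = list(items)

def get_list(data):
    # A proper array is returned element-by-element; a lone binary value is
    # wrapped in a single-element list rather than raising a cast error.
    if isinstance(data, ArrayWritable):
        return data.items
    if isinstance(data, BinaryWritable):
        return [data.value]
    raise TypeError("unsupported type: " + type(data).__name__)

print(get_list(ArrayWritable([b"a", b"b"])))  # [b'a', b'b']
print(get_list(BinaryWritable(b"solo")))      # [b'solo']
```

This only illustrates the symptom; the real fix discussed in this issue is to make the writer (parquet-avro) and the Hive serde agree on how a nullable array is represented in the Parquet schema.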





[jira] [Updated] (HIVE-7850) Hive Query failed if the data type is array&lt;string&gt; with parquet files

2014-08-22 Thread Sathish (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sathish updated HIVE-7850:
--

Attachment: HIVE-7850.patch

This patch fixes the issue. Since we want to use this feature in the next 
release of Hive, I request that someone review the patch and merge it to 
the main branch.



[jira] [Updated] (HIVE-7850) Hive Query failed if the data type is array&lt;string&gt; with parquet files

2014-08-22 Thread Sathish (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sathish updated HIVE-7850:
--

Status: Patch Available  (was: Open)



[jira] [Commented] (HIVE-7850) Hive Query failed if the data type is array&lt;string&gt; with parquet files

2014-08-22 Thread Sathish (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14106775#comment-14106775
 ] 

Sathish commented on HIVE-7850:
---

Could someone look into this issue and provide comments or suggestions on 
the fix? I have provided a patch and am waiting for it to be merged to the 
main branch, as we want to use this Hive feature in our next release.
