[
https://issues.apache.org/jira/browse/HIVE-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14224403#comment-14224403
]
Hive QA commented on HIVE-6914:
-------------------------------
{color:green}Overall{color}: +1 all checks pass
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12683366/HIVE-6914.4.patch
{color:green}SUCCESS:{color} +1 6682 tests passed
Test results:
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1898/testReport
Console output:
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1898/console
Test logs:
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1898/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12683366 - PreCommit-HIVE-TRUNK-Build
> parquet-hive cannot write nested map (map value is map)
> -------------------------------------------------------
>
> Key: HIVE-6914
> URL: https://issues.apache.org/jira/browse/HIVE-6914
> Project: Hive
> Issue Type: Bug
> Components: File Formats
> Affects Versions: 0.12.0, 0.13.0
> Reporter: Tongjie Chen
> Assignee: Sergio Peña
> Labels: parquet, serialization
> Attachments: HIVE-6914.1.patch, HIVE-6914.1.patch, HIVE-6914.2.patch,
> HIVE-6914.3.patch, HIVE-6914.4.patch, NestedMap.parquet
>
>
> // table schema (identical for both plain text version and parquet version)
> hive> desc text_mmap;
> m map<string,map<string,string>>
> // sample nested map entry
> {"level1":{"level2_key1":"value1","level2_key2":"value2"}}
> The following query will fail:
> insert overwrite table parquet_mmap select * from text_mmap;
> Caused by: parquet.io.ParquetEncodingException: This should be an ArrayWritable or MapWritable: org.apache.hadoop.hive.ql.io.parquet.writable.BinaryWritable@f2f8106
> at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeData(DataWritableWriter.java:85)
> at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeArray(DataWritableWriter.java:118)
> at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeData(DataWritableWriter.java:80)
> at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeData(DataWritableWriter.java:82)
> at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:55)
> at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59)
> at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
> at parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:115)
> at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:81)
> at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:37)
> at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:77)
> at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:90)
> at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:622)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
> at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:87)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
> at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
> at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:540)
> ... 9 more
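>
> A minimal repro sketch of the failing write in HiveQL (the table names text_mmap and parquet_mmap and the column schema come from the description above; how the sample row is loaded is assumed):
> {noformat}
> -- plain text table holding the nested map (schema as described above)
> CREATE TABLE text_mmap (m map<string,map<string,string>>);
>
> -- populating text_mmap with the sample row shown above is assumed,
> -- e.g. via LOAD DATA on a delimited text file
>
> -- Parquet table with the identical schema (STORED AS PARQUET assumes Hive 0.13+)
> CREATE TABLE parquet_mmap (m map<string,map<string,string>>) STORED AS PARQUET;
>
> -- this write raises the ParquetEncodingException shown in the stack trace
> INSERT OVERWRITE TABLE parquet_mmap SELECT * FROM text_mmap;
> {noformat}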
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)