[
https://issues.apache.org/jira/browse/HIVE-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14080666#comment-14080666
]
Hive QA commented on HIVE-4329:
-------------------------------
{color:red}Overall{color}: -1 at least one test failed
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12658803/HIVE-4329.0.patch
{color:red}ERROR:{color} -1 due to 21 failed/errored test(s), 5868 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx
org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.testPigPopulation
org.apache.hive.hcatalog.mapreduce.TestHCatExternalPartitioned.testHCatPartitionedTable[0]
org.apache.hive.hcatalog.mapreduce.TestHCatExternalPartitioned.testHCatPartitionedTable[1]
org.apache.hive.hcatalog.mapreduce.TestHCatExternalPartitioned.testHCatPartitionedTable[2]
org.apache.hive.hcatalog.mapreduce.TestHCatExternalPartitioned.testHCatPartitionedTable[4]
org.apache.hive.hcatalog.mapreduce.TestHCatExternalPartitioned.testHCatPartitionedTable[5]
org.apache.hive.hcatalog.mapreduce.TestHCatExternalPartitioned.testHCatPartitionedTable[6]
org.apache.hive.hcatalog.mapreduce.TestHCatMutablePartitioned.testHCatPartitionedTable[0]
org.apache.hive.hcatalog.mapreduce.TestHCatMutablePartitioned.testHCatPartitionedTable[1]
org.apache.hive.hcatalog.mapreduce.TestHCatMutablePartitioned.testHCatPartitionedTable[2]
org.apache.hive.hcatalog.mapreduce.TestHCatMutablePartitioned.testHCatPartitionedTable[4]
org.apache.hive.hcatalog.mapreduce.TestHCatMutablePartitioned.testHCatPartitionedTable[5]
org.apache.hive.hcatalog.mapreduce.TestHCatMutablePartitioned.testHCatPartitionedTable[6]
org.apache.hive.hcatalog.mapreduce.TestHCatPartitioned.testHCatPartitionedTable[0]
org.apache.hive.hcatalog.mapreduce.TestHCatPartitioned.testHCatPartitionedTable[1]
org.apache.hive.hcatalog.mapreduce.TestHCatPartitioned.testHCatPartitionedTable[2]
org.apache.hive.hcatalog.mapreduce.TestHCatPartitioned.testHCatPartitionedTable[4]
org.apache.hive.hcatalog.mapreduce.TestHCatPartitioned.testHCatPartitionedTable[5]
org.apache.hive.hcatalog.mapreduce.TestHCatPartitioned.testHCatPartitionedTable[6]
org.apache.hive.hcatalog.pig.TestHCatStorer.testStoreFuncAllSimpleTypes
{noformat}
Test results:
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/117/testReport
Console output:
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/117/console
Test logs:
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-117/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 21 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12658803
> HCatalog should use getHiveRecordWriter rather than getRecordWriter
> -------------------------------------------------------------------
>
> Key: HIVE-4329
> URL: https://issues.apache.org/jira/browse/HIVE-4329
> Project: Hive
> Issue Type: Bug
> Components: HCatalog, Serializers/Deserializers
> Affects Versions: 0.14.0
> Environment: discovered in Pig, but it looks like the root cause
> impacts all non-Hive users
> Reporter: Sean Busbey
> Assignee: David Chen
> Attachments: HIVE-4329.0.patch
>
>
> Attempting to write to an HCatalog-defined table backed by the AvroSerde
> fails with the following stack trace:
> {code}
> java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop.io.LongWritable
>     at org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat$1.write(AvroContainerOutputFormat.java:84)
>     at org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:253)
>     at org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:53)
>     at org.apache.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java:242)
>     at org.apache.hcatalog.pig.HCatStorer.putNext(HCatStorer.java:52)
>     at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
>     at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
>     at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:559)
>     at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:85)
> {code}
> The proximal cause of this failure is a signature mismatch: the
> AvroContainerOutputFormat's RecordWriter mandates a LongWritable key, while
> HCat's FileRecordWriterContainer forces a NullWritable key. I'm not sure of
> a general fix, other than redefining HiveOutputFormat to mandate a
> WritableComparable key.
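> Here's a minimal, self-contained sketch of that mismatch (illustrative
> only: TypedWriter and KeyCastSketch are hypothetical stand-ins for the
> real RecordWriter contract, not Hive source):
> {code}
> import org.apache.hadoop.io.LongWritable;
> import org.apache.hadoop.io.NullWritable;
> import org.apache.hadoop.io.Writable;
>
> public class KeyCastSketch {
>     // Stand-in for the generic RecordWriter contract.
>     interface TypedWriter<K extends Writable, V extends Writable> {
>         void write(K key, V value);
>     }
>
>     @SuppressWarnings({"rawtypes", "unchecked"})
>     public static void main(String[] args) {
>         // Like AvroContainerOutputFormat's writer: the signature mandates
>         // a LongWritable key, even though the key is never used.
>         TypedWriter<LongWritable, Writable> avroStyle =
>                 new TypedWriter<LongWritable, Writable>() {
>                     @Override
>                     public void write(LongWritable key, Writable value) {
>                         // key ignored; only the value would be serialized
>                     }
>                 };
>
>         // Like FileRecordWriterContainer: the delegate is held by its
>         // erased (raw) type and a NullWritable key is forced in.
>         TypedWriter erased = avroStyle;
>
>         // The compiler-generated bridge method casts the key back to
>         // LongWritable, reproducing the ClassCastException above.
>         erased.write(NullWritable.get(), NullWritable.get());
>     }
> }
> {code}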
> It looks like accepting a WritableComparable key is what the other Hive
> OutputFormats already do, and there's no reason AvroContainerOutputFormat
> couldn't be changed to match, since it ignores the key. That way, fixing
> FileRecordWriterContainer so it can always use NullWritable could be spun
> off into a separate issue.
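> Continuing the sketch above (same hypothetical TypedWriter stand-in), the
> proposed relaxation: since the key is ignored, widening the key type to
> WritableComparable means the forced NullWritable key (NullWritable
> implements WritableComparable) is accepted without any cast:
> {code}
> import org.apache.hadoop.io.NullWritable;
> import org.apache.hadoop.io.Writable;
> import org.apache.hadoop.io.WritableComparable;
>
> public class RelaxedKeySketch {
>     interface TypedWriter<K extends Writable, V extends Writable> {
>         void write(K key, V value);
>     }
>
>     public static void main(String[] args) {
>         // Hypothetical: the Avro-style writer declared against the widest
>         // key type it can tolerate, since it never reads the key anyway.
>         TypedWriter<WritableComparable<?>, Writable> relaxed =
>                 (key, value) -> { /* key still ignored */ };
>
>         // NullWritable is a WritableComparable, so the forced key is now
>         // legal and no ClassCastException can occur.
>         relaxed.write(NullWritable.get(), NullWritable.get());
>     }
> }
> {code}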
> The underlying cause of the failure to write to AvroSerde-backed tables is
> that AvroContainerOutputFormat doesn't meaningfully implement
> getRecordWriter, so fixing the above would just push the failure into the
> placeholder RecordWriter.
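> For reference, a paraphrased sketch of the two entry points the issue
> title contrasts (hypothetical names and heavily simplified signatures;
> see org.apache.hadoop.hive.ql.io.HiveOutputFormat for the real contract):
> {code}
> import java.io.IOException;
>
> public class DualApiSketch {
>     interface SketchRecordWriter {
>         void write(Object value) throws IOException;
>     }
>
>     static class SketchAvroOutputFormat {
>         // mapred-style entry point: effectively a placeholder for the
>         // Avro container format, so any caller routed here still fails.
>         SketchRecordWriter getRecordWriter() {
>             throw new UnsupportedOperationException(
>                     "placeholder: not a usable write path");
>         }
>
>         // Hive-native entry point: the fully implemented writer, which
>         // is why the title proposes HCatalog call this method instead.
>         SketchRecordWriter getHiveRecordWriter() {
>             return value -> { /* serialize value into the Avro container */ };
>         }
>     }
>
>     public static void main(String[] args) throws IOException {
>         SketchAvroOutputFormat format = new SketchAvroOutputFormat();
>         format.getHiveRecordWriter().write("record"); // works
>         // format.getRecordWriter();                  // placeholder: throws
>     }
> }
> {code}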
--
This message was sent by Atlassian JIRA
(v6.2#6252)