[
https://issues.apache.org/jira/browse/HIVE-4518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822688#comment-13822688
]
Hive QA commented on HIVE-4518:
-------------------------------
{color:red}Overall{color}: -1 at least one test failed
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12613662/HIVE-4518.9.patch
{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 4610 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucket_num_reducers
org.apache.hadoop.hive.ql.exec.vector.TestVectorFilterOperator.testBasicFilterLargeData
org.apache.hadoop.hive.ql.exec.vector.TestVectorFilterOperator.testBasicFilterOperator
{noformat}
Test results:
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/277/testReport
Console output:
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/277/console
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 3 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12613662
> Counter Strike: Operation Operator
> ----------------------------------
>
> Key: HIVE-4518
> URL: https://issues.apache.org/jira/browse/HIVE-4518
> Project: Hive
> Issue Type: Improvement
> Reporter: Gunther Hagleitner
> Assignee: Gunther Hagleitner
> Attachments: HIVE-4518.1.patch, HIVE-4518.2.patch, HIVE-4518.3.patch,
> HIVE-4518.4.patch, HIVE-4518.5.patch, HIVE-4518.6.patch.txt,
> HIVE-4518.7.patch, HIVE-4518.8.patch, HIVE-4518.9.patch
>
>
> Queries of the form:
> from foo
> insert overwrite table bar partition (p) select ...
> insert overwrite table bar partition (p) select ...
> insert overwrite table bar partition (p) select ...
> generate a huge number of counters. The reason is that task.progress is
> turned on for dynamic partitioning queries.
> The counters not only make queries slower than necessary (up to 50%), but you
> will also eventually run out of them. That's because we're wrapping them in
> enum values to stay compatible with hadoop 0.17 (see the first sketch below).
> The real reason we turn task.progress on is that we need CREATED_FILES and
> FATAL counters to ensure dynamic partitioning queries don't go haywire.
> The counters end up with counter-intuitive names like C1 through C1000 and
> aren't really useful by themselves.
> With hadoop 0.20+ you don't need to wrap the counters anymore; each operator
> can simply create and increment its own counters (see the second sketch below).
> That should simplify the code a lot.
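For illustration, here is a minimal, hypothetical Java sketch of the enum-wrapping pattern the description refers to (the class and method names are invented for this example and are not taken from the Hive source). It shows why the visible counter names are opaque slots like C1 through C1000, and why the fixed pool can eventually be exhausted:
{noformat}
// Hypothetical illustration only -- not the actual Hive implementation.
// With the enum-only counter API, every counter must map onto one of a
// fixed set of enum constants, so operator counters show up as opaque
// slots (C1, C2, ...) and the pool can run out.
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.mapred.Reporter;

public class WrappedOperatorCounters {

  // Fixed pool of counter slots (imagine this going up to C1000).
  enum Slot { C1, C2, C3, C4 }

  private final Map<String, Slot> assigned = new HashMap<String, Slot>();
  private int next = 0;

  // Assign the next free enum slot to a logical counter name.
  private Slot slotFor(String logicalName) {
    Slot s = assigned.get(logicalName);
    if (s == null) {
      if (next >= Slot.values().length) {
        throw new IllegalStateException("Out of counter slots"); // "eventually run out"
      }
      s = Slot.values()[next++];
      assigned.put(logicalName, s);
    }
    return s;
  }

  // The counter is reported under the enum slot, so the job UI shows "C1", "C2", ...
  public void increment(Reporter reporter, String logicalName, long amount) {
    reporter.incrCounter(slotFor(logicalName), amount);
  }
}
{noformat}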
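And a similarly hedged sketch of what the description suggests for hadoop 0.20+, where the Reporter API also accepts string group/counter names, so an operator can create and increment a descriptively named counter directly (again, the class and names here are hypothetical, not the actual patch):
{noformat}
// Hypothetical illustration only -- not the actual patch.
// With the string-based Reporter API (Hadoop 0.20+), an operator can create
// and increment descriptively named counters directly, with no enum pool.
import org.apache.hadoop.mapred.Reporter;

public class DirectOperatorCounters {

  private final Reporter reporter;

  public DirectOperatorCounters(Reporter reporter) {
    this.reporter = reporter;
  }

  // Counters are keyed by a (group, name) pair,
  // e.g. ("FileSinkOperator", "CREATED_FILES").
  public void increment(String operatorName, String counterName, long amount) {
    reporter.incrCounter(operatorName, counterName, amount);
  }
}
{noformat}
With this approach the counters that matter (such as CREATED_FILES or FATAL) would carry self-describing names in the job UI instead of C1 through C1000.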
--
This message was sent by Atlassian JIRA
(v6.1#6144)