[ https://issues.apache.org/jira/browse/HIVE-21338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786761#comment-16786761 ]

Hive QA commented on HIVE-21338:
--------------------------------



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12961475/HIVE-21338.5.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 36 failed/errored test(s), 15819 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[allcolref_in_udf] (batchId=57)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_topn] (batchId=3)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[lateral_view_noalias] (batchId=40)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_udf_col] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_inline] (batchId=62)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_sentences] (batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udtf_explode] (batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udtf_json_tuple] (batchId=86)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udtf_parse_url_tuple] (batchId=80)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_groupby_reduce] (batchId=61)
org.apache.hadoop.hive.cli.TestContribCliDriver.testCliDriver[udtf_explode2] (batchId=270)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_expressions] (batchId=195)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_extractTime] (batchId=195)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_floorTime] (batchId=195)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test1] (batchId=195)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test_ts] (batchId=195)
org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_basic] (batchId=275)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[external_jdbc_table] (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_udf_inline] (batchId=183)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[cbo_limit] (batchId=152)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query28] (batchId=277)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query61] (batchId=277)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query90] (batchId=277)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query28] (batchId=275)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query61] (batchId=275)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query90] (batchId=275)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query28] (batchId=275)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query61] (batchId=275)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query90] (batchId=275)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query23] (batchId=275)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query28] (batchId=275)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query61] (batchId=275)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query90] (batchId=275)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[query28] (batchId=275)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[query61] (batchId=275)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[query90] (batchId=275)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/16383/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16383/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16383/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 36 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12961475 - PreCommit-HIVE-Build

> Remove order by and limit for aggregates
> ----------------------------------------
>
>                 Key: HIVE-21338
>                 URL: https://issues.apache.org/jira/browse/HIVE-21338
>             Project: Hive
>          Issue Type: Improvement
>          Components: Query Planning
>            Reporter: Vineet Garg
>            Assignee: Vineet Garg
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-21338.1.patch, HIVE-21338.2.patch, HIVE-21338.3.patch, HIVE-21338.4.patch, HIVE-21338.5.patch
>
>          Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> If a query is guaranteed to produce at most one row, LIMIT and ORDER BY can be removed. This saves an unnecessary vertex for the LIMIT/ORDER BY.
> {code:sql}
> explain select count(*) cs from store_sales where ss_ext_sales_price > 100.00 order by cs limit 100
> {code}
> {code}
> STAGE PLANS:
>   Stage: Stage-1
>     Tez
>       DagId: vgarg_20190227131959_2914830f-eab6-425d-b9f0-b8cb56f8a1e9:4
>       Edges:
>         Reducer 2 <- Map 1 (CUSTOM_SIMPLE_EDGE)
>         Reducer 3 <- Reducer 2 (SIMPLE_EDGE)
>       DagName: vgarg_20190227131959_2914830f-eab6-425d-b9f0-b8cb56f8a1e9:4
>       Vertices:
>         Map 1
>             Map Operator Tree:
>                 TableScan
>                   alias: store_sales
>                   filterExpr: (ss_ext_sales_price > 100) (type: boolean)
>                   Statistics: Num rows: 1 Data size: 112 Basic stats: COMPLETE Column stats: NONE
>                   Filter Operator
>                     predicate: (ss_ext_sales_price > 100) (type: boolean)
>                     Statistics: Num rows: 1 Data size: 112 Basic stats: COMPLETE Column stats: NONE
>                     Select Operator
>                       Statistics: Num rows: 1 Data size: 112 Basic stats: COMPLETE Column stats: NONE
>                       Group By Operator
>                         aggregations: count()
>                         mode: hash
>                         outputColumnNames: _col0
>                         Statistics: Num rows: 1 Data size: 120 Basic stats: COMPLETE Column stats: NONE
>                         Reduce Output Operator
>                           sort order:
>                           Statistics: Num rows: 1 Data size: 120 Basic stats: COMPLETE Column stats: NONE
>                           value expressions: _col0 (type: bigint)
>             Execution mode: vectorized
>         Reducer 2
>             Execution mode: vectorized
>             Reduce Operator Tree:
>               Group By Operator
>                 aggregations: count(VALUE._col0)
>                 mode: mergepartial
>                 outputColumnNames: _col0
>                 Statistics: Num rows: 1 Data size: 120 Basic stats: COMPLETE Column stats: NONE
>                 Reduce Output Operator
>                   key expressions: _col0 (type: bigint)
>                   sort order: +
>                   Statistics: Num rows: 1 Data size: 120 Basic stats: COMPLETE Column stats: NONE
>                   TopN Hash Memory Usage: 0.1
>         Reducer 3
>             Execution mode: vectorized
>             Reduce Operator Tree:
>               Select Operator
>                 expressions: KEY.reducesinkkey0 (type: bigint)
>                 outputColumnNames: _col0
>                 Statistics: Num rows: 1 Data size: 120 Basic stats: COMPLETE Column stats: NONE
>                 Limit
>                   Number of rows: 100
>                   Statistics: Num rows: 1 Data size: 120 Basic stats: COMPLETE Column stats: NONE
>                   File Output Operator
>                     compressed: false
>                     Statistics: Num rows: 1 Data size: 120 Basic stats: COMPLETE Column stats: NONE
>                     table:
>                         input format: org.apache.hadoop.mapred.SequenceFileInputFormat
>                         output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
>                         serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
>   Stage: Stage-0
>     Fetch Operator
>       limit: 100
>       Processor Tree:
>         ListSink
> {code}
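>
> For reference (not part of the attached patch): Hive's CBO is Calcite-based, so one way to express this optimization is a planner rule that removes a Sort (ORDER BY/LIMIT) once the metadata layer proves the Sort's input produces at most one row. The sketch below is only an illustrative example against Calcite's rule API; the rule name and exact guards are assumptions, not necessarily what the HIVE-21338 patches implement.
> {code:java}
> import org.apache.calcite.plan.RelOptRule;
> import org.apache.calcite.plan.RelOptRuleCall;
> import org.apache.calcite.rel.RelNode;
> import org.apache.calcite.rel.core.Sort;
> import org.apache.calcite.rel.metadata.RelMetadataQuery;
> import org.apache.calcite.rex.RexLiteral;
>
> /** Illustrative sketch only: drop ORDER BY/LIMIT above a provably single-row input. */
> public class RemoveSortLimitOverSingleRowRule extends RelOptRule {
>
>   public static final RemoveSortLimitOverSingleRowRule INSTANCE =
>       new RemoveSortLimitOverSingleRowRule();
>
>   private RemoveSortLimitOverSingleRowRule() {
>     super(operand(Sort.class, any()), "RemoveSortLimitOverSingleRowRule");
>   }
>
>   @Override
>   public void onMatch(RelOptRuleCall call) {
>     final Sort sort = call.rel(0);
>     final RelNode input = sort.getInput();
>     final RelMetadataQuery mq = call.getMetadataQuery();
>
>     // A global aggregate such as "select count(*) ..." reports a max row count of 1.
>     final Double maxRowCount = mq.getMaxRowCount(input);
>     if (maxRowCount == null || maxRowCount > 1D) {
>       return;
>     }
>     // A positive OFFSET could still drop the single row, so keep the Sort in that case.
>     if (sort.offset != null && RexLiteral.intValue(sort.offset) > 0) {
>       return;
>     }
>     // LIMIT 0 changes the result; only a fetch of at least one row is redundant.
>     if (sort.fetch != null && RexLiteral.intValue(sort.fetch) < 1) {
>       return;
>     }
>     // With at most one input row, ORDER BY and LIMIT >= 1 are no-ops: replace the
>     // Sort with its input, which removes the extra reducer vertex from the plan.
>     call.transformTo(input);
>   }
> }
> {code}
> Under these assumptions the rule would be registered with the CBO rule set, and a plan like the one above would no longer need Reducer 3 (the ORDER BY/LIMIT stage), since the Sort node is replaced by the aggregate that feeds it.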



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
