[ https://issues.apache.org/jira/browse/HIVE-8842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14209353#comment-14209353 ]

Hive QA commented on HIVE-8842:
-------------------------------



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12681245/HIVE-8842.3-spark.patch

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 7234 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_smb_mapjoin_3
org.apache.hadoop.hive.ql.exec.spark.TestHiveKVResultCache.testResultList
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/347/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/347/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-347/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12681245 - PreCommit-HIVE-SPARK-Build

> auto_join2.q produces incorrect tree [Spark Branch]
> ---------------------------------------------------
>
>                 Key: HIVE-8842
>                 URL: https://issues.apache.org/jira/browse/HIVE-8842
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Szehon Ho
>            Assignee: Chao
>         Attachments: HIVE-8842.1-spark.patch, HIVE-8842.2-spark.patch, HIVE-8842.3-spark.patch
>
>
> Enabling the SparkMapJoinResolver and SparkReduceSinkMapJoinProc, I see the following:
> {noformat}
> explain select * from src src1 JOIN src src2 ON (src1.key = src2.key) JOIN src src3 ON (src1.key + src2.key = src3.key);
> {noformat}
> produces too many stages (six), and too many HashTable Sink operators.
> {noformat}
> STAGE DEPENDENCIES:
>   Stage-5 is a root stage
>   Stage-4 depends on stages: Stage-5
>   Stage-3 depends on stages: Stage-4
>   Stage-7 is a root stage
>   Stage-6 depends on stages: Stage-7
>   Stage-0 is a root stage
> STAGE PLANS:
>   Stage: Stage-5
>     Spark
>       DagName: szehon_20141112105656_dd50e07d-94ad-4f9d-899e-bcb6d9a39c13:3
>       Vertices:
>         Map 1
>             Map Operator Tree:
>                 TableScan
>                   alias: src2
>                   Statistics: Num rows: 29 Data size: 5812 Basic stats: COMPLETE Column stats: NONE
>                   Filter Operator
>                     predicate: key is not null (type: boolean)
>                     Statistics: Num rows: 15 Data size: 3006 Basic stats: COMPLETE Column stats: NONE
>                     HashTable Sink Operator
>                       condition expressions:
>                         0 {key} {value}
>                         1 {key} {value}
>                       keys:
>                         0 key (type: string)
>                         1 key (type: string)
>   Stage: Stage-4
>     Spark
>       DagName: szehon_20141112105656_dd50e07d-94ad-4f9d-899e-bcb6d9a39c13:2
>       Vertices:
>         Map 3
>             Map Operator Tree:
>                 TableScan
>                   alias: src1
>                   Statistics: Num rows: 29 Data size: 5812 Basic stats: COMPLETE Column stats: NONE
>                   Filter Operator
>                     predicate: key is not null (type: boolean)
>                     Statistics: Num rows: 15 Data size: 3006 Basic stats: COMPLETE Column stats: NONE
>                     Map Join Operator
>                       condition map:
>                            Inner Join 0 to 1
>                       condition expressions:
>                         0 {key} {value}
>                         1 {key} {value}
>                       keys:
>                         0 key (type: string)
>                         1 key (type: string)
>                       outputColumnNames: _col0, _col1, _col5, _col6
>                       input vertices:
>                         1 Map 1
>                       Statistics: Num rows: 16 Data size: 3306 Basic stats: COMPLETE Column stats: NONE
>                       Filter Operator
>                         predicate: (_col0 + _col5) is not null (type: boolean)
>                         Statistics: Num rows: 8 Data size: 1653 Basic stats: COMPLETE Column stats: NONE
>                         HashTable Sink Operator
>                           condition expressions:
>                             0 {_col0} {_col1} {_col5} {_col6}
>                             1 {key} {value}
>                           keys:
>                             0 (_col0 + _col5) (type: double)
>                             1 UDFToDouble(key) (type: double)
>   Stage: Stage-3
>     Spark
>       DagName: szehon_20141112105656_dd50e07d-94ad-4f9d-899e-bcb6d9a39c13:1
>       Vertices:
>         Map 2
>             Map Operator Tree:
>                 TableScan
>                   alias: src3
>                   Statistics: Num rows: 29 Data size: 5812 Basic stats: COMPLETE Column stats: NONE
>                   Filter Operator
>                     predicate: UDFToDouble(key) is not null (type: boolean)
>                     Statistics: Num rows: 15 Data size: 3006 Basic stats: COMPLETE Column stats: NONE
>                     Map Join Operator
>                       condition map:
>                            Inner Join 0 to 1
>                       condition expressions:
>                         0 {_col0} {_col1} {_col5} {_col6}
>                         1 {key} {value}
>                       keys:
>                         0 (_col0 + _col5) (type: double)
>                         1 UDFToDouble(key) (type: double)
>                       outputColumnNames: _col0, _col1, _col5, _col6, _col10, _col11
>                       input vertices:
>                         0 Map 3
>                       Statistics: Num rows: 16 Data size: 3306 Basic stats: COMPLETE Column stats: NONE
>                       Select Operator
>                         expressions: _col0 (type: string), _col1 (type: string), _col5 (type: string), _col6 (type: string), _col10 (type: string), _col11 (type: string)
>                         outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5
>                         Statistics: Num rows: 16 Data size: 3306 Basic stats: COMPLETE Column stats: NONE
>                         File Output Operator
>                           compressed: false
>                           Statistics: Num rows: 16 Data size: 3306 Basic stats: COMPLETE Column stats: NONE
>                           table:
>                               input format: org.apache.hadoop.mapred.TextInputFormat
>                               output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
>                               serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
>   Stage: Stage-7
>     Spark
>       DagName: szehon_20141112105656_dd50e07d-94ad-4f9d-899e-bcb6d9a39c13:3
>       Vertices:
>         Map 1
>             Map Operator Tree:
>                 TableScan
>                   alias: src2
>                   Statistics: Num rows: 29 Data size: 5812 Basic stats: COMPLETE Column stats: NONE
>                   Filter Operator
>                     predicate: key is not null (type: boolean)
>                     Statistics: Num rows: 15 Data size: 3006 Basic stats: COMPLETE Column stats: NONE
>                     HashTable Sink Operator
>                       condition expressions:
>                         0 {key} {value}
>                         1 {key} {value}
>                       keys:
>                         0 key (type: string)
>                         1 key (type: string)
>   Stage: Stage-6
>     Spark
>       DagName: szehon_20141112105656_dd50e07d-94ad-4f9d-899e-bcb6d9a39c13:2
>       Vertices:
>         Map 3
>             Map Operator Tree:
>                 TableScan
>                   alias: src1
>                   Statistics: Num rows: 29 Data size: 5812 Basic stats: COMPLETE Column stats: NONE
>                   Filter Operator
>                     predicate: key is not null (type: boolean)
>                     Statistics: Num rows: 15 Data size: 3006 Basic stats: COMPLETE Column stats: NONE
>                     Map Join Operator
>                       condition map:
>                            Inner Join 0 to 1
>                       condition expressions:
>                         0 {key} {value}
>                         1 {key} {value}
>                       keys:
>                         0 key (type: string)
>                         1 key (type: string)
>                       outputColumnNames: _col0, _col1, _col5, _col6
>                       input vertices:
>                         1 Map 1
>                       Statistics: Num rows: 16 Data size: 3306 Basic stats: COMPLETE Column stats: NONE
>                       Filter Operator
>                         predicate: (_col0 + _col5) is not null (type: boolean)
>                         Statistics: Num rows: 8 Data size: 1653 Basic stats: COMPLETE Column stats: NONE
>                         HashTable Sink Operator
>                           condition expressions:
>                             0 {_col0} {_col1} {_col5} {_col6}
>                             1 {key} {value}
>                           keys:
>                             0 (_col0 + _col5) (type: double)
>                             1 UDFToDouble(key) (type: double)
>   Stage: Stage-0
>     Fetch Operator
>       limit: -1
>       Processor Tree:
>         ListSink
> {noformat}
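> A minimal reproduction sketch for the plan above. This assumes a Spark-branch build; {{hive.execution.engine}} and {{hive.auto.convert.join}} are the standard trunk property names, but whether any extra switches are needed to activate SparkMapJoinResolver on this branch is an assumption, not confirmed by this report:
> {noformat}
> -- Assumed setup: Spark-branch build with map-join conversion enabled.
> set hive.execution.engine=spark;   -- plan the query through the Spark path
> set hive.auto.convert.join=true;   -- allow map-join (HashTable Sink) conversion
>
> -- The three-way join from the description; a correct operator tree would be
> -- expected to yield fewer stages and HashTable Sinks than the six-stage plan shown.
> explain
> select * from src src1
> JOIN src src2 ON (src1.key = src2.key)
> JOIN src src3 ON (src1.key + src2.key = src3.key);
> {noformat}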



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)