[ 
https://issues.apache.org/jira/browse/HIVE-8836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14225779#comment-14225779
 ] 

Xuefu Zhang commented on HIVE-8836:
-----------------------------------

Many test failures above show up as plan diffs, something like: 
{code}
211c211
<         Reducer 2 <- Map 1 (GROUP, 3)
---
>         Reducer 2 <- Map 1 (GROUP, 1)
{code}
This can be explained by the fact that the current algorithm for determining the 
number of reducers takes into account factors such as the number of executors in 
the cluster. Because we switched to a local-cluster with two nodes, the number of 
reducers differs from the previous value. This is not good, because every time 
we change the testing cluster, we might get a different plan.
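To make the cluster dependence concrete, here is a rough sketch of the kind of heuristic described above. This is illustrative only: the function name, constants, and the executor-based cap are assumptions, not Hive's actual implementation (which lives in the Spark branch's parallelism-setting optimizer).

{code}
def estimate_reducers(data_size_bytes, bytes_per_reducer,
                      max_reducers, num_executors):
    """Illustrative sketch: estimate reducer parallelism from input size,
    capped by a configured maximum and by cluster size."""
    # Base estimate: one reducer per bytes_per_reducer of input (ceiling).
    reducers = max(1, -(-data_size_bytes // bytes_per_reducer))
    # Cluster-dependent cap (assumed): fewer executors means fewer reducers,
    # which is why the same query can plan (GROUP, 3) on one cluster
    # and (GROUP, 1) on another.
    return min(reducers, max_reducers, num_executors)
{code}

Under such a heuristic, the same 3 MB input with bytes_per_reducer of 1 MB yields 3 reducers on a cluster with 3 executors but only 1 reducer on a single-executor local cluster, matching the plan diff above.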

> Enable automatic tests with remote spark client [Spark Branch]
> --------------------------------------------------------------
>
>                 Key: HIVE-8836
>                 URL: https://issues.apache.org/jira/browse/HIVE-8836
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>    Affects Versions: spark-branch
>            Reporter: Chengxiang Li
>            Assignee: Rui Li
>              Labels: Spark-M3
>             Fix For: spark-branch
>
>         Attachments: HIVE-8836.1-spark.patch, HIVE-8836.2-spark.patch, 
> HIVE-8836.3-spark.patch, HIVE-8836.4-spark.patch, HIVE-8836.5-spark.patch, 
> HIVE-8836.6-spark.patch
>
>
> In a real production environment, the remote spark client will mostly be used 
> to submit spark jobs for Hive; we should enable automatic tests with the remote 
> spark client to make sure the Hive features work with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)