[
https://issues.apache.org/jira/browse/HIVE-9251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14264160#comment-14264160
]
Xuefu Zhang commented on HIVE-9251:
-----------------------------------
I see. Thanks for the explanation. It seems we interpreted the parameter to
Utilities.estimateReducers() slightly differently. I think we can get rid of the
code for getting reducer memory once [~jxiang] also agrees to the proposal.
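For reference, here is a minimal sketch of the estimation arithmetic as I read it
(the shape of the calculation only; parameter names follow my reading of
org.apache.hadoop.hive.ql.exec.Utilities, so check the actual source):

    // Sketch, not the exact Hive source: pick enough reducers so that each
    // one handles roughly bytesPerReducer of input, capped at maxReducers.
    public static int estimateReducers(long totalInputFileSize, long bytesPerReducer,
        int maxReducers, boolean powersOfTwo) {
      double bytes = Math.max(totalInputFileSize, bytesPerReducer);
      int reducers = (int) Math.ceil(bytes / bytesPerReducer);
      reducers = Math.max(1, reducers);           // always at least one reducer
      reducers = Math.min(maxReducers, reducers); // respect the configured cap
      // (the real method can also round to a power of two when powersOfTwo is set)
      return reducers;
    }

The point of divergence, then, is which input size gets passed as the first
argument, since that value drives the whole calculation.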
> SetSparkReducerParallelism is likely to set too small number of reducers
> [Spark Branch]
> ---------------------------------------------------------------------------------------
>
> Key: HIVE-9251
> URL: https://issues.apache.org/jira/browse/HIVE-9251
> Project: Hive
> Issue Type: Sub-task
> Components: Spark
> Reporter: Rui Li
> Assignee: Rui Li
> Attachments: HIVE-9251.1-spark.patch
>
>
> This may hurt performance or even lead to task failures. For example, Spark's
> netty-based shuffle limits the max frame size to 2G.
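> As a rough illustration (numbers assumed for the example, not taken from the
> report): if a query shuffles ~50 GB but the estimate picks only 10 reducers,
> an average partition is ~5 GB, well beyond the ~2 GB (Integer.MAX_VALUE bytes)
> a single netty frame can carry, so shuffle fetches would fail.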
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)