[ https://issues.apache.org/jira/browse/HIVE-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12712403#action_12712403 ]

He Yongqiang commented on HIVE-490:
-----------------------------------

It seems there is some misunderstanding of the configuration parameter
hive.exec.reducers.max. Could someone please help me confirm it?
{noformat}
+<property>
+  <name>hive.exec.reducers.max</name>
+  <value>999</value>
+  <description>max number of reducers will be used. If it exceeds the one specified in the configuration parameter
+       mapred.reduce.tasks, hive will use the one specified in mapred.reduce.tasks.</description>
+</property>
{noformat}

Should the description instead be:
{noformat}
max number of reducers that will be used. If the value specified in the configuration parameter mapred.reduce.tasks is negative, hive will use this value as the max number of reducers when automatically determining the number of reducers.
{noformat}
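
If that reading is right, the decision logic is roughly the sketch below (plain Java, not Hive's actual code; the class and method names are made up, and the 1 GB per-reducer figure is only the usual default of hive.exec.reducers.bytes.per.reducer):

{noformat}
// Sketch of the reducer-count decision under the proposed description.
// Names and defaults are illustrative, not Hive's exact implementation.
public final class ReducerEstimate {
  static final long BYTES_PER_REDUCER = 1000000000L; // hive.exec.reducers.bytes.per.reducer default
  static final int MAX_REDUCERS = 999;               // hive.exec.reducers.max default

  // mapredReduceTasks: value of mapred.reduce.tasks (-1 means "let hive decide")
  // totalInputBytes:   total size of the job's input
  static int numReducers(int mapredReduceTasks, long totalInputBytes) {
    if (mapredReduceTasks >= 0) {
      // An explicit non-negative mapred.reduce.tasks wins; the cap is not applied.
      return mapredReduceTasks;
    }
    // Negative: estimate one reducer per BYTES_PER_REDUCER of input,
    // then cap the estimate at hive.exec.reducers.max.
    long estimated = (totalInputBytes + BYTES_PER_REDUCER - 1) / BYTES_PER_REDUCER;
    return (int) Math.max(1, Math.min(MAX_REDUCERS, estimated));
  }

  public static void main(String[] args) {
    System.out.println(numReducers(-1, 5000000000L)); // 5: estimated from input size
    System.out.println(numReducers(10, 5000000000L)); // 10: explicit setting wins
  }
}
{noformat}

In other words, hive.exec.reducers.max would only matter on the automatic path; it would never override an explicit mapred.reduce.tasks.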


> set mapred.reduce.tasks to -1 in hive-default.xml
> -------------------------------------------------
>
>                 Key: HIVE-490
>                 URL: https://issues.apache.org/jira/browse/HIVE-490
>             Project: Hadoop Hive
>          Issue Type: Bug
>          Components: Clients
>            Reporter: Joydeep Sen Sarma
>         Attachments: hive-490-2009-05-23-2.patch, 
> hive-490-2009-05-23-3.patch, hive-490-2009-05-23.patch
>
>
> This is profoundly irritating. We can estimate the number of reducers, but 
> since hadoop-default.xml sets mapred.reduce.tasks to '1', by default we 
> don't. If we expect most users to set this to -1, we might as well set it to 
> -1 ourselves.
> To avoid shooting ourselves in the foot, we might want to cap the number of 
> reducers based on cluster size (e.g. 2*nodes).
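
For reference, the change under discussion would have hive-default.xml ship something like the following (a sketch only; the description wording here is mine, not taken from the patch):

{noformat}
<property>
  <name>mapred.reduce.tasks</name>
  <value>-1</value>
  <description>The default number of reduce tasks per job. -1 tells hive to
    estimate the number of reducers automatically, subject to the cap in
    hive.exec.reducers.max.</description>
</property>
{noformat}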

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
