[ 
https://issues.apache.org/jira/browse/YARN-4499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tianyin Xu updated YARN-4499:
-----------------------------
    Description: 
Currently, the default value of {{yarn.scheduler.maximum-allocation-vcores}} is 
{{32}}, according to 
[yarn-default.xml|https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml].

However, in {{YarnConfiguration.java}}, we specify the default to be {{4}}.
{code}
  public static final String RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES =
      YARN_PREFIX + "scheduler.maximum-allocation-vcores";
  public static final int DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES = 4;
{code}

The default in the code looks correct to me. Actually, I feel that the default 
value should be the same as {{yarn.nodemanager.resource.cpu-vcores}} (whose 
default is {{8}}): if we have {{8}} cores available for scheduling, there is 
little reason to cap allocations at {{4}}...

Cloudera's article on [Tuning the Cluster for MapReduce v2 (YARN)|http://www.cloudera.com/content/www/en-us/documentation/enterprise/5-3-x/topics/cdh_ig_yarn_tuning.html] 
also suggests that "the maximum value 
({{yarn.scheduler.maximum-allocation-vcores}}) is usually equal to 
{{yarn.nodemanager.resource.cpu-vcores}}..."

At the very least, we should fix 
[yarn-default.xml|https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml]. 
The error is fairly serious: a quick web search shows that people are confused 
by it, for example,
https://community.cloudera.com/t5/Cloudera-Manager-Installation/yarn-nodemanager-resource-cpu-vcores-and-yarn-scheduler-maximum/td-p/31098
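
For reference, aligning the documentation with the code default would mean a 
{{yarn-default.xml}} entry along these lines (a sketch; the comment is my own 
wording):
{code:xml}
<property>
  <!-- matches DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES in YarnConfiguration.java -->
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>4</value>
</property>
{code}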

(But seriously, I think we should compute the default automatically, with the 
minimum as 1 and the maximum equal to the number of cores on the machine.)
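
The automatic-default idea could be sketched as follows (plain Java; the class 
and method names are hypothetical, not actual YARN code):
{code}
// Hypothetical sketch of an automatic default for
// yarn.scheduler.maximum-allocation-vcores: never below 1,
// and equal to the number of cores on the machine.
public class VcoreDefaults {

    // Clamp the candidate core count to a minimum of 1.
    static int automaticMaxAllocationVcores(int machineCores) {
        return Math.max(1, machineCores);
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("automatic default: "
                + automaticMaxAllocationVcores(cores));
    }
}
{code}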




  was:
Currently, the default value of {{yarn.scheduler.maximum-allocation-vcores}} is 
{{4}}, according to {{YarnConfiguration.java}}

{code}
  public static final String RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES =
      YARN_PREFIX + "scheduler.maximum-allocation-vcores";
  public static final int DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES = 4;
{code}

However, in 
[yarn-default.xml|https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml],
 this value is {{32}}.

Yes, this seems to be an error. I feel that the default value should be the 
same as {{yarn.nodemanager.resource.cpu-vcores}} (whose default is {{8}}): if 
we have {{8}} cores available for scheduling, there is little reason to cap 
allocations at {{4}}...

Cloudera's article on [Tuning the Cluster for MapReduce v2 (YARN)|http://www.cloudera.com/content/www/en-us/documentation/enterprise/5-3-x/topics/cdh_ig_yarn_tuning.html] 
also suggests that "the maximum value 
({{yarn.scheduler.maximum-allocation-vcores}}) is usually equal to 
{{yarn.nodemanager.resource.cpu-vcores}}..."

At least, we should fix 
[yarn-default.xml|https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml]. 
The error is fairly serious: a quick web search shows that people are confused 
by it, for example,
https://community.cloudera.com/t5/Cloudera-Manager-Installation/yarn-nodemanager-resource-cpu-vcores-and-yarn-scheduler-maximum/td-p/31098

(But seriously, I think we should compute the default automatically, with the 
minimum as 1 and the maximum equal to the number of cores on the machine.)





> Bad config values of "yarn.scheduler.maximum-allocation-vcores"
> ---------------------------------------------------------------
>
>                 Key: YARN-4499
>                 URL: https://issues.apache.org/jira/browse/YARN-4499
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: scheduler
>    Affects Versions: 2.7.1, 2.6.2
>            Reporter: Tianyin Xu
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
