[jira] [Commented] (YARN-9720) MR job submitted to a queue with default partition accessing the non-exclusive label resources

2019-08-11 Thread ANANDA G B (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904838#comment-16904838
 ] 

ANANDA G B commented on YARN-9720:
--

[~eepayne]: Can you check this? I have attached the capacity-scheduler.xml.

> MR job submitted to a queue with default partition accessing the 
> non-exclusive label resources
> --
>
> Key: YARN-9720
> URL: https://issues.apache.org/jira/browse/YARN-9720
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler, resourcemanager
>Affects Versions: 3.1.1, 3.1.2
>Reporter: ANANDA G B
>Assignee: ANANDA G B
>Priority: Major
> Attachments: Issue.png
>
>
> When an MR job is submitted to queue1 with the default partition, it ends up 
> accessing non-exclusive partition resources. Please find the attachments.
> MR Job command:
> ./yarn jar ../share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.0201.jar 
> pi -Dmapreduce.job.queuename=queue1 -Dmapreduce.job.node-label-expression= 10 
> 10
>  
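
(For context, a rough sketch of the node-label setup this report assumes: the label name pool1 and its non-exclusive flag are taken from the issue title and the queue configuration posted later in the thread, while the host name is only a placeholder.

./yarn rmadmin -addToClusterNodeLabels "pool1(exclusive=false)"
./yarn rmadmin -replaceLabelsOnNode "node1.example.com=pool1"

With a label registered like that, the pi job above is submitted with an empty node-label-expression, i.e. to the DEFAULT partition of queue1.)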





[jira] [Commented] (YARN-9720) MR job submitted to a queue with default partition accessing the non-exclusive label resources

2019-08-08 Thread ANANDA G B (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903167#comment-16903167
 ] 

ANANDA G B commented on YARN-9720:
--

Hi Eric Payne, here is my capacity-scheduler.xml configuration:

<property>
  <name>yarn.scheduler.capacity.maximum-applications</name>
  <value>1</value>
  <description>
    Maximum number of applications that can be pending and running.
  </description>
</property>

<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
  <description>
    The ResourceCalculator implementation to be used to compare
    Resources in the scheduler.
    The default i.e. DefaultResourceCalculator only uses Memory while
    DominantResourceCalculator uses dominant-resource to compare
    multi-dimensional resources such as Memory, CPU etc.
  </description>
</property>

<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>default,root-default,queue1</value>
  <description>
    The queues at this level (root is the root queue).
  </description>
</property>

<property>
  <name>yarn.scheduler.capacity.root.accessible-node-labels</name>
  <value>pool1</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.accessible-node-labels.pool1.capacity</name>
  <value>100</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.maximum-am-resource-percent</name>
  <value>1</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>20</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
  <value>100</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.state</name>
  <value>RUNNING</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.maximum-am-resource-percent</name>
  <value>0.1</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.accessible-node-labels</name>
  <value></value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.root-default.capacity</name>
  <value>70.0</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.root-default.maximum-capacity</name>
  <value>100</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.root-default.state</name>
  <value>RUNNING</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.root-default.maximum-am-resource-percent</name>
  <value>0.1</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.root-default.accessible-node-labels</name>
  <value>pool1</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.root-default.default-node-label-expression</name>
  <value>pool1</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.root-default.accessible-node-labels.pool1.capacity</name>
  <value>80.0</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.root-default.accessible-node-labels.pool1.maximum-capacity</name>
  <value>100.0</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.queue1.capacity</name>
  <value>10.0</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.queue1.maximum-capacity</name>
  <value>100</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.queue1.state</name>
  <value>RUNNING</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.queue1.maximum-am-resource-percent</name>
  <value>0.8</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.queue1.accessible-node-labels</name>
  <value>pool1</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.queue1.default-node-label-expression</name>
  <value>pool1</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.queue1.accessible-node-labels.pool1.capacity</name>
  <value>20.0</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.queue1.accessible-node-labels.pool1.maximum-capacity</name>
  <value>100.0</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
  <value>1</value>
  <description>
    Default queue user limit a percentage from 0.0 to 1.0.
  </description>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.acl_submit_applications</name>
  <value>*</value>
  <description>
    The ACL of who can submit jobs to the default queue.
  </description>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.acl_administer_queue</name>
  <value>*</value>
  <description>
    The ACL of who can administer jobs on the default queue.
  </description>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.acl_application_max_priority</name>
  <value>*</value>
  <description>
    The ACL of who can submit applications with configured priority.
  </description>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.maximum-application-lifetime</name>
  <value>-1</value>
  <description>
    Maximum lifetime of an application which is submitted to a queue
    in seconds. Any value less than or equal to zero will be considered as
    disabled.
    This will be a hard time limit for all applications in this
    queue. If positive value is configured then any application submitted
    to this queue will be killed after exceeds the configured lifetime.
    User can also specify lifetime per application basis in
    application submission context. But user lifetime will be
    overridden if it exceeds queue maximum lifetime. It is point-in-time
    configuration.
    Note : Configuring too low value will result in killing application
    sooner. This feature is applicable only for leaf queue.
  </description>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.default-application-lifetime</name>
  <value>-1</value>
  <description>
    Default lifetime of an application which is submitted to a queue
    in seconds. Any value less than or equal to zero will be considered as
    disabled.
    If the user has not submitted application with lifetime value then this
    value will be taken. It is point-in-time configuration.
    Note : Default lifetime can't exceed maximum lifetime. This feature is
    applicable only for leaf queue.
  </description>
</property>

<property>
  <name>yarn.scheduler.capacity.node-locality-delay</name>
  <value>40</value>
  <description>
    Number of missed scheduling opportunities after which the CapacityScheduler
    attempts to schedule rack-local containers.
    When setting this parameter, the size of the cluster should be taken into
    account.
    We use 40 as the default value, which is approximately the number of nodes
    in one rack.
    Note, if this value is -1, the locality constraint in the container request
    will be ignored, which disables the delay scheduling.
  </description>
</property>

<property>
  <name>yarn.scheduler.capacity.rack-locality-additional-delay</name>
  <value>-1</value>
  <description>
    Number of additional missed scheduling opportunities over the
    node-locality-delay ones, aft
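
(Not sure if it helps, but one way to double-check which partition the containers actually land on; the RM address and node id below are placeholders:

./yarn cluster --list-node-labels
./yarn node -status <node-id>
curl http://<rm-host>:8088/ws/v1/cluster/scheduler

The first two show the registered labels and the labels on each node, and the scheduler REST response should break queue capacity/usage down per partition, so containers from the default-partition job appearing under pool1 would confirm the issue.)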

[jira] [Commented] (YARN-9720) MR job submitted to a queue with default partition accessing the non-exclusive label resources

2019-08-06 Thread Eric Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901499#comment-16901499
 ] 

Eric Payne commented on YARN-9720:
--

[~gb.ana...@gmail.com], can you please attach a copy of your 
capacity-scheduler.xml to show the queue and label configuration properties?



