[jira] [Updated] (YARN-4628) TopCli to support Application priority

2016-02-07 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4628:
---
Priority: Minor  (was: Major)

> TopCli to support Application priority 
> ---
>
> Key: YARN-4628
> URL: https://issues.apache.org/jira/browse/YARN-4628
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Minor
> Attachments: 0001-YARN-4628.patch
>
>
> Support displaying application priority in TopCLI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4678) Cluster used capacity is > 100 when container reserved

2016-02-07 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15136594#comment-15136594
 ] 

Brahma Reddy Battula commented on YARN-4678:


 *As reserved memory is included in the used resources, the queue capacity goes > 100* (a quick arithmetic sketch follows the log below).
Please let me know your thoughts on this.

2016-02-22 01:15:09,359 | INFO  | ResourceManager Event Processor | completedContainer queue=root usedCapacity=0.958 absoluteUsedCapacity=0.958  *{color:red}used= cluster={color}*  | ParentQueue.java:631
2016-02-22 01:15:09,359 | INFO  | ResourceManager Event Processor | Re-sorting completed queue: root.QueueA stats: QueueA: capacity=0.4, absoluteCapacity=0.4, usedResources=, usedCapacity=1.8229909, absoluteUsedCapacity=0.7291667, numApps=1, numContainers=5 | ParentQueue.java:648
2016-02-22 01:15:09,359 | INFO  | ResourceManager Event Processor | Application attempt appattempt_1456061422894_0003_01 released container container_e05_1456061422894_0003_01_004504 on  *{color:blue}node: host: vm-3:26009{color}*  #containers=1 available= used= with event: FINISHED | CapacityScheduler.java:1452
2016-02-22 01:15:09,359 | INFO  | ResourceManager Event Processor | Trying to fulfill reservation for application application_1456061422894_0003 on node: vm-3:26009 | CapacityScheduler.java:1110
2016-02-22 01:15:09,359 | INFO  | ResourceManager Event Processor | Application application_1456061422894_0003 *{color:blue}unreserved  on node host: vm-3:26009{color}*  #containers=1 available= used=, currently has 0 at priority 20; currentReservation  on node-label= | FiCaSchedulerApp.java:229
2016-02-22 01:15:09,359 | INFO  | ResourceManager Event Processor | container_e05_1456061422894_0003_01_004505 Container Transitioned from NEW to ALLOCATED | RMContainerImpl.java:419
2016-02-22 01:15:09,359 | INFO  | ResourceManager Event Processor | Assigned container container_e05_1456061422894_0003_01_004505 of capacity  on  *{color:blue}host vm-3:26009{color}* , which has 2 containers,  used and  available after allocation | SchedulerNode.java:154
2016-02-22 01:15:09,359 | INFO  | ResourceManager Event Processor | assignedContainer application attempt=appattempt_1456061422894_0003_01 container=Container: [ContainerId: container_e05_1456061422894_0003_01_004505, NodeId: vm-3:26009, NodeHttpAddress: vm-3:26010, Resource: , Priority: 20, Token: null, ] queue=QueueA: capacity=0.4, absoluteCapacity=0.4, usedResources=, usedCapacity=1.8229909, absoluteUsedCapacity=0.7291667, numApps=1, numContainers=5 clusterResource= | LeafQueue.java:1731
2016-02-22 01:15:09,360 | INFO  | ResourceManager Event Processor | container_e05_1456061422894_0003_01_004508 Container Transitioned from *{color:blue}NEW to RESERVED{color}*  | RMContainerImpl.java:419
2016-02-22 01:15:09,360 | INFO  | ResourceManager Event Processor |  *Reserved container  application=application_1456061422894_0003 resource=*  queue=QueueA: capacity=0.4, absoluteCapacity=0.4, usedResources=, usedCapacity=1.8229909, absoluteUsedCapacity=0.7291667, numApps=1, numContainers=5 usedCapacity=1.8229909 absoluteUsedCapacity=0.7291667 used= cluster= | LeafQueue.java:1764
2016-02-22 01:15:09,360 | INFO  | ResourceManager Event Processor | Re-sorting assigned queue: root.QueueA stats: QueueA: capacity=0.4, absoluteCapacity=0.4, usedResources=, usedCapacity=2.2396743, absoluteUsedCapacity=0.8958333, numApps=1, numContainers=6 | ParentQueue.java:585
2016-02-22 01:15:09,360 | INFO  | ResourceManager Event Processor |  *assignedContainer queue=root {color:red}usedCapacity=1.125 absoluteUsedCapacity=1.125 used= cluster={color}*  | ParentQueue.java:465
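
To make the claim concrete, here is a minimal arithmetic sketch (my own, not CapacityScheduler code) of how folding a reserved container into the used resources pushes root's used capacity past 100% on the 24 GB cluster from the scenario. The 23 GB "already allocated" figure is an assumption chosen to match usedCapacity=0.958 in the first log line; the 4 GB reservation matches the map container size:

{code:java}
// Hypothetical arithmetic sketch (not scheduler code): counting a reserved
// container as "used" can push the root queue's usedCapacity above 1.0 (> 100%).
public class ReservedCapacityCheck {
  public static void main(String[] args) {
    long clusterMb   = 24 * 1024;  // 3 NMs x 8 GB, as in the scenario
    long allocatedMb = 23 * 1024;  // assumption: memory already allocated to running containers
    long reservedMb  = 4 * 1024;   // one reserved 4 GB map container

    // Without the reservation the ratio stays <= 1.0 ...
    double usedOnly = (double) allocatedMb / clusterMb;
    // ... but once the reservation is folded into "used", it can exceed 1.0.
    double usedPlusReserved = (double) (allocatedMb + reservedMb) / clusterMb;

    System.out.printf("used=%.3f, used+reserved=%.3f%n",
        usedOnly, usedPlusReserved);  // 0.958 vs 1.125, matching the log above
  }
}
{code}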

> Cluster used capacity is > 100 when container reserved 
> ---
>
> Key: YARN-4678
> URL: https://issues.apache.org/jira/browse/YARN-4678
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>
>  *Scenario:* 
> * Start a cluster with three NMs, each having 8 GB (cluster memory: 24 GB).
> * Configure queues with elasticity and user-limit-factor=10 (a minimal configuration sketch follows this list).
> * Disable preemption.
> * Run two jobs with different priorities in different queues at the same time:
> ** yarn jar hadoop-mapreduce-examples-2.7.2.jar pi -Dyarn.app.priority=LOW -Dmapreduce.job.queuename=QueueA -Dmapreduce.map.memory.mb=4096 -Dyarn.app.mapreduce.am.resource.mb=1536 -Dmapreduce.job.reduce.slowstart.completedmaps=1.0 10 1
> ** yarn jar hadoop-mapreduce-examples-2.7.2.jar pi -Dyarn.app.priority=HIGH -Dmapreduce.job.queuename=QueueB -Dmapreduce.map.memory.mb=4096 -Dyarn.app.mapreduce.am.resource.mb=1536 3 1
> * Observe the cluster used capacity in the RM web UI.
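
For anyone trying to reproduce this, a minimal sketch of the queue setup the scenario implies, written as the equivalent Configuration properties rather than the original capacity-scheduler.xml. Only QueueA's capacity=0.4 is visible in the log; the 40/60 split, class name, and exact elasticity values are my assumptions:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Hedged sketch of the capacity-scheduler settings implied by the scenario.
public class QueueSetupSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Two queues under root; the 40/60 split is an assumption (QueueA=0.4 per the log).
    conf.set("yarn.scheduler.capacity.root.queues", "QueueA,QueueB");
    conf.setFloat("yarn.scheduler.capacity.root.QueueA.capacity", 40f);
    conf.setFloat("yarn.scheduler.capacity.root.QueueB.capacity", 60f);
    // Elasticity: allow each queue to grow up to the full cluster.
    conf.setFloat("yarn.scheduler.capacity.root.QueueA.maximum-capacity", 100f);
    conf.setFloat("yarn.scheduler.capacity.root.QueueB.maximum-capacity", 100f);
    // user-limit-factor=10, as in the scenario.
    conf.setFloat("yarn.scheduler.capacity.root.QueueA.user-limit-factor", 10f);
    conf.setFloat("yarn.scheduler.capacity.root.QueueB.user-limit-factor", 10f);
    // Disable the preemption monitor.
    conf.setBoolean("yarn.resourcemanager.scheduler.monitor.enable", false);
  }
}
{code}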



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (YARN-4678) Cluster used capacity is > 100 when container reserved

2016-02-07 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula moved MAPREDUCE-6629 to YARN-4678:
---

Key: YARN-4678  (was: MAPREDUCE-6629)
Project: Hadoop YARN  (was: Hadoop Map/Reduce)

> Cluster used capacity is > 100 when container reserved 
> ---
>
> Key: YARN-4678
> URL: https://issues.apache.org/jira/browse/YARN-4678
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>
>  *Scenario:* 
> * Start a cluster with three NMs, each having 8 GB (cluster memory: 24 GB).
> * Configure queues with elasticity and user-limit-factor=10.
> * Disable preemption.
> * Run two jobs with different priorities in different queues at the same time:
> ** yarn jar hadoop-mapreduce-examples-2.7.2.jar pi -Dyarn.app.priority=LOW -Dmapreduce.job.queuename=QueueA -Dmapreduce.map.memory.mb=4096 -Dyarn.app.mapreduce.am.resource.mb=1536 -Dmapreduce.job.reduce.slowstart.completedmaps=1.0 10 1
> ** yarn jar hadoop-mapreduce-examples-2.7.2.jar pi -Dyarn.app.priority=HIGH -Dmapreduce.job.queuename=QueueB -Dmapreduce.map.memory.mb=4096 -Dyarn.app.mapreduce.am.resource.mb=1536 3 1
> * Observe the cluster used capacity in the RM web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-475) Remove ApplicationConstants.AM_APP_ATTEMPT_ID_ENV as it is no longer set in an AM's environment

2016-02-07 Thread Johannes Zillmann (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15136201#comment-15136201
 ] 

Johannes Zillmann commented on YARN-475:


Btw, the documentation at 
[http://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html]
 still uses the now non-existent 
{{ApplicationConstants.AM_APP_ATTEMPT_ID_ENV}}!
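
For reference, the replacement pattern the issue description points at (derive the attempt id from the container id set in the AM's environment) looks roughly like the sketch below. This is my own sketch against the 2.x API, not code from the patch or the docs:

{code:java}
import org.apache.hadoop.yarn.api.ApplicationConstants;
import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.util.ConverterUtils;

// Recover the ApplicationAttemptId inside an AM from the CONTAINER_ID
// environment variable, instead of the removed AM_APP_ATTEMPT_ID_ENV.
public class AttemptIdFromEnv {
  public static ApplicationAttemptId currentAttemptId() {
    // The NM sets CONTAINER_ID in every container's environment, including the AM's.
    String containerIdStr =
        System.getenv(ApplicationConstants.Environment.CONTAINER_ID.name());
    ContainerId containerId = ConverterUtils.toContainerId(containerIdStr);
    return containerId.getApplicationAttemptId();
  }
}
{code}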

> Remove ApplicationConstants.AM_APP_ATTEMPT_ID_ENV as it is no longer set in 
> an AM's environment
> ---
>
> Key: YARN-475
> URL: https://issues.apache.org/jira/browse/YARN-475
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Hitesh Shah
>Assignee: Hitesh Shah
> Fix For: 2.1.0-beta
>
> Attachments: YARN-475.1.patch
>
>
> AMs are expected to use ApplicationConstants.AM_CONTAINER_ID_ENV and derive 
> the application attempt id from the container id. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4409) Fix javadoc and checkstyle issues in timelineservice code

2016-02-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15136184#comment-15136184
 ] 

Hadoop QA commented on YARN-4409:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 20s {color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 10s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 6s {color} | {color:green} YARN-2928 passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 8s {color} | {color:green} YARN-2928 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 24s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 11s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 7s {color} | {color:green} YARN-2928 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 12s {color} | {color:red} hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core in YARN-2928 has 2 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 53s {color} | {color:green} YARN-2928 passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 36s {color} | {color:green} YARN-2928 passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 23s {color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 9s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 56s {color} | {color:green} root-jdk1.8.0_72 with JDK v1.8.0_72 generated 0 new + 731 unchanged - 1 fixed = 731 total (was 732) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 9s {color} | {color:green} root in the patch passed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 12s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 18m 8s {color} | {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 1 new + 724 unchanged - 1 fixed = 725 total (was 725) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 12s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 15s {color} | {color:red} root: patch generated 2 new + 770 unchanged - 493 fixed = 772 total (was 1263) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 9m 48s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 4m 29s {color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdk1.8.0_72 with JDK v1.8.0_72 generated 6 new + 94 unchanged - 6 fixed = 100 total (was 100) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 24s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 11m 7s {color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-jdk1.7.0_95 with JDK v1.7.0_95 generated 0 new + 0 unchanged - 3 fixed = 0 total (was 3) {color} |