[jira] [Updated] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4498:
---
Attachment: YARN-4498.branch-2.8.addendum.001.patch

> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.branch-2.8.0001.patch, 
> YARN-4498.branch-2.8.addendum.001.patch, YARN-4498.trunk.addendum.001.patch, 
> apps.xml
>
>
> Currently, node label stats per application are not available through REST, 
> such as the labels currently used by all live containers, the total container 
> stats per label for the app, etc.
> CLI and web UI scenarios will be handled separately.
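For illustration only, one hypothetical shape such per-app stats could take in 
the REST response, sketched as a JAXB DAO. The class and field names below are 
invented for this sketch; they are not the actual patch.

{code}
import java.util.HashMap;
import java.util.Map;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical DAO sketch for per-application node label stats;
// names are illustrative, not the actual YARN-4498 implementation.
@XmlRootElement(name = "appNodeLabelStats")
@XmlAccessorType(XmlAccessType.FIELD)
public class AppNodeLabelStatsInfo {

  // Label expression the application was submitted with.
  @XmlElement
  private String appNodeLabelExpression;

  // Label expression used by the AM container.
  @XmlElement
  private String amNodeLabelExpression;

  // Live containers currently running on each label.
  @XmlElement
  private Map<String, Integer> liveContainersPerLabel = new HashMap<>();

  public AppNodeLabelStatsInfo() {
    // JAXB requires a no-arg constructor.
  }
}
{code}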






[jira] [Comment Edited] (YARN-5694) ZKRMStateStore should always start its verification thread to prevent accidental state store corruption

2016-11-02 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15629737#comment-15629737
 ] 

Jian He edited comment on YARN-5694 at 11/2/16 5:32 PM:


Sorry, I don't quite get it. Why does the active-status thread need to run in 
non-HA mode?
Even in HA mode, it may not be needed, because the Curator leader election 
library can automatically detect whether the RM is still active and send a 
notification. IIUC, there is no need for a separate thread to detect that.


was (Author: jianhe):
Sorry, I don't quite get it. Why does the active-status thread need to run in 
non-HA mode?
Even in HA mode, it may not be needed, because the Curator library can 
automatically detect whether the RM is still active and send a notification. 
IIUC, there is no need for a separate thread to detect that.

> ZKRMStateStore should always start its verification thread to prevent 
> accidental state store corruption
> ---
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
>  Labels: oct16-medium
> Attachments: YARN-5694.001.patch, YARN-5694.002.patch, 
> YARN-5694.003.patch, YARN-5694.004.patch, YARN-5694.004.patch, 
> YARN-5694.005.patch, YARN-5694.006.patch, YARN-5694.007.patch, 
> YARN-5694.branch-2.7.001.patch, YARN-5694.branch-2.7.002.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.






[jira] [Updated] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4498:
---
Attachment: YARN-4498.trunk.addendum.001.patch

> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.branch-2.8.0001.patch, 
> YARN-4498.trunk.addendum.001.patch, apps.xml
>
>
> Currently, node label stats per application are not available through REST, 
> such as the labels currently used by all live containers, the total container 
> stats per label for the app, etc.
> CLI and web UI scenarios will be handled separately.






[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15629646#comment-15629646
 ] 

Hadoop QA commented on YARN-5611:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 41s{color} | {color:orange} root: The patch generated 24 new + 700 unchanged 
- 3 fixed = 724 total (was 703) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 11 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api 
generated 2 new + 123 unchanged - 0 fixed = 125 total (was 123) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
23s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
51s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 30s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}104m 
52s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}231m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests 

[jira] [Updated] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4498:
---
Attachment: (was: YARN-4498.trunk.addendum.001.patch)

> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.branch-2.8.0001.patch, 
> YARN-4498.trunk.addendum.001.patch, apps.xml
>
>
> Currently, node label stats per application are not available through REST, 
> such as the labels currently used by all live containers, the total container 
> stats per label for the app, etc.
> CLI and web UI scenarios will be handled separately.






[jira] [Commented] (YARN-5808) Add gc log options to the yarn daemon script when starting services-api

2016-11-02 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15629793#comment-15629793
 ] 

Billie Rinaldi commented on YARN-5808:
--

Does this mean I should use hadoop_translate_cygwin_path?

> Add gc log options to the yarn daemon script when starting services-api
> ---
>
> Key: YARN-5808
> URL: https://issues.apache.org/jira/browse/YARN-5808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-5808-yarn-native-services.001.patch
>
>
> We need to add the gc log options as below when starting services-api using 
> the yarn-daemon.sh script -
> {code}
> -XX:+PrintGC -Xloggc:$YARN_LOG_DIR/services-api-gc.log -XX:+PrintGCDetails 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
> {code}






[jira] [Updated] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4498:
---
Attachment: YARN-4498.branch-2.8.addendum.001.patch
YARN-4498.trunk.addendum.001.patch

Attaching patches for the same.

> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.branch-2.8.0001.patch, 
> YARN-4498.branch-2.8.addendum.001.patch, YARN-4498.trunk.addendum.001.patch, 
> apps.xml
>
>
> Currently, node label stats per application are not available through REST, 
> such as the labels currently used by all live containers, the total container 
> stats per label for the app, etc.
> CLI and web UI scenarios will be handled separately.






[jira] [Commented] (YARN-5808) Add gc log options to the yarn daemon script when starting services-api

2016-11-02 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15629582#comment-15629582
 ] 

Gour Saha commented on YARN-5808:
-

Changed state to "submit patch". Will wait for the QA report. I opened 
YARN-5817 for the yarn.cmd change.

> Add gc log options to the yarn daemon script when starting services-api
> ---
>
> Key: YARN-5808
> URL: https://issues.apache.org/jira/browse/YARN-5808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-5808-yarn-native-services.001.patch
>
>
> We need to add the gc log options as below when starting services-api using 
> the yarn-daemon.sh script -
> {code}
> -XX:+PrintGC -Xloggc:$YARN_LOG_DIR/services-api-gc.log -XX:+PrintGCDetails 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
> {code}






[jira] [Commented] (YARN-5808) Add gc log options to the yarn daemon script when starting services-api

2016-11-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15629692#comment-15629692
 ] 

Allen Wittenauer commented on YARN-5808:


Also, that path needs to get cygwin'd.

> Add gc log options to the yarn daemon script when starting services-api
> ---
>
> Key: YARN-5808
> URL: https://issues.apache.org/jira/browse/YARN-5808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-5808-yarn-native-services.001.patch
>
>
> We need to add the gc log options as below when starting services-api using 
> the yarn-daemon.sh script -
> {code}
> -XX:+PrintGC -Xloggc:$YARN_LOG_DIR/services-api-gc.log -XX:+PrintGCDetails 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
> {code}






[jira] [Commented] (YARN-5694) ZKRMStateStore should always start its verification thread to prevent accidental state store corruption

2016-11-02 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15629737#comment-15629737
 ] 

Jian He commented on YARN-5694:
---

Sorry, I don't quite get it. Why does the active-status thread need to run in 
non-HA mode?
Even in HA mode, it may not be needed, because the Curator library can 
automatically detect whether the RM is still active and send a notification. 
IIUC, there is no need for a separate thread to detect that.
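For reference, a minimal sketch of the Curator notification mechanism being 
referred to, assuming a plain Curator client and an illustrative election 
path; this is not the RM's actual embedded-elector code:

{code}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.framework.recipes.leader.LeaderLatchListener;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorLeaderNotificationSketch {
  public static void main(String[] args) throws Exception {
    CuratorFramework client = CuratorFrameworkFactory.newClient(
        "localhost:2181", new ExponentialBackoffRetry(1000, 3));
    client.start();

    // Curator pushes leadership changes to us; no polling thread needed.
    LeaderLatch latch = new LeaderLatch(client, "/rm-leader-election");
    latch.addListener(new LeaderLatchListener() {
      @Override
      public void isLeader() {
        System.out.println("Gained leadership: transition to active");
      }

      @Override
      public void notLeader() {
        System.out.println("Lost leadership: transition to standby");
      }
    });
    latch.start();

    Thread.sleep(60000); // stands in for the RM's lifetime
    latch.close();
    client.close();
  }
}
{code}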

> ZKRMStateStore should always start its verification thread to prevent 
> accidental state store corruption
> ---
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
>  Labels: oct16-medium
> Attachments: YARN-5694.001.patch, YARN-5694.002.patch, 
> YARN-5694.003.patch, YARN-5694.004.patch, YARN-5694.004.patch, 
> YARN-5694.005.patch, YARN-5694.006.patch, YARN-5694.007.patch, 
> YARN-5694.branch-2.7.001.patch, YARN-5694.branch-2.7.002.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.






[jira] [Commented] (YARN-5808) Add gc log options to the yarn daemon script when starting services-api

2016-11-02 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15629790#comment-15629790
 ] 

Billie Rinaldi commented on YARN-5808:
--

Nope, I'm just learning how the new scripts work. Love it, btw.

> Add gc log options to the yarn daemon script when starting services-api
> ---
>
> Key: YARN-5808
> URL: https://issues.apache.org/jira/browse/YARN-5808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-5808-yarn-native-services.001.patch
>
>
> We need to add the gc log options as below when starting services-api using 
> the yarn-daemon.sh script -
> {code}
> -XX:+PrintGC -Xloggc:$YARN_LOG_DIR/services-api-gc.log -XX:+PrintGCDetails 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
> {code}






[jira] [Updated] (YARN-5808) Add gc log options to the yarn daemon script when starting services-api

2016-11-02 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-5808:
-
Attachment: YARN-5808-yarn-native-services.002.patch

Attaching a new patch addressing [~aw]'s comments. Thanks for the review, 
Allen!

> Add gc log options to the yarn daemon script when starting services-api
> ---
>
> Key: YARN-5808
> URL: https://issues.apache.org/jira/browse/YARN-5808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-5808-yarn-native-services.001.patch, 
> YARN-5808-yarn-native-services.002.patch
>
>
> We need to add the gc log options as below when starting services-api using 
> the yarn-daemon.sh script -
> {code}
> -XX:+PrintGC -Xloggc:$YARN_LOG_DIR/services-api-gc.log -XX:+PrintGCDetails 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
> {code}






[jira] [Resolved] (YARN-5618) Support for Intra queue preemption framework

2016-11-02 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G resolved YARN-5618.
---
Resolution: Fixed

Closing this JIRA as Done, since this change went in along with YARN-2009.

> Support for Intra queue preemption framework
> 
>
> Key: YARN-5618
> URL: https://issues.apache.org/jira/browse/YARN-5618
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Sunil G
>Assignee: Sunil G
>
> Currently the inter-queue preemption framework covers the basics (configs, 
> scheduling monitor interval, etc.). This new framework will come as a new 
> CandidateSelector policy. Priority and user-limit will be part of this 
> framework.
> This is a tracking JIRA for the framework implementation alone.






[jira] [Updated] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4498:
---
Attachment: (was: YARN-4498.branch-2.8.addendum.001.patch)

> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.branch-2.8.0001.patch, 
> YARN-4498.trunk.addendum.001.patch, apps.xml
>
>
> Currently, node label stats per application are not available through REST, 
> such as the labels currently used by all live containers, the total container 
> stats per label for the app, etc.
> CLI and web UI scenarios will be handled separately.






[jira] [Commented] (YARN-5808) Add gc log options to the yarn daemon script when starting services-api

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15629632#comment-15629632
 ] 

Hadoop QA commented on YARN-5808:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
14s{color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  3m 
14s{color} | {color:red} hadoop-yarn in yarn-native-services failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  3m  
9s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
13s{color} | {color:green} The patch generated 0 new + 90 unchanged - 1 fixed = 
90 total (was 91) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
36s{color} | {color:green} hadoop-yarn in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 11 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5808 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836404/YARN-5808-yarn-native-services.001.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  mvnsite  unit  |
| uname | Linux f9719405415e 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 87f09be |
| mvnsite | 
https://builds.apache.org/job/PreCommit-YARN-Build/13755/artifact/patchprocess/branch-mvnsite-hadoop-yarn-project_hadoop-yarn.txt
 |
| shellcheck | v0.4.4 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-YARN-Build/13755/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13755/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/13755/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-yarn-project/hadoop-yarn U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13755/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add gc log options to the yarn daemon script when starting services-api
> ---
>
> Key: YARN-5808
> URL: https://issues.apache.org/jira/browse/YARN-5808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-5808-yarn-native-services.001.patch
>
>
> We need to add the gc log options as below when starting services-api using 
> the yarn-daemon.sh script -
> {code}
> -XX:+PrintGC -Xloggc:$YARN_LOG_DIR/services-api-gc.log -XX:+PrintGCDetails 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
> {code}






[jira] [Resolved] (YARN-5618) Support for Intra queue preemption framework

2016-11-02 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G resolved YARN-5618.
---
Resolution: Done

> Support for Intra queue preemption framework
> 
>
> Key: YARN-5618
> URL: https://issues.apache.org/jira/browse/YARN-5618
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Sunil G
>Assignee: Sunil G
>
> Currently the inter-queue preemption framework covers the basics (configs, 
> scheduling monitor interval, etc.). This new framework will come as a new 
> CandidateSelector policy. Priority and user-limit will be part of this 
> framework.
> This is a tracking JIRA for the framework implementation alone.






[jira] [Reopened] (YARN-5618) Support for Intra queue preemption framework

2016-11-02 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G reopened YARN-5618:
---

Changing resolution status.

> Support for Intra queue preemption framework
> 
>
> Key: YARN-5618
> URL: https://issues.apache.org/jira/browse/YARN-5618
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Sunil G
>Assignee: Sunil G
>
> Currently the inter-queue preemption framework covers the basics (configs, 
> scheduling monitor interval, etc.). This new framework will come as a new 
> CandidateSelector policy. Priority and user-limit will be part of this 
> framework.
> This is a tracking JIRA for the framework implementation alone.






[jira] [Created] (YARN-5818) Support the Docker Live Restore feature

2016-11-02 Thread Shane Kumpf (JIRA)
Shane Kumpf created YARN-5818:
-

 Summary: Support the Docker Live Restore feature
 Key: YARN-5818
 URL: https://issues.apache.org/jira/browse/YARN-5818
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: yarn
Reporter: Shane Kumpf


Docker 1.12.x introduced the docker [Live 
Restore|https://docs.docker.com/engine/admin/live-restore/] feature, which 
allows docker containers to survive docker daemon restarts/upgrades. Support 
for this feature should be added to YARN to make docker changes and upgrades 
less disruptive to existing containers.






[jira] [Commented] (YARN-5818) Support the Docker Live Restore feature

2016-11-02 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15629691#comment-15629691
 ] 

Shane Kumpf commented on YARN-5818:
---

Did some initial testing here and unfortunately, given that docker is a 
client/server model, when the docker daemon is down for a restart/upgrade, 
client operations fail with an EOF error. Our use of {{docker wait}} for 
retrieving the container's exit code breaks down, as the client operation 
fails during the restart/upgrade.
{code}
An error occurred trying to connect: Post 
http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/c11692777816e44049d610c4ad358a24eefbff707cdbd85c24df3d153c80401e/wait:
 EOF
{code}

The docker community believes this is working as intended and does not plan to 
fix this behavior. It appears we will have to handle retries in c-e 
(container-executor).
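A minimal sketch of the kind of retry that seems to be needed, assuming the 
daemon comes back within a bounded window. It is written in Java purely for 
illustration; per the comment above, the real handling would live in the 
container-executor:

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class DockerWaitRetrySketch {

  // Retry `docker wait` until the daemon is reachable again, then
  // return the container's exit code printed on stdout.
  public static int waitForContainer(String containerId, int maxAttempts)
      throws Exception {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      Process p = new ProcessBuilder("docker", "wait", containerId).start();
      try (BufferedReader out = new BufferedReader(
          new InputStreamReader(p.getInputStream()))) {
        String line = out.readLine();
        if (p.waitFor() == 0 && line != null) {
          return Integer.parseInt(line.trim());
        }
      }
      // The client fails with EOF while the daemon restarts; back off.
      Thread.sleep(1000L * attempt);
    }
    throw new RuntimeException(
        "docker wait failed after " + maxAttempts + " attempts");
  }
}
{code}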

> Support the Docker Live Restore feature
> ---
>
> Key: YARN-5818
> URL: https://issues.apache.org/jira/browse/YARN-5818
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>
> Docker 1.12.x introduced the docker [Live 
> Restore|https://docs.docker.com/engine/admin/live-restore/] feature, which 
> allows docker containers to survive docker daemon restarts/upgrades. Support 
> for this feature should be added to YARN to make docker changes and upgrades 
> less disruptive to existing containers.






[jira] [Commented] (YARN-5808) Add gc log options to the yarn daemon script when starting services-api

2016-11-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15629688#comment-15629688
 ] 

Allen Wittenauer commented on YARN-5808:


Is there a reason this patch is directly appending instead of using 
hadoop_add_param'ing the slider.libdir?

> Add gc log options to the yarn daemon script when starting services-api
> ---
>
> Key: YARN-5808
> URL: https://issues.apache.org/jira/browse/YARN-5808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-5808-yarn-native-services.001.patch
>
>
> We need to add the gc log options as below when starting services-api using 
> the yarn-daemon.sh script -
> {code}
> -XX:+PrintGC -Xloggc:$YARN_LOG_DIR/services-api-gc.log -XX:+PrintGCDetails 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
> {code}






[jira] [Commented] (YARN-5694) ZKRMStateStore should always start its verification thread to prevent accidental state store corruption

2016-11-02 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15629870#comment-15629870
 ] 

Daniel Templeton commented on YARN-5694:


The concern is that the leader election and state store can be configured to 
use different ZK instances in HA mode.  In that case, the state store still has 
to protect itself.  In non-HA, it may still be possible for a second RM to 
start using the same cluster ID and same ZK instance, which would corrupt the 
state store.  By having the state store be always vigilant, we protect 
ourselves from state store corruption in all cases.
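A hedged sketch of that always-vigilant behavior, assuming a Curator client 
and an illustrative fencing-node path; this is not the actual 
{{VerifyActiveStatusThread}} code:

{code}
import java.util.Arrays;
import org.apache.curator.framework.CuratorFramework;

// Illustrative only: periodically verify this RM still owns the fencing
// node; if another RM overwrote it, stop touching the state store.
public class VerifyActiveStatusLoop implements Runnable {
  private final CuratorFramework zkClient;
  private final String fencingNodePath; // hypothetical path
  private final byte[] myFencingData;

  public VerifyActiveStatusLoop(CuratorFramework zkClient,
      String fencingNodePath, byte[] myFencingData) {
    this.zkClient = zkClient;
    this.fencingNodePath = fencingNodePath;
    this.myFencingData = myFencingData;
  }

  @Override
  public void run() {
    while (!Thread.currentThread().isInterrupted()) {
      try {
        byte[] current = zkClient.getData().forPath(fencingNodePath);
        if (!Arrays.equals(current, myFencingData)) {
          // Another RM claimed the store, even if no leadership-loss
          // notification has arrived (or none ever will, e.g. when the
          // elector and the store use different ZK instances).
          throw new IllegalStateException(
              "Fencing node is owned by another ResourceManager");
        }
        Thread.sleep(1000); // verification interval, illustrative
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
      } catch (Exception e) {
        // Treat any verification failure as loss of active status; the
        // real store would fail fast / transition to standby here.
        throw new RuntimeException(e);
      }
    }
  }
}
{code}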

> ZKRMStateStore should always start its verification thread to prevent 
> accidental state store corruption
> ---
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
>  Labels: oct16-medium
> Attachments: YARN-5694.001.patch, YARN-5694.002.patch, 
> YARN-5694.003.patch, YARN-5694.004.patch, YARN-5694.004.patch, 
> YARN-5694.005.patch, YARN-5694.006.patch, YARN-5694.007.patch, 
> YARN-5694.branch-2.7.001.patch, YARN-5694.branch-2.7.002.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.






[jira] [Created] (YARN-5817) Make yarn.cmd changes required for slider and servicesapi

2016-11-02 Thread Gour Saha (JIRA)
Gour Saha created YARN-5817:
---

 Summary: Make yarn.cmd changes required for slider and servicesapi
 Key: YARN-5817
 URL: https://issues.apache.org/jira/browse/YARN-5817
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Gour Saha
 Fix For: yarn-native-services


As per YARN-5808 and other changes made to the yarn script, there are probably 
some corresponding changes required in 
_hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd_. We need to identify and make 
those changes.






[jira] [Created] (YARN-5819) Verify preemption works between applications in the same leaf queue

2016-11-02 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created YARN-5819:
--

 Summary: Verify preemption works between applications in the same 
leaf queue
 Key: YARN-5819
 URL: https://issues.apache.org/jira/browse/YARN-5819
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: fairscheduler
Affects Versions: 2.9.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla


JIRA to track the unit test(s) verifying preemption between applications in 
the same queue. Note that this can only be fairshare preemption.






[jira] [Updated] (YARN-5336) Put in some limit for accepting key-values in hbase writer

2016-11-02 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-5336:
-
Assignee: Vrushali C  (was: Haibo Chen)

> Put in some limit for accepting key-values in hbase writer
> --
>
> Key: YARN-5336
> URL: https://issues.apache.org/jira/browse/YARN-5336
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: YARN-5355
>
> As recommended by [~jrottinghuis], we need to add a limit (default and 
> configurable) on the key-values accepted for writing to the backend.
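One possible shape for such a limit, sketched under the assumption of a single 
configurable byte-size cap. The property name below is hypothetical, not an 
actual YARN configuration key:

{code}
import org.apache.hadoop.conf.Configuration;

// Sketch of a default-but-configurable size limit checked before a
// key-value is handed to the backend writer.
public class KeyValueLimitSketch {

  // Hypothetical property name, for illustration only.
  public static final String MAX_VALUE_BYTES_KEY =
      "yarn.timeline-service.writer.max-value-bytes";
  public static final long DEFAULT_MAX_VALUE_BYTES = 1024 * 1024; // 1 MB

  private final long maxValueBytes;

  public KeyValueLimitSketch(Configuration conf) {
    this.maxValueBytes = conf.getLong(MAX_VALUE_BYTES_KEY,
        DEFAULT_MAX_VALUE_BYTES);
  }

  // Returns true if the value is small enough to write; oversized
  // values would be skipped (or truncated) and logged instead.
  public boolean accept(byte[] value) {
    return value == null || value.length <= maxValueBytes;
  }
}
{code}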






[jira] [Updated] (YARN-5815) Random failure of TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5815:
---
Summary: Random failure of 
TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart  
(was: Random failure 
TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart)

> Random failure of 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> 
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5815.0001.patch, YARN-5815.0002.patch, 
> YARN-5815.0003.patch
>
>
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
> {noformat}






[jira] [Commented] (YARN-5336) Put in some limit for accepting key-values in hbase writer

2016-11-02 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15630097#comment-15630097
 ] 

Haibo Chen commented on YARN-5336:
--

Hey, [~vrushalic]. I have not gotten a chance to work on this. Assigning it to 
you.

> Put in some limit for accepting key-values in hbase writer
> --
>
> Key: YARN-5336
> URL: https://issues.apache.org/jira/browse/YARN-5336
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Haibo Chen
>  Labels: YARN-5355
>
> As recommended by [~jrottinghuis], we need to add a limit (default and 
> configurable) on the key-values accepted for writing to the backend.






[jira] [Created] (YARN-5820) yarn node CLI help should be clearer

2016-11-02 Thread Grant Sohn (JIRA)
Grant Sohn created YARN-5820:


 Summary: yarn node CLI help should be clearer
 Key: YARN-5820
 URL: https://issues.apache.org/jira/browse/YARN-5820
 Project: Hadoop YARN
  Issue Type: Bug
  Components: client
Affects Versions: 2.6.0
Reporter: Grant Sohn
Priority: Trivial


Current message is:
{noformat}
usage: node
 -all               Works with -list to list all nodes.
 -list              List all running nodes. Supports optional use of
                    -states to filter nodes based on node state, all -all
                    to list all nodes.
 -states <States>   Works with -list to filter nodes based on input
                    comma-separated list of node states.
 -status <NodeId>   Prints the status report of the node.
{noformat}

It should be either this:
{noformat}
usage: yarn node [-list [-states <States>|-all] | -status <NodeId>]

 -all               Works with -list to list all nodes.
 -list              List all running nodes. Supports optional use of
                    -states to filter nodes based on node state, all -all
                    to list all nodes.
 -states <States>   Works with -list to filter nodes based on input
                    comma-separated list of node states.
 -status <NodeId>   Prints the status report of the node.
{noformat}

or this:
{noformat}
usage: yarn node -list [-states <States>|-all]
       yarn node -status <NodeId>

 -all               Works with -list to list all nodes.
 -list              List all running nodes. Supports optional use of
                    -states to filter nodes based on node state, all -all
                    to list all nodes.
 -states <States>   Works with -list to filter nodes based on input
                    comma-separated list of node states.
 -status <NodeId>   Prints the status report of the node.
{noformat}

The latter is the least ambiguous.






[jira] [Commented] (YARN-5815) Random failure of TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15630080#comment-15630080
 ] 

Varun Saxena commented on YARN-5815:


Thanks [~bibinchundatt] for the latest patch.
LGTM. Will commit it shortly.

Apologies for missing it during review of YARN-5773.

> Random failure of 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> 
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5815.0001.patch, YARN-5815.0002.patch, 
> YARN-5815.0003.patch
>
>
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
> {noformat}






[jira] [Commented] (YARN-5336) Put in some limit for accepting key-values in hbase writer

2016-11-02 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15630081#comment-15630081
 ] 

Vrushali C commented on YARN-5336:
--

Hi [~haibochen],
Wanted to check in: are you actively working on this? If not, I am actually 
looking at a related thing and wanted to put up a patch for this.

thanks
Vrushali

> Put in some limit for accepting key-values in hbase writer
> --
>
> Key: YARN-5336
> URL: https://issues.apache.org/jira/browse/YARN-5336
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Haibo Chen
>  Labels: YARN-5355
>
> As recommended by [~jrottinghuis], we need to add a limit (default and 
> configurable) on the key-values accepted for writing to the backend.






[jira] [Commented] (YARN-5808) Add gc log options to the yarn daemon script when starting services-api

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15630062#comment-15630062
 ] 

Hadoop QA commented on YARN-5808:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
56s{color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  3m 
11s{color} | {color:red} hadoop-yarn in yarn-native-services failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  3m  
6s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
12s{color} | {color:green} The patch generated 0 new + 90 unchanged - 1 fixed = 
90 total (was 91) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
11s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
36s{color} | {color:green} hadoop-yarn in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
20s{color} | {color:red} The patch generated 11 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5808 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836633/YARN-5808-yarn-native-services.002.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  mvnsite  unit  |
| uname | Linux f8e177842387 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 87f09be |
| mvnsite | 
https://builds.apache.org/job/PreCommit-YARN-Build/13757/artifact/patchprocess/branch-mvnsite-hadoop-yarn-project_hadoop-yarn.txt
 |
| shellcheck | v0.4.4 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-YARN-Build/13757/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13757/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/13757/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-yarn-project/hadoop-yarn U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13757/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add gc log options to the yarn daemon script when starting services-api
> ---
>
> Key: YARN-5808
> URL: https://issues.apache.org/jira/browse/YARN-5808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-5808-yarn-native-services.001.patch, 
> YARN-5808-yarn-native-services.002.patch
>
>
> We need to add the gc log options as below when starting services-api using 
> the yarn-daemon.sh script -
> {code}
> -XX:+PrintGC -Xloggc:$YARN_LOG_DIR/services-api-gc.log -XX:+PrintGCDetails 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
> {code}






[jira] [Commented] (YARN-5815) Random failure of TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15630131#comment-15630131
 ] 

Hudson commented on YARN-5815:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10754 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10754/])
YARN-5815. Random failure of (varunsaxena: rev 
377919010b687dbf95f62082201cf91f5a7a2318)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationPriority.java


> Random failure of 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> 
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5815.0001.patch, YARN-5815.0002.patch, 
> YARN-5815.0003.patch
>
>
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
> {noformat}






[jira] [Commented] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15630348#comment-15630348
 ] 

Hadoop QA commented on YARN-4498:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 7s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 44 unchanged - 1 fixed = 45 total (was 45) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 45s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_111 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5af2af1 |
| JIRA Issue | YARN-4498 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-5783) Unit tests to verify the identification of starved applications

2016-11-02 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15630385#comment-15630385
 ] 

Daniel Templeton commented on YARN-5783:


Thanks, [~kasha].  I think we're close.

* Does {{totalAppsEverAdded()}} need to be public, or would default privacy do? 
 Same for {{numStarvedApps()}}.
* Can the {{resourceManager.stop()}} call in {{TestFSAppStarvation.tearDown()}} 
throw an exception that would prevent the deletion of the {{ALLOC_FILE}}? (A 
try/finally sketch follows this list.)
* Is it worth adding a test to make sure that the same app can be starved 
multiple times in a row?
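
A minimal try/finally sketch for the tearDown question above (JUnit 4 style; 
the {{resourceManager}} and {{ALLOC_FILE}} names are taken from the comment, 
the rest is an assumption, not the actual test code):
{code}
@After
public void tearDown() {
  try {
    if (resourceManager != null) {
      resourceManager.stop();
    }
  } finally {
    // Runs even if stop() throws, so the alloc file is always removed.
    if (ALLOC_FILE != null && ALLOC_FILE.exists()) {
      ALLOC_FILE.delete();
    }
  }
}
{code}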


> Unit tests to verify the identification of starved applications
> ---
>
> Key: YARN-5783
> URL: https://issues.apache.org/jira/browse/YARN-5783
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: oct16-medium
> Attachments: yarn-5783.YARN-4752.1.patch, 
> yarn-5783.YARN-4752.2.patch, yarn-5783.YARN-4752.3.patch, 
> yarn-5783.YARN-4752.4.patch
>
>
> JIRA to track unit tests to verify the identification of starved 
> applications. An application should be marked starved only when:
> # Cluster allocation is over the configured threshold for preemption.
> # Preemption is enabled for a queue and any of the following:
> ## The queue is under its minshare for longer than minsharePreemptionTimeout
> ## One of the queue’s applications is under its fairshare for longer than 
> fairsharePreemptionTimeout.
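
A hedged pseudocode restatement of the starvation condition above (names are 
descriptive, not actual fields in the scheduler):
{code}
boolean starved = clusterAllocationOverPreemptionThreshold
    && queuePreemptionEnabled
    && (queueUnderMinShareFor(minSharePreemptionTimeout)
        || appUnderFairShareFor(fairSharePreemptionTimeout));
{code}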



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5802) Application priority updates add pending apps to running ordering policy

2016-11-02 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5802:
---
Attachment: YARN-5802.0006.patch

> Application priority updates add pending apps to running ordering policy
> 
>
> Key: YARN-5802
> URL: https://issues.apache.org/jira/browse/YARN-5802
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5802.0001.patch, YARN-5802.0002.patch, 
> YARN-5802.0003.patch, YARN-5802.0004.patch, YARN-5802.0005.patch, 
> YARN-5802.0006.patch
>
>
> {{LeafQueue#updateApplicationPriority}}
> {code}
>  getOrderingPolicy().removeSchedulableEntity(attempt);
>   // Update new priority in SchedulerApplication
>   attempt.setPriority(newAppPriority);
>   getOrderingPolicy().addSchedulableEntity(attempt);
> {code}
> We should add the attempt back to the ordering policy only when it was 
> present in the first place. Otherwise, application attempt removal will try 
> to iterate over a killed application still present in the pending ordering 
> policy, which can cause the RM to crash.
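
A minimal sketch of the guarded re-add, assuming 
{{OrderingPolicy#removeSchedulableEntity}} reports whether the entity was 
present (the boolean return is an assumption, not necessarily the committed fix):
{code}
// Re-add the attempt only if it was actually in the running ordering policy.
boolean wasRunning = getOrderingPolicy().removeSchedulableEntity(attempt);
// Update new priority in SchedulerApplication
attempt.setPriority(newAppPriority);
if (wasRunning) {
  getOrderingPolicy().addSchedulableEntity(attempt);
}
{code}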



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5802) Application priority updates add pending apps to running ordering policy

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15628725#comment-15628725
 ] 

Hadoop QA commented on YARN-5802:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 125 unchanged - 0 fixed = 126 total (was 125) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 37m  0s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5802 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836537/YARN-5802.0006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 85f9c2b378b3 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cb5cc0d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13750/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13750/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13750/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 

[jira] [Updated] (YARN-5811) ConfigurationProvider must implement Closeable interface

2016-11-02 Thread Denis Bolshakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Bolshakov updated YARN-5811:
--
  Labels: newbie  (was: )
Priority: Minor  (was: Major)

> ConfigurationProvider must implement Closeable interface
> 
>
> Key: YARN-5811
> URL: https://issues.apache.org/jira/browse/YARN-5811
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Denis Bolshakov
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-5811.1.patch, YARN-5811.3.patch, YARN-5811.5.patch, 
> YARN-5811.6.patch
>
>
> ConfigurationProvider declares a close method; it would be nice if the class 
> implemented the Closeable interface, allowing use of `try with resources`



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5802) Application priority updates add pending apps to running ordering policy

2016-11-02 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15628578#comment-15628578
 ] 

Bibin A Chundatt commented on YARN-5802:


Thank you [~sunilg] for the review comments.
Updated the patch to handle all review comments.

> Application priority updates add pending apps to running ordering policy
> 
>
> Key: YARN-5802
> URL: https://issues.apache.org/jira/browse/YARN-5802
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5802.0001.patch, YARN-5802.0002.patch, 
> YARN-5802.0003.patch, YARN-5802.0004.patch, YARN-5802.0005.patch, 
> YARN-5802.0006.patch
>
>
> {{LeafQueue#updateApplicationPriority}}
> {code}
>  getOrderingPolicy().removeSchedulableEntity(attempt);
>   // Update new priority in SchedulerApplication
>   attempt.setPriority(newAppPriority);
>   getOrderingPolicy().addSchedulableEntity(attempt);
> {code}
> We should add the attempt back to the ordering policy only when it was 
> present in the first place. Otherwise, application attempt removal will try 
> to iterate over a killed application still present in the pending ordering 
> policy, which can cause the RM to crash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5276) print more info when event queue is blocked

2016-11-02 Thread sandflee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15628654#comment-15628654
 ] 

sandflee commented on YARN-5276:


Thanks [~miklos.szeg...@cloudera.com] for your detailed reply; it seems there 
is not much need to add a UT :(

> print more info when event queue is blocked
> ---
>
> Key: YARN-5276
> URL: https://issues.apache.org/jira/browse/YARN-5276
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, resourcemanager
>Reporter: sandflee
>Assignee: sandflee
>  Labels: oct16-easy
> Attachments: YARN-5276.01.patch, YARN-5276.02.patch, 
> YARN-5276.03.patch, YARN-5276.04.patch
>
>
> We now see logs like "Size of event-queue is 498000, Size of event-queue is 
> 499000", and it is difficult to know which event type flooded the queue.
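
A hedged sketch of the kind of diagnostic the summary asks for (Java 8 stream 
fragment, assuming the dispatcher's {{eventQueue}} holds events exposing 
{{getType()}}; not the committed patch):
{code}
// Log per-type counts so the flooding event type is visible, not just the size.
Map<String, Long> countsByType = eventQueue.stream()
    .collect(Collectors.groupingBy(e -> e.getType().toString(),
        Collectors.counting()));
LOG.info("Size of event-queue is " + eventQueue.size()
    + "; counts by type: " + countsByType);
{code}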



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5336) Put in some limit for accepting key-values in hbase writer

2016-11-02 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15630472#comment-15630472
 ] 

Vrushali C edited comment on YARN-5336 at 11/2/16 8:57 PM:
---

Some other interesting points to keep in mind:

As per https://hbase.apache.org/book.html#table_schema_rules_of_thumb , we 
should aim to have cells no larger than 10 MB, or 50 MB if we use mob. 
Otherwise, consider storing your cell data in HDFS and store a pointer to the 
data in HBase.

Aim to have regions sized between 10 and 50 GB.

Aim to have cells no larger than 10 MB, or 50 MB if you use mob. Otherwise, 
consider storing your cell data in HDFS and store a pointer to the data in 
HBase.

A typical schema has between 1 and 3 column families per table. HBase tables 
should not be designed to mimic RDBMS tables. Around 50-100 regions is a good 
number for a table with 1 or 2 column families. Remember that a region is a 
contiguous segment of a column family.

Keep your column family names as short as possible. The column family names are 
stored for every value (ignoring prefix encoding). They should not be 
self-documenting and descriptive like in a typical RDBMS.

About Medium sized objects (https://hbase.apache.org/book.html#hbase_mob)

While HBase can technically handle binary objects with cells that are larger 
than 100 KB in size, HBase’s normal read and write paths are optimized for 
values smaller than 100KB in size. When HBase deals with large numbers of 
objects over this threshold, referred to here as medium objects, or MOBs, 
performance is degraded due to write amplification caused by splits and 
compactions. When using MOBs, ideally your objects will be between 100KB and 
10MB. HBase FIX_VERSION_NUMBER adds support for better managing large numbers 
of MOBs while maintaining performance, consistency, and low operational 
overhead. MOB support is provided by the work done in HBASE-11339. To take 
advantage of MOB, you need to use HFile version 3. Optionally, configure the 
MOB file reader’s cache settings for each RegionServer (see Configuring the MOB 
Cache), then configure specific columns to hold MOB data. Client code does not 
need to change to take advantage of HBase MOB support. The feature is 
transparent to the client.
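
A hedged Java sketch of enabling MOB on a column family, per the passage 
above (the {{setMobEnabled}}/{{setMobThreshold}} setters come from 
HBASE-11339; the family name is illustrative):
{code}
// Values in this family larger than the threshold take the MOB read/write path.
HColumnDescriptor family = new HColumnDescriptor("i");
family.setMobEnabled(true);
family.setMobThreshold(102400L); // 100KB, the book's suggested lower bound
{code}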




was (Author: vrushalic):
Some other interesting points to keep in mind:

As per https://hbase.apache.org/book.html#table_schema_rules_of_thumb , we 
should aim to have cells no larger than 10 MB, or 50 MB if we use mob. 
Otherwise, consider storing your cell data in HDFS and store a pointer to the 
data in HBase.

Aim to have regions sized between 10 and 50 GB.

Aim to have cells no larger than 10 MB, or 50 MB if you use mob. Otherwise, 
consider storing your cell data in HDFS and store a pointer to the data in 
HBase.

A typical schema has between 1 and 3 column families per table. HBase tables 
should not be designed to mimic RDBMS tables.

Around 50-100 regions is a good number for a table with 1 or 2 column families. 
Remember that a region is a contiguous segment of a column family.

Keep your column family names as short as possible. The column family names are 
stored for every value (ignoring prefix encoding). They should not be 
self-documenting and descriptive like in a typical RDBMS.

About Medium sized objects (https://hbase.apache.org/book.html#hbase_mob)

While HBase can technically handle binary objects with cells that are larger 
than 100 KB in size, HBase’s normal read and write paths are optimized for 
values smaller than 100KB in size. When HBase deals with large numbers of 
objects over this threshold, referred to here as medium objects, or MOBs, 
performance is degraded due to write amplification caused by splits and 
compactions. When using MOBs, ideally your objects will be between 100KB and 
10MB. HBase FIX_VERSION_NUMBER adds support for better managing large numbers 
of MOBs while maintaining performance, consistency, and low operational 
overhead. MOB support is provided by the work done in HBASE-11339. To take 
advantage of MOB, you need to use HFile version 3. Optionally, configure the 
MOB file reader’s cache settings for each RegionServer (see Configuring the MOB 
Cache), then configure specific columns to hold MOB data. Client code does not 
need to change to take advantage of HBase MOB support. The feature is 
transparent to the client.



> Put in some limit for accepting key-values in hbase writer
> --
>
> Key: YARN-5336
> URL: https://issues.apache.org/jira/browse/YARN-5336
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: YARN-5355
>
> As recommended by [~jrottinghuis] , need to add in some limit (default and 
> configurable) for accepting key values to be written to the backend.

[jira] [Updated] (YARN-5780) [YARN-5079] Allowing YARN native services to post data to timeline service V.2

2016-11-02 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5780:

Summary: [YARN-5079] Allowing YARN native services to post data to timeline 
service V.2  (was: [YARN native service] Allowing YARN native services to post 
data to timeline service V.2)

> [YARN-5079] Allowing YARN native services to post data to timeline service V.2
> --
>
> Key: YARN-5780
> URL: https://issues.apache.org/jira/browse/YARN-5780
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Li Lu
>Assignee: Vrushali C
> Attachments: YARN-5780.poc.patch
>
>
> The basic end-to-end workflow of timeline service v.2 has been merged into 
> trunk. In YARN native services, we would like to post some service-specific 
> data to timeline v.2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5336) Put in some limit for accepting key-values in hbase writer

2016-11-02 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15630472#comment-15630472
 ] 

Vrushali C commented on YARN-5336:
--

Some other interesting points to keep in mind:

As per https://hbase.apache.org/book.html#table_schema_rules_of_thumb , we 
should aim to have cells no larger than 10 MB, or 50 MB if we use mob. 
Otherwise, consider storing your cell data in HDFS and store a pointer to the 
data in HBase.

Aim to have regions sized between 10 and 50 GB.

Aim to have cells no larger than 10 MB, or 50 MB if you use mob. Otherwise, 
consider storing your cell data in HDFS and store a pointer to the data in 
HBase.

A typical schema has between 1 and 3 column families per table. HBase tables 
should not be designed to mimic RDBMS tables.

Around 50-100 regions is a good number for a table with 1 or 2 column families. 
Remember that a region is a contiguous segment of a column family.

Keep your column family names as short as possible. The column family names are 
stored for every value (ignoring prefix encoding). They should not be 
self-documenting and descriptive like in a typical RDBMS.

About Medium sized objects (https://hbase.apache.org/book.html#hbase_mob)

While HBase can technically handle binary objects with cells that are larger 
than 100 KB in size, HBase’s normal read and write paths are optimized for 
values smaller than 100KB in size. When HBase deals with large numbers of 
objects over this threshold, referred to here as medium objects, or MOBs, 
performance is degraded due to write amplification caused by splits and 
compactions. When using MOBs, ideally your objects will be between 100KB and 
10MB. HBase FIX_VERSION_NUMBER adds support for better managing large numbers 
of MOBs while maintaining performance, consistency, and low operational 
overhead. MOB support is provided by the work done in HBASE-11339. To take 
advantage of MOB, you need to use HFile version 3. Optionally, configure the 
MOB file reader’s cache settings for each RegionServer (see Configuring the MOB 
Cache), then configure specific columns to hold MOB data. Client code does not 
need to change to take advantage of HBase MOB support. The feature is 
transparent to the client.



> Put in some limit for accepting key-values in hbase writer
> --
>
> Key: YARN-5336
> URL: https://issues.apache.org/jira/browse/YARN-5336
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: YARN-5355
>
> As recommended by [~jrottinghuis] , need to add in some limit (default and 
> configurable) for accepting key values to be written to the backend.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2995) Enhance UI to show cluster resource utilization of various container types

2016-11-02 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-2995:
-
Attachment: all-nodes.png

Attaching new screenshot after some final fixes.

> Enhance UI to show cluster resource utilization of various container types
> --
>
> Key: YARN-2995
> URL: https://issues.apache.org/jira/browse/YARN-2995
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sriram Rao
>Assignee: Konstantinos Karanasos
> Attachments: YARN-2995.001.patch, YARN-2995.002.patch, 
> YARN-2995.003.patch, all-nodes.png, all-nodes.png, opp-container.png
>
>
> This JIRA proposes to extend the Resource manager UI to show how cluster 
> resources are being used to run *guaranteed start* and *queueable* 
> containers.  For example, a graph that shows over time, the fraction of  
> running containers that are *guaranteed start* and the fraction of running 
> containers that are *queueable*. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2995) Enhance UI to show cluster resource utilization of various container types

2016-11-02 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-2995:
-
Attachment: YARN-2995.004.patch

Adding new version of the patch.
Rebased against trunk, fixed some more issues, and addressed the unit test 
failures.

Note that there is a javadoc issue regarding using '_' as an identifier 
(related to Java 8). I did not fix that, because it is actually used in 
multiple classes in the Web UI, and I followed the same style as the rest of 
the code. I assume this should be fixed in all places at some point.

> Enhance UI to show cluster resource utilization of various container types
> --
>
> Key: YARN-2995
> URL: https://issues.apache.org/jira/browse/YARN-2995
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sriram Rao
>Assignee: Konstantinos Karanasos
> Attachments: YARN-2995.001.patch, YARN-2995.002.patch, 
> YARN-2995.003.patch, YARN-2995.004.patch, all-nodes.png, all-nodes.png, 
> opp-container.png
>
>
> This JIRA proposes to extend the Resource manager UI to show how cluster 
> resources are being used to run *guaranteed start* and *queueable* 
> containers.  For example, a graph that shows over time, the fraction of  
> running containers that are *guaranteed start* and the fraction of running 
> containers that are *queueable*. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5774) MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set yarn.scheduler.minimum-allocation-mb to 0.

2016-11-02 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15630486#comment-15630486
 ] 

Daniel Templeton commented on YARN-5774:


Thanks, [~yufeigu].  In addition to [~miklos.szeg...@cloudera.com]'s comments, 
I have a couple of minor points:

* {{AbstractYarnScheduler.normalizeRequest(List<...> ask...)}} should be 
{{normalizeRequests()}} to avoid confusion.
* While you're in there, you may as well correct the typo (hte/the) in the 
javadoc for {{ResourceCalculator.normalize()}}
* To add onto [~miklos.szeg...@cloudera.com]'s comments, 
{{ResourceCalculator.normalize()}} should check memory and CPU independently 
(a rough sketch follows this list).  Also, I think you can leave out the 0 
check in {{SchedulerUtils.normalizeRequest()}} since it's redundant.
* Is throwing an exception the right thing to do if the min allocation is 0?  
Looks to me like that exception may be pretty hard to diagnose.
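
A rough sketch of the independent memory/CPU check suggested above (the 
{{Resource}} getters are from the YARN API; {{roundUp}} stands in for the 
calculator's rounding helper, and this is not the actual patch):
{code}
// Normalize memory and vcores separately, so a zero step factor for one
// resource cannot zero out the other.
long mem = Math.max(
    roundUp(ask.getMemorySize(), stepFactor.getMemorySize()),
    minimumResource.getMemorySize());
int vcores = Math.max(
    roundUp(ask.getVirtualCores(), stepFactor.getVirtualCores()),
    minimumResource.getVirtualCores());
return Resource.newInstance(mem, vcores);
{code}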

> MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set 
> yarn.scheduler.minimum-allocation-mb to 0.
> 
>
> Key: YARN-5774
> URL: https://issues.apache.org/jira/browse/YARN-5774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>  Labels: oct16-easy
> Attachments: YARN-5774.001.patch, YARN-5774.002.patch, 
> YARN-5774.003.patch
>
>
> MR Job stuck in ACCEPTED status without any progress in Fair Scheduler 
> because there is no resource request for the AM. This happened when you 
> configure {{yarn.scheduler.minimum-allocation-mb}} to zero.
> The problem is in the code used by both Capacity Scheduler and Fair 
> Scheduler. {{scheduler.increment-allocation-mb}} is a concept in FS, but not 
> CS. So the common code in class RMAppManager passes the 
> {{yarn.scheduler.minimum-allocation-mb}} as incremental one because there is 
> no incremental one for CS when it tried to normalize the resource requests.
> {code}
>  SchedulerUtils.normalizeRequest(amReq, scheduler.getResourceCalculator(),
>   scheduler.getClusterResource(),
>   scheduler.getMinimumResourceCapability(),
>   scheduler.getMaximumResourceCapability(),
>   scheduler.getMinimumResourceCapability());  --> incrementResource 
> should be passed here.
> {code}
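
A hedged sketch of the corrected call described above (the increment accessor 
name is an assumption, not an actual scheduler API; FS would supply its 
increment allocation here, while CS would keep passing its minimum):
{code}
SchedulerUtils.normalizeRequest(amReq, scheduler.getResourceCalculator(),
    scheduler.getClusterResource(),
    scheduler.getMinimumResourceCapability(),
    scheduler.getMaximumResourceCapability(),
    scheduler.getIncrementResourceCapability()); // hypothetical accessor
{code}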



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5552) Add Builder methods for common yarn API records

2016-11-02 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15630820#comment-15630820
 ] 

Wangda Tan commented on YARN-5552:
--

[~Tao Jie] could you check the javadoc warnings as well? Our policy is to make 
sure no new javadoc warnings are added by a committed patch.

Thanks,

> Add Builder methods for common yarn API records
> ---
>
> Key: YARN-5552
> URL: https://issues.apache.org/jira/browse/YARN-5552
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Tao Jie
> Attachments: YARN-5552.000.patch, YARN-5552.001.patch, 
> YARN-5552.002.patch, YARN-5552.003.patch, YARN-5552.004.patch, 
> YARN-5552.005.patch, YARN-5552.006.patch, YARN-5552.007.patch, 
> YARN-5552.008.patch
>
>
> Currently yarn API records such as ResourceRequest, AllocateRequest/Respone 
> as well as AMRMClient.ContainerRequest have multiple constructors / 
> newInstance methods. This makes it very difficult to add new fields to these 
> records.
> It would probably be better if we had Builder classes for many of these 
> records, which would make evolution of these records a bit easier.
> (suggested by [~kasha])
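
A hedged illustration of the builder style being proposed (the method names 
are examples of the pattern, not a committed API):
{code}
ResourceRequest req = ResourceRequest.newBuilder()
    .priority(Priority.newInstance(0))
    .resourceName(ResourceRequest.ANY)
    .capability(Resource.newInstance(1024, 1))
    .numContainers(1)
    .build();
{code}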



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5797) Add metrics to the node manager for cleaning the PUBLIC and PRIVATE caches

2016-11-02 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15630595#comment-15630595
 ] 

Chris Trezzo commented on YARN-5797:


Note that the patch exposes the following metrics about the cache cleanup:
# cacheSizeBeforeClean - The local cache size (public and private) before clean 
in Bytes
# totalBytesDeleted - # of total bytes deleted from the public and private 
local cache
# publicBytesDeleted - # of bytes deleted from the public local cache
# privateBytesDeleted - # of bytes deleted from the private local cache

{{LocalCacheCleanerStats}} also exposes the individual amounts deleted (in 
bytes) from each user private cache. I wasn't quite sure of a good way to 
expose this via metrics, so I left it out of the current patch.
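
A hedged metrics2 sketch of how these four counters might be declared 
(annotation style from Hadoop's metrics2 library; the class name is 
illustrative, not the patch itself):
{code}
@Metrics(about = "NodeManager local cache cleanup", context = "yarn")
public class LocalCacheCleanerMetrics {
  @Metric("Local cache size (public and private) before clean, in bytes")
  MutableGaugeLong cacheSizeBeforeClean;
  @Metric("Total bytes deleted from the public and private local caches")
  MutableCounterLong totalBytesDeleted;
  @Metric("Bytes deleted from the public local cache")
  MutableCounterLong publicBytesDeleted;
  @Metric("Bytes deleted from the private local cache")
  MutableCounterLong privateBytesDeleted;
}
{code}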

> Add metrics to the node manager for cleaning the PUBLIC and PRIVATE caches
> --
>
> Key: YARN-5797
> URL: https://issues.apache.org/jira/browse/YARN-5797
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-5797-trunk-v1.patch
>
>
> Add new metrics to the node manager around the local cache sizes and how much 
> is being cleaned from them on a regular bases. For example, we can expose 
> information contained in the {{LocalCacheCleanerStats}} class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5552) Add Builder methods for common yarn API records

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631581#comment-15631581
 ] 

Hadoop QA commented on YARN-5552:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 109 unchanged - 8 fixed = 109 total (was 117) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
18s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 52s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
24s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5552 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836714/YARN-5552.009.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f9d578bde5d4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git 

[jira] [Commented] (YARN-5697) Use CliParser to parse options in RMAdminCLI

2016-11-02 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631604#comment-15631604
 ] 

Naganarasimha G R commented on YARN-5697:
-

Thanks [~Tao Jie], all the test cases pass locally for me too!
Committing it to branch-2.8. Thanks for the contribution [~Tao Jie], and for 
the additional reviews from [~sunilg] & [~wangda].

> Use CliParser to parse options in RMAdminCLI
> 
>
> Key: YARN-5697
> URL: https://issues.apache.org/jira/browse/YARN-5697
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Tao Jie
>Assignee: Tao Jie
> Attachments: YARN-5697.001.patch, YARN-5697.002.patch, 
> YARN-5697.003.patch, YARN-5697.004.patch, YARN-5697.005-branch-2.8.patch, 
> YARN-5697.005.patch
>
>
> As discussed in YARN-4855, it is better to use CliParser rather than args to 
> parse command line options in RMAdminCli.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5783) Unit tests to verify the identification of starved applications

2016-11-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5783:
---
Attachment: yarn-5783.YARN-4752.5.patch

> Unit tests to verify the identification of starved applications
> ---
>
> Key: YARN-5783
> URL: https://issues.apache.org/jira/browse/YARN-5783
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: oct16-medium
> Attachments: yarn-5783.YARN-4752.1.patch, 
> yarn-5783.YARN-4752.2.patch, yarn-5783.YARN-4752.3.patch, 
> yarn-5783.YARN-4752.4.patch, yarn-5783.YARN-4752.5.patch
>
>
> JIRA to track unit tests to verify the identification of starved 
> applications. An application should be marked starved only when:
> # Cluster allocation is over the configured threshold for preemption.
> # Preemption is enabled for a queue and any of the following:
> ## The queue is under its minshare for longer than minsharePreemptionTimeout
> ## One of the queue’s applications is under its fairshare for longer than 
> fairsharePreemptionTimeout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5783) Verify applications are identified starved

2016-11-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5783:
---
Summary: Verify applications are identified starved  (was: Unit tests to 
verify the identification of starved applications)

> Verify applications are identified starved
> --
>
> Key: YARN-5783
> URL: https://issues.apache.org/jira/browse/YARN-5783
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: oct16-medium
> Attachments: yarn-5783.YARN-4752.1.patch, 
> yarn-5783.YARN-4752.2.patch, yarn-5783.YARN-4752.3.patch, 
> yarn-5783.YARN-4752.4.patch, yarn-5783.YARN-4752.5.patch
>
>
> JIRA to track unit tests to verify the identification of starved 
> applications. An application should be marked starved only when:
> # Cluster allocation is over the configured threshold for preemption.
> # Preemption is enabled for a queue and any of the following:
> ## The queue is under its minshare for longer than minsharePreemptionTimeout
> ## One of the queue’s applications is under its fairshare for longer than 
> fairsharePreemptionTimeout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5824) Verify app starvation under custom preemption thresholds and timeouts

2016-11-02 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created YARN-5824:
--

 Summary: Verify app starvation under custom preemption thresholds 
and timeouts
 Key: YARN-5824
 URL: https://issues.apache.org/jira/browse/YARN-5824
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Karthik Kambatla


YARN-5783 adds basic tests to verify that starved applications are identified. 
This JIRA is to add more advanced tests for different values of preemption 
thresholds and timeouts. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5820) yarn node CLI help should be clearer

2016-11-02 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S reassigned YARN-5820:
-

Assignee: Ajith S

> yarn node CLI help should be clearer
> 
>
> Key: YARN-5820
> URL: https://issues.apache.org/jira/browse/YARN-5820
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 2.6.0
>Reporter: Grant Sohn
>Assignee: Ajith S
>Priority: Trivial
>
> Current message is:
> {noformat}
> usage: node
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> It should be either this:
> {noformat}
> usage: yarn node [-list [-states <States>|-all] | -status <NodeId>]
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> or that.
> {noformat}
> usage: yarn node -list [-states <States>|-all]
>        yarn node -status <NodeId>
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> The latter is the least ambiguous.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5823) Update NMTokens in case of requests with only opportunistic containers

2016-11-02 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5823:
--
Target Version/s: 3.0.0-alpha2

> Update NMTokens in case of requests with only opportunistic containers
> --
>
> Key: YARN-5823
> URL: https://issues.apache.org/jira/browse/YARN-5823
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5823.001.patch
>
>
> At the moment, when an {{AllocateRequest}} contains only opportunistic 
> {{ResourceRequests}}, the updated NMTokens are not properly added to the 
> {{AllocateResponse}}.
> In such a case the AM does not get back the needed NMTokens that are required 
> to start the opportunistic containers at the respective nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5783) Verify applications are identified starved

2016-11-02 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631629#comment-15631629
 ] 

Karthik Kambatla commented on YARN-5783:


Good points, [~templedf].

Your last suggestion got me thinking. It seemed like mocking the preemption 
thread to consume starved apps without actually preempting would let us test 
the starvation logic better, so I did just that. We could potentially add more 
advanced tests to verify starvation based on thresholds and timeouts in a 
subsequent JIRA (YARN-5824).

By the way, this patch conflicts with YARN-5821 which is mostly cosmetic 
changes and I believe is ready to go. 

> Verify applications are identified starved
> --
>
> Key: YARN-5783
> URL: https://issues.apache.org/jira/browse/YARN-5783
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: oct16-medium
> Attachments: yarn-5783.YARN-4752.1.patch, 
> yarn-5783.YARN-4752.2.patch, yarn-5783.YARN-4752.3.patch, 
> yarn-5783.YARN-4752.4.patch, yarn-5783.YARN-4752.5.patch
>
>
> JIRA to track unit tests to verify the identification of starved 
> applications. An application should be marked starved only when:
> # Cluster allocation is over the configured threshold for preemption.
> # Preemption is enabled for a queue and any of the following:
> ## The queue is under its minshare for longer than minsharePreemptionTimeout
> ## One of the queue’s applications is under its fairshare for longer than 
> fairsharePreemptionTimeout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2995) Enhance UI to show cluster resource utilization of various container types

2016-11-02 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-2995:
--
Target Version/s: 3.0.0-alpha2

> Enhance UI to show cluster resource utilization of various container types
> --
>
> Key: YARN-2995
> URL: https://issues.apache.org/jira/browse/YARN-2995
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sriram Rao
>Assignee: Konstantinos Karanasos
> Attachments: YARN-2995.001.patch, YARN-2995.002.patch, 
> YARN-2995.003.patch, YARN-2995.004.patch, all-nodes.png, all-nodes.png, 
> opp-container.png
>
>
> This JIRA proposes to extend the Resource manager UI to show how cluster 
> resources are being used to run *guaranteed start* and *queueable* 
> containers.  For example, a graph that shows over time, the fraction of  
> running containers that are *guaranteed start* and the fraction of running 
> containers that are *queueable*. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5646) Documentation for scheduling of OPPORTUNISTIC containers

2016-11-02 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5646:
--
Target Version/s: 3.0.0-alpha2

> Documentation for scheduling of OPPORTUNISTIC containers
> 
>
> Key: YARN-5646
> URL: https://issues.apache.org/jira/browse/YARN-5646
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>
> This is for adding documentation regarding the scheduling of OPPORTUNISTIC 
> containers.
> It includes both the centralized (YARN-5220) and the distributed (YARN-2877) 
> scheduling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5823) Update NMTokens in case of requests with only opportunistic containers

2016-11-02 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5823:
--
Priority: Blocker  (was: Major)

> Update NMTokens in case of requests with only opportunistic containers
> --
>
> Key: YARN-5823
> URL: https://issues.apache.org/jira/browse/YARN-5823
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>Priority: Blocker
> Attachments: YARN-5823.001.patch
>
>
> At the moment, when an {{AllocateRequest}} contains only opportunistic 
> {{ResourceRequests}}, the updated NMTokens are not properly added to the 
> {{AllocateResponse}}.
> In such a case the AM does not get back the needed NMTokens that are required 
> to start the opportunistic containers at the respective nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5646) Documentation for scheduling of OPPORTUNISTIC containers

2016-11-02 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5646:
--
Priority: Blocker  (was: Major)

> Documentation for scheduling of OPPORTUNISTIC containers
> 
>
> Key: YARN-5646
> URL: https://issues.apache.org/jira/browse/YARN-5646
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>Priority: Blocker
>
> This is for adding documentation regarding the scheduling of OPPORTUNISTIC 
> containers.
> It includes both the centralized (YARN-5220) and the distributed (YARN-2877) 
> scheduling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5820) yarn node CLI help should be clearer

2016-11-02 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631647#comment-15631647
 ] 

Naganarasimha G R commented on YARN-5820:
-

+1 for the latter !

> yarn node CLI help should be clearer
> 
>
> Key: YARN-5820
> URL: https://issues.apache.org/jira/browse/YARN-5820
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 2.6.0
>Reporter: Grant Sohn
>Assignee: Ajith S
>Priority: Trivial
>
> Current message is:
> {noformat}
> usage: node
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> It should be either this:
> {noformat}
> usage: yarn node [-list [-states <States>|-all] | -status <NodeId>]
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> or that.
> {noformat}
> usage: yarn node -list [-states <States>|-all]
>        yarn node -status <NodeId>
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> The latter is the least ambiguous.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2995) Enhance UI to show cluster resource utilization of various container types

2016-11-02 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-2995:
--
Priority: Blocker  (was: Major)

> Enhance UI to show cluster resource utilization of various container types
> --
>
> Key: YARN-2995
> URL: https://issues.apache.org/jira/browse/YARN-2995
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sriram Rao
>Assignee: Konstantinos Karanasos
>Priority: Blocker
> Attachments: YARN-2995.001.patch, YARN-2995.002.patch, 
> YARN-2995.003.patch, YARN-2995.004.patch, all-nodes.png, all-nodes.png, 
> opp-container.png
>
>
> This JIRA proposes to extend the Resource manager UI to show how cluster 
> resources are being used to run *guaranteed start* and *queueable* 
> containers.  For example, a graph that shows over time, the fraction of  
> running containers that are *guaranteed start* and the fraction of running 
> containers that are *queueable*. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631667#comment-15631667
 ] 

Rohith Sharma K S commented on YARN-4498:
-

I am curious to know what the inference from this JSON output is. Could you 
point out which field needs to be checked?

> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.addendum.001.patch, 
> YARN-4498.branch-2.8.0001.patch, YARN-4498.branch-2.8.addendum.001.patch, 
> apps.xml
>
>
> Currently nodelabel stats per application is not available through REST like 
> currently used labels by all live containers, total stats of containers per 
> label for app etc..
> CLI and web UI scenarios will be handled separately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631685#comment-15631685
 ] 

Bibin A Chundatt commented on YARN-4498:


[~rohithsharma]
{quote}
I am curious to know what the inference from this JSON output is
{quote}
As per the latest patch, {{resourceInfo}} is removed for completed apps.

In the initial implementation, {{new ResourceInfo()}} was returned when the 
attempt was null, i.e., an empty {{ResourcesInfo#resourceUsagesByPartition}} 
got returned. In the current implementation {{resourceInfo}} is set to null, 
since for finished apps {{resourceInfo}} is not required.
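
A hedged before/after sketch of that change (field name from the discussion 
above; the surrounding code is assumed):
{code}
// before: attempt == null -> resourceInfo = new ResourceInfo(); // empty object serialized
// after:  attempt == null -> resourceInfo = null;               // omitted for finished apps
{code}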


> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.addendum.001.patch, 
> YARN-4498.branch-2.8.0001.patch, YARN-4498.branch-2.8.addendum.001.patch, 
> apps.xml
>
>
> Currently nodelabel stats per application is not available through REST like 
> currently used labels by all live containers, total stats of containers per 
> label for app etc..
> CLI and web UI scenarios will be handled separately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5600) Add a parameter to ContainerLaunchContext to emulate yarn.nodemanager.delete.debug-delay-sec on a per-application basis

2016-11-02 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15630951#comment-15630951
 ] 

Daniel Templeton commented on YARN-5600:


Thanks for the patch, [~miklos.szeg...@cloudera.com].  Some comments:

* It seems to me that you're doing extra work to keep the delete time as a 
{{Date}}, not to mention adding potential time zone concerns.  Millis since the 
epoch may be simpler.
* Ignoring the {{IOException}} in 
{{ResourceLocalizationService.submitDirForDeletion()}} seems bad.  While you're 
in there, it might be good to do something more useful.
* In your javadoc, the param text should start with a lower case letter, e.g. 
{{DeletionService#deleteWithDelay()}}
* The {{DeletionService.scheduleFileDeletionTask()}} methods can and probably 
should be private.
* In your tests, instead of sleeping and asserting, sleep for short periods in 
a loop to minimize the test time (see the sketch below).
* In {{TestContainerManager}} you have
{code}
-for (File f : new File[] { containerDir, containerSysDir }) {
+for (File f : new File[] {containerDir, containerSysDir }) {
{code}
You may as well remove the trailing space as well.
* In {{TestContainerManager.verifyContainerDir()}}, your 
if-if-else-else-if-else would be cleaner as if-elseif-elseif-else.  Also, the 
messages could be a little more descriptive so that someone reading it without 
the source code has some clue what's happening.  And I don't think we need the 
exclamation points. :)
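
For the sleep-in-a-loop suggestion, a minimal sketch of the pattern 
(hypothetical {{dir}} and timeout values, not code from the patch):
{code}
// Poll in short intervals instead of one long sleep: the test finishes as
// soon as the deletion happens, and only waits the full timeout on failure.
long deadline = System.currentTimeMillis() + 10000L; // max wait, assumption
while (dir.exists() && System.currentTimeMillis() < deadline) {
  Thread.sleep(50);
}
Assert.assertFalse("Container dir was not deleted in time", dir.exists());
{code}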

Otherwise, the general approach looks fine.

> Add a parameter to ContainerLaunchContext to emulate 
> yarn.nodemanager.delete.debug-delay-sec on a per-application basis
> ---
>
> Key: YARN-5600
> URL: https://issues.apache.org/jira/browse/YARN-5600
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Miklos Szegedi
>  Labels: oct16-medium
> Attachments: YARN-5600.000.patch, YARN-5600.001.patch, 
> YARN-5600.002.patch
>
>
> To make debugging application launch failures simpler, I'd like to add a 
> parameter to the CLC to allow an application owner to request delayed 
> deletion of the application's launch artifacts.
> This JIRA solves largely the same problem as YARN-5599, but for cases where 
> ATS is not in use, e.g. branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4329) Allow fetching exact reason as to why a submitted app is in ACCEPTED state in Fair Scheduler

2016-11-02 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15630980#comment-15630980
 ] 

Daniel Templeton commented on YARN-4329:


Latest patch looks good to me, but Jenkins doesn't seem to like it.

> Allow fetching exact reason as to why a submitted app is in ACCEPTED state in 
> Fair Scheduler
> 
>
> Key: YARN-4329
> URL: https://issues.apache.org/jira/browse/YARN-4329
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler, resourcemanager
>Reporter: Naganarasimha G R
>Assignee: Yufei Gu
> Attachments: Screen Shot 2016-10-18 at 3.13.59 PM.png, 
> YARN-4329.001.patch, YARN-4329.002.patch, YARN-4329.003.patch, 
> YARN-4329.004.patch
>
>
> Similar to YARN-3946, it would be useful to capture the possible reason why 
> an application is in the ACCEPTED state in FairScheduler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5821) Drop left-over preemption-related code and clean up method visibilities in the Schedulable hierarchy

2016-11-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5821:
---
Attachment: yarn-5821.YARN-4752.1.patch

> Drop left-over preemption-related code and clean up method visibilities in 
> the Schedulable hierarchy
> 
>
> Key: YARN-5821
> URL: https://issues.apache.org/jira/browse/YARN-5821
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: yarn-5821.YARN-4752.1.patch, yarn-5821.YARN-4752.1.patch
>
>
> There is some code left-over from old preemption. We need to drop that.
> Also, looks like the visibilities in the {{Schedulable}} hierarchy need to be 
> revisited. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-11-02 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5739:

Attachment: YARN-5739-YARN-5355.001.patch

First draft of a list operation for entities belonging to the same 
application. 

> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5739-YARN-5355.001.patch
>
>
> Right now we only show part of the available timeline entity data in the new 
> YARN UI. However, some data (especially library-specific data) cannot be 
> queried through the web UI. It would be appealing for the UI to provide an 
> "entity browser" for each YARN application. Actually, simply dumping out 
> available timeline entities (with proper pagination, of course) would be 
> pretty helpful for UI users. 
> On the timeline side, we're not far away from this goal. Right now I believe 
> the only thing missing is to list all available entity types within one 
> application. The challenge here is that we're not storing this data for each 
> application, but given that this kind of call is relatively rare (compared to 
> writes and updates) we can perform some scanning at read time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5821) Drop left-over preemption-related code and clean up method visibilities in the Schedulable hierarchy

2016-11-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5821:
---
Attachment: yarn-5821.YARN-4752.1.patch

Straightforward patch. 

> Drop left-over preemption-related code and clean up method visibilities in 
> the Schedulable hierarchy
> 
>
> Key: YARN-5821
> URL: https://issues.apache.org/jira/browse/YARN-5821
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: yarn-5821.YARN-4752.1.patch
>
>
> There is some code left-over from old preemption. We need to drop that.
> Also, looks like the visibilities in the {{Schedulable}} hierarchy need to be 
> revisited. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-02 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15630958#comment-15630958
 ] 

Jian He commented on YARN-5611:
---

- Can you add comments about the type of the timeout value?
{code}
  /**
   * Set the ApplicationTimeouts for the application in seconds.
   * All pre-existing Map entries are cleared before adding the new Map.
   * 
{code}
- Revert RMAppEventType change
- For roll back, there's no need to use the future object (a simplified 
sketch follows after these comments). 
{code}
// do roll back
future = SettableFuture.create();
app.updateApplicationTimeout(RMAppUpdateType.ROLLBACK, newExpireTime,
currentApplicationTimeouts, future);
// Roll back can fail only when application is in completing state.
try {
  Futures.get(future, YarnException.class);
} catch (YarnException e) {
  LOG.warn("Roll back failed for an application "
  + app.getApplicationId() + " with message" + e.getMessage());
}
{code}
- Fix the indentation of the second line:
{code}
  for (Map.Entry timeout : 
  app.applicationTimeouts.entrySet()) {
{code}
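
For illustration, a hedged sketch of the roll back without the future object 
(the direct-call signature below is a hypothetical simplification, not the 
actual {{RMApp}} API):
{code}
// Hypothetical simplification: invoke the roll back directly and log the
// failure, instead of threading a SettableFuture through only to surface
// the exception.
try {
  app.updateApplicationTimeout(RMAppUpdateType.ROLLBACK, newExpireTime,
      currentApplicationTimeouts);
} catch (YarnException e) {
  // Roll back can fail only when the application is in the completing state.
  LOG.warn("Roll back failed for application " + app.getApplicationId()
      + " with message " + e.getMessage());
}
{code}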


> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: oct16-hard
> Attachments: 0001-YARN-5611.patch, 0002-YARN-5611.patch, 
> 0003-YARN-5611.patch, YARN-5611.0004.patch, YARN-5611.0005.patch, 
> YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required. 
> Add a client API to update the lifetime of an application. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2995) Enhance UI to show cluster resource utilization of various container types

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631014#comment-15631014
 ] 

Hadoop QA commented on YARN-2995:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 35s{color} | {color:orange} root: The patch generated 1 new + 430 unchanged 
- 4 fixed = 431 total (was 434) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 1 new + 235 unchanged - 0 fixed = 236 total (was 235) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 50s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 38m 
57s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
26s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 26s{color} 
| {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-2995 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836674/YARN-2995.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite 

[jira] [Created] (YARN-5821) Drop left-over preemption-related code and clean up method visibilities in the Schedulable hierarchy

2016-11-02 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created YARN-5821:
--

 Summary: Drop left-over preemption-related code and clean up 
method visibilities in the Schedulable hierarchy
 Key: YARN-5821
 URL: https://issues.apache.org/jira/browse/YARN-5821
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: fairscheduler
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla


There is some code left-over from old preemption. We need to drop that.

Also, looks like the visibilities in the {{Schedulable}} hierarchy need to be 
revisited. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5821) Drop left-over preemption-related code and clean up method visibilities in the Schedulable hierarchy

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15630888#comment-15630888
 ] 

Hadoop QA commented on YARN-5821:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-5821 does not apply to YARN-4752. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-5821 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836685/yarn-5821.YARN-4752.1.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13759/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Drop left-over preemption-related code and clean up method visibilities in 
> the Schedulable hierarchy
> 
>
> Key: YARN-5821
> URL: https://issues.apache.org/jira/browse/YARN-5821
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: yarn-5821.YARN-4752.1.patch
>
>
> There is some code left-over from old preemption. We need to drop that.
> Also, looks like the visibilities in the {{Schedulable}} hierarchy need to be 
> revisited. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5552) Add Builder methods for common yarn API records

2016-11-02 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated YARN-5552:
--
Attachment: YARN-5552.009.patch

> Add Builder methods for common yarn API records
> ---
>
> Key: YARN-5552
> URL: https://issues.apache.org/jira/browse/YARN-5552
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Tao Jie
> Attachments: YARN-5552.000.patch, YARN-5552.001.patch, 
> YARN-5552.002.patch, YARN-5552.003.patch, YARN-5552.004.patch, 
> YARN-5552.005.patch, YARN-5552.006.patch, YARN-5552.007.patch, 
> YARN-5552.008.patch, YARN-5552.009.patch
>
>
> Currently YARN API records such as ResourceRequest, AllocateRequest/Response, 
> as well as AMRMClient.ContainerRequest, have multiple constructors / 
> newInstance methods. This makes it very difficult to add new fields to these 
> records.
> It would probably be better if we had Builder classes for many of these 
> records, which would make evolution of these records a bit easier.
> (suggested by [~kasha])
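
As a hedged illustration of the builder idea (the names below are 
hypothetical, not the API proposed in the patch), a later field addition 
stays source-compatible for existing callers:
{code}
// Sketch only: a builder for a ResourceRequest-like record. Adding
// nodeLabelExpression later does not break callers that never set it.
public final class ResourceRequestBuilder {
  private Priority priority = Priority.UNDEFINED;
  private String resourceName = ResourceRequest.ANY;
  private Resource capability;
  private int numContainers = 1;
  private String nodeLabelExpression; // later addition, defaulted to null

  public ResourceRequestBuilder capability(Resource c) { capability = c; return this; }
  public ResourceRequestBuilder numContainers(int n) { numContainers = n; return this; }
  public ResourceRequestBuilder nodeLabelExpression(String e) { nodeLabelExpression = e; return this; }

  public ResourceRequest build() {
    ResourceRequest req = ResourceRequest.newInstance(
        priority, resourceName, capability, numContainers);
    req.setNodeLabelExpression(nodeLabelExpression);
    return req;
  }
}
{code}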



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631443#comment-15631443
 ] 

Hadoop QA commented on YARN-4498:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 42 unchanged - 1 fixed = 43 total (was 43) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 36m 
25s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-4498 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836710/YARN-4498.addendum.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8ce1c2f760da 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7e521c5 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13763/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13763/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13763/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Application level node labels stats to be available in REST
> ---
>
> 

[jira] [Commented] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631448#comment-15631448
 ] 

Naganarasimha G R commented on YARN-4498:
-

Thanks [~rohithsharma]; as Bibin mentioned, we had already discussed this and 
started working towards it!

> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.addendum.001.patch, 
> YARN-4498.branch-2.8.0001.patch, YARN-4498.branch-2.8.addendum.001.patch, 
> apps.xml
>
>
> Currently nodelabel stats per application is not available through REST like 
> currently used labels by all live containers, total stats of containers per 
> label for app etc..
> CLI and web UI scenarios will be handled separately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631458#comment-15631458
 ] 

Naganarasimha G R commented on YARN-4498:
-

Thanks [~bibinchundatt] for sharing the addendum patches. I hope you can 
publish a note about this issue in the forum, so that others are aware that 
REST can behave differently between 2.8 and trunk due to the change in the 
dependent jar version. I do wonder, though, how none of the tests fail in 
trunk because of it!

> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.addendum.001.patch, 
> YARN-4498.branch-2.8.0001.patch, YARN-4498.branch-2.8.addendum.001.patch, 
> apps.xml
>
>
> Currently nodelabel stats per application is not available through REST like 
> currently used labels by all live containers, total stats of containers per 
> label for app etc..
> CLI and web UI scenarios will be handled separately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5605) Preempt containers (all on one node) to meet the requirement of starved applications

2016-11-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631464#comment-15631464
 ] 

ASF GitHub Bot commented on YARN-5605:
--

Github user kambatla closed the pull request at:

https://github.com/apache/hadoop/pull/124


> Preempt containers (all on one node) to meet the requirement of starved 
> applications
> 
>
> Key: YARN-5605
> URL: https://issues.apache.org/jira/browse/YARN-5605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: yarn-5605-1.patch, yarn-5605-2.patch, yarn-5605-3.patch, 
> yarn-5605-4.patch
>
>
> Required items:
> # Identify starved applications
> # Identify a node that has enough containers from applications over their 
> fairshare.
> # Preempt those containers



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5823) Update NMTokens in case of requests with only opportunistic containers

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631535#comment-15631535
 ] 

Hadoop QA commented on YARN-5823:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 2 new + 
13 unchanged - 1 fixed = 15 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
43s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
46s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 38m 
42s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common
 |
|  |  Should 
org.apache.hadoop.yarn.server.scheduler.OpportunisticContainerAllocator$PartitionedResourceRequests
 be a _static_ inner class?  At OpportunisticContainerAllocator.java:inner 
class?  At OpportunisticContainerAllocator.java:[lines 160-169] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5823 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836713/YARN-5823.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  

[jira] [Commented] (YARN-5821) Drop left-over preemption-related code and clean up method visibilities in the Schedulable hierarchy

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631141#comment-15631141
 ] 

Hadoop QA commented on YARN-5821:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
10s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 30 unchanged - 16 fixed = 32 total (was 46) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 929 unchanged - 5 fixed = 929 total (was 934) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 59s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5821 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836695/yarn-5821.YARN-4752.1.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 886b7c686d01 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-4752 / 5ad5085 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13761/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13761/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13761/testReport/ |
| modules | C: 

[jira] [Created] (YARN-5822) Log ContainerRuntime initialization error in LinuxContainerExecutor

2016-11-02 Thread Sidharta Seethana (JIRA)
Sidharta Seethana created YARN-5822:
---

 Summary: Log ContainerRuntime initialization error in 
LinuxContainerExecutor 
 Key: YARN-5822
 URL: https://issues.apache.org/jira/browse/YARN-5822
 Project: Hadoop YARN
  Issue Type: Task
  Components: nodemanager
Reporter: Sidharta Seethana
Assignee: Sidharta Seethana
Priority: Trivial


LinuxContainerExecutor does not log information corresponding to a 
ContainerRuntime initialization failure. This makes it hard to identify the 
root cause of a NodeManager start failure. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5823) Update NMTokens in case of requests with only opportunistic containers

2016-11-02 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5823:
-
Description: 
At the moment, when an {{AllocateRequest}} contains only opportunistic 
{{ResourceRequests}}, the updated NMTokens are not properly added to the 
{{AllocateResponse}}.
In such a case the AM does not get back the needed NMTokens that are required 
to start the opportunistic containers at the respective nodes.

  was:
At the moment, when an {{AllocateRequest}} containers only opportunistic 
{{ResourceRequests}}, the updated NMTokens are not properly added to the 
{{AllocateResponse}}.
In such a case the AM does not get back the needed NMTokens that are required 
to start the opportunistic containers at the respective nodes.


> Update NMTokens in case of requests with only opportunistic containers
> --
>
> Key: YARN-5823
> URL: https://issues.apache.org/jira/browse/YARN-5823
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>
> At the moment, when an {{AllocateRequest}} contains only opportunistic 
> {{ResourceRequests}}, the updated NMTokens are not properly added to the 
> {{AllocateResponse}}.
> In such a case the AM does not get back the needed NMTokens that are required 
> to start the opportunistic containers at the respective nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5823) Update NMTokens in case of requests with only opportunistic containers

2016-11-02 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5823:
-
Attachment: YARN-5823.001.patch

Attaching patch.

> Update NMTokens in case of requests with only opportunistic containers
> --
>
> Key: YARN-5823
> URL: https://issues.apache.org/jira/browse/YARN-5823
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5823.001.patch
>
>
> At the moment, when an {{AllocateRequest}} contains only opportunistic 
> {{ResourceRequests}}, the updated NMTokens are not properly added to the 
> {{AllocateResponse}}.
> In such a case the AM does not get back the needed NMTokens that are required 
> to start the opportunistic containers at the respective nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631098#comment-15631098
 ] 

Hadoop QA commented on YARN-5739:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
47s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 4 new + 
29 unchanged - 1 fixed = 33 total (was 30) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
34s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5739 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836696/YARN-5739-YARN-5355.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 20f869487147 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 513dcf6 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13760/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13760/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 

[jira] [Commented] (YARN-5391) PolicyManager to tie together Router/AMRM Federation policies

2016-11-02 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631152#comment-15631152
 ] 

Carlo Curino commented on YARN-5391:


Thanks [~subru] for the prompt review and commit.

> PolicyManager to tie together Router/AMRM Federation policies
> -
>
> Key: YARN-5391
> URL: https://issues.apache.org/jira/browse/YARN-5391
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>  Labels: oct16-hard
> Fix For: YARN-2915
>
> Attachments: YARN-5391-YARN-2915.04.patch, 
> YARN-5391-YARN-2915.05.patch, YARN-5391-YARN-2915.06.patch, 
> YARN-5391-YARN-2915.07.patch, YARN-5391.01.patch, YARN-5391.02.patch, 
> YARN-5391.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5822) Log ContainerRuntime initialization error in LinuxContainerExecutor

2016-11-02 Thread Sidharta Seethana (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sidharta Seethana updated YARN-5822:

Attachment: YARN-5822.001.patch

Uploading a quick patch to log the container runtime initialization failure. 
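
A hedged sketch of the shape of such a fix (the method and exception names 
below are assumptions, not necessarily what the attached patch does):
{code}
// Log the underlying cause before rethrowing, so an NM start failure is
// diagnosable from the NodeManager log instead of failing silently.
try {
  linuxContainerRuntime.initialize(conf);
} catch (ContainerExecutionException e) {
  LOG.error("Failed to initialize linux container runtime(s)!", e);
  throw new IOException("Failed to initialize linux container runtime(s)!", e);
}
{code}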



> Log ContainerRuntime initialization error in LinuxContainerExecutor 
> 
>
> Key: YARN-5822
> URL: https://issues.apache.org/jira/browse/YARN-5822
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: nodemanager
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>Priority: Trivial
> Attachments: YARN-5822.001.patch
>
>
> LinuxContainerExecutor does not log information corresponding to a 
> ContainerRuntime initialization failure. This makes it hard to identify the 
> root cause of a NodeManager start failure. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5435) [Regression] QueueCapacities not being updated for dynamic ReservationQueue

2016-11-02 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631161#comment-15631161
 ] 

Carlo Curino commented on YARN-5435:


[~seanpo03], can you address the unit test issues?

> [Regression] QueueCapacities not being updated for dynamic ReservationQueue
> ---
>
> Key: YARN-5435
> URL: https://issues.apache.org/jira/browse/YARN-5435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.8.0
>Reporter: Sean Po
>Assignee: Sean Po
>  Labels: oct16-easy, regression
> Attachments: YARN-5435.v003.patch, YARN-5435.v004.patch, 
> YARN-5435.v1.patch, YARN-5435.v2.patch
>
>
> YARN-1707 added dynamic queues (ReservationQueue) to CapacityScheduler. The 
> QueueCapacities data structure was added subsequently but is not being 
> updated correctly for ReservationQueue. This JIRA tracks the changes required 
> to update QueueCapacities of ReservationQueue correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5822) Log ContainerRuntime initialization error in LinuxContainerExecutor

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631233#comment-15631233
 ] 

Hadoop QA commented on YARN-5822:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
52s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5822 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836698/YARN-5822.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2edcf045e234 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dcc07ad |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13762/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13762/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Log ContainerRuntime initialization error in LinuxContainerExecutor 
> 
>
> Key: YARN-5822
> URL: https://issues.apache.org/jira/browse/YARN-5822
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: nodemanager
>Reporter: Sidharta Seethana
>

[jira] [Created] (YARN-5823) Update NMTokens in case of requests with only opportunistic containers

2016-11-02 Thread Konstantinos Karanasos (JIRA)
Konstantinos Karanasos created YARN-5823:


 Summary: Update NMTokens in case of requests with only 
opportunistic containers
 Key: YARN-5823
 URL: https://issues.apache.org/jira/browse/YARN-5823
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Konstantinos Karanasos
Assignee: Konstantinos Karanasos


At the moment, when an {{AllocateRequest}} containers only opportunistic 
{{ResourceRequests}}, the updated NMTokens are not properly added to the 
{{AllocateResponse}}.
In such a case the AM does not get back the needed NMTokens that are required 
to start the opportunistic containers at the respective nodes.
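
For context, a hedged sketch of the AM-side handling this breaks (standard 
{{AMRMClient}}/{{NMTokenCache}} usage; variable names are illustrative):
{code}
// The AM can start containers on a node only after caching the NMToken that
// arrives in the AllocateResponse; if the response omits the tokens, the
// subsequent startContainers() calls for the opportunistic containers fail.
AllocateResponse response = amRMClient.allocate(0.1f);
for (NMToken token : response.getNMTokens()) {
  NMTokenCache.getSingleton().setToken(
      token.getNodeId().toString(), token.getToken());
}
{code}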



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631324#comment-15631324
 ] 

Bibin A Chundatt commented on YARN-4498:


Sample
*JSON*
{noformat}
{
  "apps": {
    "app": [
      {
        "id": "application_1478107292261_0001",
        "user": "root",
        "name": "QuasiMonteCarlo",
        "queue": "default",
        "state": "FINISHED",
        "finalStatus": "SUCCEEDED",
        "progress": 100,
        "trackingUI": "History",
        "trackingUrl": "http://localhost:8088/proxy/application_1478107292261_0001/",
        "diagnostics": "",
        "clusterId": 1478107292261,
        "applicationType": "MAPREDUCE",
        "applicationTags": "",
        "priority": 0,
        "startedTime": 1478107344160,
        "finishedTime": 1478107365065,
        "elapsedTime": 20905,
        "amContainerLogs": "http://localhost:8043/node/containerlogs/container_1478107292261_0001_01_01/root",
        "amHostHttpAddress": "localhost:8043",
        "amRPCAddress": "localhost:44793",
        "allocatedMB": -1,
        "allocatedVCores": -1,
        "runningContainers": -1,
        "memorySeconds": 51757,
        "vcoreSeconds": 36,
        "queueUsagePercentage": 0,
        "clusterUsagePercentage": 0,
        "preemptedResourceMB": 0,
        "preemptedResourceVCores": 0,
        "numNonAMContainerPreempted": 0,
        "numAMContainerPreempted": 0,
        "logAggregationStatus": "DISABLED",
        "unmanagedApplication": false,
        "amNodeLabelExpression": ""
      }
    ]
  }
}
{noformat}

*XML*

{noformat}
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<apps>
  <app>
    <id>application_1478107292261_0001</id>
    <user>root</user>
    <name>QuasiMonteCarlo</name>
    <queue>default</queue>
    <state>FINISHED</state>
    <finalStatus>SUCCEEDED</finalStatus>
    <progress>100.0</progress>
    <trackingUI>History</trackingUI>
    <trackingUrl>http://localhost:8088/proxy/application_1478107292261_0001/</trackingUrl>
    <diagnostics/>
    <clusterId>1478107292261</clusterId>
    <applicationType>MAPREDUCE</applicationType>
    <applicationTags/>
    <priority>0</priority>
    <startedTime>1478107344160</startedTime>
    <finishedTime>1478107365065</finishedTime>
    <elapsedTime>20905</elapsedTime>
    <amContainerLogs>http://localhost:8043/node/containerlogs/container_1478107292261_0001_01_01/root</amContainerLogs>
    <amHostHttpAddress>localhost:8043</amHostHttpAddress>
    <amRPCAddress>localhost:44793</amRPCAddress>
    <allocatedMB>-1</allocatedMB>
    <allocatedVCores>-1</allocatedVCores>
    <runningContainers>-1</runningContainers>
    <memorySeconds>51757</memorySeconds>
    <vcoreSeconds>36</vcoreSeconds>
    <queueUsagePercentage>0.0</queueUsagePercentage>
    <clusterUsagePercentage>0.0</clusterUsagePercentage>
    <preemptedResourceMB>0</preemptedResourceMB>
    <preemptedResourceVCores>0</preemptedResourceVCores>
    <numNonAMContainerPreempted>0</numNonAMContainerPreempted>
    <numAMContainerPreempted>0</numAMContainerPreempted>
    <logAggregationStatus>DISABLED</logAggregationStatus>
    <unmanagedApplication>false</unmanagedApplication>
    <amNodeLabelExpression/>
  </app>
</apps>
{noformat}
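
For context, both samples above are responses from the RM's {{/ws/v1/cluster/apps}} REST endpoint. A minimal sketch of fetching them (the RM address is an assumption taken from the sample output):

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Fetches the cluster apps listing shown above from the RM REST API.
public class FetchClusterApps {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://localhost:8088/ws/v1/cluster/apps");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    // Ask for JSON; use "application/xml" to get the XML form instead.
    conn.setRequestProperty("Accept", "application/json");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      in.lines().forEach(System.out::println);
    }
  }
}
{code}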

> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.branch-2.8.0001.patch, 
> YARN-4498.branch-2.8.addendum.001.patch, YARN-4498.trunk.addendum.001.patch, 
> apps.xml
>
>
> Currently, node label stats per application are not available through REST, 
> e.g. the labels currently used by all live containers, total stats of 
> containers per label for the app, etc.
> CLI and web UI scenarios will be handled separately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4498:
---
Attachment: YARN-4498.addendum.001.patch

Reattaching the patch to trigger Jenkins.

> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.addendum.001.patch, 
> YARN-4498.branch-2.8.0001.patch, YARN-4498.branch-2.8.addendum.001.patch, 
> apps.xml
>
>
> Currently, node label stats per application are not available through REST, 
> e.g. the labels currently used by all live containers, total stats of 
> containers per label for the app, etc.
> CLI and web UI scenarios will be handled separately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4498:
---
Attachment: (was: YARN-4498.trunk.addendum.001.patch)

> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.addendum.001.patch, 
> YARN-4498.branch-2.8.0001.patch, YARN-4498.branch-2.8.addendum.001.patch, 
> apps.xml
>
>
> Currently, node label stats per application are not available through REST, 
> e.g. the labels currently used by all live containers, total stats of 
> containers per label for the app, etc.
> CLI and web UI scenarios will be handled separately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5815) Random failure TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15628944#comment-15628944
 ] 

Rohith Sharma K S commented on YARN-5815:
-

It looks like this test started failing randomly after YARN-5773.

> Random failure 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> -
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5815.0001.patch
>
>
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5802) Application priority updates add pending apps to running ordering policy

2016-11-02 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15628786#comment-15628786
 ] 

Bibin A Chundatt commented on YARN-5802:


The test case failure looks random: the NodeManager registration event had not 
yet reached the scheduler. It is not related to the attached patch.


> Application priority updates add pending apps to running ordering policy
> 
>
> Key: YARN-5802
> URL: https://issues.apache.org/jira/browse/YARN-5802
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5802.0001.patch, YARN-5802.0002.patch, 
> YARN-5802.0003.patch, YARN-5802.0004.patch, YARN-5802.0005.patch, 
> YARN-5802.0006.patch
>
>
> {{LeafQueue#updateApplicationPriority}}
> {code}
>  getOrderingPolicy().removeSchedulableEntity(attempt);
>   // Update new priority in SchedulerApplication
>   attempt.setPriority(newAppPriority);
>   getOrderingPolicy().addSchedulableEntity(attempt);
> {code}
> We should re-add the attempt to the ordering policy only when it was present 
> in the first place. Otherwise, application-attempt removal will try to 
> iterate over a killed application that is still present in the pending 
> ordering policy, which can crash the RM.
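
For illustration, a minimal sketch of the guarded update described above, assuming {{removeSchedulableEntity}} reports whether the entity was actually present:

{code}
// Sketch of the guarded update (assumption: removeSchedulableEntity
// returns whether the attempt was present in the policy).
boolean wasRunning = getOrderingPolicy().removeSchedulableEntity(attempt);
// Update new priority in SchedulerApplication
attempt.setPriority(newAppPriority);
if (wasRunning) {
  // Re-add only attempts that were running; a pending or killed attempt
  // must not leak into the running ordering policy.
  getOrderingPolicy().addSchedulableEntity(attempt);
}
{code}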



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5815) Random failure TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Bibin A Chundatt (JIRA)
Bibin A Chundatt created YARN-5815:
--

 Summary: Random failure 
TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
 Key: YARN-5815
 URL: https://issues.apache.org/jira/browse/YARN-5815
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Bibin A Chundatt
Assignee: Bibin A Chundatt


{noformat}
java.lang.AssertionError: expected:<2> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5815) Random failure TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5815:
---
Attachment: YARN-5815.0001.patch

> Random failure 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> -
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5815.0001.patch
>
>
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5815) Random failure TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15628987#comment-15628987
 ] 

Bibin A Chundatt commented on YARN-5815:


Thank you [~rohithsharma] for looking into the issue.
Earlier, the test expected one app to be activated even before NM 
registration. In the current implementation, when the cluster resource is 
zero, the number of active applications is zero. After the NM registers, the 
number of active apps becomes 2 and the number of pending apps becomes 1. The 
test missed the case where the NODE_ADDED event had not yet been processed by 
the scheduler.
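
For illustration, a hedged sketch of the kind of test-side guard this implies (helper names are from hadoop-common's {{GenericTestUtils}}; the actual fix is in the attached patch):

{code}
// Hypothetical test-side guard (rm and its scheduler accessors are assumed
// to be in scope as in the test class): block until the scheduler has
// processed NODE_ADDED, i.e. the cluster resource is non-zero, before
// asserting on active/pending application counts.
GenericTestUtils.waitFor(
    () -> rm.getResourceScheduler().getClusterResource().getMemorySize() > 0,
    100 /* check every ms */, 10000 /* timeout ms */);
{code}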

> Random failure 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> -
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5815.0001.patch
>
>
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5816) TestDelegationTokenRenewer#testCancelWithMultipleAppSubmissions is still flakey

2016-11-02 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-5816:
--

 Summary: 
TestDelegationTokenRenewer#testCancelWithMultipleAppSubmissions is still flakey
 Key: YARN-5816
 URL: https://issues.apache.org/jira/browse/YARN-5816
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, test
Reporter: Daniel Templeton
Priority: Minor


Even after YARN-5057, 
TestDelegationTokenRenewer#testCancelWithMultipleAppSubmissions is still flakey:

{noformat}
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.796 sec <<< 
FAILURE! - in 
org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer
testCancelWithMultipleAppSubmissions(org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer)
  Time elapsed: 2.307 sec  <<< FAILURE!
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer.testCancelWithMultipleAppSubmissions(TestDelegationTokenRenewer.java:1260)
{noformat}

Note that it's the same error as YARN-5057, but on a different line.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5815) Random failure TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15628988#comment-15628988
 ] 

Sunil G commented on YARN-5815:
---

[~bibinchundatt]
Could you please share the details of this?

> Random failure 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> -
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5815.0001.patch
>
>
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2882) Add an OPPORTUNISTIC ExecutionType

2016-11-02 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15628792#comment-15628792
 ] 

Steve Loughran commented on YARN-2882:
--

I think I've already expressed my unhappiness about breaking classes/APIs 
tagged as stable, and "it wasn't meant to be subclassed for mock testing" 
isn't the kind of response I'd like to have seen.

+1 for the patch, and +1 for making this the policy: "if we add new methods to 
a public class, we'll make them non-abstract but fail as unsupported, for the 
benefit of those subclasses (especially mock test ones) which may exist."
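
For illustration, a hypothetical sketch of that policy (the class name below is made up; only the annotations and {{ExecutionType}} are real):

{code}
import org.apache.hadoop.classification.InterfaceAudience.Public;
import org.apache.hadoop.classification.InterfaceStability.Stable;
import org.apache.hadoop.yarn.api.records.ExecutionType;

// A method newly added to a @Public/@Stable abstract class is non-abstract
// and fails as unsupported, so pre-existing subclasses, including mock-test
// ones, keep compiling and only fail if they actually hit the new method.
@Public
@Stable
public abstract class ExampleStableRecord {
  public ExecutionType getExecutionType() {
    throw new UnsupportedOperationException(
        getClass().getName() + " does not implement getExecutionType()");
  }
}
{code}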

> Add an OPPORTUNISTIC ExecutionType
> --
>
> Key: YARN-2882
> URL: https://issues.apache.org/jira/browse/YARN-2882
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: YARN-2882-yarn-2877.001.patch, 
> YARN-2882-yarn-2877.002.patch, YARN-2882-yarn-2877.003.patch, 
> YARN-2882-yarn-2877.004.patch, YARN-2882.005.patch, yarn-2882.patch
>
>
> This JIRA introduces the notion of container types.
> We propose two initial types of containers: guaranteed-start and queueable 
> containers.
> Guaranteed-start containers are the existing containers: they are allocated 
> by the central RM and started as soon as they are allocated.
> Queueable is a new type of container that can be queued in the NM, so its 
> execution may be arbitrarily delayed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-02 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5611:

Attachment: YARN-5611.0005.patch

Updated the patch with the following changes from the previous one:
# Removed the RMAppImpl state transition and made a direct call into RMAppImpl.
# Made the updateTimeout API transactional.
# Fixed the test-case failures.
# Addressed the review comment, i.e. removed the application attribute class; 
the value is now stored in ApplicationStateData only.

Pending tasks (see the sketch after this list for where the API is headed):
# UpdateResponse is empty in the current patch. As discussed, the response 
should carry the updated timeout value; I will add this in the next patch.
# The checkstyle issue will be handled.
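
For illustration only, a sketch of what a client-side lifetime update might eventually look like; the record and method names below are assumptions, not the committed signatures:

{code}
// yarnClient, appId and newExpiryTime are assumed to be in scope; the
// request type and its factory are illustrative of the API under review.
Map<ApplicationTimeoutType, String> timeouts = new HashMap<>();
timeouts.put(ApplicationTimeoutType.LIFETIME, newExpiryTime);
yarnClient.updateApplicationTimeouts(
    UpdateApplicationTimeoutsRequest.newInstance(appId, timeouts));
{code}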

> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: oct16-hard
> Attachments: 0001-YARN-5611.patch, 0002-YARN-5611.patch, 
> 0003-YARN-5611.patch, YARN-5611.0004.patch, YARN-5611.0005.patch, 
> YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required.
> Add a client API to update the lifetime of an application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5802) Application priority updates add pending apps to running ordering policy

2016-11-02 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15628992#comment-15628992
 ] 

Sunil G commented on YARN-5802:
---

The test case failure is tracked via YARN-5815. The latest patch looks fine to 
me. +1
I will commit it tomorrow if there are no objections.

> Application priority updates add pending apps to running ordering policy
> 
>
> Key: YARN-5802
> URL: https://issues.apache.org/jira/browse/YARN-5802
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5802.0001.patch, YARN-5802.0002.patch, 
> YARN-5802.0003.patch, YARN-5802.0004.patch, YARN-5802.0005.patch, 
> YARN-5802.0006.patch
>
>
> {{LeafQueue#updateApplicationPriority}}
> {code}
>  getOrderingPolicy().removeSchedulableEntity(attempt);
>   // Update new priority in SchedulerApplication
>   attempt.setPriority(newAppPriority);
>   getOrderingPolicy().addSchedulableEntity(attempt);
> {code}
> We should re-add the attempt to the ordering policy only when it was present 
> in the first place. Otherwise, application-attempt removal will try to 
> iterate over a killed application that is still present in the pending 
> ordering policy, which can crash the RM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


