[jira] [Commented] (YARN-5009) NMLeveldbStateStoreService database can grow substantially leading to longer recovery times

2016-04-29 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15265142#comment-15265142
 ] 

Jian He commented on YARN-5009:
---

Ah, thanks for the correction!

> NMLeveldbStateStoreService database can grow substantially leading to longer 
> recovery times
> ---
>
> Key: YARN-5009
> URL: https://issues.apache.org/jira/browse/YARN-5009
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Fix For: 2.7.4
>
> Attachments: YARN-5009.001.patch, YARN-5009.002.patch
>
>
> Similar to the RM case in YARN-5008, I have seen state stores for 
> nodemanagers with high container churn become significantly larger than they 
> should be due to lack of sufficient database compaction.
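For context, the usual remedy is a full-range compaction of the leveldb store, run at
recovery/startup time or periodically. Below is a minimal standalone sketch (not the
patch itself) using the org.iq80.leveldb API that the NM state store is built on; the
database path is hypothetical and in practice comes from yarn.nodemanager.recovery.dir.
{code}
import java.io.File;

import org.fusesource.leveldbjni.JniDBFactory;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.Options;

public class CompactNMStateStore {
  public static void main(String[] args) throws Exception {
    // Hypothetical path for illustration only.
    File dbPath = new File("/tmp/yarn-nm-recovery/yarn-nm-state");
    Options options = new Options().createIfMissing(false);
    DB db = JniDBFactory.factory.open(dbPath, options);
    try {
      // A full-range compaction merges away obsolete entries left behind by
      // high container churn, shrinking the store and speeding up recovery.
      db.compactRange(null, null);
    } finally {
      db.close();
    }
  }
}
{code}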






[jira] [Commented] (YARN-4844) Add getMemoryLong/getVirtualCoreLong to o.a.h.y.api.records.Resource

2016-04-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15265122#comment-15265122
 ] 

Hadoop QA commented on YARN-4844:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 24s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 56 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 22s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 41s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 35s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
3s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 21s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 52s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common in 
trunk has 3 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 43s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 9m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 8m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 39s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 43s 
{color} | {color:red} root: patch generated 70 new + 1535 unchanged - 55 fixed 
= 1605 total (was 1590) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
1s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 9m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 45s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 29s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 31s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 36s {color} 
| {color:red} hadoop-yarn-common in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | 

[jira] [Commented] (YARN-3998) Add support in the NodeManager to re-launch containers

2016-04-29 Thread Jun Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15265052#comment-15265052
 ] 

Jun Gong commented on YARN-3998:


Thanks [~vvasudev] for all the help and the commit! Thanks also to [~vinodkv] for the suggestions.

> Add support in the NodeManager to re-launch containers
> --
>
> Key: YARN-3998
> URL: https://issues.apache.org/jira/browse/YARN-3998
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jun Gong
>Assignee: Jun Gong
> Fix For: 2.9.0
>
> Attachments: YARN-3998.01.patch, YARN-3998.02.patch, 
> YARN-3998.03.patch, YARN-3998.04.patch, YARN-3998.05.patch, 
> YARN-3998.06.patch, YARN-3998.07.patch, YARN-3998.08.patch, YARN-3998.09.patch
>
>
> I'd like to add a field (retry-times) in ContainerLaunchContext. When the AM 
> launches containers, it could specify the value. The NM would then re-launch the 
> container up to 'retry-times' times when it fails to run (e.g. exit code is not 0). 
> This will save a lot of time: it avoids container localization, the RM does not 
> need to re-schedule the container, and local files in the container's working 
> directory will be left for re-use. (If the container has downloaded some big 
> files, it does not need to re-download them when running again.)
> We find this useful in systems like Storm.
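
As a plain-Java illustration of the relaunch-in-place idea (this is a sketch, not NM
code; the retry count, command and working directory are made up, and a POSIX shell
is assumed):
{code}
import java.io.File;

public class RelaunchSketch {
  public static void main(String[] args) throws Exception {
    int retryTimes = 3;                            // value the AM would supply
    File workDir = new File("container_workdir");  // already-localized files live here
    workDir.mkdirs();
    ProcessBuilder pb = new ProcessBuilder("sh", "-c", "exit 1")
        .directory(workDir)
        .inheritIO();
    for (int attempt = 0; attempt <= retryTimes; attempt++) {
      int exitCode = pb.start().waitFor();
      if (exitCode == 0) {
        break;  // success, no relaunch needed
      }
      // Relaunch in place: no re-localization, no RM re-scheduling, and files
      // already downloaded into workDir are reused on the next attempt.
      System.out.println("attempt " + attempt + " failed with exit code " + exitCode);
    }
  }
}
{code}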






[jira] [Created] (YARN-5018) Online aggregation logic should not run immediately after collectors got started

2016-04-29 Thread Li Lu (JIRA)
Li Lu created YARN-5018:
---

 Summary: Online aggregation logic should not run immediately after 
collectors got started
 Key: YARN-5018
 URL: https://issues.apache.org/jira/browse/YARN-5018
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Li Lu
Assignee: Li Lu


In the app-level collector, we launch the aggregation logic immediately after the 
collector has started. However, at this time, important context data has yet to 
be published to the container. Also, if the aggregation result is empty, we do 
not need to publish it.
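
A minimal sketch of the intended behaviour (generic Java, not the collector code; the
15s/60s delays and the aggregate/publish stand-ins are illustrative):
{code}
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DeferredAggregationSketch {
  // Stand-ins for the collector's real aggregation and publish steps.
  static List<String> aggregate() { return Collections.emptyList(); }
  static void publish(List<String> aggregated) { System.out.println(aggregated); }

  public static void main(String[] args) {
    ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    // The initial delay gives the app-level collector time to receive its
    // context data before the first aggregation pass runs.
    scheduler.scheduleAtFixedRate(() -> {
      List<String> aggregated = aggregate();
      if (!aggregated.isEmpty()) {  // skip publishing empty results
        publish(aggregated);
      }
    }, 15, 60, TimeUnit.SECONDS);
  }
}
{code}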






[jira] [Commented] (YARN-4920) ATS/NM should support a link to dowload/get the logs in text format

2016-04-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15265021#comment-15265021
 ] 

Hadoop QA commented on YARN-4920:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 42s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common in 
trunk has 3 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: patch 
generated 1 new + 27 unchanged - 0 fixed = 28 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 11s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 3m 38s {color} 
| {color:red} hadoop-yarn-server-applicationhistoryservice in the patch failed 
with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 40s 

[jira] [Commented] (YARN-4986) Add a check in the coprocessor for table to operated on

2016-04-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264985#comment-15264985
 ] 

Hadoop QA commented on YARN-4986:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 48s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
41s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
43s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 15s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 22s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 23s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801553/YARN-4986-YARN-2928.02.patch
 |
| JIRA Issue | YARN-4986 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f84a4f4e2bf7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2928 / 466ea0d |

[jira] [Commented] (YARN-4986) Add a check in the coprocessor for table to operated on

2016-04-29 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264978#comment-15264978
 ] 

Sangjin Lee commented on YARN-4986:
---

The latest patch LGTM. Thanks for adding the unit tests [~vrushalic]. I'll 
commit it tonight unless there is more feedback.

> Add a check in the coprocessor for table to operated on
> ---
>
> Key: YARN-4986
> URL: https://issues.apache.org/jira/browse/YARN-4986
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-4986-YARN-2928.01.patch, 
> YARN-4986-YARN-2928.02.patch
>
>
> As a precautionary measure, it will be a good idea to have the coprocessor 
> code check which table it needs to be working on and return/proceed 
> accordingly. This is more of a safety check so that we are sure we are not 
> inadvertently executing the coprocessor code on some other table.
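
For illustration, such a guard could look roughly like the sketch below (HBase 1.x
style API; the table name literal and class are assumptions, not the actual patch):
{code}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

public final class CoprocessorTableGuard {
  // Illustrative name; the real flow run table name comes from the schema config.
  private static final TableName FLOW_RUN_TABLE =
      TableName.valueOf("timelineservice.flowrun");

  private CoprocessorTableGuard() {
  }

  /** Returns true only if the coprocessor is attached to the expected table. */
  public static boolean isFlowRunRegion(RegionCoprocessorEnvironment env) {
    TableName table = env.getRegion().getRegionInfo().getTable();
    return FLOW_RUN_TABLE.equals(table);
  }
}
{code}
Hooks would then return early (leaving cells untouched) whenever this check fails.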






[jira] [Commented] (YARN-4920) ATS/NM should support a link to dowload/get the logs in text format

2016-04-29 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264958#comment-15264958
 ] 

Xuan Gong commented on YARN-4920:
-

Fixed the whitespace issue.
The findbugs warning is not related.

For the checkstyle issue:
{code}
ContainerInfo.java:51:  protected String nodeId;:20: Variable 'nodeId' must be 
private and have accessor methods.
{code}
Let us keep this consistent with the other parameters.

> ATS/NM should support a link to dowload/get the logs in text format
> ---
>
> Key: YARN-4920
> URL: https://issues.apache.org/jira/browse/YARN-4920
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-4920.2.patch, YARN-4920.20160424.branch-2.patch, 
> YARN-4920.3.patch, YARN-4920.4.patch
>
>







[jira] [Updated] (YARN-4920) ATS/NM should support a link to dowload/get the logs in text format

2016-04-29 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-4920:

Attachment: YARN-4920.4.patch

> ATS/NM should support a link to dowload/get the logs in text format
> ---
>
> Key: YARN-4920
> URL: https://issues.apache.org/jira/browse/YARN-4920
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-4920.2.patch, YARN-4920.20160424.branch-2.patch, 
> YARN-4920.3.patch, YARN-4920.4.patch
>
>







[jira] [Commented] (YARN-4905) Improve "yarn logs" command-line to optionally show log metadata also

2016-04-29 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264948#comment-15264948
 ] 

Xuan Gong commented on YARN-4905:
-

The findbugs issue and test case failures are not related.

> Improve "yarn logs" command-line to optionally show log metadata also
> -
>
> Key: YARN-4905
> URL: https://issues.apache.org/jira/browse/YARN-4905
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-4905.1.patch, YARN-4905.2.patch, YARN-4905.3.patch, 
> YARN-4905.4.patch, YARN-4905.5.patch, YARN-4905.6.1.patch, YARN-4905.7.patch
>
>
> Improve the "yarn logs" command line to have an "ls" command which can list 
> the containers for which we have logs and the files within each container, along 
> with their file sizes.






[jira] [Updated] (YARN-4986) Add a check in the coprocessor for table to operated on

2016-04-29 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-4986:
-
Attachment: YARN-4986-YARN-2928.02.patch


Uploading patch v2 that fixes the checkstyle warnings. Also added unit tests 
{noformat}TestHBaseStorageFlowRun#checkCoProcessorOff{noformat} and 
{noformat}TestHBaseStorageFlowRunCompaction#testWriteNonNumericData{noformat} 
to cover the code changes.

thanks
Vrushali


> Add a check in the coprocessor for table to operated on
> ---
>
> Key: YARN-4986
> URL: https://issues.apache.org/jira/browse/YARN-4986
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-4986-YARN-2928.01.patch, 
> YARN-4986-YARN-2928.02.patch
>
>
> As a precautionary measure, it will be a good idea to have the coprocessor 
> code check which table it needs to be working on and return/proceed 
> accordingly. This is more of a safety check so that we are sure we are not 
> inadvertently executing the coprocessor code on some other table.






[jira] [Commented] (YARN-4905) Improve "yarn logs" command-line to optionally show log metadata also

2016-04-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264890#comment-15264890
 ] 

Hadoop QA commented on YARN-4905:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 49s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 9s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} hadoop-yarn-project/hadoop-yarn: patch generated 0 
new + 25 unchanged - 33 fixed = 25 total (was 58) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 0s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 9s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 19s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 21s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF 

[jira] [Commented] (YARN-4920) ATS/NM should support a link to dowload/get the logs in text format

2016-04-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264828#comment-15264828
 ] 

Hadoop QA commented on YARN-4920:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 42s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common in 
trunk has 3 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: patch 
generated 1 new + 27 unchanged - 0 fixed = 28 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 5 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 5s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 57s {color} 
| {color:red} hadoop-yarn-server-applicationhistoryservice in the patch failed 
with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 34s 
{color} 

[jira] [Updated] (YARN-4447) Provide a mechanism to represent complex filters and parse them at the REST layer

2016-04-29 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-4447:
---
Attachment: Timeline-Filters.pdf


*1. Metric filters* – The expression for representing metric filters will be as 
under:
{{(((<compareexpression> [<op> <compareexpression> …..]) <op> (<compareexpression> 
[<op> <compareexpression> …..])) <op> (<compareexpression> [<op> <compareexpression> …..]))}}
Here, <op> is a logical operator which can be either "AND" or "OR". This will 
directly convert to TimelineFilterList$Operator.
Each <compareexpression> would be transformed into single/multiple 
TimelineCompareFilter objects and wrapped inside a TimelineFilterList. 
Different <compareexpression>s or expressions can be combined together with <op>. 
Brackets ("(" and ")") are used to club together different logical expressions. 
The square brackets are not part of the representation; they merely denote that, 
within a single opening and closing bracket, multiple <compareexpression>s can be 
combined using <op>, typically the same <op>.
If brackets are not specified, operators will be parsed left to right, with a 
new TimelineFilterList created with the old filter list wrapped inside it 
whenever <op> changes.

<compareexpression> in turn is of the form:
<key> <compareop> <value>
Here, <key> is the metric ID and <value> is the value which will be used to compare 
against the metric value.
<key> corresponds to TimelineCompareFilter#key and <value> to 
TimelineCompareFilter#value.

<compareop> transforms directly to TimelineCompareOp. There are 7 compare ops 
supported:
   gt – Equivalent to TimelineCompareOp.GREATER_THAN
   ge – Equivalent to TimelineCompareOp.GREATER_OR_EQUAL
   lt – Equivalent to TimelineCompareOp.LESS_THAN
   le – Equivalent to TimelineCompareOp.LESS_OR_EQUAL
   eq – Equivalent to TimelineCompareOp.EQUAL
   ne – Equivalent to TimelineCompareOp.NOT_EQUAL. 
TimelineCompareFilter#keyMustExist will be set to false. The entity would be 
returned if the key or metric ID does not exist.
   ene – Equivalent to TimelineCompareOp.NOT_EQUAL. 
TimelineCompareFilter#keyMustExist will be set to true. The entity would not be 
returned if the key or metric ID does not exist.
   _Example:_ 
   {{((metric1 lt 40 OR metric2 gt 80) AND (metric3 eq 10)) OR 
(metric4 lt 5 AND metric5 ne 4 OR metric6 ene 7)}}
Please note that all URL-unsafe characters, including spaces, have to be 
properly encoded by the client.
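
For illustration, the first clause of the example above would map to nested filter
lists roughly as below; the constructor shapes are assumptions based on the class
names referenced in this proposal, not a confirmed API.
{code}
import org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineCompareFilter;
import org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineCompareOp;
import org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterList;
import org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterList.Operator;

// (metric1 lt 40 OR metric2 gt 80) AND (metric3 eq 10)
TimelineFilterList orList = new TimelineFilterList(Operator.OR,
    new TimelineCompareFilter(TimelineCompareOp.LESS_THAN, "metric1", 40),
    new TimelineCompareFilter(TimelineCompareOp.GREATER_THAN, "metric2", 80));
TimelineFilterList metricFilters = new TimelineFilterList(Operator.AND,
    orList,
    new TimelineCompareFilter(TimelineCompareOp.EQUAL, "metric3", 10));
{code}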

*2. Config and Info filters* – The expression for representing config and info 
filters is exactly the same as for metric filters. The only difference is that only 
3 <compareop>s are supported, namely eq, ne and ene, and the corresponding 
filter is TimelineKeyValueFilter instead of TimelineCompareFilter.
*3. Event filters* – The expression for representing event filters will be as 
under:
{{(([!](<eventid>[,<eventid>,…..]) <op> [!](<eventid>[,<eventid>,…])) <op> 
[!](<eventid>[,<eventid>,…..]))}}
Here also, <op> is a logical operator which can be either "AND" or "OR". This 
will directly convert to TimelineFilterList$Operator.
Each <eventid> here means an event ID and would go to TimelineExistsFilter#value. A 
comma-separated list of event IDs would be transformed into a single (if no 
comma) or multiple TimelineExistsFilter objects and wrapped inside a 
TimelineFilterList with Operator AND.

"!" here means NOT. A comma-separated list of events inside an opening and 
closing bracket pair, with a "!" before the opening bracket ("("), means the events 
should not exist for an entity to match. The TimelineCompareOp in each of these 
TimelineExistsFilter(s) would be set to NOT_EQUAL.
If there is no "!" before the opening bracket, the corresponding 
TimelineCompareOp will be EQUAL.
Basically, a sub-expression such as (event1,event2,event3) or 
event1,event2,event3 would transform to a TimelineFilterList with Operator 
"AND" containing the following filters: 
TimelineExistsFilter (event1, TimelineCompareOp.EQUAL), 
TimelineExistsFilter (event2, TimelineCompareOp.EQUAL)
and 
TimelineExistsFilter (event3, TimelineCompareOp.EQUAL)

A sub-expression !(event1,event2,event3) would transform to a 
TimelineFilterList with Operator "AND" containing the following filters: 
TimelineExistsFilter (event1, TimelineCompareOp.NOT_EQUAL), 
TimelineExistsFilter (event2, TimelineCompareOp.NOT_EQUAL)
and 
TimelineExistsFilter (event3, TimelineCompareOp.NOT_EQUAL)
Please note that brackets cannot be omitted if we want the compare op to be NOT_EQUAL; 
the comma-separated list of values (events) must be within brackets with a "!" 
preceding the opening bracket.
_Example:_ 
{{((event1,event2,event3) AND !(event4,event5)) OR ((event6,event7) AND !(event8))}}
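
As a rough illustration of that mapping (again with assumed constructor shapes), the
sub-expression {{(event1,event2) AND !(event4)}} would become:
{code}
TimelineFilterList eventFilters = new TimelineFilterList(TimelineFilterList.Operator.AND,
    new TimelineExistsFilter(TimelineCompareOp.EQUAL, "event1"),
    new TimelineExistsFilter(TimelineCompareOp.EQUAL, "event2"),
    new TimelineExistsFilter(TimelineCompareOp.NOT_EQUAL, "event4"));
{code}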

*4. Relation filters* – The expression for representing relation filters is the same 
as for event filters, except that the value is treated differently. 
The basic expression is the same as for event filters, but the <value> for relation 
filters is further represented as:
{{<entitytype>:<entityid>[:<entityid>:…]}}
It is a colon-separated list of the entity type and entity IDs. Spaces are not allowed 
in the expression above. 
The corresponding filter constructed for relation filters is 
TimelineKeyValuesFilter.
_Example :_  


[jira] [Updated] (YARN-3863) Support complex filters in TimelineReader

2016-04-29 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-3863:
---
Attachment: Timeline-Filters.pdf

> Support complex filters in TimelineReader
> -
>
> Key: YARN-3863
> URL: https://issues.apache.org/jira/browse/YARN-3863
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Fix For: YARN-2928
>
> Attachments: Timeline-Filters.pdf, YARN-3863-YARN-2928.v2.01.patch, 
> YARN-3863-YARN-2928.v2.02.patch, YARN-3863-YARN-2928.v2.03.patch, 
> YARN-3863-YARN-2928.v2.04.patch, YARN-3863-YARN-2928.v2.05.patch, 
> YARN-3863-feature-YARN-2928.wip.003.patch, 
> YARN-3863-feature-YARN-2928.wip.01.patch, 
> YARN-3863-feature-YARN-2928.wip.02.patch, 
> YARN-3863-feature-YARN-2928.wip.04.patch, 
> YARN-3863-feature-YARN-2928.wip.05.patch
>
>
> Currently, filters in the timeline reader will return an entity only if all the 
> filter conditions hold true, i.e. only the AND operation is supported. We can 
> support the OR operation for the filters as well. Additionally, as the primary 
> backend implementation is HBase, we can design our filters in a manner where they 
> closely resemble HBase Filters.






[jira] [Updated] (YARN-5014) Ensure non-metric values are returned as is for flow run table from the coprocessor

2016-04-29 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-5014:
--
Labels: yarn-2928-1st-milestone  (was: )

> Ensure non-metric values are returned as is for flow run table from the 
> coprocessor
> ---
>
> Key: YARN-5014
> URL: https://issues.apache.org/jira/browse/YARN-5014
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-2928-1st-milestone
>
> Presently the FlowScanner class presumes the existence of a NumericValueConverter 
> in its emitCells function. This causes an exception when we try to retrieve 
> non-numeric values from this table. 
> The exception is seen as:
> {code}
> java.lang.ClassCastException: 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.GenericConverter 
> cannot be cast to 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.NumericValueConverter
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowScanner.nextInternal(FlowScanner.java:246)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowScanner.nextRaw(FlowScanner.java:125)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowScanner.nextRaw(FlowScanner.java:119)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2117)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31443)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> {code}
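
One possible shape of the fix is to guard the cast instead of assuming it, e.g. (sketch
only, with stand-in classes rather than the real storage.common types):
{code}
interface ValueConverter { }
class GenericConverter implements ValueConverter { }
class NumericValueConverter implements ValueConverter { }

class FlowScannerGuardSketch {
  // Guard the cast that currently throws ClassCastException in emitCells().
  static boolean isNumeric(ValueConverter converter) {
    return converter instanceof NumericValueConverter;
  }

  public static void main(String[] args) {
    ValueConverter converter = new GenericConverter();
    if (isNumeric(converter)) {
      System.out.println("aggregate the cell with the numeric converter");
    } else {
      System.out.println("pass the non-metric cell value through unchanged");
    }
  }
}
{code}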






[jira] [Updated] (YARN-4097) Create POC timeline web UI with new YARN web UI framework

2016-04-29 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-4097:
--
Labels:   (was: yarn-2928-1st-milestone)

> Create POC timeline web UI with new YARN web UI framework
> -
>
> Key: YARN-4097
> URL: https://issues.apache.org/jira/browse/YARN-4097
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: Screen Shot 2016-02-24 at 15.57.38.png, Screen Shot 
> 2016-02-24 at 15.57.53.png, Screen Shot 2016-02-24 at 15.58.08.png, Screen 
> Shot 2016-02-24 at 15.58.26.png
>
>
> As planned, we need to try out the new YARN web UI framework and implement 
> timeline v2 web UI on top of it. This JIRA proposes to build the basic active 
> flow and application lists of the timeline data. We can add more content 
> after we get used to this framework.






[jira] [Updated] (YARN-4986) Add a check in the coprocessor for table to operated on

2016-04-29 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-4986:
--
Labels: yarn-2928-1st-milestone  (was: )

> Add a check in the coprocessor for table to operated on
> ---
>
> Key: YARN-4986
> URL: https://issues.apache.org/jira/browse/YARN-4986
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-4986-YARN-2928.01.patch
>
>
> As a precautionary measure, it will be a good idea to have the coprocessor 
> code check which table it needs to be working on and return/proceed 
> accordingly. This is more of a safety check so that we are sure we are not 
> inadvertently executing the coprocessor code on some other table.






[jira] [Updated] (YARN-4173) Ensure the final values for metrics/events are emitted/stored at APP completion time

2016-04-29 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-4173:
--
Labels:   (was: yarn-2928-1st-milestone)

> Ensure the final values for metrics/events are emitted/stored at APP 
> completion time
> 
>
> Key: YARN-4173
> URL: https://issues.apache.org/jira/browse/YARN-4173
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>
> When an application is finishing, the final values of metrics/events need to 
> be written to the backend as final values from all the AM/RM/NM processes for 
> that app.
> For the flow run table (YARN-3901), we need to know which values are the 
> final ones for metrics so that they can be tagged accordingly.






[jira] [Updated] (YARN-4239) Flow page for Web UI

2016-04-29 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-4239:
--
Labels:   (was: yarn-2928-1st-milestone)

> Flow page for Web UI
> 
>
> Key: YARN-4239
> URL: https://issues.apache.org/jira/browse/YARN-4239
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>







[jira] [Updated] (YARN-4844) Add getMemoryLong/getVirtualCoreLong to o.a.h.y.api.records.Resource

2016-04-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4844:
-
Attachment: YARN-4844.4.patch

Thanks [~bibinchundatt],

Attached ver.4 patch: fixed the failed tests, updated the interface audience/stability 
of the new methods, and fixed the SLS compilation failures.

> Add getMemoryLong/getVirtualCoreLong to o.a.h.y.api.records.Resource
> 
>
> Key: YARN-4844
> URL: https://issues.apache.org/jira/browse/YARN-4844
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-4844.1.patch, YARN-4844.2.patch, YARN-4844.3.patch, 
> YARN-4844.4.patch
>
>
> We use int32 for memory now. If a cluster has 10k nodes and each node has 210G 
> of memory, we will get a negative total cluster memory.
> Another case that overflows int32 even more easily: we add all pending 
> resources of running apps to the cluster's total pending resources. If a 
> problematic app requires too many resources (let's say 1M+ containers, each 
> of them needing 3G), int32 will not be enough.
> Even if we cap each app's pending request, we cannot handle the case where 
> there are many running apps, each of them with capped but still significant 
> amounts of pending resources.
> So we may possibly need to add getMemoryLong/getVirtualCoreLong to 
> o.a.h.y.api.records.Resource.
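
For reference, the 10k-node example overflows int32 by a small margin, which is easy
to check:
{code}
public class ResourceOverflowSketch {
  public static void main(String[] args) {
    int nodes = 10_000;
    int memoryPerNodeMb = 210 * 1024;          // 210G per node, in MB as YARN stores it
    int totalAsInt = nodes * memoryPerNodeMb;  // 2,150,400,000 > Integer.MAX_VALUE
    long totalAsLong = (long) nodes * memoryPerNodeMb;
    System.out.println("int total:  " + totalAsInt);   // prints a negative number
    System.out.println("long total: " + totalAsLong);  // prints 2150400000
  }
}
{code}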






[jira] [Commented] (YARN-3150) [Documentation] Documenting the timeline service v2

2016-04-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264722#comment-15264722
 ] 

Hadoop QA commented on YARN-3150:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 32s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
42s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 6s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 57s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 20s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
2s {color} | {color:green} YARN-2928 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped branch modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
41s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 10s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 45s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patch modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 30s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_92. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 36s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_92. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 23s 

[jira] [Updated] (YARN-4920) ATS/NM should support a link to dowload/get the logs in text format

2016-04-29 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-4920:

Attachment: YARN-4920.3.patch

> ATS/NM should support a link to dowload/get the logs in text format
> ---
>
> Key: YARN-4920
> URL: https://issues.apache.org/jira/browse/YARN-4920
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-4920.2.patch, YARN-4920.20160424.branch-2.patch, 
> YARN-4920.3.patch
>
>







[jira] [Commented] (YARN-4920) ATS/NM should support a link to dowload/get the logs in text format

2016-04-29 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264708#comment-15264708
 ] 

Xuan Gong commented on YARN-4920:
-

bq. This is not a refactor issue but how to use memory efficient issue - or we 
could hit OOM issue always

Fixed



> ATS/NM should support a link to dowload/get the logs in text format
> ---
>
> Key: YARN-4920
> URL: https://issues.apache.org/jira/browse/YARN-4920
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-4920.2.patch, YARN-4920.20160424.branch-2.patch, 
> YARN-4920.3.patch
>
>







[jira] [Commented] (YARN-4905) Improve "yarn logs" command-line to optionally show log metadata also

2016-04-29 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264579#comment-15264579
 ] 

Xuan Gong commented on YARN-4905:
-

The new patch does more refactoring work to fix the checkstyle issue.

The findbugs warning is not related:
{code}

Code Warning
Dm  org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run() 
invokes System.exit(...), which shuts down the entire virtual machine
Bug type DM_EXIT (click for details) 
In class org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor
In method org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run()
At EventDispatcher.java:[line 80]
{code} 

> Improve "yarn logs" command-line to optionally show log metadata also
> -
>
> Key: YARN-4905
> URL: https://issues.apache.org/jira/browse/YARN-4905
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-4905.1.patch, YARN-4905.2.patch, YARN-4905.3.patch, 
> YARN-4905.4.patch, YARN-4905.5.patch, YARN-4905.6.1.patch, YARN-4905.7.patch
>
>
> Improve the Yarn log commandline to have "ls" command which can list 
> containers for which we have logs, list files within each container, along 
> with file size



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4905) Improve "yarn logs" command-line to optionally show log metadata also

2016-04-29 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-4905:

Attachment: YARN-4905.7.patch

> Improve "yarn logs" command-line to optionally show log metadata also
> -
>
> Key: YARN-4905
> URL: https://issues.apache.org/jira/browse/YARN-4905
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-4905.1.patch, YARN-4905.2.patch, YARN-4905.3.patch, 
> YARN-4905.4.patch, YARN-4905.5.patch, YARN-4905.6.1.patch, YARN-4905.7.patch
>
>
> Improve the Yarn log commandline to have "ls" command which can list 
> containers for which we have logs, list files within each container, along 
> with file size



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3150) [Documentation] Documenting the timeline service v2

2016-04-29 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264531#comment-15264531
 ] 

Li Lu commented on YARN-3150:
-

Thanks [~sjlee0] and [~vrushalic]! The latest patch LGTM. I'll wait about a day to see if anyone else would like to comment. If nobody chimes in, I'll commit it over the weekend.

> [Documentation] Documenting the timeline service v2
> ---
>
> Key: YARN-3150
> URL: https://issues.apache.org/jira/browse/YARN-3150
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhijie Shen
>Assignee: Sangjin Lee
>  Labels: yarn-2928-1st-milestone
> Attachments: TimelineServiceV2.pdf, YARN-3150-YARN-2928.01.patch, 
> YARN-3150-YARN-2928.02.patch, YARN-3150-YARN-2928.03.patch, 
> YARN-3150-YARN-2928.04.patch
>
>
> Let's make sure we will have a document to describe what's new in TS v2, the 
> APIs, the client libs and so on. We should do better around documentation in 
> v2 than v1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5017) Relook at containerLaunchStartTime in the NM

2016-04-29 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-5017:
---

 Summary: Relook at containerLaunchStartTime in the NM
 Key: YARN-5017
 URL: https://issues.apache.org/jira/browse/YARN-5017
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Varun Vasudev


With support for container re-launch, we need to re-examine containerLaunchStartTime. We probably need new metrics - one for the overall container run time and one for the current attempt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5016) Add support for a minimum retry interval

2016-04-29 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-5016:
---

 Summary: Add support for a minimum retry interval
 Key: YARN-5016
 URL: https://issues.apache.org/jira/browse/YARN-5016
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Varun Vasudev


The NM container re-launch feature should support specifying a minimum restart interval, so that admins can control the minimum time between restarts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5015) Unify restart policies across AM and container restarts

2016-04-29 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-5015:
---

 Summary: Unify restart policies across AM and container restarts
 Key: YARN-5015
 URL: https://issues.apache.org/jira/browse/YARN-5015
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Varun Vasudev


We support AM restart and container restarts - however the two have slightly 
different capabilities. We should unify them. There's no reason for them to be 
different.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4986) Add a check in the coprocessor for table to operated on

2016-04-29 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264422#comment-15264422
 ] 

Vrushali C commented on YARN-4986:
--

Thanks Sangjin, I will address the checkstyle warnings and the log statement suggestion. Will upload a patch very shortly.

> Add a check in the coprocessor for table to operated on
> ---
>
> Key: YARN-4986
> URL: https://issues.apache.org/jira/browse/YARN-4986
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: YARN-4986-YARN-2928.01.patch
>
>
> As a precautionary measure, it will be a good idea to have the coprocessor 
> code check which table it needs to be working on and return/proceed 
> accordingly. This is more of a safety check so that we are sure we are not 
> inadvertently executing the coprocessor code on some other table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5002) getApplicationReport call may raise NPE

2016-04-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264417#comment-15264417
 ] 

Sunil G commented on YARN-5002:
---

The patch looks fine to me. For the short term, this will be fine. I think we need to revive the config data store, as we are running into these issues more often recently.

> getApplicationReport call may raise NPE
> ---
>
> Key: YARN-5002
> URL: https://issues.apache.org/jira/browse/YARN-5002
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Jian He
>Priority: Critical
> Attachments: YARN-5002.1.patch, YARN-5002.2.patch, YARN-5002.3.patch
>
>
> getApplicationReport call may raise NPE
> {code}
> Exception in thread "main" java.lang.NullPointerException: 
> java.lang.NullPointerException
>  
> org.apache.hadoop.yarn.server.resourcemanager.security.QueueACLsManager.checkAccess(QueueACLsManager.java:57)
>  
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.checkAccess(ClientRMService.java:279)
>  
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplications(ClientRMService.java:760)
>  
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplications(ClientRMService.java:682)
>  
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplications(ApplicationClientProtocolPBServiceImpl.java:234)
>  
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:425)
>  
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>  org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>  org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2268)
>  org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2264)
>  java.security.AccessController.doPrivileged(Native Method)
>  javax.security.auth.Subject.doAs(Subject.java:422)
>  
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1708)
>  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2262)
>  sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
>  org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:107)
>  
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getApplications(ApplicationClientProtocolPBClientImpl.java:254)
>  sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  java.lang.reflect.Method.invoke(Method.java:498)
>  
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
>  
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>  com.sun.proxy.$Proxy18.getApplications(Unknown Source)
>  
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getApplications(YarnClientImpl.java:479)
>  
> org.apache.hadoop.mapred.ResourceMgrDelegate.getAllJobs(ResourceMgrDelegate.java:135)
>  org.apache.hadoop.mapred.YARNRunner.getAllJobs(YARNRunner.java:167)
>  org.apache.hadoop.mapreduce.Cluster.getAllJobStatuses(Cluster.java:294)
>  org.apache.hadoop.mapreduce.tools.CLI.listJobs(CLI.java:553)
>  org.apache.hadoop.mapreduce.tools.CLI.run(CLI.java:338)
>  org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>  org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>  org.apache.hadoop.mapred.JobClient.main(JobClient.java:1274)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5002) getApplicationReport call may raise NPE

2016-04-29 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264408#comment-15264408
 ] 

Jian He commented on YARN-5002:
---

[~templedf], would you mind checking my last comment, or do you have more concerns?

> getApplicationReport call may raise NPE
> ---
>
> Key: YARN-5002
> URL: https://issues.apache.org/jira/browse/YARN-5002
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Jian He
>Priority: Critical
> Attachments: YARN-5002.1.patch, YARN-5002.2.patch, YARN-5002.3.patch
>
>
> getApplicationReport call may raise NPE
> {code}
> Exception in thread "main" java.lang.NullPointerException: 
> java.lang.NullPointerException
>  
> org.apache.hadoop.yarn.server.resourcemanager.security.QueueACLsManager.checkAccess(QueueACLsManager.java:57)
>  
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.checkAccess(ClientRMService.java:279)
>  
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplications(ClientRMService.java:760)
>  
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplications(ClientRMService.java:682)
>  
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplications(ApplicationClientProtocolPBServiceImpl.java:234)
>  
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:425)
>  
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>  org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>  org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2268)
>  org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2264)
>  java.security.AccessController.doPrivileged(Native Method)
>  javax.security.auth.Subject.doAs(Subject.java:422)
>  
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1708)
>  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2262)
>  sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
>  org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:107)
>  
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getApplications(ApplicationClientProtocolPBClientImpl.java:254)
>  sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  java.lang.reflect.Method.invoke(Method.java:498)
>  
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
>  
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>  com.sun.proxy.$Proxy18.getApplications(Unknown Source)
>  
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getApplications(YarnClientImpl.java:479)
>  
> org.apache.hadoop.mapred.ResourceMgrDelegate.getAllJobs(ResourceMgrDelegate.java:135)
>  org.apache.hadoop.mapred.YARNRunner.getAllJobs(YARNRunner.java:167)
>  org.apache.hadoop.mapreduce.Cluster.getAllJobStatuses(Cluster.java:294)
>  org.apache.hadoop.mapreduce.tools.CLI.listJobs(CLI.java:553)
>  org.apache.hadoop.mapreduce.tools.CLI.run(CLI.java:338)
>  org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>  org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>  org.apache.hadoop.mapred.JobClient.main(JobClient.java:1274)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4577) Enable aux services to have their own custom classpath/jar file

2016-04-29 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264399#comment-15264399
 ] 

Sangjin Lee commented on YARN-4577:
---

If you push more logic into {{AuxiliaryServiceWithCustomClassLoader}} from {{AuxServices}}, it may give you more opportunities to do targeted unit tests.

Just FYI, in the case of {{TestRunJar}}, I created a few test classes ({{ClassLoaderCheckMain}}, etc.), created a jar on the fly ({{makeClassLoaderTestJar()}}), changed the system classes to include/exclude some of these test classes using the override, and tested the scenarios. Your mileage may vary.
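
For illustration only, here is a minimal sketch of that approach with hypothetical class and file names (not code from the YARN-4577 patch): build a small jar on the fly and load a class from it through an isolated classloader.

{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.jar.JarEntry;
import java.util.jar.JarOutputStream;

// Hypothetical sketch, not part of the YARN-4577 patch.
public class ClassLoaderTestSketch {

  // Package an already-compiled class into a temporary jar.
  static File makeTestJar(Class<?> clazz) throws Exception {
    File jar = File.createTempFile("aux-service-test", ".jar");
    String entry = clazz.getName().replace('.', '/') + ".class";
    try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar));
         InputStream in = clazz.getClassLoader().getResourceAsStream(entry)) {
      out.putNextEntry(new JarEntry(entry));
      byte[] buf = new byte[4096];
      for (int n; (n = in.read(buf)) != -1; ) {
        out.write(buf, 0, n);
      }
      out.closeEntry();
    }
    return jar;
  }

  public static void main(String[] args) throws Exception {
    File jar = makeTestJar(ClassLoaderTestSketch.class);
    // Null parent forces the class to be resolved from the jar rather than
    // from the application classloader, mimicking an isolated aux service.
    try (URLClassLoader loader =
             new URLClassLoader(new URL[] { jar.toURI().toURL() }, null)) {
      Class<?> loaded = loader.loadClass(ClassLoaderTestSketch.class.getName());
      System.out.println("Loaded by: " + loaded.getClassLoader());
    }
  }
}
{code}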

> Enable aux services to have their own custom classpath/jar file
> ---
>
> Key: YARN-4577
> URL: https://issues.apache.org/jira/browse/YARN-4577
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-4577.1.patch, YARN-4577.2.patch, 
> YARN-4577.20160119.1.patch, YARN-4577.20160204.patch, 
> YARN-4577.20160428.patch, YARN-4577.3.patch, YARN-4577.3.rebase.patch, 
> YARN-4577.4.patch, YARN-4577.5.patch, YARN-4577.poc.patch
>
>
> Right now, users have to add their jars to the NM classpath directly, thus 
> put them on the system classloader. But if multiple versions of the plugin 
> are present on the classpath, there is no control over which version actually 
> gets loaded. Or if there are any conflicts between the dependencies 
> introduced by the auxiliary service and the NM itself, they can break the NM, 
> the auxiliary service, or both.
> The solution could be: to instantiate aux services using a classloader that 
> is different from the system classloader.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3150) [Documentation] Documenting the timeline service v2

2016-04-29 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-3150:
--
Attachment: YARN-3150-YARN-2928.04.patch

Posted patch v.4.

This completes the last remaining details for the HBase setup (thanks 
[~vrushalic]). I also posted a pdf version of the page for easier review.

> [Documentation] Documenting the timeline service v2
> ---
>
> Key: YARN-3150
> URL: https://issues.apache.org/jira/browse/YARN-3150
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhijie Shen
>Assignee: Sangjin Lee
>  Labels: yarn-2928-1st-milestone
> Attachments: TimelineServiceV2.pdf, YARN-3150-YARN-2928.01.patch, 
> YARN-3150-YARN-2928.02.patch, YARN-3150-YARN-2928.03.patch, 
> YARN-3150-YARN-2928.04.patch
>
>
> Let's make sure we will have a document to describe what's new in TS v2, the 
> APIs, the client libs and so on. We should do better around documentation in 
> v2 than v1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3150) [Documentation] Documenting the timeline service v2

2016-04-29 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-3150:
--
Attachment: TimelineServiceV2.pdf

> [Documentation] Documenting the timeline service v2
> ---
>
> Key: YARN-3150
> URL: https://issues.apache.org/jira/browse/YARN-3150
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhijie Shen
>Assignee: Sangjin Lee
>  Labels: yarn-2928-1st-milestone
> Attachments: TimelineServiceV2.pdf, YARN-3150-YARN-2928.01.patch, 
> YARN-3150-YARN-2928.02.patch, YARN-3150-YARN-2928.03.patch
>
>
> Let's make sure we will have a document to describe what's new in TS v2, the 
> APIs, the client libs and so on. We should do better around documentation in 
> v2 than v1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3150) [Documentation] Documenting the timeline service v2

2016-04-29 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-3150:
--
Attachment: (was: TimelineServiceV2.html)

> [Documentation] Documenting the timeline service v2
> ---
>
> Key: YARN-3150
> URL: https://issues.apache.org/jira/browse/YARN-3150
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhijie Shen
>Assignee: Sangjin Lee
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-3150-YARN-2928.01.patch, 
> YARN-3150-YARN-2928.02.patch, YARN-3150-YARN-2928.03.patch
>
>
> Let's make sure we will have a document to describe what's new in TS v2, the 
> APIs, the client libs and so on. We should do better around documentation in 
> v2 than v1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4986) Add a check in the coprocessor for table to operated on

2016-04-29 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264291#comment-15264291
 ] 

Sangjin Lee commented on YARN-4986:
---

Thanks [~vrushalic] for finding and fixing these issues! It's an important fix.

The patch LGTM for the most part. Can we address the checkstyle issues, since they are quite straightforward?

Also, one other minor issue: in FlowRunCoprocessor.java:79,81, let's wrap the {{debug()}} calls with {{if (LOG.isDebugEnabled())}}.
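
For reference, the guarded-logging pattern being asked for looks like this (a generic sketch with a hypothetical class and message, not the exact FlowRunCoprocessor code):

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Hypothetical sketch of the guarded debug pattern.
public class GuardedDebugSketch {
  private static final Log LOG = LogFactory.getLog(GuardedDebugSketch.class);

  void logTable(String tableName) {
    // The guard avoids building the message string when debug logging is off.
    if (LOG.isDebugEnabled()) {
      LOG.debug("Coprocessor operating on table " + tableName);
    }
  }
}
{code}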

> Add a check in the coprocessor for table to operated on
> ---
>
> Key: YARN-4986
> URL: https://issues.apache.org/jira/browse/YARN-4986
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: YARN-4986-YARN-2928.01.patch
>
>
> As a precautionary measure, it will be a good idea to have the coprocessor 
> code check which table it needs to be working on and return/proceed 
> accordingly. This is more of a safety check so that we are sure we are not 
> inadvertently executing the coprocessor code on some other table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4390) Do surgical preemption based on reserved container in CapacityScheduler

2016-04-29 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264250#comment-15264250
 ] 

Eric Payne commented on YARN-4390:
--

[~jlowe] pointed out to me offline that it is probably not reserving because of 
YARN-4280. The app can't reserve the 1GB container because there is only 0.5GB 
left, so {{ReservedContainerCandidatesSelector#selectCandidates}} isn't able to 
select any containers. Then, {{FifoCandidatesSelector#selectCandidates}} isn't 
very smart and sees that if it marks any random 0.5GB container as preemptable, 
that plus the free space equals the 1GB that is being requested.

> Do surgical preemption based on reserved container in CapacityScheduler
> ---
>
> Key: YARN-4390
> URL: https://issues.apache.org/jira/browse/YARN-4390
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Affects Versions: 3.0.0, 2.8.0, 2.7.3
>Reporter: Eric Payne
>Assignee: Wangda Tan
> Attachments: QueueNotHittingMax.jpg, YARN-4390-design.1.pdf, 
> YARN-4390-test-results.pdf, YARN-4390.1.patch, YARN-4390.2.patch, 
> YARN-4390.3.branch-2.patch, YARN-4390.3.patch, YARN-4390.4.patch, 
> YARN-4390.5.patch, YARN-4390.6.patch, YARN-4390.7.patch, YARN-4390.8.patch
>
>
> There are multiple reasons why preemption could unnecessarily preempt 
> containers. One is that an app could be requesting a large container (say 
> 8-GB), and the preemption monitor could conceivably preempt multiple 
> containers (say 8, 1-GB containers) in order to fill the large container 
> request. These smaller containers would then be rejected by the requesting AM 
> and potentially given right back to the preempted app.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5000) [YARN-3368] App attempt page is not loading when timeline server is not started

2016-04-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264176#comment-15264176
 ] 

Hadoop QA commented on YARN-5000:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 34s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 33 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 2m 8s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:f38692c |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801465/YARN-5000-YARN-3368.2.patch
 |
| JIRA Issue | YARN-5000 |
| Optional Tests |  asflicense  |
| uname | Linux 3f2bfbd025ef 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3368 / 2d617a5 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/11283/artifact/patchprocess/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/11283/artifact/patchprocess/whitespace-tabs.txt
 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11283/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] App attempt page is not loading when timeline server is not 
> started
> ---
>
> Key: YARN-5000
> URL: https://issues.apache.org/jira/browse/YARN-5000
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-5000.patch, 
> AppFinishedAndNoTimelineServer.png, AppRunningAndNoTimelineServer.png, 
> YARN-5000-YARN-3368.1.patch, YARN-5000-YARN-3368.2.patch
>
>
> If timeline server is not started, app attempt page is not getting loaded.
> In new web-ui, yarnContainer route is tightly coupled with both RM and 
> Timeline server. And if one of server is not up, page will not load. If 
> timeline server is not up, container information from RM is to be displayed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4390) Do surgical preemption based on reserved container in CapacityScheduler

2016-04-29 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264172#comment-15264172
 ] 

Eric Payne commented on YARN-4390:
--

[~leftnoteasy]
bq. What's your idea about this case? Should we allow preemption or not for 
this case?
Ideally, preemption should not happen, although it is a corner case, so it 
might be okay.

bq. One question to add: is the 1GB container requested by app1 reserved on any 
node?
No, I don't think a reservation is being made.

> Do surgical preemption based on reserved container in CapacityScheduler
> ---
>
> Key: YARN-4390
> URL: https://issues.apache.org/jira/browse/YARN-4390
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Affects Versions: 3.0.0, 2.8.0, 2.7.3
>Reporter: Eric Payne
>Assignee: Wangda Tan
> Attachments: QueueNotHittingMax.jpg, YARN-4390-design.1.pdf, 
> YARN-4390-test-results.pdf, YARN-4390.1.patch, YARN-4390.2.patch, 
> YARN-4390.3.branch-2.patch, YARN-4390.3.patch, YARN-4390.4.patch, 
> YARN-4390.5.patch, YARN-4390.6.patch, YARN-4390.7.patch, YARN-4390.8.patch
>
>
> There are multiple reasons why preemption could unnecessarily preempt 
> containers. One is that an app could be requesting a large container (say 
> 8-GB), and the preemption monitor could conceivably preempt multiple 
> containers (say 8, 1-GB containers) in order to fill the large container 
> request. These smaller containers would then be rejected by the requesting AM 
> and potentially given right back to the preempted app.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5009) NMLeveldbStateStoreService database can grow substantially leading to longer recovery times

2016-04-29 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264169#comment-15264169
 ] 

Jason Lowe commented on YARN-5009:
--

Thanks for the review and commit, Jian!  Note that the commit to branch-2.7 had 
a bad conflict resolution in yarn-default.xml, so I reverted and recommitted to 
correct it.

> NMLeveldbStateStoreService database can grow substantially leading to longer 
> recovery times
> ---
>
> Key: YARN-5009
> URL: https://issues.apache.org/jira/browse/YARN-5009
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Fix For: 2.7.4
>
> Attachments: YARN-5009.001.patch, YARN-5009.002.patch
>
>
> Similar to the RM case in YARN-5008, I have seen state stores for 
> nodemanagers with high container churn become significantly larger than they 
> should be due to lack of sufficient database compaction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4913) Yarn logs should take a -out option to write to a directory

2016-04-29 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264170#comment-15264170
 ] 

Varun Vasudev commented on YARN-4913:
-

I suspect this patch doesn't address the problems that [~venkateshrin] and 
[~djp] want addressed. It merely redirects the output to the specified file. The 
use case that needs to be solved is splitting the log up into the individual 
container log files.

> Yarn logs should take a -out option to write to a directory
> ---
>
> Key: YARN-4913
> URL: https://issues.apache.org/jira/browse/YARN-4913
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-4913.1.patch, YARN-4913.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5000) [YARN-3368] App attempt page is not loading when timeline server is not started

2016-04-29 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5000:
--
Attachment: YARN-5000-YARN-3368.2.patch

Updating patch after some more cleanup.

> [YARN-3368] App attempt page is not loading when timeline server is not 
> started
> ---
>
> Key: YARN-5000
> URL: https://issues.apache.org/jira/browse/YARN-5000
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-5000.patch, 
> AppFinishedAndNoTimelineServer.png, AppRunningAndNoTimelineServer.png, 
> YARN-5000-YARN-3368.1.patch, YARN-5000-YARN-3368.2.patch
>
>
> If timeline server is not started, app attempt page is not getting loaded.
> In new web-ui, yarnContainer route is tightly coupled with both RM and 
> Timeline server. And if one of server is not up, page will not load. If 
> timeline server is not up, container information from RM is to be displayed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4920) ATS/NM should support a link to dowload/get the logs in text format

2016-04-29 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264133#comment-15264133
 ] 

Junping Du commented on YARN-4920:
--

bq. Interesting. This exists in several different places. Instead of handling them separately, let us fix it together after the refactoring patch.
This is not a refactoring issue but a memory-efficiency issue - otherwise we could always hit OOM. Can you point me to where else we put the cache allocation inside a while loop? We should definitely fix those in a separate patch, but here we should get it right first.
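
To make the concern concrete, here is a generic sketch (hypothetical code, not the YARN-4920 patch itself) of allocating the buffer inside the read loop versus allocating it once outside:

{code}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Hypothetical sketch of the allocation-in-loop concern.
public class BufferAllocationSketch {

  // Problematic shape: a new buffer on every iteration creates heavy garbage
  // and, with large buffers and many concurrent requests, can lead to OOM.
  static void copyAllocatingPerIteration(InputStream in, OutputStream out)
      throws IOException {
    while (true) {
      byte[] buf = new byte[64 * 1024];   // allocated on every iteration
      int n = in.read(buf);
      if (n == -1) {
        break;
      }
      out.write(buf, 0, n);
    }
  }

  // Preferred shape: allocate once and reuse the buffer across iterations.
  static void copyReusingBuffer(InputStream in, OutputStream out)
      throws IOException {
    byte[] buf = new byte[64 * 1024];     // allocated once
    int n;
    while ((n = in.read(buf)) != -1) {
      out.write(buf, 0, n);
    }
  }
}
{code}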

> ATS/NM should support a link to dowload/get the logs in text format
> ---
>
> Key: YARN-4920
> URL: https://issues.apache.org/jira/browse/YARN-4920
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-4920.2.patch, YARN-4920.20160424.branch-2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4280) CapacityScheduler reservations may not prevent indefinite postponement on a busy cluster

2016-04-29 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264079#comment-15264079
 ] 

Jason Lowe commented on YARN-4280:
--

I'm not thrilled with the idea of preemption to solve this issue.  Nothing 
should have to be shot (i.e.: work lost) to solve this problem.  The real issue 
is that we are _not_ placing a reservation, which allows further containers to 
be allocated.

To really solve it without resorting to shooting containers we need to allow 
reservations to exceed the cluster or queue capacity.  As a user I should be 
able to reserve up to my user limit, which already happens today as long as the 
queue/cluster limit isn't hit.  If we just allowed a reservation of at least 
one container beyond the cluster/queue limit (as long as it's below the 
user-limit) then the application would make progress and it should solve this 
particular issue.  Yes, this would mean that used + reserved could be > total 
capacity, but without it we are allowing apps to starve indefinitely.
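
A minimal sketch of the rule being proposed, with hypothetical names and resources reduced to plain memory values - not the actual CapacityScheduler code:

{code}
// Hypothetical sketch of the proposed reservation rule.
public class ReservationRuleSketch {

  // Allow the reservation whenever the user stays within the user limit,
  // even if queue used + reserved would temporarily exceed queue/cluster capacity.
  static boolean shouldAllowReservation(long userUsedMB, long userReservedMB,
      long requestedMB, long userLimitMB) {
    return userUsedMB + userReservedMB + requestedMB <= userLimitMB;
  }

  public static void main(String[] args) {
    // e.g. an app asking for a 2 GB container while the cluster is full;
    // the user limit still has headroom, so the reservation is allowed.
    System.out.println(shouldAllowReservation(4096, 0, 2048, 8192)); // true
  }
}
{code}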

> CapacityScheduler reservations may not prevent indefinite postponement on a 
> busy cluster
> 
>
> Key: YARN-4280
> URL: https://issues.apache.org/jira/browse/YARN-4280
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.6.1, 2.8.0, 2.7.1
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
>
> Consider the following scenario:
> There are 2 queues A(25% of the total capacity) and B(75%), both can run at 
> total cluster capacity. There are 2 applications, appX that runs on Queue A, 
> always asking for 1G containers(non-AM) and appY runs on Queue B asking for 2 
> GB containers.
> The user limit is high enough for the application to reach 100% of the 
> cluster resource. 
> appX is running at total cluster capacity, full with 1G containers releasing 
> only one container at a time. appY comes in with a request of 2GB container 
> but only 1 GB is free. Ideally, since appY is in the underserved queue, it 
> has higher priority and should reserve for its 2 GB request. Since this 
> request puts the alloc+reserve above total capacity of the cluster, 
> reservation is not made. appX comes in with a 1GB request and since 1GB is 
> still available, the request is allocated. 
> This can continue indefinitely causing priority inversion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3998) Add support in the NodeManager to re-launch containers

2016-04-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263912#comment-15263912
 ] 

Hudson commented on YARN-3998:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9691 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9691/])
YARN-3998. Add support in the NodeManager to re-launch containers. (vvasudev: 
rev 0f25a1bb52bc56661fd020a6ba82df99f8c6ef1f)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPBImplRecords.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/recovery/NMStateStoreService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/Container.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/recovery/NMMemoryStateStoreService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerRetryContext.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerState.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ProtoUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/recovery/NMLeveldbStateStoreService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/TestContainer.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/recovery/TestNMLeveldbStateStoreService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerLaunchContext.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/MockContainer.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LocalDirsHandlerService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerRetryPolicy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/recovery/NMNullStateStoreService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerRetryContextPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainersLauncher.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerRelaunch.java
* 

[jira] [Commented] (YARN-3998) Add support in the NodeManager to re-launch containers

2016-04-29 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263894#comment-15263894
 ] 

Varun Vasudev commented on YARN-3998:
-

Committed to trunk and branch-2. Thanks for all your work [~hex108]!

> Add support in the NodeManager to re-launch containers
> --
>
> Key: YARN-3998
> URL: https://issues.apache.org/jira/browse/YARN-3998
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jun Gong
>Assignee: Jun Gong
> Fix For: 2.9.0
>
> Attachments: YARN-3998.01.patch, YARN-3998.02.patch, 
> YARN-3998.03.patch, YARN-3998.04.patch, YARN-3998.05.patch, 
> YARN-3998.06.patch, YARN-3998.07.patch, YARN-3998.08.patch, YARN-3998.09.patch
>
>
> I'd like to add a field(retry-times) in ContainerLaunchContext. When AM 
> launches containers, it could specify the value. Then NM will re-launch the 
> container 'retry-times' times when it fails to run(e.g.exit code is not 0). 
> It will save a lot of time. It avoids container localization. RM does not 
> need to re-schedule the container. And local files in container's working 
> directory will be left for re-use.(If container have downloaded some big 
> files, it does not need to re-download them when running again.) 
> We find it is useful in systems like Storm.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4844) Add getMemoryLong/getVirtualCoreLong to o.a.h.y.api.records.Resource

2016-04-29 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263893#comment-15263893
 ] 

Bibin A Chundatt commented on YARN-4844:


[~leftnoteasy]
IIUC an SLS update is also required:
{noformat}
 
 BUILD FAILURE
 
 Total time: 3:11.802s
 Finished at: Fri Apr 29 18:44:53 GMT+08:00 2016
 Final Memory: 197M/616M
 
] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hadoop-sls: Compilation failure: Compilation failure:
] 
/D:/Hadoop/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SLSCapacityScheduler.java:[516,56]
 incompatible types: long cannot be converted to java.lang.Integer
] 
/D:/Hadoop/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SLSCapacityScheduler.java:[528,66]
 incompatible types: long cannot be converted to java.lang.Integer
] 
/D:/Hadoop/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SLSCapacityScheduler.java:[540,56]
 incompatible types: long cannot be converted to java.lang.Integer
] 
/D:/Hadoop/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SLSCapacityScheduler.java:[552,66]
 incompatible types: long cannot be converted to java.lang.Integer
] 
/D:/Hadoop/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java:[539,66]
 incompatible types: long cannot be converted to java.lang.Integer
] 
/D:/Hadoop/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java:[551,76]
 incompatible types: long cannot be converted to java.lang.Integer
] 
/D:/Hadoop/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java:[563,66]
 incompatible types: long cannot be converted to java.lang.Integer
] 
/D:/Hadoop/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java:[575,76]
 incompatible types: long cannot be converted to java.lang.Integer
{noformat}
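
The errors come down to an implicit long-to-Integer conversion, which Java does not allow once the resource getters return long. A toy illustration (not the SLS code itself):

{code}
// Toy illustration of the long -> Integer incompatibility.
public class LongToIntegerSketch {
  public static void main(String[] args) {
    long memory = 2048L;            // what the getter would now return

    // Integer boxed = memory;      // does not compile: long cannot be converted to Integer

    // Call sites need an explicit narrowing (with attention to possible overflow):
    Integer boxed = (int) memory;
    System.out.println(boxed);
  }
}
{code}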

> Add getMemoryLong/getVirtualCoreLong to o.a.h.y.api.records.Resource
> 
>
> Key: YARN-4844
> URL: https://issues.apache.org/jira/browse/YARN-4844
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-4844.1.patch, YARN-4844.2.patch, YARN-4844.3.patch
>
>
> We use int32 for memory now, if a cluster has 10k nodes, each node has 210G 
> memory, we will get a negative total cluster memory.
> And another case that easier overflows int32 is: we added all pending 
> resources of running apps to cluster's total pending resources. If a 
> problematic app requires too much resources (let's say 1M+ containers, each 
> of them has 3G containers), int32 will be not enough.
> Even if we can cap each app's pending request, we cannot handle the case that 
> there're many running apps, each of them has capped but still significant 
> numbers of pending resources.
> So we may possibly need to add getMemoryLong/getVirtualCoreLong to 
> o.a.h.y.api.records.Resource.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3998) Add support in the NodeManager to re-launch containers

2016-04-29 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-3998:

Summary: Add support in the NodeManager to re-launch containers  (was: Add 
retry-times to let NM re-launch container when it fails to run)

> Add support in the NodeManager to re-launch containers
> --
>
> Key: YARN-3998
> URL: https://issues.apache.org/jira/browse/YARN-3998
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jun Gong
>Assignee: Jun Gong
> Attachments: YARN-3998.01.patch, YARN-3998.02.patch, 
> YARN-3998.03.patch, YARN-3998.04.patch, YARN-3998.05.patch, 
> YARN-3998.06.patch, YARN-3998.07.patch, YARN-3998.08.patch, YARN-3998.09.patch
>
>
> I'd like to add a field(retry-times) in ContainerLaunchContext. When AM 
> launches containers, it could specify the value. Then NM will re-launch the 
> container 'retry-times' times when it fails to run(e.g.exit code is not 0). 
> It will save a lot of time. It avoids container localization. RM does not 
> need to re-schedule the container. And local files in container's working 
> directory will be left for re-use.(If container have downloaded some big 
> files, it does not need to re-download them when running again.) 
> We find it is useful in systems like Storm.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4947) Test timeout is happening for TestRMWebServicesNodes

2016-04-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263817#comment-15263817
 ] 

Hadoop QA commented on YARN-4947:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 29m 12s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_92. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 29m 53s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 10s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_92 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestContainerResourceUsage |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801418/0006-YARN-4947-rebase.patch
 |
| JIRA Issue | YARN-4947 |
| Optional Tests |  asflicense  

[jira] [Commented] (YARN-4986) Add a check in the coprocessor for table to operated on

2016-04-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263746#comment-15263746
 ] 

Hadoop QA commented on YARN-4986:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 48s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
25s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
35s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice:
 patch generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 46s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 48s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 53s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801411/YARN-4986-YARN-2928.01.patch
 |
| JIRA Issue | YARN-4986 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 

[jira] [Updated] (YARN-4947) Test timeout is happening for TestRMWebServicesNodes

2016-04-29 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4947:
---
Attachment: 0006-YARN-4947-rebase.patch

> Test timeout is happening for TestRMWebServicesNodes
> 
>
> Key: YARN-4947
> URL: https://issues.apache.org/jira/browse/YARN-4947
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: test
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-4947.patch, 0002-YARN-4947.patch, 
> 0003-YARN-4947.patch, 0004-YARN-4947.patch, 0005-YARN-4947.patch, 
> 0006-YARN-4947-rebase.patch, 0006-YARN-4947.patch
>
>
> Test case timeouts for TestRMWebServicesNodes are happening after YARN-4893 
> [timeout|https://builds.apache.org/job/PreCommit-YARN-Build/11044/testReport/]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4894) Elide long app names in web UI

2016-04-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-4894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-4894:

Attachment: Screen Shot 2016-04-29 at 09.07.50.png

Which version are you using? On trunk and on 2.7.2, long names don't push the 
other columns to the right; they get wrapped (as in the attached screenshot).

> Elide long app names in web UI
> --
>
> Key: YARN-4894
> URL: https://issues.apache.org/jira/browse/YARN-4894
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Ryan Williams
>Priority: Minor
> Attachments: Screen Shot 2016-04-29 at 09.07.50.png
>
>
> When someone submits an app with a long name, the other columns in the UI get 
> pushed far to the right and require scrolling to see, which makes for an 
> awkward experience.
> !http://f.cl.ly/items/1L2G2U3B0s1U3m060Z42/Screen%20Shot%202016-03-29%20at%201.15.22%20PM.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5014) Ensure non-metric values are returned as is for flow run table from the coprocessor

2016-04-29 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263696#comment-15263696
 ] 

Vrushali C commented on YARN-5014:
--

Now I can insert and scan for non-numeric values in the flow run table:
{code}
hbase(main):030:0> scan 'timelineservice.flowrun'
ROW COLUMN+CELL
 row2   column=f3:another_key, 
timestamp=1461913999681, value=value_1001
 row4   column=f3:key66, timestamp=1461912421472, 
value=value00
2 row(s) in 0.0200 seconds
{code}

The code changes in FlowScanner.java are available as part of the patch I just 
uploaded to YARN-4986.
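
For completeness, a hedged sketch of the same insert-and-scan done through the 
HBase Java client rather than the shell (illustration only, not part of the 
patch; it assumes the HBase 1.x client API and reuses the table and 
column-family names from the shell output above):

{code}
// Illustration only: mirrors the shell session above via the HBase 1.x client.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class FlowRunNonNumericDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("timelineservice.flowrun"))) {
      // Insert a plain string value into a non-metric column.
      Put put = new Put(Bytes.toBytes("row2"));
      put.addColumn(Bytes.toBytes("f3"), Bytes.toBytes("another_key"),
          Bytes.toBytes("value_1001"));
      table.put(put);

      // Scan it back; with the FlowScanner fix the value comes back as-is.
      try (ResultScanner scanner = table.getScanner(new Scan())) {
        for (Result r : scanner) {
          byte[] v = r.getValue(Bytes.toBytes("f3"), Bytes.toBytes("another_key"));
          System.out.println(Bytes.toString(r.getRow()) + " -> "
              + (v == null ? null : Bytes.toString(v)));
        }
      }
    }
  }
}
{code}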

> Ensure non-metric values are returned as is for flow run table from the 
> coprocessor
> ---
>
> Key: YARN-5014
> URL: https://issues.apache.org/jira/browse/YARN-5014
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>
> Presently the FlowScanner class presumes the existence of a 
> NumericValueConverter in its emitCells function. This causes an exception 
> when we try to retrieve non-numeric values from this table. 
> Exception is seen as:
> {code}
> java.lang.ClassCastException: 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.GenericConverter 
> cannot be cast to 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.NumericValueConverter
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowScanner.nextInternal(FlowScanner.java:246)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowScanner.nextRaw(FlowScanner.java:125)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowScanner.nextRaw(FlowScanner.java:119)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2117)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31443)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5014) Ensure non-metric values are returned as is for flow run table from the coprocessor

2016-04-29 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263695#comment-15263695
 ] 

Vrushali C commented on YARN-5014:
--

Patch is available at https://issues.apache.org/jira/browse/YARN-4986 

> Ensure non-metric values are returned as is for flow run table from the 
> coprocessor
> ---
>
> Key: YARN-5014
> URL: https://issues.apache.org/jira/browse/YARN-5014
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>
> Presently the FlowScanner class presumes the existence of a 
> NumericValueConverter in its emitCells function. This causes an exception 
> when we try to retrieve non-numeric values from this table. 
> Exception is seen as:
> {code}
> java.lang.ClassCastException: 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.GenericConverter 
> cannot be cast to 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.NumericValueConverter
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowScanner.nextInternal(FlowScanner.java:246)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowScanner.nextRaw(FlowScanner.java:125)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowScanner.nextRaw(FlowScanner.java:119)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2117)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31443)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4986) Add a check in the coprocessor for the table to be operated on

2016-04-29 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-4986:
-
Attachment: YARN-4986-YARN-2928.01.patch


Uploading patch v1. This code simply "disables" the coprocessor code for any 
table other than the flow run table.

Also, this patch includes a fix for YARN-5014; the change in the FlowScanner 
class is the fix for that issue.

Taken together, both changes have been tested for coprocessor execution on an 
actual cluster.
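
As a rough illustration of that table check (a minimal sketch under 
assumptions, not the actual YARN-4986 patch; the class name and the "flowrun" 
naming check below are made up), a RegionObserver could record at start-up 
whether it is attached to the flow run table and leave every hook as a 
pass-through otherwise:

{code}
// Minimal sketch only; the hook handling and the naming check are assumptions.
import java.io.IOException;
import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

public class GuardedFlowRunCoprocessor extends BaseRegionObserver {

  private boolean isFlowRunRegion = false;

  @Override
  public void start(CoprocessorEnvironment e) throws IOException {
    if (e instanceof RegionCoprocessorEnvironment) {
      TableName table = ((RegionCoprocessorEnvironment) e)
          .getRegion().getRegionInfo().getTable();
      // Assumed naming convention: the flow run table name ends in "flowrun".
      isFlowRunRegion = table.getQualifierAsString().endsWith("flowrun");
    }
  }

  // Each overridden hook (prePut, preGetOp, preScannerOpen, ...) would first
  // check isFlowRunRegion and simply return, leaving default HBase behavior
  // in place, when the region belongs to any other table.
}
{code}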


> Add a check in the coprocessor for the table to be operated on
> ---
>
> Key: YARN-4986
> URL: https://issues.apache.org/jira/browse/YARN-4986
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: YARN-4986-YARN-2928.01.patch
>
>
> As a precautionary measure, it will be a good idea to have the coprocessor 
> code check which table it needs to be working on and return/proceed 
> accordingly. This is more of a safety check so that we are sure we are not 
> inadvertently executing the coprocessor code on some other table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-358) bundle container classpath in temporary jar on all platforms, not just Windows

2016-04-29 Thread Shuai Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263690#comment-15263690
 ] 

Shuai Zhang commented on YARN-358:
--

I found it difficult to combine this with DockerContainerExecutor. The 
environment inside a Docker container differs from the one outside it, so the 
classpath depends on environment variables such as HADOOP_PREFIX, 
YARN_CONF_DIR, etc. Environment variables that are only defined inside the 
Docker container cannot be carried into a generated manifest file, so we 
cannot generate a correct manifest file for a process running within a Docker 
container.
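
To make the constraint concrete (a hedged, standalone sketch; the class and 
file names are made up and this is not Hadoop's actual bundling code): the 
classpath jar carries its entries in the manifest's Class-Path attribute, 
which only takes literal relative paths, so an unexpanded variable like 
$HADOOP_PREFIX cannot be deferred to container launch time:

{code}
// Standalone sketch of a classpath-carrying jar; file names are invented.
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.jar.Attributes;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;

public class ClasspathJarSketch {
  public static void main(String[] args) throws IOException {
    Manifest manifest = new Manifest();
    Attributes attrs = manifest.getMainAttributes();
    attrs.put(Attributes.Name.MANIFEST_VERSION, "1.0");
    // Class-Path entries must already be resolved, literal paths; an
    // unexpanded "$HADOOP_PREFIX/..." would just be a nonexistent path here.
    attrs.put(Attributes.Name.CLASS_PATH,
        "lib/hadoop-common.jar lib/hadoop-yarn-api.jar");
    try (JarOutputStream jar =
        new JarOutputStream(new FileOutputStream("classpath.jar"), manifest)) {
      // No entries needed; the jar exists only to carry the manifest.
    }
  }
}
{code}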

> bundle container classpath in temporary jar on all platforms, not just Windows
> --
>
> Key: YARN-358
> URL: https://issues.apache.org/jira/browse/YARN-358
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>
> Currently, a Windows-specific code path bundles the classpath into a 
> temporary jar with a manifest to work around command line length limitations. 
>  This code path does not need to be Windows-specific.  We can use the same 
> approach on all platforms.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5014) Ensure non-metric values are returned as is for flow run table from the coprocessor

2016-04-29 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263687#comment-15263687
 ] 

Vrushali C commented on YARN-5014:
--

Also, even when no operation needs to be performed for non-numeric converters, 
the cells still need to be collected for processing, i.e. added to 
currentColumnCells.
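
A self-contained sketch of that idea (illustration only, with stand-in types 
rather than the real timelineservice classes): dispatch on the converter type 
and collect non-numeric cells untouched instead of casting:

{code}
// Illustration only: stand-in types, not the actual FlowScanner code.
import java.util.ArrayList;
import java.util.List;

public class ConverterDispatchSketch {

  interface ValueConverter { }                                // generic converter stand-in
  interface NumericValueConverter extends ValueConverter { }  // numeric converter stand-in

  static class Cell {
    final String qualifier;
    final byte[] value;
    Cell(String qualifier, byte[] value) {
      this.qualifier = qualifier;
      this.value = value;
    }
  }

  /**
   * Collects the cells of one column. Metric columns would get numeric
   * aggregation applied; any other column is passed through unchanged rather
   * than being cast to a numeric converter.
   */
  static List<Cell> collectColumnCells(List<Cell> incoming, ValueConverter converter) {
    List<Cell> currentColumnCells = new ArrayList<>();
    for (Cell cell : incoming) {
      if (converter instanceof NumericValueConverter) {
        // metric column: sum/min/max style processing would happen here
        currentColumnCells.add(cell);
      } else {
        // non-metric column: no cast, no aggregation; keep the cell as-is
        currentColumnCells.add(cell);
      }
    }
    return currentColumnCells;
  }
}
{code}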

> Ensure non-metric values are returned as is for flow run table from the 
> coprocessor
> ---
>
> Key: YARN-5014
> URL: https://issues.apache.org/jira/browse/YARN-5014
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>
> Presently the FlowScanner class presumes the existence of a 
> NumericValueConverter in its emitCells function. This causes an exception 
> when we try to retrieve non-numeric values from this table. 
> Exception is seen as:
> {code}
> java.lang.ClassCastException: 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.GenericConverter 
> cannot be cast to 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.NumericValueConverter
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowScanner.nextInternal(FlowScanner.java:246)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowScanner.nextRaw(FlowScanner.java:125)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowScanner.nextRaw(FlowScanner.java:119)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2117)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31443)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5014) Ensure non-metric values are returned as is for flow run table from the coprocessor

2016-04-29 Thread Vrushali C (JIRA)
Vrushali C created YARN-5014:


 Summary: Ensure non-metric values are returned as is for flow run 
table from the coprocessor
 Key: YARN-5014
 URL: https://issues.apache.org/jira/browse/YARN-5014
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Vrushali C
Assignee: Vrushali C



Presently the FlowScanner class presumes the existence of a 
NumericValueConverter in its emitCells function. This causes an exception when 
we try to retrieve non-numeric values from this table. 

Exception is seen as:
{code}
java.lang.ClassCastException: 
org.apache.hadoop.yarn.server.timelineservice.storage.common.GenericConverter 
cannot be cast to 
org.apache.hadoop.yarn.server.timelineservice.storage.common.NumericValueConverter
at 
org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowScanner.nextInternal(FlowScanner.java:246)
at 
org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowScanner.nextRaw(FlowScanner.java:125)
at 
org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowScanner.nextRaw(FlowScanner.java:119)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2117)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31443)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
{code}





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4920) ATS/NM should support a link to download/get the logs in text format

2016-04-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263620#comment-15263620
 ] 

Hadoop QA commented on YARN-4920:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 42s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 42s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common in 
trunk has 3 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: patch 
generated 5 new + 27 unchanged - 0 fixed = 32 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 28 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 5s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 57s {color} 
| {color:red} hadoop-yarn-server-applicationhistoryservice in the patch failed 
with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 36s