[jira] [Commented] (YARN-5121) fix some container-executor portability issues

2016-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400438#comment-15400438
 ] 

Hadoop QA commented on YARN-5121:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 8s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 36s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 8m 17s 
{color} | {color:green} root generated 0 new + 7 unchanged - 3 fixed = 7 total 
(was 10) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
8s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 53s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 109m 48s 
{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s 
{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 182m 52s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.yarn.server.nodemanager.TestDirectoryCollection |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12821127/YARN-5121.08.patch |
| JIRA Issue | YARN-5121 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  cc  |
| uname | Linux f6917c7b1155 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 95f2b98 |
| Default Java | 1.8.0_101 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/12573/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12573/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12573/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12573/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/12573/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 . U: . |

[jira] [Commented] (YARN-5121) fix some container-executor portability issues

2016-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400411#comment-15400411
 ] 

Hadoop QA commented on YARN-5121:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 26s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 58s 
{color} | {color:green} root generated 0 new + 7 unchanged - 3 fixed = 7 total 
(was 10) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 48s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 118m 15s 
{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 24s 
{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 183m 58s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.TestEditLog |
|   | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
|   | hadoop.yarn.server.nodemanager.TestDirectoryCollection |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12821118/YARN-5121.07.patch |
| JIRA Issue | YARN-5121 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  cc  |
| uname | Linux 95046fd3e110 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 95f2b98 |
| Default Java | 1.8.0_101 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/12571/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12571/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12571/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12571/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/12571/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 . U: . 

[jira] [Updated] (YARN-5121) fix some container-executor portability issues

2016-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-5121:
---
Attachment: YARN-5121.08.patch

-08:
* add malloc checks rather than letting realpath throw the error
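
For readers following along, a minimal sketch of the kind of check meant here
(names like get_executable_path and procfs_link are illustrative, not the actual
container-executor code): test malloc's return value directly so an allocation
failure produces a clear error instead of surfacing later as a realpath failure.

{code}
/* Hypothetical sketch, not the actual patch: check malloc directly
 * instead of letting a NULL buffer show up as a realpath error. */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

static char *get_executable_path(const char *procfs_link) {
  char *buffer = malloc(PATH_MAX);
  if (buffer == NULL) {                 /* explicit malloc check */
    fprintf(stderr, "malloc of %d bytes failed\n", PATH_MAX);
    exit(1);                            /* real code would flush logs first */
  }
  if (realpath(procfs_link, buffer) == NULL) {
    fprintf(stderr, "realpath(%s) failed\n", procfs_link);
    free(buffer);
    exit(1);
  }
  return buffer;
}
{code}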

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch, 
> YARN-5121.06.patch, YARN-5121.07.patch, YARN-5121.08.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.






[jira] [Commented] (YARN-5121) fix some container-executor portability issues

2016-07-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400343#comment-15400343
 ] 

Allen Wittenauer commented on YARN-5121:


Oh right... we can't call flush_and_close there because the function isn't 
shared with the rest of the source code.  Just shoot me now.

Anyway, here's -08.

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch, 
> YARN-5121.06.patch, YARN-5121.07.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.






[jira] [Commented] (YARN-3649) Allow configurable prefix for hbase table names (like prod, exp, test etc)

2016-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400337#comment-15400337
 ] 

Hadoop QA commented on YARN-3649:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 4m 23s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
21s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 34s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 33s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
59s {color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
21s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 50s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 43s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 46 
new + 210 unchanged - 0 fixed = 256 total (was 210) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
44s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s 
{color} | {color:red} hadoop-yarn-server-timelineservice in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 51s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 32s 
{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s 
{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 17s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12821121/YARN-3649-YARN-5355.01.patch
 |
| JIRA Issue | YARN-3649 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  

[jira] [Commented] (YARN-5121) fix some container-executor portability issues

2016-07-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400335#comment-15400335
 ] 

Allen Wittenauer commented on YARN-5121:


bq. if ret can never be null in that case (real_fname is never null?), then the 
ternary operator is redundant. If it can be null, then the new debug statement 
can cause a segfault before it prints? Nit-picking in any case.

IIRC, I think the only time that ret can ever be null is if the mallocs inside 
realpath/canonicalize_file_name fail. My hunch is yes, that ternary could 
probably go away and there should be some more safety.  I didn't spend much 
time digging into it though. There's a lot of code like that all over the place 
and it's definitely a much bigger project to fix those problems.

So, yes, there's definitely a risk that the debug will cause a segfault but the 
code isn't enabled by default and someone enabling it is likely trying to 
figure out what the heck is going on anyway haha.
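
To illustrate the hazard under discussion, a single guard keeps a debug print
safe when realpath/canonicalize_file_name returns NULL (a generic sketch, not
the patch itself):

{code}
#include <stdio.h>

/* Sketch only: passing NULL to a %s conversion is undefined behavior,
 * so substitute a visible placeholder before printing. */
static void debug_resolved(const char *fname, const char *ret) {
  fprintf(stderr, "resolved %s -> %s\n", fname,
          ret != NULL ? ret : "(null)");
}
{code}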

bq. Would you please check for NULL returns from the malloc calls in 
__get_exec_readproc and the OS X implementation of get_executable?

Sure.  Good catch.  They were missing in the original, likely because realpath 
will fail and you'll exit out there instead.  But it'd definitely be better to 
have a real check with a specific error. (That also means flushing the log. We 
really should be using atexit().  This whole thing needs a rewrite.)
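
The atexit() idea amounts to registering the flush once so that every exit()
path inherits it. A minimal sketch, assuming the log sits behind a shared
FILE* (the LOGFILE name here is a stand-in, not necessarily the real variable):

{code}
#include <stdio.h>
#include <stdlib.h>

static FILE *LOGFILE;            /* stand-in for a shared log handle */

static void flush_log_at_exit(void) {
  if (LOGFILE != NULL) {
    fflush(LOGFILE);
  }
}

int main(void) {
  LOGFILE = stderr;              /* a real program would fopen() a log file */
  /* Registered once, the handler runs on every exit() path, so
   * individual error branches no longer need to remember to flush. */
  if (atexit(flush_log_at_exit) != 0) {
    fprintf(stderr, "atexit registration failed\n");
    return 1;
  }
  exit(EXIT_SUCCESS);
}
{code}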

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch, 
> YARN-5121.06.patch, YARN-5121.07.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.






[jira] [Commented] (YARN-5229) Refactor #isApplicationEntity and #getApplicationEvent from HBaseTimelineWriterImpl

2016-07-29 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400306#comment-15400306
 ] 

Li Lu commented on YARN-5229:
-

Will commit shortly. 

> Refactor #isApplicationEntity and #getApplicationEvent from 
> HBaseTimelineWriterImpl
> ---
>
> Key: YARN-5229
> URL: https://issues.apache.org/jira/browse/YARN-5229
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928, YARN-5355
>Reporter: Joep Rottinghuis
>Assignee: Vrushali C
>Priority: Minor
>  Labels: YARN-5355
> Attachments: YARN-229-YARN-5355.01.patch, 
> YARN-5229-YARN-2928.01.patch, YARN-5229-YARN-2928.02.patch, 
> YARN-5229-YARN-2928.03.patch, YARN-5229-YARN-2928.04.patch
>
>
> As [~gtCarrera9] commented in YARN-5170:
> bq. In HBaseTimelineWriterImpl isApplicationEntity and getApplicationEvent 
> seem to be awkward. Looks more like something related to TimelineEntity or 
> ApplicationEntity
> In YARN-5170 we just made the methods private, and in this separate JIRA we 
> can refactor them to TimelineEntity or ApplicationEntity.






[jira] [Updated] (YARN-3649) Allow configurable prefix for hbase table names (like prod, exp, test etc)

2016-07-29 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-3649:
-
Attachment: YARN-3649-YARN-5355.01.patch

Rebased patch to branch YARN-5355. Also added some documentation.

> Allow configurable prefix for hbase table names (like prod, exp, test etc)
> --
>
> Key: YARN-3649
> URL: https://issues.apache.org/jira/browse/YARN-3649
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: YARN-5355
> Attachments: YARN-3649-YARN-2928.01.patch, 
> YARN-3649-YARN-5355.01.patch
>
>
> As per [~jrottinghuis]'s suggestion in YARN-3411, it would be a good idea to 
> have a configurable prefix for hbase table names.
> This way we can easily run a staging, a test, a production, or any other 
> setup in the same HBase instance without having to override every single 
> table in the config.
> One could simply override the default prefix and be off and running.
> For the prefix, potential candidates are "tst", "prod", "exp", etc. One can 
> then still override an individual table name if needed, but managing the 
> whole setup will be easier.






[jira] [Commented] (YARN-4888) Changes in RM container allocation for identifying resource-requests explicitly

2016-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400296#comment-15400296
 ] 

Hadoop QA commented on YARN-4888:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 9s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 37s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 48s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 3 
new + 577 unchanged - 2 fixed = 580 total (was 579) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 21s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 15s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 46s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.nodemanager.TestDirectoryCollection |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12821105/YARN-4888-v4.patch |
| JIRA Issue | YARN-4888 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  javadoc  
mvninstall  findbugs  checkstyle  |
| uname | Linux 4868f1dd3519 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 

[jira] [Commented] (YARN-5121) fix some container-executor portability issues

2016-07-29 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400278#comment-15400278
 ] 

Chris Nauroth commented on YARN-5121:
-

Allen, sorry, I just spotted one more thing.  Would you please check for 
{{NULL}} returns from the {{malloc}} calls in {{__get_exec_readproc}} and the 
OS X implementation of {{get_executable}}?

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch, 
> YARN-5121.06.patch, YARN-5121.07.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.






[jira] [Commented] (YARN-5121) fix some container-executor portability issues

2016-07-29 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400276#comment-15400276
 ] 

Chris Douglas commented on YARN-5121:
-

+1 from me. Thanks, Allen for the patch and ChrisN for review.

bq. I did remove some other debugging code, but that one I thought was useful 
due to aggressive use of ternary operators
I haven't looked at the context, but if {{ret}} can never be null in that case 
({{real_fname}} is never null?), then the ternary operator is redundant. If it 
can be null, then the new debug statement can cause a segfault before it 
prints? Nit-picking in any case.

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch, 
> YARN-5121.06.patch, YARN-5121.07.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.






[jira] [Commented] (YARN-5382) RM does not audit log kill request for active applications

2016-07-29 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400275#comment-15400275
 ] 

Vrushali C commented on YARN-5382:
--

Wondering what I can do to fix the checkstyle warning for 
{code}
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java:91:
  static String createSuccessLog(String user, String operation, String 
target,:17: More than 7 parameters (found 9).
{code}

There were 8 parameters earlier; this patch adds one more (ip).
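
One conventional answer to that checkstyle rule is to fold the related
arguments into a single parameter object. Sketched below in C to match the
rest of this digest's code discussion, with hypothetical names; in the actual
Java file a small value class or builder would play the same role:

{code}
/* Hypothetical sketch of the parameter-object pattern: one struct
 * replaces the nine positional arguments the checker objects to. */
struct audit_args {
  const char *user;
  const char *operation;
  const char *target;
  const char *ip;          /* the argument this patch adds */
  /* ...remaining fields elided... */
};

static void create_success_log(const struct audit_args *args) {
  /* format the audit line from args->user, args->operation, ... */
  (void) args;
}
{code}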


> RM does not audit log kill request for active applications
> --
>
> Key: YARN-5382
> URL: https://issues.apache.org/jira/browse/YARN-5382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Vrushali C
> Attachments: YARN-5382-branch-2.7.01.patch, 
> YARN-5382-branch-2.7.02.patch, YARN-5382-branch-2.7.03.patch, 
> YARN-5382-branch-2.7.04.patch, YARN-5382-branch-2.7.05.patch, 
> YARN-5382.06.patch, YARN-5382.07.patch, YARN-5382.08.patch
>
>
> ClientRMService will audit a kill request but only if it either fails to 
> issue the kill or if the kill is sent to an already finished application.  It 
> does not create a log entry when the application is active, which is arguably 
> the most important case to audit.






[jira] [Updated] (YARN-5121) fix some container-executor portability issues

2016-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-5121:
---
Attachment: YARN-5121.07.patch

-07:
* address the feedback
* fix two more spurious gcc warnings

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch, 
> YARN-5121.06.patch, YARN-5121.07.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.






[jira] [Commented] (YARN-5382) RM does not audit log kill request for active applications

2016-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400262#comment-15400262
 ] 

Hadoop QA commented on YARN-5382:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 44s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 19s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 214 unchanged - 1 fixed = 215 total (was 215) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 18s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 36m 35s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 24s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12821106/YARN-5382.08.patch |
| JIRA Issue | YARN-5382 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux abab02a9aa80 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 95f2b98 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12569/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/12569/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12569/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12569/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org 

[jira] [Comment Edited] (YARN-5221) Expose UpdateResourceRequest API to allow AM to request for change in container properties

2016-07-29 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400253#comment-15400253
 ] 

Subru Krishnan edited comment on YARN-5221 at 7/29/16 11:50 PM:


It's a rather substantial patch :). I'll take a look at it and get back by 
Monday.

From an initial pass, I feel it will help if we can split the patch:
  1. API changes to unify Container change requests.
  2. Backend changes to effect the unified API.
  3. NMStateStore and associated NMContainerStatus changes for RM failover.

Thoughts [~asuresh]?


was (Author: subru):
It's a rather substantial patch :). I'll take a look at it and get back by 
Monday.

> Expose UpdateResourceRequest API to allow AM to request for change in 
> container properties
> --
>
> Key: YARN-5221
> URL: https://issues.apache.org/jira/browse/YARN-5221
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5221.001.patch, YARN-5221.002.patch, 
> YARN-5221.003.patch, YARN-5221.004.patch, YARN-5221.005.patch, 
> YARN-5221.006.patch, YARN-5221.007.patch, YARN-5221.008.patch
>
>
> YARN-1197 introduced APIs to allow an AM to request for Increase and Decrease 
> of Container Resources after initial allocation.
> YARN-5085 proposes to allow an AM to request for a change of Container 
> ExecutionType.
> This JIRA proposes to unify both of the above into an Update Container API.






[jira] [Commented] (YARN-5221) Expose UpdateResourceRequest API to allow AM to request for change in container properties

2016-07-29 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400253#comment-15400253
 ] 

Subru Krishnan commented on YARN-5221:
--

It's a rather substantial patch :). I'll take a look at it and get back by 
Monday.

> Expose UpdateResourceRequest API to allow AM to request for change in 
> container properties
> --
>
> Key: YARN-5221
> URL: https://issues.apache.org/jira/browse/YARN-5221
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5221.001.patch, YARN-5221.002.patch, 
> YARN-5221.003.patch, YARN-5221.004.patch, YARN-5221.005.patch, 
> YARN-5221.006.patch, YARN-5221.007.patch, YARN-5221.008.patch
>
>
> YARN-1197 introduced APIs to allow an AM to request for Increase and Decrease 
> of Container Resources after initial allocation.
> YARN-5085 proposes to allow an AM to request for a change of Container 
> ExecutionType.
> This JIRA proposes to unify both of the above into an Update Container API.






[jira] [Commented] (YARN-5121) fix some container-executor portability issues

2016-07-29 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400252#comment-15400252
 ] 

Chris Nauroth commented on YARN-5121:
-

Thanks for the detailed explanation.  It's all clear to me now.  I expect this 
will be ready to commit after your next revision to fix the few remaining 
nitpicks.  That next revision can fix the one remaining compiler warning too.

[~chris.douglas], let us know if you have any more feedback.  If not, then I 
would likely +1 and commit soon.

bq. (This whole conversation is rather timely, given that Roger Faulkner just 
passed away recently.)

I did not know the name before, but I just read an "In Memoriam" article.  
Thank you, Roger.

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch, YARN-5121.06.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.






[jira] [Commented] (YARN-5229) Refactor #isApplicationEntity and #getApplicationEvent from HBaseTimelineWriterImpl

2016-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400250#comment-15400250
 ] 

Hadoop QA commented on YARN-5229:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 9m 1s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
17s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 23s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
38s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 44s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 11s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12821110/YARN-229-YARN-5355.01.patch
 |
| JIRA Issue | YARN-5229 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5d97fb0b2cc4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / d0a62d8 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12570/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 U: hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12570/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Refactor #isApplicationEntity and 

[jira] [Commented] (YARN-5121) fix some container-executor portability issues

2016-07-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400237#comment-15400237
 ] 

Allen Wittenauer commented on YARN-5121:


bq.  I just want to double-check with you that the fchmodat.h and fdopendir.h 
implementations are not BSD-licensed, and that's why they're not listed in 
LICENSE.txt and instead have an Apache license header. Is that correct?

I wrote them based upon the other functions' implementations so I put an ASF 
license on them.  
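
For context, those headers provide replacements for POSIX calls that are
missing on some platforms. The sketch below shows the general shape of such a
shim for fdopendir(3); it is illustrative only (emulating via the working
directory is not thread-safe), and the real implementations may differ:

{code}
#include <dirent.h>
#include <fcntl.h>
#include <unistd.h>

/* Illustrative shim for platforms without fdopendir(3): enter the
 * directory behind fd, open ".", then restore the old directory.
 * Unlike the real call, this does not take ownership of fd. */
static DIR *compat_fdopendir(int fd) {
  DIR *dir = NULL;
  int saved = open(".", O_RDONLY);   /* remember where we were */
  if (saved < 0) {
    return NULL;
  }
  if (fchdir(fd) == 0) {
    dir = opendir(".");
  }
  if (fchdir(saved) != 0 && dir != NULL) {
    closedir(dir);                   /* could not restore cwd; fail loudly */
    dir = NULL;
  }
  close(saved);
  return dir;
}
{code}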

bq. Chris D mentioned previously that this might have been a leftover from 
debugging. Did you intend to keep it?

I did remove some other debugging code, but that one I thought was useful due 
to aggressive use of ternary operators.  (UGH!  Yes, the code is compact, but 
nearly unreadable when nested!  I probably should have rewritten them too 
but...)

bq. Please check the indentation on the return statement.

Argh. Yes, I'll fix.  The formatting of the C code is pretty awful and 
definitely caused me issues. haha.

bq. Is "/proc/self/path/a.out" correct? ...  Is that a.out like the default gcc 
binary output path? 

a.out was the file name before gcc existed... ;)  Anyway, here's some backing 
evidence:

http://docs.oracle.com/cd/E23824_01/html/821-1473/proc-4.html#

Solaris's /proc works a bit differently from what one might be used to under 
Linux.  The keys here are /proc/pid/object and /proc/pid/path.  object gives 
you access to any mapped or page data that came from file system objects. path 
contains symbolic links to all open files and the source files of the content 
of the object dir.  object/a.out is the executable. Therefore path/a.out is a 
symbolic link to the executable itself.  With that context, it probably makes a 
lot more sense, since there are *two* ways to get to the data, depending upon 
your needs.

Anyway, from my home machine (skipping object, since the file names are present 
in path):

{code}
sunos/i386 ryoko$ pwd
/proc/6956/path

sunos/i386 ryoko$ ls -l
total 0
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 0 -> /dev/pts/3
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 1 -> /dev/pts/3
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 2 -> /dev/pts/3
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 255 -> /dev/pts/3
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 3 -> 
/var/run/ldap_cache_door
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 a.out -> /usr/bin/bash
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 cwd -> /home/allenw
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 root -> /
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 zfs.122.65594.11300 -> 
/usr/lib/mps/libnspr4.so
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 zfs.122.65594.18905 -> 
/lib/libmd.so.1
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 zfs.122.65594.18918 -> 
/lib/libgen.so.1
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 zfs.122.65594.18920 -> 
/lib/libsocket.so.1
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 zfs.122.65594.18942 -> 
/lib/libresolv.so.2
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 zfs.122.65594.18950 -> 
/lib/nss_files.so.1
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 zfs.122.65594.18956 -> 
/lib/libmp.so.2
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 zfs.122.65594.18957 -> 
/lib/nss_dns.so.1
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 zfs.122.65594.18961 -> 
/lib/libpthread.so.1
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 zfs.122.65594.18964 -> 
/lib/libnsl.so.1
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 zfs.122.65594.6054 -> 
/lib/ld.so.1
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 zfs.122.65594.6085 -> 
/lib/libcurses.so.1
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 zfs.122.65594.6088 -> 
/lib/libdl.so.1
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 zfs.122.65594.9190 -> 
/usr/lib/libc/libc_hwcap1.so.1
lrwxrwxrwx   1 allenw   users  0 Jun 21 08:05 zfs.122.65594.9233 -> 
/usr/lib/libldap.so.5
{code}

You'll note that a.out points to the bash executable.
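
Put to use, resolving the running binary on Solaris reduces to reading that
symlink; a standalone sketch (not the actual get_executable code):

{code}
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
  char buf[PATH_MAX];
  /* readlink(2) does not NUL-terminate, so reserve a byte for '\0'. */
  ssize_t len = readlink("/proc/self/path/a.out", buf, sizeof(buf) - 1);
  if (len < 0) {
    perror("readlink");
    return 1;
  }
  buf[len] = '\0';
  printf("executable: %s\n", buf);
  return 0;
}
{code}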

(This whole conversation is rather timely, given that Roger Faulkner just 
passed away recently.)

I'll do a quick update and post a new patch.  Thanks for the review!

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch, YARN-5121.06.patch
>
>
> container-executor has some issues that are 

[jira] [Commented] (YARN-5221) Expose UpdateResourceRequest API to allow AM to request for change in container properties

2016-07-29 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400228#comment-15400228
 ] 

Arun Suresh commented on YARN-5221:
---

ping.. [~subru], [~kasha], [~leftnoteasy]...


> Expose UpdateResourceRequest API to allow AM to request for change in 
> container properties
> --
>
> Key: YARN-5221
> URL: https://issues.apache.org/jira/browse/YARN-5221
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5221.001.patch, YARN-5221.002.patch, 
> YARN-5221.003.patch, YARN-5221.004.patch, YARN-5221.005.patch, 
> YARN-5221.006.patch, YARN-5221.007.patch, YARN-5221.008.patch
>
>
> YARN-1197 introduced APIs to allow an AM to request for Increase and Decrease 
> of Container Resources after initial allocation.
> YARN-5085 proposes to allow an AM to request for a change of Container 
> ExecutionType.
> This JIRA proposes to unify both of the above into an Update Container API.






[jira] [Updated] (YARN-5452) [YARN-3368] Support scheduler activities in new YARN UI

2016-07-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5452:
-
Attachment: YARN-5452.1.patch

Worked with [~ChenGe], attached ver.1 patch for review.

> [YARN-3368] Support scheduler activities in new YARN UI
> ---
>
> Key: YARN-5452
> URL: https://issues.apache.org/jira/browse/YARN-5452
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
> Attachments: YARN-5452.1.patch
>
>
> YARN-4091 added scheduler activities REST API, we can support this in the new 
> YARN UI as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5452) [YARN-3368] Support scheduler activities in new YARN UI

2016-07-29 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-5452:


 Summary: [YARN-3368] Support scheduler activities in new YARN UI
 Key: YARN-5452
 URL: https://issues.apache.org/jira/browse/YARN-5452
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wangda Tan


YARN-4091 added scheduler activities REST API, we can support this in the new 
YARN UI as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5229) Refactor #isApplicationEntity and #getApplicationEvent from HBaseTimelineWriterImpl

2016-07-29 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400194#comment-15400194
 ] 

Li Lu commented on YARN-5229:
-

+1 pending Jenkins. 

> Refactor #isApplicationEntity and #getApplicationEvent from 
> HBaseTimelineWriterImpl
> ---
>
> Key: YARN-5229
> URL: https://issues.apache.org/jira/browse/YARN-5229
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928, YARN-5355
>Reporter: Joep Rottinghuis
>Assignee: Vrushali C
>Priority: Minor
>  Labels: YARN-5355
> Attachments: YARN-229-YARN-5355.01.patch, 
> YARN-5229-YARN-2928.01.patch, YARN-5229-YARN-2928.02.patch, 
> YARN-5229-YARN-2928.03.patch, YARN-5229-YARN-2928.04.patch
>
>
> As [~gtCarrera9] commented in YARN-5170:
> bq. In HBaseTimelineWriterImpl isApplicationEntity and getApplicationEvent 
> seem to be awkward. Looks more like something related to TimelineEntity or 
> ApplicationEntity
> In YARN-5170 we just made the methods private, and in this separate JIRA we 
> can refactor these methods to TimelineEntity or ApplicationEntity.
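
As a rough sketch of the refactor being proposed (the placement and names 
below are illustrative assumptions, not the actual patch):

{code}
// Sketch: one possible home for the check, e.g. a static helper on
// ApplicationEntity in the timelineservice records.
public static boolean isApplicationEntity(TimelineEntity entity) {
  return TimelineEntityType.YARN_APPLICATION.name().equals(entity.getType());
}
{code}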



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5229) Refactor #isApplicationEntity and #getApplicationEvent from HBaseTimelineWriterImpl

2016-07-29 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5229:
-
Affects Version/s: YARN-5355

> Refactor #isApplicationEntity and #getApplicationEvent from 
> HBaseTimelineWriterImpl
> ---
>
> Key: YARN-5229
> URL: https://issues.apache.org/jira/browse/YARN-5229
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928, YARN-5355
>Reporter: Joep Rottinghuis
>Assignee: Vrushali C
>Priority: Minor
>  Labels: YARN-5355
> Attachments: YARN-229-YARN-5355.01.patch, 
> YARN-5229-YARN-2928.01.patch, YARN-5229-YARN-2928.02.patch, 
> YARN-5229-YARN-2928.03.patch, YARN-5229-YARN-2928.04.patch
>
>
> As [~gtCarrera9] commented in YARN-5170:
> bq. In HBaseTimelineWriterImpl isApplicationEntity and getApplicationEvent 
> seem to be awkward. Looks more like something related to TimelineEntity or 
> ApplicationEntity
> In YARN-5170 we just made the methods private, and in this separate JIRA we 
> can refactor these methods to TimelineEntity or ApplicationEntity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5229) Refactor #isApplicationEntity and #getApplicationEvent from HBaseTimelineWriterImpl

2016-07-29 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5229:
-
Attachment: YARN-229-YARN-5355.01.patch

Uploading rebased patch against branch YARN-5355 

> Refactor #isApplicationEntity and #getApplicationEvent from 
> HBaseTimelineWriterImpl
> ---
>
> Key: YARN-5229
> URL: https://issues.apache.org/jira/browse/YARN-5229
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Vrushali C
>Priority: Minor
>  Labels: YARN-5355
> Attachments: YARN-229-YARN-5355.01.patch, 
> YARN-5229-YARN-2928.01.patch, YARN-5229-YARN-2928.02.patch, 
> YARN-5229-YARN-2928.03.patch, YARN-5229-YARN-2928.04.patch
>
>
> As [~gtCarrera9] commented in YARN-5170:
> bq. In HBaseTimelineWriterImpl isApplicationEntity and getApplicationEvent 
> seem to be awkward. Looks more like something related to TimelineEntity or 
> ApplicationEntity
> In YARN-5170 we just made the methods private, and in this separate JIRA we 
> can refactor these methods to TimelineEntity or ApplicationEntity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5121) fix some container-executor portability issues

2016-07-29 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400171#comment-15400171
 ] 

Chris Nauroth commented on YARN-5121:
-

[~aw], thank you for this patch.  I have confirmed a successful full build and 
run of test-container-executor on OS X and Linux.

Just a few questions:

bq. For 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/compat/{fstatat|openat|unlinkat}.h:

I just want to double-check with you that the fchmodat.h and fdopendir.h 
implementations are not BSD-licensed, and that's why they're not listed in 
LICENSE.txt and instead have an Apache license header.  Is that correct?

{code}
  fprintf(stderr,"ret = %s\n", ret);
{code}

Chris D mentioned previously that this might have been a leftover from 
debugging.  Did you intend to keep it, or should we drop it?

{code}
char* get_executable() {
 return __get_exec_readproc("/proc/self/path/a.out");
}
{code}

Please check the indentation on the return statement.

Is "/proc/self/path/a.out" correct?  The /proc/self part makes sense to me, but 
the rest of it surprised me.  Is that a.out like the default gcc binary output 
path?  I have nearly zero experience with Solaris, so I trust your knowledge 
here.  :-)


> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch, YARN-5121.06.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5382) RM does not audit log kill request for active applications

2016-07-29 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5382:
-
Attachment: YARN-5382.08.patch

Uploading v8 that addresses previous checkstyle warnings

> RM does not audit log kill request for active applications
> --
>
> Key: YARN-5382
> URL: https://issues.apache.org/jira/browse/YARN-5382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Vrushali C
> Attachments: YARN-5382-branch-2.7.01.patch, 
> YARN-5382-branch-2.7.02.patch, YARN-5382-branch-2.7.03.patch, 
> YARN-5382-branch-2.7.04.patch, YARN-5382-branch-2.7.05.patch, 
> YARN-5382.06.patch, YARN-5382.07.patch, YARN-5382.08.patch
>
>
> ClientRMService will audit a kill request but only if it either fails to 
> issue the kill or if the kill is sent to an already finished application.  It 
> does not create a log entry when the application is active which is arguably 
> the most important case to audit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4888) Changes in RM container allocation for identifying resource-requests explicitly

2016-07-29 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-4888:
-
Attachment: YARN-4888-v4.patch

Thanks [~asuresh] for the review; good catch on the allocationRequestId 
comparator. PFA the (v4) patch that addresses your comments.

> Changes in RM container allocation for identifying resource-requests 
> explicitly
> ---
>
> Key: YARN-4888
> URL: https://issues.apache.org/jira/browse/YARN-4888
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-4888-WIP.patch, YARN-4888-v0.patch, 
> YARN-4888-v2.patch, YARN-4888-v3.patch, YARN-4888-v4.patch, 
> YARN-4888.001.patch
>
>
> YARN-4879 puts forward the notion of identifying allocate requests 
> explicitly. This JIRA is to track the changes in RM app scheduling data 
> structures to accomplish it. Please refer to the design doc in the parent 
> JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5113) Refactoring and other clean-up for distributed scheduling

2016-07-29 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1545#comment-1545
 ] 

Arun Suresh commented on YARN-5113:
---

Thanks [~kkaranasos]. A couple of nits:

# Since we are refactoring, let's keep the configuration names consistent: 
*..distributed-scheduling.enable* instead of *..enabled* (I see both kinds in 
YarnConfiguration, but more 'enable' than 'enabled'; see the sketch after this 
list)
# Also, currently, most of the distributed scheduling config field checks are 
ignored in {{TestYarnConfiguration}}; let's not ignore them. This means we must 
also include the default values in *yarn-default.xml*
# Can we add comments for all the Request and Response classes?
# Minor nit: in DistributedSchedulingAMProtocol, let's declare the allocate 
method before the finish method.
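
A minimal sketch of the naming pattern from item 1 (the constant names below 
are hypothetical, for illustration only):

{code}
// Hypothetical constants illustrating the '*.enable' naming convention; the
// default value would also be mirrored in yarn-default.xml.
public static final String DIST_SCHEDULING_ENABLE =
    YarnConfiguration.YARN_PREFIX + "distributed-scheduling.enable";
public static final boolean DEFAULT_DIST_SCHEDULING_ENABLE = false;
{code}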

> Refactoring and other clean-up for distributed scheduling
> -
>
> Key: YARN-5113
> URL: https://issues.apache.org/jira/browse/YARN-5113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5113.001.patch, YARN-5113.002.patch, 
> YARN-5113.003.patch, YARN-5113.004.patch, YARN-5113.005.patch, 
> YARN-5113.006.patch, YARN-5113.007.patch, YARN-5113.008.patch
>
>
> This JIRA focuses on the refactoring of classes related to Distributed 
> Scheduling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5451) Container localizers that hang are not cleaned up

2016-07-29 Thread Jason Lowe (JIRA)
Jason Lowe created YARN-5451:


 Summary: Container localizers that hang are not cleaned up
 Key: YARN-5451
 URL: https://issues.apache.org/jira/browse/YARN-5451
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.6.0
Reporter: Jason Lowe


I ran across an old, rogue process on one of our nodes.  It apparently was a 
container localizer that somehow entered an infinite loop during startup.  The 
NM never cleaned up this broken localizer, so it happily ran forever.  The NM 
needs to do a better job of tracking localizers, including killing them if they 
appear to be hung/broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5404) Add the ability to split reverse zone subnets

2016-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399978#comment-15399978
 ] 

Hadoop QA commented on YARN-5404:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
44s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
36s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 2s 
{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 10s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12821074/YARN-5404-YARN-4757.003.patch
 |
| JIRA Issue | YARN-5404 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b662f25e2259 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-4757 / 552b7cc |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12567/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12567/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Add the ability to split reverse zone subnets
> -
>
> Key: YARN-5404
> URL: https://issues.apache.org/jira/browse/YARN-5404
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5404-YARN-4757.001.patch, 
> YARN-5404-YARN-4757.001.patch, YARN-5404-YARN-4757.001.patch, 
> YARN-5404-YARN-4757.002.patch, YARN-5404-YARN-4757.003.patch, 
> YARN-5404.001.patch
>
>
> In some environments, the entire container subnet may not be used exclusively 
> by containers (i.e. the YARN nodemanager host IPs may also be part of the 
> larger subnet). 
> As a result, the reverse lookup zones created by the YARN Registry DNS server 
> may not match those created on the forwarders.
> For example:
> Network: 172.27.0.0
> Subnet: 255.255.248.0
> Hosts:
> 0.27.172.in-addr.arpa
> 1.27.172.in-addr.arpa
> 2.27.172.in-addr.arpa
> 3.27.172.in-addr.arpa
> Containers
> 4.27.172.in-addr.arpa
> 5.27.172.in-addr.arpa
> 6.27.172.in-addr.arpa
> 7.27.172.in-addr.arpa
> YARN Registry DNS only allows for creating (as the total IP count is greater 
> than 256):
> 27.172.in-addr.arpa
> Provide configuration to further subdivide the subnets.

[jira] [Commented] (YARN-4280) CapacityScheduler reservations may not prevent indefinite postponement on a busy cluster

2016-07-29 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399958#comment-15399958
 ] 

Jason Lowe commented on YARN-4280:
--

+1 for the latest patch.  I'll commit this sometime next week unless there are 
further comments that need to be addressed.

> CapacityScheduler reservations may not prevent indefinite postponement on a 
> busy cluster
> 
>
> Key: YARN-4280
> URL: https://issues.apache.org/jira/browse/YARN-4280
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.6.1, 2.8.0, 2.7.1
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-4280-branch-2.009.patch, 
> YARN-4280-branch-2.8.001.patch, YARN-4280-branch-2.8.002.patch, 
> YARN-4280-branch-2.8.003.patch, YARN-4280.001.patch, YARN-4280.002.patch, 
> YARN-4280.003.patch, YARN-4280.004.patch, YARN-4280.005.patch, 
> YARN-4280.006.patch, YARN-4280.007.patch, YARN-4280.008.patch, 
> YARN-4280.008_.patch, YARN-4280.009.patch, YARN-4280.010.patch, 
> YARN-4280.011.patch, YARN-4280.012.patch, YARN-4280.013.patch, 
> YARN-4280.014.patch
>
>
> Consider the following scenario:
> There are 2 queues, A (25% of the total capacity) and B (75%); both can run 
> at total cluster capacity. There are 2 applications: appX, which runs on 
> Queue A, always asking for 1 GB containers (non-AM), and appY, which runs on 
> Queue B, asking for 2 GB containers.
> The user limit is high enough for the applications to reach 100% of the 
> cluster resource. 
> appX is running at total cluster capacity, full with 1 GB containers, 
> releasing only one container at a time. appY comes in with a request for a 
> 2 GB container, but only 1 GB is free. Ideally, since appY is in the 
> underserved queue, it has higher priority and should reserve for its 2 GB 
> request. Since this request puts alloc+reserve above the total capacity of 
> the cluster, the reservation is not made. appX comes in with a 1 GB request 
> and, since 1 GB is still available, the request is allocated. 
> This can continue indefinitely, causing priority inversion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5404) Add the ability to split reverse zone subnets

2016-07-29 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-5404:
--
Attachment: YARN-5404-YARN-4757.003.patch

> Add the ability to split reverse zone subnets
> -
>
> Key: YARN-5404
> URL: https://issues.apache.org/jira/browse/YARN-5404
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5404-YARN-4757.001.patch, 
> YARN-5404-YARN-4757.001.patch, YARN-5404-YARN-4757.001.patch, 
> YARN-5404-YARN-4757.002.patch, YARN-5404-YARN-4757.003.patch, 
> YARN-5404.001.patch
>
>
> In some environments, the entire container subnet may not be used exclusively 
> by containers (i.e. the YARN nodemanager host IPs may also be part of the 
> larger subnet). 
> As a result, the reverse lookup zones created by the YARN Registry DNS server 
> may not match those created on the forwarders.
> For example:
> Network: 172.27.0.0
> Subnet: 255.255.248.0
> Hosts:
> 0.27.172.in-addr.arpa
> 1.27.172.in-addr.arpa
> 2.27.172.in-addr.arpa
> 3.27.172.in-addr.arpa
> Containers
> 4.27.172.in-addr.arpa
> 5.27.172.in-addr.arpa
> 6.27.172.in-addr.arpa
> 7.27.172.in-addr.arpa
> YARN Registry DNS only allows for creating (as the total IP count is greater 
> than 256):
> 27.172.in-addr.arpa
> Provide configuration to further subdivide the subnets.
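
To make the zone arithmetic concrete, here is a small sketch (a hypothetical 
helper, not the patch's API) that enumerates the /24 reverse zones for the 
example network above:

{code}
import java.util.ArrayList;
import java.util.List;

public class ReverseZones {
  // A /21 such as 172.27.0.0/255.255.248.0 spans 2^(24-21) = 8 distinct /24
  // blocks, hence 8 reverse zones. firstO3 is the third octet of the network.
  static List<String> reverseZones(int o1, int o2, int firstO3, int prefixLen) {
    List<String> zones = new ArrayList<>();
    for (int i = 0; i < (1 << (24 - prefixLen)); i++) {
      zones.add((firstO3 + i) + "." + o2 + "." + o1 + ".in-addr.arpa");
    }
    return zones;
  }

  public static void main(String[] args) {
    // Prints 0.27.172.in-addr.arpa through 7.27.172.in-addr.arpa
    reverseZones(172, 27, 0, 21).forEach(System.out::println);
  }
}
{code}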



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5404) Add the ability to split reverse zone subnets

2016-07-29 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399914#comment-15399914
 ] 

Shane Kumpf commented on YARN-5404:
---

Thanks for the review, [~vvasudev]. I will upload a patch with these fixes.

Regarding #3, I will clean that up; however, the ANDing is still necessary to 
get the positive value for that octet.

I also added some additional tests leveraging 0 and 1 for range and index.
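
For readers wondering why the ANDing is needed: Java bytes are signed, so any 
octet above 127 goes negative unless it is masked back to an int. A minimal 
sketch:

{code}
public class ByteMask {
  public static void main(String[] args) {
    // Java bytes are signed: (byte) 0xF8 is -8, not 248. Masking with 0xFF
    // widens to int and recovers the unsigned octet value used in zone names.
    byte octet = (byte) 0xF8;           // e.g. one octet of 255.255.248.0
    System.out.println(octet);          // -8 (sign-extended)
    System.out.println(octet & 0xFF);   // 248
  }
}
{code}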

> Add the ability to split reverse zone subnets
> -
>
> Key: YARN-5404
> URL: https://issues.apache.org/jira/browse/YARN-5404
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5404-YARN-4757.001.patch, 
> YARN-5404-YARN-4757.001.patch, YARN-5404-YARN-4757.001.patch, 
> YARN-5404-YARN-4757.002.patch, YARN-5404.001.patch
>
>
> In some environments, the entire container subnet may not be used exclusively 
> by containers (i.e. the YARN nodemanager host IPs may also be part of the 
> larger subnet). 
> As a result, the reverse lookup zones created by the YARN Registry DNS server 
> may not match those created on the forwarders.
> For example:
> Network: 172.27.0.0
> Subnet: 255.255.248.0
> Hosts:
> 0.27.172.in-addr.arpa
> 1.27.172.in-addr.arpa
> 2.27.172.in-addr.arpa
> 3.27.172.in-addr.arpa
> Containers
> 4.27.172.in-addr.arpa
> 5.27.172.in-addr.arpa
> 6.27.172.in-addr.arpa
> 7.27.172.in-addr.arpa
> YARN Registry DNS only allows for creating (as the total IP count is greater 
> than 256):
> 27.172.in-addr.arpa
> Provide configuration to further subdivide the subnets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5450) Enhance logging for Cluster.java around InetSocketAddress

2016-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399899#comment-15399899
 ] 

Hadoop QA commented on YARN-5450:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 1s 
{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 14s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12821059/YARN-5450.01.patch |
| JIRA Issue | YARN-5450 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 195bcd1e05cd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 95f2b98 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12566/testReport/ |
| modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
U: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12566/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Enhance logging for Cluster.java around InetSocketAddress
> -
>
> Key: YARN-5450
> URL: https://issues.apache.org/jira/browse/YARN-5450
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: sarun singla
>Assignee: Vrushali C
>Priority: Minor
>  Labels: YARN
>
> We need to add more logging to the Cluster.java class around the 
> "initialize(InetSocketAddress jobTrackAddr, Configuration conf)" method, for 
> example to log the source of each property.

[jira] [Updated] (YARN-5450) Enhance logging for Cluster.java around InetSocketAddress

2016-07-29 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5450:
-
Attachment: YARN-5450.01.patch

> Enhance logging for Cluster.java around InetSocketAddress
> -
>
> Key: YARN-5450
> URL: https://issues.apache.org/jira/browse/YARN-5450
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: sarun singla
>Assignee: Vrushali C
>Priority: Minor
>  Labels: YARN
> Attachments: YARN-5450.01.patch
>
>
> We need to add more logging to the Cluster.java class around the 
> "initialize(InetSocketAddress jobTrackAddr, Configuration conf)" method, for 
> example to log the source of each property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5450) Enhance logging for Cluster.java around InetSocketAddress

2016-07-29 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5450:
-
Attachment: (was: YARN-5450.01.patch)

> Enhance logging for Cluster.java around InetSocketAddress
> -
>
> Key: YARN-5450
> URL: https://issues.apache.org/jira/browse/YARN-5450
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: sarun singla
>Assignee: Vrushali C
>Priority: Minor
>  Labels: YARN
> Attachments: YARN-5450.01.patch
>
>
> We need to add more logging to the Cluster.java class around the 
> "initialize(InetSocketAddress jobTrackAddr, Configuration conf)" method, for 
> example to log the source of each property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5450) Enhance logging for Cluster.java around InetSocketAddress

2016-07-29 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5450:
-
Attachment: YARN-5450.01.patch

Trying to upload the patch again; for some reason I keep getting an error. 

> Enhance logging for Cluster.java around InetSocketAddress
> -
>
> Key: YARN-5450
> URL: https://issues.apache.org/jira/browse/YARN-5450
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: sarun singla
>Assignee: Vrushali C
>Priority: Minor
>  Labels: YARN
> Attachments: YARN-5450.01.patch
>
>
> We need to add more logging to the Cluster.java class around the 
> "initialize(InetSocketAddress jobTrackAddr, Configuration conf)" method, for 
> example to log the source of each property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5450) Enhance logging for Cluster.java around InetSocketAddress

2016-07-29 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5450:
-
Attachment: (was: YARN-5450.01.patch)

> Enhance logging for Cluster.java around InetSocketAddress
> -
>
> Key: YARN-5450
> URL: https://issues.apache.org/jira/browse/YARN-5450
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: sarun singla
>Assignee: Vrushali C
>Priority: Minor
>  Labels: YARN
>
> We need to add more logging to the Cluster.java class around the 
> "initialize(InetSocketAddress jobTrackAddr, Configuration conf)" method, for 
> example to log the source of each property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5450) Enhance logging for Cluster.java around InetSocketAddress

2016-07-29 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5450:
-
Attachment: YARN-5450.01.patch

> Enhance logging for Cluster.java around InetSocketAddress
> -
>
> Key: YARN-5450
> URL: https://issues.apache.org/jira/browse/YARN-5450
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: sarun singla
>Assignee: Vrushali C
>Priority: Minor
>  Labels: YARN
> Attachments: YARN-5450.01.patch
>
>
> We need to add more logging to the Cluster.java class around the 
> "initialize(InetSocketAddress jobTrackAddr, Configuration conf)" method, for 
> example to log the source of each property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5450) Enhance logging for Cluster.java around InetSocketAddress

2016-07-29 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5450:
-
Attachment: (was: YARN-5450.01.patch)

> Enhance logging for Cluster.java around InetSocketAddress
> -
>
> Key: YARN-5450
> URL: https://issues.apache.org/jira/browse/YARN-5450
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: sarun singla
>Assignee: Vrushali C
>Priority: Minor
>  Labels: YARN
> Attachments: YARN-5450.01.patch
>
>
> We need to add more logging to the Cluster.java class around the 
> "initialize(InetSocketAddress jobTrackAddr, Configuration conf)" method, for 
> example to log the source of each property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5450) Enhance logging for Cluster.java around InetSocketAddress

2016-07-29 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5450:
-
Attachment: YARN-5450.01.patch

> Enhance logging for Cluster.java around InetSocketAddress
> -
>
> Key: YARN-5450
> URL: https://issues.apache.org/jira/browse/YARN-5450
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: sarun singla
>Assignee: Vrushali C
>Priority: Minor
>  Labels: YARN
> Attachments: YARN-5450.01.patch
>
>
> We need to add more logging to the Cluster.java class around the 
> "initialize(InetSocketAddress jobTrackAddr, Configuration conf)" method, for 
> example to log the source of each property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5450) Enhance logging for Cluster.java around InetSocketAddress

2016-07-29 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5450:
-
Attachment: (was: YARN-5450.01.patch)

> Enhance logging for Cluster.java around InetSocketAddress
> -
>
> Key: YARN-5450
> URL: https://issues.apache.org/jira/browse/YARN-5450
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: sarun singla
>Assignee: Vrushali C
>Priority: Minor
>  Labels: YARN
>
> We need to add more logging to the Cluster.java class around the 
> "initialize(InetSocketAddress jobTrackAddr, Configuration conf)" method, for 
> example to log the source of each property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5450) Enhance logging for Cluster.java around InetSocketAddress

2016-07-29 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5450:
-
Attachment: YARN-5450.01.patch

Hi [~sarun]
Is this patch along the lines of what you had in mind? Would appreciate more 
input to enhance further.

thanks
Vrushali


> Enhance logging for Cluster.java around InetSocketAddress
> -
>
> Key: YARN-5450
> URL: https://issues.apache.org/jira/browse/YARN-5450
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: sarun singla
>Assignee: Vrushali C
>Priority: Minor
>  Labels: YARN
> Attachments: YARN-5450.01.patch
>
>
> We need to add more logging to the Cluster.java class around the 
> "initialize(InetSocketAddress jobTrackAddr, Configuration conf)" method, for 
> example to log the source of each property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5450) Enhance logging for Cluster.java around InetSocketAddress

2016-07-29 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C reassigned YARN-5450:


Assignee: Vrushali C

> Enhance logging for Cluster.java around InetSocketAddress
> -
>
> Key: YARN-5450
> URL: https://issues.apache.org/jira/browse/YARN-5450
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: sarun singla
>Assignee: Vrushali C
>Priority: Minor
>  Labels: YARN
>
> We need to add more logging to the Cluster.java class around the 
> "initialize(InetSocketAddress jobTrackAddr, Configuration conf)" method, for 
> example to log the source of each property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3662) Federation Membership State Store internal APIs

2016-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399834#comment-15399834
 ] 

Hadoop QA commented on YARN-3662:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 8m 22s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
51s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 15s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
40s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
35s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 13s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 35s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 3 
new + 26 unchanged - 2 fixed = 29 total (was 28) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 14s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 5s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12821044/YARN-3662-YARN-2915-v7.patch
 |
| JIRA Issue | YARN-3662 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  cc  |
| uname | Linux b0095c28649f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 85eda58 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12565/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 

[jira] [Commented] (YARN-3662) Federation Membership State Store internal APIs

2016-07-29 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399797#comment-15399797
 ] 

Subru Krishnan commented on YARN-3662:
--

Thanks [~vinodkv]!

FYI, I deliberately kept the modifiers public, as I mentioned 
[earlier|https://issues.apache.org/jira/browse/YARN-3662?focusedCommentId=15360676],
 but I am fine with adding it when required downstream.

> Federation Membership State Store internal APIs
> ---
>
> Key: YARN-3662
> URL: https://issues.apache.org/jira/browse/YARN-3662
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-3662-YARN-2915-v1.1.patch, 
> YARN-3662-YARN-2915-v1.patch, YARN-3662-YARN-2915-v2.patch, 
> YARN-3662-YARN-2915-v3.01.patch, YARN-3662-YARN-2915-v3.patch, 
> YARN-3662-YARN-2915-v4.patch, YARN-3662-YARN-2915-v5.patch, 
> YARN-3662-YARN-2915-v6.patch, YARN-3662-YARN-2915-v7.patch
>
>
> The Federation Application State encapsulates the information about the 
> active RM of each sub-cluster that is participating in Federation. The 
> information includes addresses for ClientRM, ApplicationMaster and Admin 
> services, along with the sub-cluster _capability_, which is currently defined 
> by *ClusterMetricsInfo*. Please refer to the design doc in the parent JIRA 
> for further details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3662) Federation Membership State Store internal APIs

2016-07-29 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-3662:
--
Attachment: YARN-3662-YARN-2915-v7.patch

Same patch, but with the public-modifier-related checkstyle warnings addressed. 
The others are not addressable.

> Federation Membership State Store internal APIs
> ---
>
> Key: YARN-3662
> URL: https://issues.apache.org/jira/browse/YARN-3662
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-3662-YARN-2915-v1.1.patch, 
> YARN-3662-YARN-2915-v1.patch, YARN-3662-YARN-2915-v2.patch, 
> YARN-3662-YARN-2915-v3.01.patch, YARN-3662-YARN-2915-v3.patch, 
> YARN-3662-YARN-2915-v4.patch, YARN-3662-YARN-2915-v5.patch, 
> YARN-3662-YARN-2915-v6.patch, YARN-3662-YARN-2915-v7.patch
>
>
> The Federation Application State encapsulates the information about the 
> active RM of each sub-cluster that is participating in Federation. The 
> information includes addresses for ClientRM, ApplicationMaster and Admin 
> services, along with the sub-cluster _capability_, which is currently defined 
> by *ClusterMetricsInfo*. Please refer to the design doc in the parent JIRA 
> for further details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5428) Allow for specifying the docker client configuration directory

2016-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399795#comment-15399795
 ] 

Hadoop QA commented on YARN-5428:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 16s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 13s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 38s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 2 
new + 237 unchanged - 1 fixed = 239 total (was 238) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 3s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 21s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.nodemanager.TestDirectoryCollection |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12821029/YARN-5428.004.patch |
| JIRA Issue | YARN-5428 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 09b4d0d0ede2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 95f2b98 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12564/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12564/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit test logs |  

[jira] [Created] (YARN-5450) Enhance logging for Cluster.java around InetSocketAddress

2016-07-29 Thread sarun singla (JIRA)
sarun singla created YARN-5450:
--

 Summary: Enhance logging for Cluster.java around InetSocketAddress
 Key: YARN-5450
 URL: https://issues.apache.org/jira/browse/YARN-5450
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: yarn
Reporter: sarun singla
Priority: Minor


We need to add more logging to the Cluster.java class around the 
"initialize(InetSocketAddress jobTrackAddr, Configuration conf)" method, for 
example to log the source of each property.
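
A minimal sketch of what such logging could look like, using the existing 
Configuration#getPropertySources API (the property name and message wording 
are illustrative assumptions):

{code}
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;

public class PropertySourceLogging {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    String name = "mapreduce.framework.name";   // illustrative property
    // getPropertySources reports where the effective value came from
    // (a resource file, a programmatic set(), etc.); null if unknown.
    String[] sources = conf.getPropertySources(name);
    System.out.println("Property " + name + " = " + conf.get(name)
        + " (source: " + Arrays.toString(sources) + ")");
  }
}
{code}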



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5434) Add -client|server argument for graceful decom

2016-07-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399737#comment-15399737
 ] 

Hudson commented on YARN-5434:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10178 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10178/])
YARN-5434. Add -client|server argument for graceful decommmission. (junping_du: 
rev 95f2b9859718eca12fb3167775cdd2dad25dde25)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/RMAdminCLI.java


> Add -client|server argument for graceful decom
> --
>
> Key: YARN-5434
> URL: https://issues.apache.org/jira/browse/YARN-5434
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: YARN-5343.001.patch, YARN-5343.002.patch, 
> YARN-5343.003.patch
>
>
> We should add a {{-client|server}} argument to allow the user to specify 
> whether they want to use the client-side graceful decom tracking or the 
> server-side tracking (YARN-4676).
> Even though the server-side tracking won't go into 2.8, we should add the 
> arguments to 2.8 for compatibility between 2.8 and 2.9, when it's added.  In 
> 2.8, using {{-server}} would just throw an Exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5428) Allow for specifying the docker client configuration directory

2016-07-29 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-5428:
--
Attachment: YARN-5428.004.patch

> Allow for specifying the docker client configuration directory
> --
>
> Key: YARN-5428
> URL: https://issues.apache.org/jira/browse/YARN-5428
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5428.001.patch, YARN-5428.002.patch, 
> YARN-5428.003.patch, YARN-5428.004.patch
>
>
> The docker client allows for specifying a configuration directory that 
> contains the docker client's configuration. It is common to store "docker 
> login" credentials in this config to avoid the need to run docker login on 
> each cluster member. 
> By default the docker client config is $HOME/.docker/config.json on Linux. 
> However, this does not work with the current container executor user 
> switching, and it may also be desirable to centralize this configuration 
> beyond a single user's home directory.
> Note that the command line arg is for the configuration directory, NOT the 
> configuration file.
> This change will be needed to allow YARN to automatically pull images at 
> localization time or within container executor.
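
For reference, the docker client exposes this as a global {{--config}} flag 
that points at the directory containing config.json; a usage sketch (directory 
path and image name are hypothetical):

{noformat}
docker --config /etc/hadoop/docker-client pull library/centos:7
{noformat}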






[jira] [Commented] (YARN-5428) Allow for specifying the docker client configuration directory

2016-07-29 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399721#comment-15399721
 ] 

Shane Kumpf commented on YARN-5428:
---

[~vvasudev] Thanks for the heads up. I've reformatted the patch and am 
attaching it now. Let me know if you still see any issues. Thanks!

> Allow for specifying the docker client configuration directory
> --
>
> Key: YARN-5428
> URL: https://issues.apache.org/jira/browse/YARN-5428
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5428.001.patch, YARN-5428.002.patch, 
> YARN-5428.003.patch
>
>
> The docker client allows for specifying a configuration directory that 
> contains the docker client's configuration. It is common to store "docker 
> login" credentials in this config to avoid the need to run docker login on 
> each cluster member. 
> By default the docker client config is $HOME/.docker/config.json on Linux. 
> However, this does not work with the current container executor user 
> switching, and it may also be desirable to centralize this configuration 
> beyond a single user's home directory.
> Note that the command line arg is for the configuration directory, NOT the 
> configuration file.
> This change will be needed to allow YARN to automatically pull images at 
> localization time or within container executor.






[jira] [Commented] (YARN-5394) Remove bind-mount /etc/passwd to Docker Container

2016-07-29 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399675#comment-15399675
 ] 

Sidharta Seethana commented on YARN-5394:
-

[~tangzhankun], could you please submit the patch to Jenkins if it's ready for 
review?

> Remove bind-mount /etc/passwd to Docker Container
> -
>
> Key: YARN-5394
> URL: https://issues.apache.org/jira/browse/YARN-5394
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
> Attachments: YARN-5394-branch-2.8.001.patch, 
> YARN-5394-branch-2.8.002.patch
>
>
> The current LCE (DockerLinuxContainerRuntime) bind-mounts /etc/passwd into 
> the container, and it appears to use the wrong destination file name 
> "/etc/password" in the container:
> {panel}
> .addMountLocation("/etc/passwd", "/etc/password:ro");
> {panel}
> The biggest issue with bind-mounting /etc/passwd is that it overrides the 
> users defined in the Docker image, which is not expected. Removing it won't 
> affect existing use cases.
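
For context, that builder call corresponds to a docker run bind-mount flag 
along these lines (a sketch of the generated argument, contrasting the typo'd 
destination with what a corrected mount would have been):

{noformat}
# as generated today (wrong destination path inside the container):
docker run -v /etc/passwd:/etc/password:ro ...
# what a corrected mount would have looked like:
docker run -v /etc/passwd:/etc/passwd:ro ...
{noformat}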






[jira] [Commented] (YARN-5443) Add support for docker inspect

2016-07-29 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399614#comment-15399614
 ] 

Shane Kumpf commented on YARN-5443:
---

Same unrelated failures as before. Please review when you get a chance. Thanks!

> Add support for docker inspect
> --
>
> Key: YARN-5443
> URL: https://issues.apache.org/jira/browse/YARN-5443
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5443.001.patch, YARN-5443.002.patch
>
>
> Similar to the DockerStopCommand and DockerRunCommand, it would be desirable 
> to have a DockerInspectCommand. The initial use is for retrieving a 
> container's status, but many other uses are possible (IP information, volume 
> information, etc).
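
A minimal, self-contained sketch of the idea (class and method names are 
assumptions modeled on the builder-style Docker*Command classes; the actual 
patch may differ):

{code}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: assembles a "docker inspect" command line the same
// way the existing run/stop command builders assemble theirs.
public class DockerInspectCommandSketch {
  private final List<String> args = new ArrayList<>();

  public DockerInspectCommandSketch(String containerName) {
    args.add("inspect");
    // A Go template restricts the output to the container's status; other
    // templates could pull IP or volume information instead.
    args.add("--format={{.State.Status}}");
    args.add(containerName);
  }

  public String getCommandWithArguments() {
    return "docker " + String.join(" ", args);
  }
}
{code}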






[jira] [Commented] (YARN-5443) Add support for docker inspect

2016-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399577#comment-15399577
 ] 

Hadoop QA commented on YARN-5443:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 13s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 25s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.nodemanager.TestDirectoryCollection |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820997/YARN-5443.002.patch |
| JIRA Issue | YARN-5443 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 250ab4d7c127 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 204a205 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12562/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12562/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12562/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12562/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 

[jira] [Updated] (YARN-4091) Add REST API to retrieve scheduler activity

2016-07-29 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-4091:
--
Attachment: SchedulerActivityManager-TestReport v2.pdf

Attaching the v2 version of the test report. A few more comments have been added inside the doc.

> Add REST API to retrieve scheduler activity
> ---
>
> Key: YARN-4091
> URL: https://issues.apache.org/jira/browse/YARN-4091
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Sunil G
>Assignee: Chen Ge
> Attachments: Improvement on debugdiagnostic information - YARN.pdf, 
> SchedulerActivityManager-TestReport v2.pdf, 
> SchedulerActivityManager-TestReport.pdf, YARN-4091-design-doc-v1.pdf, 
> YARN-4091.1.patch, YARN-4091.2.patch, YARN-4091.3.patch, YARN-4091.4.patch, 
> YARN-4091.preliminary.1.patch, app_activities.json, node_activities.json
>
>
> As schedulers gain various new capabilities, more of the configurations that 
> tune them start to take actions such as limiting container assignment to an 
> application or delaying container allocation. No clear information is passed 
> from the scheduler to the outside world under these scenarios, which makes 
> debugging much harder.
> This ticket is an effort to introduce more defined states at the various 
> points where the scheduler skips/rejects a container assignment, activates 
> an application, etc. Such information will help users know what is happening 
> inside the scheduler.
> Attaching a short proposal for initial discussion. We would like to improve 
> on this as we discuss.






[jira] [Commented] (YARN-5382) RM does not audit log kill request for active applications

2016-07-29 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399560#comment-15399560
 ] 

Vrushali C commented on YARN-5382:
--

Will fix the checkstyle issues shortly

> RM does not audit log kill request for active applications
> --
>
> Key: YARN-5382
> URL: https://issues.apache.org/jira/browse/YARN-5382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Vrushali C
> Attachments: YARN-5382-branch-2.7.01.patch, 
> YARN-5382-branch-2.7.02.patch, YARN-5382-branch-2.7.03.patch, 
> YARN-5382-branch-2.7.04.patch, YARN-5382-branch-2.7.05.patch, 
> YARN-5382.06.patch, YARN-5382.07.patch
>
>
> ClientRMService will audit a kill request but only if it either fails to 
> issue the kill or if the kill is sent to an already finished application.  It 
> does not create a log entry when the application is active which is arguably 
> the most important case to audit.






[jira] [Updated] (YARN-5443) Add support for docker inspect

2016-07-29 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-5443:
--
Attachment: YARN-5443.002.patch

> Add support for docker inspect
> --
>
> Key: YARN-5443
> URL: https://issues.apache.org/jira/browse/YARN-5443
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5443.001.patch, YARN-5443.002.patch
>
>
> Similar to the DockerStopCommand and DockerRunCommand, it would be desirable 
> to have a DockerInspectCommand. The initial use is for retrieving a 
> container's status, but many other uses are possible (IP information, volume 
> information, etc).






[jira] [Commented] (YARN-5443) Add support for docker inspect

2016-07-29 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399508#comment-15399508
 ] 

Shane Kumpf commented on YARN-5443:
---

[~vvasudev] - Thanks for the review. I'll get a new patch up shortly. I'll also 
open a ticket to fix that test class's name.

[~jianhe] - Thanks for the feedback. Based on our discussion, that doesn't seem 
to be necessary. executePrivilegedOperation has a grabOutput argument that will 
return the command's output.

> Add support for docker inspect
> --
>
> Key: YARN-5443
> URL: https://issues.apache.org/jira/browse/YARN-5443
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5443.001.patch
>
>
> Similar to the DockerStopCommand and DockerRunCommand, it would be desirable 
> to have a DockerInspectCommand. The initial use is for retrieving a 
> container's status, but many other uses are possible (IP information, volume 
> information, etc).






[jira] [Updated] (YARN-4091) Add REST API to retrieve scheduler activity

2016-07-29 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-4091:
--
Attachment: SchedulerActivityManager-TestReport.pdf

Hi [~ChenGe] and [~leftnoteasy]

I got some time to test with this patch. I thought of sharing the test results 
here along with a few inputs.

I added these comments in the doc as well.

Comments:
# I think the diagnostic message could be improved: "do not need more 
resource" => "Application does not need more resource".
# For node activity, "priority": "-1" does not make sense. Could we hide it at 
the node level and show it only for the app (container)?
# timeStamp is not meaningful ("timeStamp": "1469792611186"). It could be a 
date and time, or relative to the previous activity.
# *finalAllocationState* is one of the entries for an application. Could we 
call it *finalAppAllocationState*?
# At the queue level, is "allocationState" meaningful? I think we can hide it 
at the queue level; thoughts?
# As mentioned earlier, priority could be hidden in places where it is -1.
# As an improvement, it would be better to report the pending resource 
requests per app after allocation. That would give some idea of what remains 
and could help a lot.
# When I tested the case "allocation for an application is done and the app is 
running; a second app is waiting due to the AM resource percentage", I could 
not get the expected result. Am I missing something? This is test case 6 in 
the report.
# Could we also print the node_label when a container is allocated?


I tried some more cases and will try to enhance this report.



> Add REST API to retrieve scheduler activity
> ---
>
> Key: YARN-4091
> URL: https://issues.apache.org/jira/browse/YARN-4091
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Sunil G
>Assignee: Chen Ge
> Attachments: Improvement on debugdiagnostic information - YARN.pdf, 
> SchedulerActivityManager-TestReport.pdf, YARN-4091-design-doc-v1.pdf, 
> YARN-4091.1.patch, YARN-4091.2.patch, YARN-4091.3.patch, YARN-4091.4.patch, 
> YARN-4091.preliminary.1.patch, app_activities.json, node_activities.json
>
>
> As schedulers gain various new capabilities, more of the configurations that 
> tune them start to take actions such as limiting container assignment to an 
> application or delaying container allocation. No clear information is passed 
> from the scheduler to the outside world under these scenarios, which makes 
> debugging much harder.
> This ticket is an effort to introduce more defined states at the various 
> points where the scheduler skips/rejects a container assignment, activates 
> an application, etc. Such information will help users know what is happening 
> inside the scheduler.
> Attaching a short proposal for initial discussion. We would like to improve 
> on this as we discuss.






[jira] [Commented] (YARN-4888) Changes in RM container allocation for identifying resource-requests explicitly

2016-07-29 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399430#comment-15399430
 ] 

Arun Suresh commented on YARN-4888:
---

[~subru], I see just a few minor nits:

# I feel the *UNDEFINED* SchedulerRequestKey should have allocationRequestId = 
-1 instead of 0 (to follow Priority).
# Shouldn't the comparison be (o.allocationRequestId - getAllocationRequestId()) 
to maintain a sort order similar to priority? (See the sketch below.)
# In {{RegularContainerAllocator}}, you can revert the spurious changes in the 
imports.
# Ditto in {{FiCaSchedulerApp}}, {{FSAppAttempt}}, {{TestCapacityScheduler}} 
and {{TestFairScheduler}}.

Thanks for the extensive tests.
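
A minimal, self-contained sketch of the ordering suggested in the second nit 
(class and field names are assumptions; the real SchedulerRequestKey wraps a 
Priority object, for which an int stands in here):

{code}
// Hypothetical sketch: compare allocationRequestId in the same direction as
// the priority comparison, using overflow-safe compares instead of raw
// subtraction.
public class SchedulerRequestKeySketch
    implements Comparable<SchedulerRequestKeySketch> {
  private final int priority;
  private final long allocationRequestId;

  public SchedulerRequestKeySketch(int priority, long allocationRequestId) {
    this.priority = priority;
    this.allocationRequestId = allocationRequestId;
  }

  @Override
  public int compareTo(SchedulerRequestKeySketch o) {
    if (priority != o.priority) {
      // Existing behavior: o is compared against this to order by priority.
      return Integer.compare(o.priority, priority);
    }
    // The nit: compare o.allocationRequestId against this key's id, in the
    // same direction as the priority comparison above.
    return Long.compare(o.allocationRequestId, allocationRequestId);
  }
}
{code}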

> Changes in RM container allocation for identifying resource-requests 
> explicitly
> ---
>
> Key: YARN-4888
> URL: https://issues.apache.org/jira/browse/YARN-4888
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-4888-WIP.patch, YARN-4888-v0.patch, 
> YARN-4888-v2.patch, YARN-4888-v3.patch, YARN-4888.001.patch
>
>
> YARN-4879 puts forward the notion of identifying allocate requests 
> explicitly. This JIRA is to track the changes in RM app scheduling data 
> structures to accomplish it. Please refer to the design doc in the parent 
> JIRA for details.






[jira] [Resolved] (YARN-5442) TestYarnClient fails in trunk

2016-07-29 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S resolved YARN-5442.
-
Resolution: Duplicate

Closing as duplicate!

> TestYarnClient fails in trunk
> -
>
> Key: YARN-5442
> URL: https://issues.apache.org/jira/browse/YARN-5442
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Xuan Gong
>
> testReservationDelete(org.apache.hadoop.yarn.client.api.impl.TestYarnClient)  
> Time elapsed: 2.218 sec  <<< FAILURE!
> java.lang.AssertionError: Exhausted attempts in checking if node capacity was 
> added to the plan
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.setupMiniYARNCluster(TestYarnClient.java:1222)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.testReservationDelete(TestYarnClient.java:1584)
> testUpdateReservation(org.apache.hadoop.yarn.client.api.impl.TestYarnClient)  
> Time elapsed: 2.181 sec  <<< FAILURE!
> java.lang.AssertionError: Exhausted attempts in checking if node capacity was 
> added to the plan
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.setupMiniYARNCluster(TestYarnClient.java:1222)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.testUpdateReservation(TestYarnClient.java:1300)
> testListReservationsByTimeIntervalContainingNoReservations(org.apache.hadoop.yarn.client.api.impl.TestYarnClient)
>   Time elapsed: 2.257 sec  <<< FAILURE!
> java.lang.AssertionError: Exhausted attempts in checking if node capacity was 
> added to the plan
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.setupMiniYARNCluster(TestYarnClient.java:1222)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.testListReservationsByTimeIntervalContainingNoReservations(TestYarnClient.java:1494)
> testCreateReservation(org.apache.hadoop.yarn.client.api.impl.TestYarnClient)  
> Time elapsed: 2.29 sec  <<< FAILURE!
> java.lang.AssertionError: Exhausted attempts in checking if node capacity was 
> added to the plan
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.setupMiniYARNCluster(TestYarnClient.java:1222)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.testCreateReservation(TestYarnClient.java:1257)






[jira] [Commented] (YARN-5449) nodemanager process is hanged, and lost from resourcemanager

2016-07-29 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399251#comment-15399251
 ] 

Rohith Sharma K S commented on YARN-5449:
-

[~shurong.mai] It would be better if you could give at least some information 
about the issue. Just creating an issue does not convey anything about it. I 
assume you created the JIRA with some intention, so you could probably add 
your analysis too. For issues like a hang, it would also be good to have 
captured a thread dump.

> nodemanager process is hanged, and lost from resourcemanager
> 
>
> Key: YARN-5449
> URL: https://issues.apache.org/jira/browse/YARN-5449
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.2.0
> Environment:  2.6.32-573.8.1.el6.x86_64 GNU/Linux
>Reporter: mai shurong
>







[jira] [Commented] (YARN-5448) Resource in Cluster Metrics is not sum of resources in all nodes of all partitions

2016-07-29 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399248#comment-15399248
 ] 

Naganarasimha G R commented on YARN-5448:
-

[~sunilg]
bq. But resource allocation is only some % of X (not the full cluster).
Suppose a node is mapped to an exclusive partition and users are asking for 
*other partitions*; then irrespective of the configured accessibility of this 
partition to any queue, *resource allocation is only some % of X*. 
And suppose users try to submit an app when the partition is *not* mapped; 
application submission then fails anyway with an appropriate exception, so it 
will not be a surprise for the admin, nor something that will go unnoticed.

bq. But I am not sure how those labels can be visible enough to the user from 
the scheduler UI to convey the issue. Could you share how it would look?
I was planning to show the partition information in a different color (red) 
with a tooltip indicating "the partition is not assigned to any leaf queue".

> Resource in Cluster Metrics is not sum of resources in all nodes of all 
> partitions
> --
>
> Key: YARN-5448
> URL: https://issues.apache.org/jira/browse/YARN-5448
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager, webapp
>Affects Versions: 2.7.2
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: NodesPage.png, schedulerPage.png
>
>
> Currently the resource info in Cluster Metrics is derived from Queue 
> Metrics' *available resource + allocated resource*. Hence if some nodes 
> belong to a partition that is not associated with any queue, the capacity 
> scheduler's partition hierarchy shows those nodes' resources under the 
> partition, but Cluster Metrics does not. 
> Apart from this, the Metrics overview table is also shown on the Nodes page, 
> so if we show resource info from Queue Metrics, the user will not be able to 
> correlate it (images attached for the same).
> IIUC the idea of not showing it in the *Metrics overview table* is to 
> highlight that the configuration is not proper. This needs to be somehow 
> conveyed through the partition-by-queue-hierarchy chart.
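
To make the mismatch concrete with hypothetical numbers: in a cluster of 10 
nodes with 8 GB each, where 2 nodes sit in a partition mapped to no queue, the 
partition-by-queue hierarchy would account for all 80 GB while Cluster Metrics 
would report only 64 GB.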






[jira] [Commented] (YARN-5333) Some recovered apps are put into default queue when RM HA

2016-07-29 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399237#comment-15399237
 ] 

Rohith Sharma K S commented on YARN-5333:
-

I just tried modifying the code; below is the error I was referring to, where 
the RMWebApp fails to start. 
{noformat}
com.google.inject.CreationException: Unable to create injector, see the 
following errors:

1) Binding to null instances is not allowed. Use toProvider(Providers.of(null)) 
if this is your intended behaviour.
  at org.apache.hadoop.yarn.webapp.WebApps$Builder$2.configure(WebApps.java:335)

1 error
at 
com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:466)
at 
com.google.inject.internal.InternalInjectorCreator.initializeStatically(InternalInjectorCreator.java:155)
at 
com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:107)
at com.google.inject.Guice.createInjector(Guice.java:96)
at com.google.inject.Guice.createInjector(Guice.java:73)
at com.google.inject.Guice.createInjector(Guice.java:62)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:331)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:372)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:1025)
at 
org.apache.hadoop.yarn.server.resourcemanager.MockRM.startWepApp(MockRM.java:909)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1127)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at 
org.apache.hadoop.yarn.server.resourcemanager.TestRMHA.testRMDispatcherForHA(TestRMHA.java:333)
{noformat}

Apart from the above, another point is that the RMWebService is started in the 
standby RM, where REST calls can still be made. If we do not initialize the 
active services, we can expect NPEs from the RMWebService. There are many more 
things to take care of if we go for initializing active services during 
transitionToActive. 

> Some recovered apps are put into default queue when RM HA
> -
>
> Key: YARN-5333
> URL: https://issues.apache.org/jira/browse/YARN-5333
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jun Gong
>Assignee: Jun Gong
> Attachments: YARN-5333.01.patch, YARN-5333.02.patch, 
> YARN-5333.03.patch
>
>
> Enable RM HA and use FairScheduler, with 
> {{yarn.scheduler.fair.allow-undeclared-pools}} set to false and 
> {{yarn.scheduler.fair.user-as-default-queue}} set to false.
> Steps to reproduce:
> 1. Start two RMs.
> 2. After the RMs are running, edit {{etc/hadoop/fair-scheduler.xml}} on both 
> RMs and add some queues (see the sketch below).
> 3. Submit some apps to the newly added queues.
> 4. Stop the active RM; the standby RM will then transition to active and 
> recover the apps.
> However, the new active RM will put the recovered apps into the default 
> queue because it may not have loaded the new {{fair-scheduler.xml}}. We need 
> to call {{initScheduler}} before starting the active services, or move 
> {{refreshAll()}} in front of {{rm.transitionToActive()}}. *It seems this is 
> also important for other schedulers*.
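
For illustration, the allocation file edited in step 2 might gain a queue 
entry like this (queue name and weight are hypothetical):

{noformat}
<?xml version="1.0"?>
<allocations>
  <queue name="newqueue">
    <weight>1.0</weight>
  </queue>
</allocations>
{noformat}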






[jira] [Commented] (YARN-5448) Resource in Cluster Metrics is not sum of resources in all nodes of all partitions

2016-07-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399069#comment-15399069
 ] 

Sunil G commented on YARN-5448:
---

From a use case perspective, two questions will pop up from users:
# I have *N* nodes in my cluster, each configured with *M* GB of resource, but 
the cluster resource in the web UI is not showing *N x M* resource. (Current 
behavior.)
# I have *X* resource in my cluster and the web UI displays the same, but 
resource allocation is only some % of X (not the full cluster).
The 2nd question will pop up later, after fixing this in the way proposed here.

I think these two are debatable topics and we can choose which one to answer. 
From the second part of your proposal, you are planning to show such labels in 
the scheduler UI, but I am not sure how those labels can be visible enough to 
the user from the scheduler UI to convey the issue. Could you share how it 
would look?

Hence I thought we could have a column like "Non-usable cluster resource", and 
we could hide it when labels are not enabled. As mentioned, we can wait for 
other folks to pitch in too.

> Resource in Cluster Metrics is not sum of resources in all nodes of all 
> partitions
> --
>
> Key: YARN-5448
> URL: https://issues.apache.org/jira/browse/YARN-5448
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager, webapp
>Affects Versions: 2.7.2
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: NodesPage.png, schedulerPage.png
>
>
> Currently the resource info in Cluster Metrics is derived from Queue 
> Metrics' *available resource + allocated resource*. Hence if some nodes 
> belong to a partition that is not associated with any queue, the capacity 
> scheduler's partition hierarchy shows those nodes' resources under the 
> partition, but Cluster Metrics does not. 
> Apart from this, the Metrics overview table is also shown on the Nodes page, 
> so if we show resource info from Queue Metrics, the user will not be able to 
> correlate it (images attached for the same).
> IIUC the idea of not showing it in the *Metrics overview table* is to 
> highlight that the configuration is not proper. This needs to be somehow 
> conveyed through the partition-by-queue-hierarchy chart.






[jira] [Created] (YARN-5449) nodemanager process is hanged, and lost from resourcemanager

2016-07-29 Thread mai shurong (JIRA)
mai shurong created YARN-5449:
-

 Summary: nodemanager process is hanged, and lost from 
resourcemanager
 Key: YARN-5449
 URL: https://issues.apache.org/jira/browse/YARN-5449
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.2.0
 Environment:  2.6.32-573.8.1.el6.x86_64 GNU/Linux
Reporter: mai shurong









[jira] [Commented] (YARN-4624) NPE in PartitionQueueCapacitiesInfo while accessing Schduler UI

2016-07-29 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399029#comment-15399029
 ] 

Naganarasimha G R commented on YARN-4624:
-

+1 for 2.8.

> NPE in PartitionQueueCapacitiesInfo while accessing Schduler UI
> ---
>
> Key: YARN-4624
> URL: https://issues.apache.org/jira/browse/YARN-4624
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: SchedulerUIWithOutLabelMapping.png, YARN-2674-002.patch, 
> YARN-4624-003.patch, YARN-4624.patch
>
>
> Scenario:
> ===
> Configure node labels and add them to the cluster
> Start the cluster
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.PartitionQueueCapacitiesInfo.getMaxAMLimitPercentage(PartitionQueueCapacitiesInfo.java:114)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:105)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:94)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueBlock.render(CapacitySchedulerPage.java:293)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueuesBlock.render(CapacitySchedulerPage.java:447)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
> {noformat}






[jira] [Updated] (YARN-4624) NPE in PartitionQueueCapacitiesInfo while accessing Schduler UI

2016-07-29 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-4624:
--
Target Version/s: 2.8.0

> NPE in PartitionQueueCapacitiesInfo while accessing Schduler UI
> ---
>
> Key: YARN-4624
> URL: https://issues.apache.org/jira/browse/YARN-4624
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: SchedulerUIWithOutLabelMapping.png, YARN-2674-002.patch, 
> YARN-4624-003.patch, YARN-4624.patch
>
>
> Scenario:
> ===
> Configure node labels and add them to the cluster
> Start the cluster
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.PartitionQueueCapacitiesInfo.getMaxAMLimitPercentage(PartitionQueueCapacitiesInfo.java:114)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:105)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:94)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueBlock.render(CapacitySchedulerPage.java:293)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueuesBlock.render(CapacitySchedulerPage.java:447)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
> {noformat}






[jira] [Updated] (YARN-4624) NPE in PartitionQueueCapacitiesInfo while accessing Schduler UI

2016-07-29 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-4624:
--
Fix Version/s: (was: 2.8.0)

> NPE in PartitionQueueCapacitiesInfo while accessing Schduler UI
> ---
>
> Key: YARN-4624
> URL: https://issues.apache.org/jira/browse/YARN-4624
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: SchedulerUIWithOutLabelMapping.png, YARN-2674-002.patch, 
> YARN-4624-003.patch, YARN-4624.patch
>
>
> Scenario:
> ===
> Configure node labels and add them to the cluster
> Start the cluster
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.PartitionQueueCapacitiesInfo.getMaxAMLimitPercentage(PartitionQueueCapacitiesInfo.java:114)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:105)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:94)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueBlock.render(CapacitySchedulerPage.java:293)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueuesBlock.render(CapacitySchedulerPage.java:447)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
> {noformat}






[jira] [Updated] (YARN-4624) NPE in PartitionQueueCapacitiesInfo while accessing Schduler UI

2016-07-29 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-4624:
--
Fix Version/s: 2.8.0

> NPE in PartitionQueueCapacitiesInfo while accessing Schduler UI
> ---
>
> Key: YARN-4624
> URL: https://issues.apache.org/jira/browse/YARN-4624
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: SchedulerUIWithOutLabelMapping.png, YARN-2674-002.patch, 
> YARN-4624-003.patch, YARN-4624.patch
>
>
> Scenario:
> ===
> Configure node labels and add them to the cluster
> Start the cluster
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.PartitionQueueCapacitiesInfo.getMaxAMLimitPercentage(PartitionQueueCapacitiesInfo.java:114)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:105)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:94)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueBlock.render(CapacitySchedulerPage.java:293)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueuesBlock.render(CapacitySchedulerPage.java:447)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
> {noformat}






[jira] [Commented] (YARN-4624) NPE in PartitionQueueCapacitiesInfo while accessing Schduler UI

2016-07-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399022#comment-15399022
 ] 

Sunil G commented on YARN-4624:
---

[~Naganarasimha Garla], as you know, with YARN-4304 there was an improvement 
planned to hide {{maxAMResourcePercenatageLimit}} for parent queues. If we can 
set the variable to null in the DAO object, we can achieve the same. Hence we 
chose {{Float}} and set it to null in the ParentQueue case. With the v3 patch, 
we also wanted to do a null check in the UI to avoid the problem mentioned in 
this JIRA. That required a couple of typecasts between float and Float, and 
findbugs reported this as an error. We could suppress that error to get this 
optimization in, which is what I was suggesting in earlier comments. My only 
concern was that we were doing some double type casting to achieve what we 
want, so I wondered whether it is really good to do something like that, since 
it is not clean code. Hence I thought of going with the v1 patch. I am fine 
either way.
It would be good to also take input from [~leftnoteasy], [~brahmareddy] and 
[~bibinchundatt], who were involved, and come to a conclusion for 2.8. 

I would like to keep the target version as 2.8 if that is fine either way.
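
A minimal sketch of the nullable-{{Float}} approach being discussed (class 
trimmed to one field, names are assumptions; the real DAO has more fields):

{code}
// Hypothetical sketch: a boxed Float lets parent queues carry null so the
// UI can skip rendering the AM limit for them. If the getter instead
// returned a primitive float, a null field would auto-unbox and throw the
// NullPointerException reported in this JIRA.
public class PartitionQueueCapacitiesInfoSketch {
  private Float maxAMLimitPercentage; // null for parent queues

  public Float getMaxAMLimitPercentage() {
    return maxAMLimitPercentage;
  }

  public void setMaxAMLimitPercentage(Float maxAMLimitPercentage) {
    this.maxAMLimitPercentage = maxAMLimitPercentage;
  }
}
{code}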

> NPE in PartitionQueueCapacitiesInfo while accessing Schduler UI
> ---
>
> Key: YARN-4624
> URL: https://issues.apache.org/jira/browse/YARN-4624
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: SchedulerUIWithOutLabelMapping.png, YARN-2674-002.patch, 
> YARN-4624-003.patch, YARN-4624.patch
>
>
> Scenario:
> ===
> Configure node labels and add them to the cluster
> Start the cluster
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.PartitionQueueCapacitiesInfo.getMaxAMLimitPercentage(PartitionQueueCapacitiesInfo.java:114)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:105)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:94)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueBlock.render(CapacitySchedulerPage.java:293)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueuesBlock.render(CapacitySchedulerPage.java:447)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
> {noformat}






[jira] [Commented] (YARN-5443) Add support for docker inspect

2016-07-29 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399008#comment-15399008
 ] 

Jian He commented on YARN-5443:
---

[~shaneku...@gmail.com], I think container-executor.c#run_docker also needs to 
be changed to pipe the docker command's output to stdout. Otherwise, the Java 
code won't be able to get the docker inspect output.

> Add support for docker inspect
> --
>
> Key: YARN-5443
> URL: https://issues.apache.org/jira/browse/YARN-5443
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5443.001.patch
>
>
> Similar to the DockerStopCommand and DockerRunCommand, it would be desirable 
> to have a DockerInspectCommand. The initial use is for retrieving a 
> containers status, but many other uses are possible (IP information, volume 
> information, etc).






[jira] [Commented] (YARN-4624) NPE in PartitionQueueCapacitiesInfo while accessing Schduler UI

2016-07-29 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399001#comment-15399001
 ] 

Naganarasimha G R commented on YARN-4624:
-

[~sunilg], 
bq. we can group this new metrics (maxAMPercentageLimit etc) in other DAO 
object and can use it.
If this is the alternate approach, would it bring incompatibility later? If so, 
it is better to address it in this JIRA and get it fixed as part of 2.8.
Well, if the approach was already discussed, then we can go ahead with the v3 
patch and just add it to the findbugs exclude file.

> NPE in PartitionQueueCapacitiesInfo while accessing Schduler UI
> ---
>
> Key: YARN-4624
> URL: https://issues.apache.org/jira/browse/YARN-4624
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: SchedulerUIWithOutLabelMapping.png, YARN-2674-002.patch, 
> YARN-4624-003.patch, YARN-4624.patch
>
>
> Scenario:
> ===
> Configure node labels and add them to the cluster
> Start the cluster
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.PartitionQueueCapacitiesInfo.getMaxAMLimitPercentage(PartitionQueueCapacitiesInfo.java:114)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:105)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:94)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueBlock.render(CapacitySchedulerPage.java:293)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueuesBlock.render(CapacitySchedulerPage.java:447)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
> {noformat}






[jira] [Commented] (YARN-5448) Resource in Cluster Metrics is not sum of resources in all nodes of all partitions

2016-07-29 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398991#comment-15398991
 ] 

Naganarasimha G R commented on YARN-5448:
-

Thanks [~sunilg] for your thoughts.
Well, yes, it is based on perception and is debatable, but my two cents 
against your approach are:
# This is a one-off wrong-configuration scenario; if we add additional columns 
for it, then in general usage (after correction) they are not of much use.
# Currently only two resources are monitored (CPU and memory); what about 
other resources we may want to add in the future? Adding multiple columns for 
each resource for this purpose doesn't seem good.
# ??Admin can immediately notice that there are some label which are 
not-configured or part of cluster resource?? : Well, this is anyway related to 
the Scheduler page; if the admin sees a warning in the 
*partition-by-queue-hierarchy chart*, IMHO that should be sufficient.

I would like to get the opinion of others too.

> Resource in Cluster Metrics is not sum of resources in all nodes of all 
> partitions
> --
>
> Key: YARN-5448
> URL: https://issues.apache.org/jira/browse/YARN-5448
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager, webapp
>Affects Versions: 2.7.2
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: NodesPage.png, schedulerPage.png
>
>
> Currently the resource info in Cluster Metrics is derived from Queue 
> Metrics' *available resource + allocated resource*. Hence if some nodes 
> belong to a partition that is not associated with any queue, the capacity 
> scheduler's partition hierarchy shows those nodes' resources under the 
> partition, but Cluster Metrics does not. 
> Apart from this, the Metrics overview table is also shown on the Nodes page, 
> so if we show resource info from Queue Metrics, the user will not be able to 
> correlate it (images attached for the same).
> IIUC the idea of not showing it in the *Metrics overview table* is to 
> highlight that the configuration is not proper. This needs to be somehow 
> conveyed through the partition-by-queue-hierarchy chart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4997) Update fair scheduler to use pluggable auth provider

2016-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398987#comment-15398987
 ] 

Hadoop QA commented on YARN-4997:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 40s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 1 
new + 322 unchanged - 3 fixed = 323 total (was 325) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 4s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 14s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 36m 23s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 25s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Inconsistent synchronization of 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.authorizer;
 locked 50% of time  Unsynchronized access at FairScheduler.java:50% of time  
Unsynchronized access at FairScheduler.java:[line 1602] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820905/YARN-4997-002.patch |
| JIRA Issue | YARN-4997 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2fb4bc271934 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 204a205 |
| Default 

[jira] [Commented] (YARN-5446) Cluster Resource Usage table is not displayed when user clicks on hadoop logo on top of web UI home page

2016-07-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398975#comment-15398975
 ] 

Sunil G commented on YARN-5446:
---

Thanks [~ChenGe] for reporting this.
I think the internal redirection is not working correctly. Do you have a patch 
for this? I could help verify it. If not, I can try to investigate the problem.

>  Cluster Resource Usage table is not displayed when user clicks on hadoop 
> logo on top of web UI home page
> -
>
> Key: YARN-5446
> URL: https://issues.apache.org/jira/browse/YARN-5446
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chen Ge
> Attachments: screenshot-1.png
>
>
> Under the latest 3368 build, when the user clicks the hadoop icon on the 
> cluster overview page, "Cluster Resource Usage By Applications" is not 
> displayed correctly.
> Following is the error reported in the browser console.
> {code}
> donut-chart.js:110 Uncaught TypeError: Cannot read property 'value' of 
> undefined(anonymous function) @ 
> donut-chart.js:110arguments.length.each.value.function.value.textContent @ 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5446) Cluster Resource Usage table is not displayed when user clicks on hadoop logo on top of web UI home page

2016-07-29 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5446:
--
Summary:  Cluster Resource Usage table is not displayed when user clicks on 
hadoop logo on top of web UI home page  (was: Find a bug when clicking hadoop 
icon(in the left top corner) in cluster overview page)

>  Cluster Resource Usage table is not displayed when user clicks on hadoop 
> logo on top of web UI home page
> -
>
> Key: YARN-5446
> URL: https://issues.apache.org/jira/browse/YARN-5446
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chen Ge
> Attachments: screenshot-1.png
>
>
> Under the latest 3368 build, when the user clicks the hadoop icon on the 
> cluster overview page, "Cluster Resource Usage By Applications" is not 
> displayed correctly.
> Following is the error reported in the browser console.
> {code}
> donut-chart.js:110 Uncaught TypeError: Cannot read property 'value' of 
> undefined(anonymous function) @ 
> donut-chart.js:110arguments.length.each.value.function.value.textContent @ 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4624) NPE in PartitionQueueCapacitiesInfo while accessing Schduler UI

2016-07-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398970#comment-15398970
 ] 

Sunil G commented on YARN-4624:
---

Yes [~Naganarasimha Garla], we need to make some progress here, as it's a 
long-pending task.

The v3 patch comes with a findbugs issue: we convert to the boxed type twice, 
which causes the problem. I think we can go with the v1 patch itself to 
resolve the issue for now; hiding some unused items can be done in a separate 
improvement jira. Thoughts?
Also, the patch might need a rebase. [~brahmareddy], could you please help to 
rebase once v1 or v3 is decided? If you are busy, I can also do it.
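A minimal sketch of the unboxing pattern that typically causes both the 
findbugs complaint and the NPE in the stack trace below (an illustration only, 
assuming the usual double boxed-type conversion; not the actual YARN code):
{code}
// Hypothetical demo: auto-unboxing a null wrapper throws the NPE.
public class UnboxNpeDemo {
  public static void main(String[] args) {
    // e.g. the AM limit percentage was never configured for a partition
    Float maxAMLimitPercentage = null;
    float value = maxAMLimitPercentage; // NullPointerException thrown here
    System.out.println(value);
  }
}
{code}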

> NPE in PartitionQueueCapacitiesInfo while accessing Schduler UI
> ---
>
> Key: YARN-4624
> URL: https://issues.apache.org/jira/browse/YARN-4624
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: SchedulerUIWithOutLabelMapping.png, YARN-2674-002.patch, 
> YARN-4624-003.patch, YARN-4624.patch
>
>
> Scenario:
> ===
> Configure node labels and add them to the cluster
> Start the cluster
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.PartitionQueueCapacitiesInfo.getMaxAMLimitPercentage(PartitionQueueCapacitiesInfo.java:114)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:105)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:94)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueBlock.render(CapacitySchedulerPage.java:293)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueuesBlock.render(CapacitySchedulerPage.java:447)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5448) Resource in Cluster Metrics is not sum of resources in all nodes of all partitions

2016-07-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398962#comment-15398962
 ] 

Sunil G commented on YARN-5448:
---

bq. Metrics overview table needs to show the resources of all the active NM's
[~Naganarasimha Garla], I think it's better to show non-configured label 
resources as a separate column in the metrics table. A few advantages:
- The admin can immediately notice that some labels are either not configured 
or not part of the cluster resource, and hence cannot be used; action can be 
taken with one look at the metrics table. As I see it, admins may leave labels 
unconfigured for reasons such as a) some labels need to be taken out of 
rotation, or b) a configuration mistake.
- With the proposed approach, if we add non-configured label resources into 
the cluster resources, we need to go to the scheduler page for details. Yes, 
it's a matter of how the data is perceived, but I think it's better if these 
metrics are available separately, up front, in the cluster metrics table.
- We could also have the scheduler page show an info message for unused or 
non-configured labels; that would definitely help with in-depth analysis.

Thoughts?

> Resource in Cluster Metrics is not sum of resources in all nodes of all 
> partitions
> --
>
> Key: YARN-5448
> URL: https://issues.apache.org/jira/browse/YARN-5448
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager, webapp
>Affects Versions: 2.7.2
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: NodesPage.png, schedulerPage.png
>
>
> Currently the resource info in Cluster Metrics is derived from Queue Metrics' 
> *available resource + allocated resource*. Hence, if some nodes belong to a 
> partition that is not associated with any queue, the capacity scheduler 
> partition hierarchy shows those nodes' resources under the partition, but 
> Cluster Metrics does not.
> Apart from this, the Metrics overview table is also shown on the Nodes page, 
> so if we show resource info from Queue Metrics the user will not be able to 
> correlate the two. (Images attached for the same.)
> IIUC, the idea of not showing it in the *Metrics overview table* is to 
> highlight that the configuration is not proper; this needs to be somehow 
> conveyed through the partition-by-queue-hierarchy chart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4997) Update fair scheduler to use pluggable auth provider

2016-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398953#comment-15398953
 ] 

Hadoop QA commented on YARN-4997:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 46s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 1 
new + 334 unchanged - 3 fixed = 335 total (was 337) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 6s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 17s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 33m 25s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 8s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Inconsistent synchronization of 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.authorizer;
 locked 50% of time  Unsynchronized access at FairScheduler.java:50% of time  
Unsynchronized access at FairScheduler.java:[line 1602] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820898/YARN-4997-002.patch |
| JIRA Issue | YARN-4997 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8a642c07ec10 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git 

[jira] [Comment Edited] (YARN-5448) Resource in Cluster Metrics is not sum of resources in all nodes of all partitions

2016-07-29 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398927#comment-15398927
 ] 

Naganarasimha G R edited comment on YARN-5448 at 7/29/16 8:14 AM:
--

As per the discussion with [~wangda], we considered the following options:
# The *Metrics overview table* needs to show the resources of all the active 
NM's.
# In the *partition-by-queue-hierarchy chart*, we can show the non-usable 
partition (with some NM's) and, under the partition, the message "the 
partition is not assigned to any queue". Alternatively, show the partition 
resource in a different font/color (red) with a tooltip saying "the partition 
is not assigned to any queue".
Thoughts?
CC [~sunilg], [~brahma], [~kanaka] & [~bibinchundatt]


was (Author: naganarasimha):
As per the discussion with [~wangda], we considered the following options:
# The *Metrics overview table* needs to show the resources of all the active 
NM's.
# In the *partition-by-queue-hierarchy chart*, we can show the non-usable 
partition (wi) and, under the partition, the message "the partition is not 
assigned to any queue". Alternatively, show the partition resource in a 
different font/color (red) with a tooltip saying "the partition is not 
assigned to any queue".
Thoughts?
CC [~sunilg], [~brahma], [~kanaka] & [~bibinchundatt]

> Resource in Cluster Metrics is not sum of resources in all nodes of all 
> partitions
> --
>
> Key: YARN-5448
> URL: https://issues.apache.org/jira/browse/YARN-5448
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager, webapp
>Affects Versions: 2.7.2
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: NodesPage.png, schedulerPage.png
>
>
> Currently the resource info in Cluster Metrics is derived from Queue Metrics' 
> *available resource + allocated resource*. Hence, if some nodes belong to a 
> partition that is not associated with any queue, the capacity scheduler 
> partition hierarchy shows those nodes' resources under the partition, but 
> Cluster Metrics does not.
> Apart from this, the Metrics overview table is also shown on the Nodes page, 
> so if we show resource info from Queue Metrics the user will not be able to 
> correlate the two. (Images attached for the same.)
> IIUC, the idea of not showing it in the *Metrics overview table* is to 
> highlight that the configuration is not proper; this needs to be somehow 
> conveyed through the partition-by-queue-hierarchy chart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5448) Resource in Cluster Metrics is not sum of resources in all nodes of all partitions

2016-07-29 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398927#comment-15398927
 ] 

Naganarasimha G R commented on YARN-5448:
-

As per the discussion with [~wangda], we considered the following options:
# The *Metrics overview table* needs to show the resources of all the active 
NM's.
# In the *partition-by-queue-hierarchy chart*, we can show the non-usable 
partition (with some NM's) and, under the partition, the message "the 
partition is not assigned to any queue". Alternatively, show the partition 
resource in a different font/color (red) with a tooltip saying "the partition 
is not assigned to any queue".
Thoughts?
CC [~sunilg], [~brahma], [~kanaka] & [~bibinchundatt]

> Resource in Cluster Metrics is not sum of resources in all nodes of all 
> partitions
> --
>
> Key: YARN-5448
> URL: https://issues.apache.org/jira/browse/YARN-5448
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager, webapp
>Affects Versions: 2.7.2
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: NodesPage.png, schedulerPage.png
>
>
> Currently the resource info in Cluster Metrics is derived from Queue Metrics' 
> *available resource + allocated resource*. Hence, if some nodes belong to a 
> partition that is not associated with any queue, the capacity scheduler 
> partition hierarchy shows those nodes' resources under the partition, but 
> Cluster Metrics does not.
> Apart from this, the Metrics overview table is also shown on the Nodes page, 
> so if we show resource info from Queue Metrics the user will not be able to 
> correlate the two. (Images attached for the same.)
> IIUC, the idea of not showing it in the *Metrics overview table* is to 
> highlight that the configuration is not proper; this needs to be somehow 
> conveyed through the partition-by-queue-hierarchy chart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5394) Remove bind-mount /etc/passwd to Docker Container

2016-07-29 Thread Zhankun Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398924#comment-15398924
 ] 

Zhankun Tang commented on YARN-5394:


Title and description updated

> Remove bind-mount /etc/passwd to Docker Container
> -
>
> Key: YARN-5394
> URL: https://issues.apache.org/jira/browse/YARN-5394
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
> Attachments: YARN-5394-branch-2.8.001.patch, 
> YARN-5394-branch-2.8.002.patch
>
>
> The current LCE (DockerLinuxContainerRuntime) bind-mounts /etc/passwd into 
> the container, and it seems to use the wrong target file name "/etc/password":
> {panel}
> .addMountLocation("/etc/passwd", "/etc/password:ro");
> {panel}
> The biggest issue with bind-mounting /etc/passwd is that it overrides the 
> users defined in the Docker image, which is not expected. Removing it won't 
> affect existing use cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5394) Remove bind-mount /etc/passwd to Docker Container

2016-07-29 Thread Zhankun Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-5394:
---
Description: 
The current LCE (DockerLinuxContainerRuntime) bind-mounts /etc/passwd into the 
container, and it seems to use the wrong target file name "/etc/password":
{panel}
.addMountLocation("/etc/passwd", "/etc/password:ro");
{panel}
The biggest issue with bind-mounting /etc/passwd is that it overrides the 
users defined in the Docker image, which is not expected. Removing it won't 
affect existing use cases.

  was:
The current LCE (DockerLinuxContainerRuntime) bind-mounts /etc/passwd into the 
container, but it seems to use the wrong target file name "/etc/password":
{panel}
.addMountLocation("/etc/passwd", "/etc/password:ro");
{panel}
This causes the LCE to fail to launch the Docker container if the Docker image 
doesn't create the same user name and UID in it.


> Remove bind-mount /etc/passwd to Docker Container
> -
>
> Key: YARN-5394
> URL: https://issues.apache.org/jira/browse/YARN-5394
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
> Attachments: YARN-5394-branch-2.8.001.patch, 
> YARN-5394-branch-2.8.002.patch
>
>
> The current LCE (DockerLinuxContainerRuntime) bind-mounts /etc/passwd into 
> the container, and it seems to use the wrong target file name "/etc/password":
> {panel}
> .addMountLocation("/etc/passwd", "/etc/password:ro");
> {panel}
> The biggest issue with bind-mounting /etc/passwd is that it overrides the 
> users defined in the Docker image, which is not expected. Removing it won't 
> affect existing use cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5448) Resource in Cluster Metrics is not sum of resources in all nodes of all partitions

2016-07-29 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-5448:

Attachment: NodesPage.png
schedulerPage.png

> Resource in Cluster Metrics is not sum of resources in all nodes of all 
> partitions
> --
>
> Key: YARN-5448
> URL: https://issues.apache.org/jira/browse/YARN-5448
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager, webapp
>Affects Versions: 2.7.2
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: NodesPage.png, schedulerPage.png
>
>
> Currently the resource info in Cluster Metrics is derived from Queue Metrics' 
> *available resource + allocated resource*. Hence, if some nodes belong to a 
> partition that is not associated with any queue, the capacity scheduler 
> partition hierarchy shows those nodes' resources under the partition, but 
> Cluster Metrics does not.
> Apart from this, the Metrics overview table is also shown on the Nodes page, 
> so if we show resource info from Queue Metrics the user will not be able to 
> correlate the two. (Images attached for the same.)
> IIUC, the idea of not showing it in the *Metrics overview table* is to 
> highlight that the configuration is not proper; this needs to be somehow 
> conveyed through the partition-by-queue-hierarchy chart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5394) Remove bind-mount /etc/passwd to Docker Container

2016-07-29 Thread Zhankun Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-5394:
---
Summary: Remove bind-mount /etc/passwd to Docker Container  (was: Correct 
the wrong file name when mounting /etc/passwd to Docker Container)

> Remove bind-mount /etc/passwd to Docker Container
> -
>
> Key: YARN-5394
> URL: https://issues.apache.org/jira/browse/YARN-5394
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
> Attachments: YARN-5394-branch-2.8.001.patch, 
> YARN-5394-branch-2.8.002.patch
>
>
> The current LCE (DockerLinuxContainerRuntime) bind-mounts /etc/passwd into 
> the container, but it seems to use the wrong target file name "/etc/password":
> {panel}
> .addMountLocation("/etc/passwd", "/etc/password:ro");
> {panel}
> This causes the LCE to fail to launch the Docker container if the Docker 
> image doesn't create the same user name and UID in it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5448) Resource in Cluster Metrics is not sum of resources in all nodes of all partitions

2016-07-29 Thread Naganarasimha G R (JIRA)
Naganarasimha G R created YARN-5448:
---

 Summary: Resource in Cluster Metrics is not sum of resources in 
all nodes of all partitions
 Key: YARN-5448
 URL: https://issues.apache.org/jira/browse/YARN-5448
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacity scheduler, resourcemanager, webapp
Affects Versions: 2.7.2
Reporter: Naganarasimha G R
Assignee: Naganarasimha G R


Currently the resource info in Cluster Metrics is derived from Queue Metrics' 
*available resource + allocated resource*. Hence, if some nodes belong to a 
partition that is not associated with any queue, the capacity scheduler 
partition hierarchy shows those nodes' resources under the partition, but 
Cluster Metrics does not.
Apart from this, the Metrics overview table is also shown on the Nodes page, 
so if we show resource info from Queue Metrics the user will not be able to 
correlate the two. (Images attached for the same.)
IIUC, the idea of not showing it in the *Metrics overview table* is to 
highlight that the configuration is not proper; this needs to be somehow 
conveyed through the partition-by-queue-hierarchy chart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4997) Update fair scheduler to use pluggable auth provider

2016-07-29 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated YARN-4997:
--
Attachment: YARN-4997-002.patch

> Update fair scheduler to use pluggable auth provider
> 
>
> Key: YARN-4997
> URL: https://issues.apache.org/jira/browse/YARN-4997
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Tao Jie
> Attachments: YARN-4997-001.patch, YARN-4997-002.patch
>
>
> Now that YARN-3100 has made the authorization pluggable, it should be 
> supported by the fair scheduler.  YARN-3100 only updated the capacity 
> scheduler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4997) Update fair scheduler to use pluggable auth provider

2016-07-29 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated YARN-4997:
--
Attachment: (was: YARN-4997-002.patch)

> Update fair scheduler to use pluggable auth provider
> 
>
> Key: YARN-4997
> URL: https://issues.apache.org/jira/browse/YARN-4997
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Tao Jie
> Attachments: YARN-4997-001.patch, YARN-4997-002.patch
>
>
> Now that YARN-3100 has made the authorization pluggable, it should be 
> supported by the fair scheduler.  YARN-3100 only updated the capacity 
> scheduler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5394) Correct the wrong file name when mounting /etc/passwd to Docker Container

2016-07-29 Thread Zhankun Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-5394:
---
Attachment: YARN-5394-branch-2.8.002.patch

Uploaded a new patch that removes the /etc/passwd bind-mount. This won't 
affect existing use cases.
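For clarity, a hedged before/after sketch of the change, based only on the 
snippet quoted in the description (the surrounding runtime code is not shown 
in this JIRA):
{code}
// Before (as quoted in the description; note the mistyped target path
// "/etc/password", and that the mount masks the image's own user database):
//     .addMountLocation("/etc/passwd", "/etc/password:ro");
// After the 002 patch: the addMountLocation call is removed entirely, so the
// users defined in the Docker image's /etc/passwd remain in effect.
{code}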

> Correct the wrong file name when mounting /etc/passwd to Docker Container
> -
>
> Key: YARN-5394
> URL: https://issues.apache.org/jira/browse/YARN-5394
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
> Attachments: YARN-5394-branch-2.8.001.patch, 
> YARN-5394-branch-2.8.002.patch
>
>
> The current LCE (DockerLinuxContainerRuntime) bind-mounts /etc/passwd into 
> the container, but it seems to use the wrong target file name "/etc/password":
> {panel}
> .addMountLocation("/etc/passwd", "/etc/password:ro");
> {panel}
> This causes the LCE to fail to launch the Docker container if the Docker 
> image doesn't create the same user name and UID in it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4997) Update fair scheduler to use pluggable auth provider

2016-07-29 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398871#comment-15398871
 ] 

Tao Jie commented on YARN-4997:
---

The attached patch fixes the findbugs warning and the unit tests.
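A minimal sketch of the usual fix for the "Inconsistent synchronization" 
warning quoted in the QA reports above (an illustration of the pattern, not 
the actual patch):
{code}
// Hypothetical example: findbugs flags a field written under a lock but read
// without one ("locked 50% of time"). The fix is to guard reads and writes
// with the same lock.
public class Example {
  private Object authorizer;

  public synchronized void reinitialize(Object auth) {
    this.authorizer = auth; // write is synchronized
  }

  public synchronized Object getAuthorizer() {
    return authorizer; // read is now synchronized as well
  }
}
{code}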

> Update fair scheduler to use pluggable auth provider
> 
>
> Key: YARN-4997
> URL: https://issues.apache.org/jira/browse/YARN-4997
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Tao Jie
> Attachments: YARN-4997-001.patch, YARN-4997-002.patch
>
>
> Now that YARN-3100 has made the authorization pluggable, it should be 
> supported by the fair scheduler.  YARN-3100 only updated the capacity 
> scheduler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4997) Update fair scheduler to use pluggable auth provider

2016-07-29 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated YARN-4997:
--
Attachment: YARN-4997-002.patch

> Update fair scheduler to use pluggable auth provider
> 
>
> Key: YARN-4997
> URL: https://issues.apache.org/jira/browse/YARN-4997
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Tao Jie
> Attachments: YARN-4997-001.patch, YARN-4997-002.patch
>
>
> Now that YARN-3100 has made the authorization pluggable, it should be 
> supported by the fair scheduler.  YARN-3100 only updated the capacity 
> scheduler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5444) TestLinuxContainerExecutorWithMocks failed due to unstable assumption.

2016-07-29 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398820#comment-15398820
 ] 

Varun Vasudev commented on YARN-5444:
-

+1. I'll commit this tomorrow if no one objects.

> TestLinuxContainerExecutorWithMocks failed due to unstable assumption.
> --
>
> Key: YARN-5444
> URL: https://issues.apache.org/jira/browse/YARN-5444
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5444.001.patch
>
>
> Test cases {{testLaunchCommandWithoutPriority}} and {{testStartLocalizer}} 
> are based on the assumption that YARN configuration files won't be loaded, 
> which is not true in some situations.
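A minimal sketch of making such an assumption explicit in JUnit, so the test 
is skipped rather than failed when a configuration file on the classpath 
changes the defaults (an illustration, not the actual YARN-5444 patch; the 
property checked here is an assumption):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.junit.Assume;
import org.junit.Test;

public class TestExample {
  @Test
  public void testLaunchCommandWithoutPriority() {
    Configuration conf = new YarnConfiguration();
    // Skip the test when a loaded yarn-site.xml sets a scheduler priority,
    // since the expected launch command would then differ.
    Assume.assumeTrue(
        conf.get(YarnConfiguration.NM_CONTAINER_EXECUTOR_SCHED_PRIORITY) == null);
    // ... original assertions on the generated launch command ...
  }
}
{code}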



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5428) Allow for specifying the docker client configuration directory

2016-07-29 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398803#comment-15398803
 ] 

Varun Vasudev commented on YARN-5428:
-

Thanks for the patch [~shaneku...@gmail.com]. Mostly good, except for some 
formatting fixes: the formatting of the 'if' conditions seems off.

> Allow for specifying the docker client configuration directory
> --
>
> Key: YARN-5428
> URL: https://issues.apache.org/jira/browse/YARN-5428
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5428.001.patch, YARN-5428.002.patch, 
> YARN-5428.003.patch
>
>
> The docker client allows for specifying a configuration directory that 
> contains the docker client's configuration. It is common to store "docker 
> login" credentials in this config to avoid the need to run docker login on 
> each cluster member.
> By default the docker client config is $HOME/.docker/config.json on Linux. 
> However, this does not work with the current container executor user 
> switching, and it may also be desirable to centralize this configuration 
> beyond a single user's home directory.
> Note that the command line arg is for the configuration directory, NOT the 
> configuration file.
> This change will be needed to allow YARN to automatically pull images at 
> localization time or within the container executor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5404) Add the ability to split reverse zone subnets

2016-07-29 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398796#comment-15398796
 ] 

Varun Vasudev commented on YARN-5404:
-

Thanks for the patch [~shaneku...@gmail.com]. Some more fixes are required:

1)
{code}
+if(parsedRange<0) {
+  LOG.error("Range cannot be negative: Supplied range: ", parsedRange);
+}
{code}
We should throw an exception here (see the sketch after this list, which also 
covers points 2 and 3).

2)
{code}
+return ipCount / parsedRange;
{code}
The range check for parsedRange allows it to be 0, which would lead to a 
division by zero here.

3)
{code}
+  results[i] <<= 8;
+  results[i] |= octets[i] & 0xff;
{code}
No need for this. You can just do {code} results[i] = octets[i] {code}

4)
Rename ReverseZoneUtilsTest to TestReverseZoneUtils

5)
The formatting of the patch seems off; the 'if' statements, for example, don't 
look correct.
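A minimal sketch of how points 1-3 could look after the fixes; {{parsedRange}}, 
{{ipCount}} and {{octets}} come from the patch snippets above, while the class 
and method names here are hypothetical:
{code}
// Hypothetical helper illustrating review points 1-3; not the actual patch.
public final class ReverseZoneSplitSketch {
  // Points 1 and 2: reject non-positive ranges up front instead of only
  // logging, so the division below can never blow up.
  static long getSubnetCount(long ipCount, long parsedRange) {
    if (parsedRange <= 0) {
      throw new IllegalArgumentException(
          "Range must be positive. Supplied range: " + parsedRange);
    }
    return ipCount / parsedRange;
  }

  // Point 3: a plain assignment is enough; no shifting or masking needed.
  static long[] copyOctets(int[] octets) {
    long[] results = new long[octets.length];
    for (int i = 0; i < octets.length; i++) {
      results[i] = octets[i];
    }
    return results;
  }

  public static void main(String[] args) {
    // Matches the example in the issue description: a 255.255.248.0 subnet
    // holds 2048 addresses, i.e. 8 /24 reverse zones (2048 / 256).
    System.out.println(getSubnetCount(2048, 256)); // prints 8
  }
}
{code}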

> Add the ability to split reverse zone subnets
> -
>
> Key: YARN-5404
> URL: https://issues.apache.org/jira/browse/YARN-5404
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5404-YARN-4757.001.patch, 
> YARN-5404-YARN-4757.001.patch, YARN-5404-YARN-4757.001.patch, 
> YARN-5404-YARN-4757.002.patch, YARN-5404.001.patch
>
>
> In some environments, the entire container subnet may not be used exclusively 
> by containers (i.e., the YARN nodemanager host IPs may also be part of the 
> larger subnet).
> As a result, the reverse lookup zones created by the YARN Registry DNS server 
> may not match those created on the forwarders.
> For example:
> Network: 172.27.0.0
> Subnet: 255.255.248.0
> Hosts:
> 0.27.172.in-addr.arpa
> 1.27.172.in-addr.arpa
> 2.27.172.in-addr.arpa
> 3.27.172.in-addr.arpa
> Containers
> 4.27.172.in-addr.arpa
> 5.27.172.in-addr.arpa
> 6.27.172.in-addr.arpa
> 7.27.172.in-addr.arpa
> YARN Registry DNS only allows for creating the following zone (as the total 
> IP count is greater than 256):
> 27.172.in-addr.arpa
> Provide configuration to further subdivide the subnets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5443) Add support for docker inspect

2016-07-29 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398782#comment-15398782
 ] 

Varun Vasudev edited comment on YARN-5443 at 7/29/16 6:16 AM:
--

Thanks for the patch [~shaneku...@gmail.com]. A couple of changes:
1)
{code}
+  public static final String EXITED_STATUS = "exited";
+  public static final String RUNNING_STATUS = "running";
{code}

No need for these; they're not used anywhere.

2)
Please rename DockerInspectCommandTest to TestDockerInspectCommand.

Also, I've noticed that DockerStopCommandTest has been committed to the code 
base. That needs to be renamed to TestDockerStopCommand as well; we should 
file a ticket to rename that class.


was (Author: vvasudev):
Thanks for the patch [~shaneku...@gmail.com]. A couple of changes:
1)
{code}
+  public static final String EXITED_STATUS = "exited";
+  public static final String RUNNING_STATUS = "running";
{code}

No need for these; they're not used anywhere.

2)
Please rename DockerInspectCommandTest to TestDockerInspectCommand.

Also, I've noticed that DockerStopCommandTest has been committed to the code 
base. That needs to be renamed to TestDockerStopCommand as well.

> Add support for docker inspect
> --
>
> Key: YARN-5443
> URL: https://issues.apache.org/jira/browse/YARN-5443
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5443.001.patch
>
>
> Similar to the DockerStopCommand and DockerRunCommand, it would be desirable 
> to have a DockerInspectCommand. The initial use is for retrieving a 
> container's status, but many other uses are possible (IP information, volume 
> information, etc.).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5443) Add support for docker inspect

2016-07-29 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398782#comment-15398782
 ] 

Varun Vasudev commented on YARN-5443:
-

Thanks for the patch [~shaneku...@gmail.com]. A couple of changes:
1)
{code}
+  public static final String EXITED_STATUS = "exited";
+  public static final String RUNNING_STATUS = "running";
{code}

No need for these; they're not used anywhere.

2)
Please rename DockerInspectCommandTest to TestDockerInspectCommand.

Also, I've noticed that DockerStopCommandTest has been committed to the code 
base. That needs to be renamed to TestDockerStopCommand as well.
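For context, a minimal sketch of what the proposed class could look like, 
following the pattern of the existing DockerCommand subclasses (the method 
signatures here are assumptions, not the actual YARN-5443 patch):
{code}
// Hypothetical sketch; the real patch may differ.
public class DockerInspectCommand extends DockerCommand {
  private static final String INSPECT_COMMAND = "inspect";

  public DockerInspectCommand(String containerName) {
    super(INSPECT_COMMAND);
    addCommandArguments(containerName);
  }

  // Narrow the output to just the container status, e.g. "running"/"exited".
  public DockerInspectCommand getContainerStatus() {
    addCommandArguments("--format={{.State.Status}}");
    return this;
  }
}
{code}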

> Add support for docker inspect
> --
>
> Key: YARN-5443
> URL: https://issues.apache.org/jira/browse/YARN-5443
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5443.001.patch
>
>
> Similar to the DockerStopCommand and DockerRunCommand, it would be desirable 
> to have a DockerInspectCommand. The initial use is for retrieving a 
> container's status, but many other uses are possible (IP information, volume 
> information, etc.).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1394) RM to inform AMs when a container completed due to NM going offline -planned or unplanned

2016-07-29 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398772#comment-15398772
 ] 

Rohith Sharma K S commented on YARN-1394:
-

Yes, it looks to overlap with YARN-3224 at a high level. I would like to hear 
from the reporter before closing this JIRA.
[~ste...@apache.org], would you kindly give your opinion?

> RM to inform AMs when a container completed due to NM going offline -planned 
> or unplanned
> -
>
> Key: YARN-1394
> URL: https://issues.apache.org/jira/browse/YARN-1394
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful
>Reporter: Steve Loughran
>Assignee: Rohith Sharma K S
>
> YARN-914 proposes graceful decommission of an NM, and NMs already have the 
> right to go offline.
> If AMs could be told why a container completed on an NM (offline vs. 
> decommissioned), the AM could use that in its future blacklisting and 
> placement policy.
> This matters for long-lived services, which may want to place new instances 
> where they were placed before and to track host failure rates.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


