[jira] [Commented] (YARN-8137) Parallelize node addition in SLS

2018-04-19 Thread Abhishek Modi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445388#comment-16445388
 ] 

Abhishek Modi commented on YARN-8137:
-

Added a new patch after rebasing on trunk.

cc [~elgoiri]

> Parallelize node addition in SLS
> 
>
> Key: YARN-8137
> URL: https://issues.apache.org/jira/browse/YARN-8137
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8137.001.patch, YARN-8137.002.patch, 
> YARN-8137.003.patch
>
>
> Right now, nodes are added sequentially, which can take a long time if there 
> is a large number of nodes. With this change, nodes are added in parallel, 
> reducing the node addition time.
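
As a rough illustration of the approach (a minimal sketch assuming an executor-based fan-out; the class and method names below are placeholders, not the actual patch):

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelNodeAdder {
  // Fan node registration out to a thread pool instead of a sequential loop.
  public static void addNodes(List<String> nodes) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(
        Runtime.getRuntime().availableProcessors());
    try {
      List<Future<?>> futures = new ArrayList<>();
      for (String node : nodes) {
        futures.add(pool.submit(() -> registerNode(node)));
      }
      for (Future<?> f : futures) {
        f.get(); // propagate any registration failure
      }
    } finally {
      pool.shutdown();
    }
  }

  private static void registerNode(String node) {
    // Placeholder: in SLS this would build and register an NM for 'node'.
  }
}
{code}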






[jira] [Commented] (YARN-8137) Parallelize node addition in SLS

2018-04-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445385#comment-16445385
 ] 

genericqa commented on YARN-8137:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
15s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8137 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919959/YARN-8137.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ed695749fa77 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / da5bcf5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20419/testReport/ |
| Max. process+thread count | 456 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20419/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Parallelize node addition in SLS
> --

[jira] [Commented] (YARN-6827) [ATS1/1.5] NPE exception while publishing recovering applications into ATS during RM restart.

2018-04-19 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445384#comment-16445384
 ] 

Rohith Sharma K S commented on YARN-6827:
-

Cherry-picked to branch-2 as well. Thanks to [~sunilg] for reviewing and 
committing the patch.

> [ATS1/1.5] NPE exception while publishing recovering applications into ATS 
> during RM restart.
> -
>
> Key: YARN-6827
> URL: https://issues.apache.org/jira/browse/YARN-6827
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 3.0.3
>
> Attachments: YARN-6827.01.patch
>
>
> While recovering applications, the following NPE is thrown:
> {noformat}
> 2017-07-13 14:08:12,476 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV1Publisher:
>  Error when publishing entity 
> [YARN_APPLICATION,application_1499929227397_0001]
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:178)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV1Publisher.putEntity(TimelineServiceV1Publisher.java:368)
> {noformat}
> This is because during RM service start, in the non-HA case the active 
> services are started first and the ATSv1 services are started later; in the 
> HA case, the transitionToActive event arrives before the ATS services are 
> started.
> This gives the active services enough time to recover applications, which 
> try to publish to ATSv1 while recovering. Since the ATS services are not 
> started yet, an NPE is thrown.
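
For illustration, a minimal sketch of the kind of guard that would avoid this NPE (a hypothetical helper; the actual patch may instead fix the service start ordering):

{code}
import org.apache.hadoop.service.Service;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
import org.apache.hadoop.yarn.client.api.TimelineClient;

class GuardedTimelinePublisher {
  private final TimelineClient timelineClient;

  GuardedTimelinePublisher(TimelineClient timelineClient) {
    this.timelineClient = timelineClient;
  }

  void putEntity(TimelineEntity entity) throws Exception {
    // Recovery can reach this point before the ATS client service is up;
    // skipping (or deferring) the publish avoids the NPE in putEntities.
    if (timelineClient == null
        || !timelineClient.isInState(Service.STATE.STARTED)) {
      return;
    }
    timelineClient.putEntities(entity);
  }
}
{code}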






[jira] [Updated] (YARN-6827) [ATS1/1.5] NPE exception while publishing recovering applications into ATS during RM restart.

2018-04-19 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-6827:

Fix Version/s: 2.10.0

> [ATS1/1.5] NPE exception while publishing recovering applications into ATS 
> during RM restart.
> -
>
> Key: YARN-6827
> URL: https://issues.apache.org/jira/browse/YARN-6827
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 3.0.3
>
> Attachments: YARN-6827.01.patch
>
>
> While recovering applications, the following NPE is thrown:
> {noformat}
> 2017-07-13 14:08:12,476 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV1Publisher:
>  Error when publishing entity 
> [YARN_APPLICATION,application_1499929227397_0001]
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:178)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV1Publisher.putEntity(TimelineServiceV1Publisher.java:368)
> {noformat}
> This is because during RM service start, in the non-HA case the active 
> services are started first and the ATSv1 services are started later; in the 
> HA case, the transitionToActive event arrives before the ATS services are 
> started.
> This gives the active services enough time to recover applications, which 
> try to publish to ATSv1 while recovering. Since the ATS services are not 
> started yet, an NPE is thrown.






[jira] [Updated] (YARN-8189) [UI2] Nodes page column headers are half truncated

2018-04-19 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-8189:
--
Attachment: Screen Shot 2018-04-20 at 12.04.15 PM.png

> [UI2] Nodes page column headers are half truncated
> --
>
> Key: YARN-8189
> URL: https://issues.apache.org/jira/browse/YARN-8189
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: Screen Shot 2018-04-20 at 12.04.15 PM.png, 
> YARN-8189.001.patch
>
>
> Increase column width in Node page.






[jira] [Updated] (YARN-8189) [UI2] Nodes page column headers are half truncated

2018-04-19 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-8189:
--
Attachment: YARN-8189.001.patch

> [UI2] Nodes page column headers are half truncated
> --
>
> Key: YARN-8189
> URL: https://issues.apache.org/jira/browse/YARN-8189
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-8189.001.patch
>
>
> Increase column width in Node page.






[jira] [Commented] (YARN-8189) [UI2] Nodes page column headers are half truncated

2018-04-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445375#comment-16445375
 ] 

Sunil G commented on YARN-8189:
---

cc [~rohithsharma], please help to review.

> [UI2] Nodes page column headers are half truncated
> --
>
> Key: YARN-8189
> URL: https://issues.apache.org/jira/browse/YARN-8189
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-8189.001.patch
>
>
> Increase column width in Node page.






[jira] [Created] (YARN-8189) [UI2] Nodes page column headers are half truncated

2018-04-19 Thread Sunil G (JIRA)
Sunil G created YARN-8189:
-

 Summary: [UI2] Nodes page column headers are half truncated
 Key: YARN-8189
 URL: https://issues.apache.org/jira/browse/YARN-8189
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.1.0
Reporter: Sunil G
Assignee: Sunil G


Increase column width in Node page.






[jira] [Comment Edited] (YARN-5888) [UI2] Improve unit tests for new YARN UI

2018-04-19 Thread Akhil PB (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444116#comment-16444116
 ] 

Akhil PB edited comment on YARN-5888 at 4/20/18 6:04 AM:
-

The ASF license error does not seem correct. Thanks, [~sunilg], for rebasing 
my old patch.

Tests are passing locally in my cluster. [~sunilg], please help commit the 
patch.


was (Author: akhilpb):
ASF license error seems not correct. Thanks, [~sunilg] for rebasing my old 
patch.

Tests are passing locally in my cluster. Please [~sunilg] to commit me the 
patch.

> [UI2] Improve unit tests for new YARN UI
> 
>
> Key: YARN-5888
> URL: https://issues.apache.org/jira/browse/YARN-5888
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Minor
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-5888.001.patch, YARN-5888.002.patch, 
> YARN-5888.003.patch
>
>
> - Add missing test cases in new YARN UI
> - Fix test cases errors in new YARN UI 






[jira] [Commented] (YARN-8188) RM Nodes UI data table index for sorting column is messed up

2018-04-19 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445349#comment-16445349
 ] 

Weiwei Yang commented on YARN-8188:
---

Hi [~sunilg], could you please help review this one? Thanks.

> RM Nodes UI data table index for sorting column is messed up
> 
>
> Key: YARN-8188
> URL: https://issues.apache.org/jira/browse/YARN-8188
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, webapp
>Affects Versions: 3.1.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8188.001.patch
>
>
> Clicking the {{Allocation Tags}} column gives the following exception:
> {noformat}
> yarn.dt.plugins.js:37 Uncaught TypeError: Cannot read property '1' of null
> at Object.jQuery.fn.dataTableExt.oSort.title-numeric-desc 
> (yarn.dt.plugins.js:37)
> at jquery.dataTables.min.js:86
> at Array.sort (native)
> at $ (jquery.dataTables.min.js:86)
> at f (jquery.dataTables.min.js:89)
> at jquery.dataTables.min.js:89
> {noformat}
> This is caused by YARN-7779: since it adds a new Allocation Tags column, 
> the index needs to be updated as well.






[jira] [Updated] (YARN-8137) Parallelize node addition in SLS

2018-04-19 Thread Abhishek Modi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-8137:

Attachment: YARN-8137.003.patch

> Parallelize node addition in SLS
> 
>
> Key: YARN-8137
> URL: https://issues.apache.org/jira/browse/YARN-8137
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8137.001.patch, YARN-8137.002.patch, 
> YARN-8137.003.patch
>
>
> Right now, nodes are added sequentially, which can take a long time if there 
> is a large number of nodes. With this change, nodes are added in parallel, 
> reducing the node addition time.






[jira] [Commented] (YARN-8188) RM Nodes UI data table index for sorting column is messed up

2018-04-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445336#comment-16445336
 ] 

genericqa commented on YARN-8188:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 56s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 53s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueManagementDynamicEditPolicy
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8188 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919942/YARN-8188.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7f54795877f7 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / da5bcf5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/20418/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20418/testReport/ |
| Max. process+thread count | 871 (vs. ulimit of 1

[jira] [Commented] (YARN-7939) Yarn Service Upgrade: add support to upgrade a component instance

2018-04-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445261#comment-16445261
 ] 

genericqa commented on YARN-7939:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 31s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 34 new + 402 unchanged - 2 fixed = 436 total (was 404) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 29m 
23s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
37s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
13s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 3s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7939 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919936/YARN-7939.008.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux bb887da0adf2 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/

[jira] [Commented] (YARN-8188) RM Nodes UI data table index for sorting column is messed up

2018-04-19 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445218#comment-16445218
 ] 

Weiwei Yang commented on YARN-8188:
---

On the Nodes page, the column indexes for "Mem Used" and "Mem Available" moved 
from [8, 9] to [9, 10] since YARN-7779 added another column to the table, so 
the indexes need to be updated accordingly. I verified in a cluster that the 
exception (seen in the Chrome dev console) is gone after applying the patch.

> RM Nodes UI data table index for sorting column is messed up
> 
>
> Key: YARN-8188
> URL: https://issues.apache.org/jira/browse/YARN-8188
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, webapp
>Affects Versions: 3.1.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8188.001.patch
>
>
> Clicking the {{Allocation Tags}} column gives the following exception:
> {noformat}
> yarn.dt.plugins.js:37 Uncaught TypeError: Cannot read property '1' of null
> at Object.jQuery.fn.dataTableExt.oSort.title-numeric-desc 
> (yarn.dt.plugins.js:37)
> at jquery.dataTables.min.js:86
> at Array.sort (native)
> at $ (jquery.dataTables.min.js:86)
> at f (jquery.dataTables.min.js:89)
> at jquery.dataTables.min.js:89
> {noformat}
> This is caused by YARN-7779, since it adds a new column of allocation tags, 
> the index needs to be updated as well.






[jira] [Updated] (YARN-8188) RM Nodes UI data table index for sorting column is messed up

2018-04-19 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8188:
--
Attachment: YARN-8188.001.patch

> RM Nodes UI data table index for sorting column is messed up
> 
>
> Key: YARN-8188
> URL: https://issues.apache.org/jira/browse/YARN-8188
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, webapp
>Affects Versions: 3.1.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8188.001.patch
>
>
> Clicking the {{Allocation Tags}} column gives the following exception:
> {noformat}
> yarn.dt.plugins.js:37 Uncaught TypeError: Cannot read property '1' of null
> at Object.jQuery.fn.dataTableExt.oSort.title-numeric-desc 
> (yarn.dt.plugins.js:37)
> at jquery.dataTables.min.js:86
> at Array.sort (native)
> at $ (jquery.dataTables.min.js:86)
> at f (jquery.dataTables.min.js:89)
> at jquery.dataTables.min.js:89
> {noformat}
> This is caused by YARN-7779: since it adds a new Allocation Tags column, 
> the index needs to be updated as well.






[jira] [Created] (YARN-8188) RM Nodes UI data table index for sorting column is messed up

2018-04-19 Thread Weiwei Yang (JIRA)
Weiwei Yang created YARN-8188:
-

 Summary: RM Nodes UI data table index for sorting column is messed 
up
 Key: YARN-8188
 URL: https://issues.apache.org/jira/browse/YARN-8188
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, webapp
Affects Versions: 3.1.0
Reporter: Weiwei Yang
Assignee: Weiwei Yang


Clicking the {{Allocation Tags}} column gives the following exception:

{noformat}
yarn.dt.plugins.js:37 Uncaught TypeError: Cannot read property '1' of null
at Object.jQuery.fn.dataTableExt.oSort.title-numeric-desc 
(yarn.dt.plugins.js:37)
at jquery.dataTables.min.js:86
at Array.sort (native)
at $ (jquery.dataTables.min.js:86)
at f (jquery.dataTables.min.js:89)
at jquery.dataTables.min.js:89
{noformat}

This is caused by YARN-7779: since it adds a new Allocation Tags column, the 
index needs to be updated as well.







[jira] [Closed] (YARN-7249) Fix CapacityScheduler NPE issue when a container preempted while the node is being removed

2018-04-19 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko closed YARN-7249.
-

> Fix CapacityScheduler NPE issue when a container preempted while the node is 
> being removed
> --
>
> Key: YARN-7249
> URL: https://issues.apache.org/jira/browse/YARN-7249
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.1, 2.7.5
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 2.8.2, 2.7.6
>
> Attachments: YARN-7249.branch-2.8.001.patch
>
>
> This issue can happen when 3 conditions are satisfied:
> 1) A node is being removed from the scheduler.
> 2) A container running on the node is being preempted.
> 3) A rare race condition causes the scheduler to pass a null node to the 
> leaf queue.
> The fix is to add a null-node check inside CapacityScheduler.
> Stack trace:
> {code}
> 2017-08-31 02:51:24,748 FATAL resourcemanager.ResourceManager 
> (ResourceManager.java:run(714)) - Error in handling event type 
> KILL_RESERVED_CONTAINER to the scheduler 
> java.lang.NullPointerException 
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.completedContainer(LeafQueue.java:1308)
>  
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.completedContainerInternal(CapacityScheduler.java:1469)
>  
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.completedContainer(AbstractYarnScheduler.java:497)
>  
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.killReservedContainer(CapacityScheduler.java:1505)
>  
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1341)
>  
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:127)
>  
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:705)
>  
> {code}
> This issue only exists in 2.8.x.
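
For reference, a minimal sketch of the null-node check described above (names are illustrative, not the exact patch):

{code}
import org.apache.hadoop.yarn.api.records.NodeId;

class NullNodeGuard {
  // Stand-in for the scheduler's node lookup.
  interface NodeLookup {
    Object getNode(NodeId nodeId);
  }

  // Returns false when the node backing a preempted container is already
  // gone, so completedContainer handling can bail out instead of hitting
  // the NullPointerException in LeafQueue.
  static boolean nodeStillPresent(NodeLookup scheduler, NodeId nodeId) {
    return scheduler.getNode(nodeId) != null;
  }
}
{code}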






[jira] [Commented] (YARN-7939) Yarn Service Upgrade: add support to upgrade a component instance

2018-04-19 Thread Chandni Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445117#comment-16445117
 ] 

Chandni Singh commented on YARN-7939:
-

Patch 8 contains the fix that aligns the state of the component in the spec 
with the state in memory when finalization is done.
[~eyang] [~gsaha], could you please review the latest patch?

> Yarn Service Upgrade: add support to upgrade a component instance 
> --
>
> Key: YARN-7939
> URL: https://issues.apache.org/jira/browse/YARN-7939
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-7939.001.patch, YARN-7939.002.patch, 
> YARN-7939.003.patch, YARN-7939.004.patch, YARN-7939.005.patch, 
> YARN-7939.006.patch, YARN-7939.007.patch, YARN-7939.008.patch
>
>
> Yarn core supports in-place upgrade of containers. A yarn service can 
> leverage that to provide in-place upgrade of component instances. Please see 
> YARN-7512 for details.
> Will add support to upgrade a single component instance first and then 
> iteratively add other APIs and features.
>  






[jira] [Updated] (YARN-7939) Yarn Service Upgrade: add support to upgrade a component instance

2018-04-19 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-7939:

Attachment: YARN-7939.008.patch

> Yarn Service Upgrade: add support to upgrade a component instance 
> --
>
> Key: YARN-7939
> URL: https://issues.apache.org/jira/browse/YARN-7939
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-7939.001.patch, YARN-7939.002.patch, 
> YARN-7939.003.patch, YARN-7939.004.patch, YARN-7939.005.patch, 
> YARN-7939.006.patch, YARN-7939.007.patch, YARN-7939.008.patch
>
>
> Yarn core supports in-place upgrade of containers. A yarn service can 
> leverage that to provide in-place upgrade of component instances. Please see 
> YARN-7512 for details.
> Will add support to upgrade a single component instance first and then 
> iteratively add other APIs and features.
>  






[jira] [Commented] (YARN-8184) Too many metrics if containerLocalizer/ResourceLocalizationService uses ReadWriteDiskValidator

2018-04-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445069#comment-16445069
 ] 

genericqa commented on YARN-8184:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
53s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8184 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919928/YARN-8184.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 168c13c27cc0 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / da5bcf5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/20416/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20416/testReport/ |
| Max. process+thread count | 342 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanag

[jira] [Commented] (YARN-8151) Yarn RM Epoch should wrap around

2018-04-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445040#comment-16445040
 ] 

genericqa commented on YARN-8151:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 42s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
15s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 55s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}153m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8151 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919904/YARN-8151.04.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 618cfb93e36d 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:5

[jira] [Commented] (YARN-8186) [Router] Federation: routing getAppState REST invocations transparently to multiple RMs

2018-04-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445029#comment-16445029
 ] 

Hudson commented on YARN-8186:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14031 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14031/])
YARN-8186. [Router] Federation: routing getAppState REST invocations (inigoiri: 
rev da5bcf5f7d40913de2981731e951d662a3279562)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationInterceptorREST.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/MockDefaultRequestInterceptorREST.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationInterceptorREST.java


> [Router] Federation: routing getAppState REST invocations transparently to 
> multiple RMs
> ---
>
> Key: YARN-8186
> URL: https://issues.apache.org/jira/browse/YARN-8186
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8186-YARN-7402.v1.patch, 
> YARN-8186-YARN-7402.v2.patch
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> RMWebServicesProtocol requests to the appropriate RM(s) in a federated YARN 
> cluster.
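
As a rough sketch of the routing idea (illustrative only; the real implementation lives in FederationInterceptorREST and consults the federation state store):

{code}
import java.util.Map;

class GetAppStateRouter {
  // appId -> base REST URL of the RM in the app's home subcluster;
  // a stand-in for the federation state store lookup.
  private final Map<String, String> homeRmUrlByAppId;

  GetAppStateRouter(Map<String, String> homeRmUrlByAppId) {
    this.homeRmUrlByAppId = homeRmUrlByAppId;
  }

  // Resolve where to forward GET /ws/v1/cluster/apps/{appid}/state.
  String resolveGetAppStateUrl(String appId) {
    String rmBase = homeRmUrlByAppId.get(appId);
    if (rmBase == null) {
      throw new IllegalStateException("No home subcluster for " + appId);
    }
    return rmBase + "/ws/v1/cluster/apps/" + appId + "/state";
  }
}
{code}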






[jira] [Commented] (YARN-8168) Add support in Winutils for reporting CPU cores in all CPU groups, and aggregate kernel time, idle time and user time for all CPU groups

2018-04-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445023#comment-16445023
 ] 

genericqa commented on YARN-8168:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
59m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 25m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 25m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 14s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
49s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8168 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919909/YARN-8168.000.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 2abec718d482 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1134af9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20414/testReport/ |
| Max. process+thread count | 1379 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20414/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add support in Winutils for reporting CPU cores in all CPU groups, and 
> aggregate kernel time, idle time and user time for all CPU groups
> 
>
> Key: YARN-8168
> URL: https://issues.apache.org/jira/browse/YARN-8168
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: YARN-8168.000.patch
>
>
> Currently winutils can only report the CPU cores of the CPU group that it's 
> running in, and the cpuTimeMs calculated from kernel time, idle time and user 
> time is also for that CPU

[jira] [Updated] (YARN-8184) Too many metrics if containerLocalizer/ResourceLocalizationService uses ReadWriteDiskValidator

2018-04-19 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-8184:
---
Attachment: YARN-8184.002.patch

> Too many metrics if containerLocalizer/ResourceLocalizationService uses 
> ReadWriteDiskValidator
> --
>
> Key: YARN-8184
> URL: https://issues.apache.org/jira/browse/YARN-8184
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-8184.001.patch, YARN-8184.002.patch
>
>
> ContainerLocalizer or ResourceLocalizationService will use 
> ReadWriteDiskValidator as its disk validator when downloading files if 
> yarn.nodemanager.disk-validator is configured to ReadWriteDiskValidator's 
> name. In that case, ReadWriteDiskValidator creates a metric item for each 
> localized directory, which is too many metrics. We should let 
> ContainerLocalizer use only the basic disk validator.
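
A minimal sketch of the proposed direction, assuming hadoop-common's DiskValidatorFactory API (the "basic" validator performs simple accessibility checks and registers no per-directory metrics):

{code}
import org.apache.hadoop.util.DiskValidator;
import org.apache.hadoop.util.DiskValidatorFactory;

class LocalizerDiskValidatorExample {
  // Pin the localizer to the basic validator regardless of the
  // cluster-wide yarn.nodemanager.disk-validator setting.
  static DiskValidator forLocalizer() throws Exception {
    return DiskValidatorFactory.getInstance("basic");
  }
}
{code}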






[jira] [Commented] (YARN-8183) yClient for Kill Application stuck in infinite loop with message "Waiting for Application to be killed"

2018-04-19 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445001#comment-16445001
 ] 

Suma Shivaprasad commented on YARN-8183:


[~sunil.gov...@gmail.com] Can you please review the patch?

> yClient for Kill Application stuck in infinite loop with message "Waiting for 
> Application to be killed"
> ---
>
> Key: YARN-8183
> URL: https://issues.apache.org/jira/browse/YARN-8183
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: YARN-8183.1.patch
>
>
> yClient gets stuck killing the application, repeatedly printing the 
> following message:
> {code}
> INFO impl.YarnClientImpl: Waiting for application 
> application_1523604760756_0001 to be killed.{code}
> The RM shows the following exception:
> {code}
>  ERROR resourcemanager.ResourceManager (ResourceManager.java:handle(995)) - 
> Error in handling event type APP_UPDATE_SAVED for application application_ID
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1442)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1476)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1474)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.convertAtomicLongMaptoLongMap(RMAppAttemptMetrics.java:212)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:133)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.getRMAppMetrics(RMAppImpl.java:1660)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.appFinished(TimelineServiceV2Publisher.java:178)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.CombinedSystemMetricsPublisher.appFinished(CombinedSystemMetricsPublisher.java:73)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalTransition.transition(RMAppImpl.java:1470)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1408)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1400)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1177)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1164)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:898)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:118)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:993)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:977)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
> at java.lang.Thread.run(Thread.java:748)
> {code}
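The trace above is the classic symptom of iterating a plain HashMap while
another thread mutates it. A generic sketch of the failure mode and the usual
remedy (this mirrors the shape of convertAtomicLongMaptoLongMap, but it is not
the actual RMAppAttemptMetrics fix):

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class CmeDemo {
  // Copy a Map<String, AtomicLong> into a Map<String, Long>, as the metrics
  // code in the trace does.
  public static Map<String, Long> snapshot(Map<String, AtomicLong> src) {
    Map<String, Long> out = new HashMap<>();
    for (Map.Entry<String, AtomicLong> e : src.entrySet()) {
      out.put(e.getKey(), e.getValue().get());
    }
    return out;
  }

  public static void main(String[] args) {
    // If src were a plain HashMap and another thread called put() during the
    // loop, the iterator would throw ConcurrentModificationException, as in
    // the RM log. A ConcurrentHashMap gives weakly consistent, non-throwing
    // iteration instead; synchronizing both readers and writers also works.
    Map<String, AtomicLong> usage = new ConcurrentHashMap<>();
    usage.put("memorySeconds", new AtomicLong(42));
    System.out.println(snapshot(usage));
  }
}
{code}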



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8186) [Router] Federation: routing getAppState REST invocations transparently to multiple RMs

2018-04-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444989#comment-16444989
 ] 

Íñigo Goiri commented on YARN-8186:
---

Yetus came back with a +1 and the three new unit tests ran with no problems.
+1 on  [^YARN-8186-YARN-7402.v2.patch].
I'll commit to trunk.

> [Router] Federation: routing getAppState REST invocations transparently to 
> multiple RMs
> ---
>
> Key: YARN-8186
> URL: https://issues.apache.org/jira/browse/YARN-8186
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8186-YARN-7402.v1.patch, 
> YARN-8186-YARN-7402.v2.patch
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> RMWebServicesProtocol requests to the appropriate RM(s) in a federated YARN 
> cluster.
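As a rough illustration of the routing idea (the class and field names below
are made up for this sketch; the real implementation lives in the Router's
REST interceptor chain for YARN-7402), the Router resolves an application's
home sub-cluster and forwards the getAppState call to that RM:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class AppStateRouter {
  // appId -> web address of the RM that owns the application.
  private final Map<String, String> homeSubCluster = new ConcurrentHashMap<>();

  public void register(String appId, String rmWebAppAddress) {
    homeSubCluster.put(appId, rmWebAppAddress);
  }

  // Returns the URL the Router should forward the getAppState call to.
  public String resolve(String appId) {
    String rm = homeSubCluster.get(appId);
    if (rm == null) {
      throw new IllegalArgumentException("Unknown application: " + appId);
    }
    return rm + "/ws/v1/cluster/apps/" + appId + "/state";
  }
}
{code}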



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8186) [Router] Federation: routing getAppState REST invocations transparently to multiple RMs

2018-04-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444982#comment-16444982
 ] 

genericqa commented on YARN-8186:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 26m  
4s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-7402 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 8s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} YARN-7402 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-8186 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919910/YARN-8186-YARN-7402.v2.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 07f24e757a5a 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-7402 / f9c69ca |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20415/testReport/ |
| Max. process+thread count | 757 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20415/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [Router] Federation: 

[jira] [Comment Edited] (YARN-7781) Update YARN-Services-Examples.md to be in sync with the latest code

2018-04-19 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444972#comment-16444972
 ] 

Eric Yang edited comment on YARN-7781 at 4/19/18 11:20 PM:
---

[~gsaha] The patch looks good.  Sorry, there was one problem that I did not 
catch before.  The examples are showing nginx, but nginx does not work until 
YARN-7654 is committed, because nginx depends on ENTRY_POINT support and on 
running a privileged container.  It would be good to change the example to 
use centos:httpd-24-centos7 with launch_command: /usr/bin/run-httpd for 
functional examples.


was (Author: eyang):
[~gsaha] The patch looks good.  Sorry, there was one problem that I did not 
catch before.  The example are showing nginx, but nginx does not work until 
YARN-7654 is committed because nginx depends on ENTRY_POINT support and run 
privileged container.  It would be good to change the example to use 
centos:httpd-24-centos7, and launch_command: /usr/bin/run-httpd for functional 
examples.

> Update YARN-Services-Examples.md to be in sync with the latest code
> ---
>
> Key: YARN-7781
> URL: https://issues.apache.org/jira/browse/YARN-7781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-7781.01.patch, YARN-7781.02.patch, 
> YARN-7781.03.patch, YARN-7781.04.patch, YARN-7781.05.patch, YARN-7781.06.patch
>
>
> Update YARN-Services-Examples.md to make the following additions/changes:
> 1. Add an additional URL and PUT Request JSON to support flex:
> Update to flex up/down the no of containers (instances) of a component of a 
> service
> PUT URL – http://localhost:8088/app/v1/services/hello-world
> PUT Request JSON
> {code}
> {
>   "components" : [ {
> "name" : "hello",
> "number_of_containers" : 3
>   } ]
> }
> {code}
> 2. Modify all occurrences of /ws/ to /app/
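To make the first example concrete, here is a hedged client-side sketch of
that flex request (it assumes an RM web endpoint at localhost:8088 and a
service named hello-world, exactly as in the description; authentication is
omitted):

{code:java}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class FlexHelloWorld {
  public static void main(String[] args) throws Exception {
    String body =
        "{ \"components\": [ { \"name\": \"hello\","
        + " \"number_of_containers\": 3 } ] }";
    URL url = new URL("http://localhost:8088/app/v1/services/hello-world");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(body.getBytes(StandardCharsets.UTF_8));
    }
    // A 2xx response means the RM accepted the flex request.
    System.out.println("HTTP " + conn.getResponseCode());
  }
}
{code}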



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7781) Update YARN-Services-Examples.md to be in sync with the latest code

2018-04-19 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444972#comment-16444972
 ] 

Eric Yang commented on YARN-7781:
-

[~gsaha] The patch looks good.  Sorry, there was one problem that I did not 
catch before.  The examples are showing nginx, but nginx does not work until 
YARN-7654 is committed, because nginx depends on ENTRY_POINT support and on 
running a privileged container.  It would be good to change the example to 
use centos:httpd-24-centos7 with launch_command: /usr/bin/run-httpd for 
functional examples.

> Update YARN-Services-Examples.md to be in sync with the latest code
> ---
>
> Key: YARN-7781
> URL: https://issues.apache.org/jira/browse/YARN-7781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-7781.01.patch, YARN-7781.02.patch, 
> YARN-7781.03.patch, YARN-7781.04.patch, YARN-7781.05.patch, YARN-7781.06.patch
>
>
> Update YARN-Services-Examples.md to make the following additions/changes:
> 1. Add an additional URL and PUT Request JSON to support flex:
> Update to flex up/down the no of containers (instances) of a component of a 
> service
> PUT URL – http://localhost:8088/app/v1/services/hello-world
> PUT Request JSON
> {code}
> {
>   "components" : [ {
> "name" : "hello",
> "number_of_containers" : 3
>   } ]
> }
> {code}
> 2. Modify all occurrences of /ws/ to /app/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8185) Improve log in class DirectoryCollection

2018-04-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444962#comment-16444962
 ] 

genericqa commented on YARN-8185:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m  2s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.TestContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8185 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919896/YARN-8185.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cb35df398bff 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7d06806 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/20413/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20413/testReport/ |
| Max. process+thread count | 320 (vs. ulimit of 1) |
| modules | C: 
hadoo

[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-19 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444951#comment-16444951
 ] 

Eric Yang commented on YARN-8064:
-

{code}
2018-04-19 22:32:01,480 WARN 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker.DockerClient:
 Unable to write docker command to 
/tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524177007026_0001/container_1524177007026_0001_01_02
{code}

It looks like the container directory creation might have a race condition 
with cmd file generation.  I see this error logged when the container 
directory is empty.  In my test case, there is nothing to localize, hence the 
container directory doesn't exist, and as a result cmd file generation fails.
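A defensive sketch of the guard that would close this window: make sure the
per-container nmPrivate directory exists before writing the .cmd file. The
class and method names here are illustrative, not the actual DockerClient
code:

{code:java}
import java.io.File;
import java.io.IOException;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

public class CmdFileWriter {
  public static File writeCommandFile(File containerDir, String fileName,
      String contents) throws IOException {
    // Create the directory on demand instead of assuming localization
    // already did; with nothing to localize, it may not exist yet.
    if (!containerDir.exists() && !containerDir.mkdirs()) {
      throw new IOException("Could not create " + containerDir);
    }
    File cmdFile = new File(containerDir, fileName);
    try (Writer w = Files.newBufferedWriter(cmdFile.toPath(),
        StandardCharsets.UTF_8)) {
      w.write(contents);
    }
    return cmdFile;
  }
}
{code}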

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Critical
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch, 
> YARN-8064.003.patch, YARN-8064.004.patch, YARN-8064.005.patch, 
> YARN-8064.006.patch, YARN-8064.007.patch, YARN-8064.008.patch
>
>
> Currently all of the docker command files are being put into 
> {{hadoop.tmp.dir}}, which doesn't get cleaned up. So, eventually all of the 
> inodes will fill up and no more tasks will be able to run



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-19 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444931#comment-16444931
 ] 

Eric Yang commented on YARN-8064:
-

[~ebadger] I am getting errors when trying out the patch 008:

{code}
Invalid conf file provided, unable to open file : 
/tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524177007026_0001/container_1524177007026_0001_01_03/docker.container_1524177007026_0001_01_031851872809113698536.cmd
Error constructing docker command, docker error code=1, error message='Invalid 
command file passed'

Stdout: main : command provided 4
main : run as user is hbase
main : requested yarn user is hbase
Creating script paths...
Creating local dirs...

Full command array for failed execution:
[/usr/local/hadoop-3.2.0-SNAPSHOT/bin/container-executor, hbase, hbase, 4, 
application_1524177007026_0001, container_1524177007026_0001_01_03, 
/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524177007026_0001/container_1524177007026_0001_01_03,
 
/tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524177007026_0001/container_1524177007026_0001_01_03/launch_container.sh,
 
/tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524177007026_0001/container_1524177007026_0001_01_03/container_1524177007026_0001_01_03.tokens,
 
/tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524177007026_0001/container_1524177007026_0001_01_03/container_1524177007026_0001_01_03.pid,
 /tmp/hadoop-yarn/nm-local-dir, /usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs, 
/tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524177007026_0001/container_1524177007026_0001_01_03/docker.container_1524177007026_0001_01_031851872809113698536.cmd,
 cgroups=none]
2018-04-19 22:31:31,093 WARN 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime:
 Launch container failed. Exception:
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException:
 ExitCodeException exitCode=29: Invalid conf file provided, unable to open file 
: 
/tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524177007026_0001/container_1524177007026_0001_01_03/docker.container_1524177007026_0001_01_031851872809113698536.cmd
Error constructing docker command, docker error code=1, error message='Invalid 
command file passed'

at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:180)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(DockerLinuxContainerRuntime.java:910)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime.launchContainer(DelegatingLinuxContainerRuntime.java:141)
at 
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.handleLaunchForLaunchType(LinuxContainerExecutor.java:564)
at 
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:479)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:492)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:304)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:101)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: ExitCodeException exitCode=29: Invalid conf file provided, unable to 
open file : 
/tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524177007026_0001/container_1524177007026_0001_01_03/docker.container_1524177007026_0001_01_031851872809113698536.cmd
Error constructing docker command, docker error code=1, error message='Invalid 
command file passed'

at org.apache.hadoop.util.Shell.runCommand(Shell.java:1009)
at org.apache.hadoop.util.Shell.run(Shell.java:902)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1227)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:152)
... 11 more
{code}

After the first logged exception occurs, I don't see cmd files created for 
the other containers.


> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key

[jira] [Commented] (YARN-8183) yClient for Kill Application stuck in infinite loop with message "Waiting for Application to be killed"

2018-04-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444925#comment-16444925
 ] 

genericqa commented on YARN-8183:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 68m  
5s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8183 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919877/YARN-8183.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 36543aea00a0 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7d06806 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20408/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20408/testReport/ |
| Max. process+thread count | 8

[jira] [Commented] (YARN-8177) Fix documentation for node label support

2018-04-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444920#comment-16444920
 ] 

genericqa commented on YARN-8177:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
37m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8177 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919886/YARN-8177.1.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 5e1d85f49ad6 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7d06806 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20409/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix documentation for node label support 
> -
>
> Key: YARN-8177
> URL: https://issues.apache.org/jira/browse/YARN-8177
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-8177.1.patch
>
>
> Capacity Scheduler Dynamic Queues feature documentation needs to be fixed for 
> node label support with examples.
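As a sketch of the kind of example the fixed documentation could carry, the
snippet below grants auto-created leaf queues access to a node label via the
leaf queue template. The property names are my best reconstruction of the 3.1
CapacityScheduler template syntax, and the queue name "dev" and label "gpu"
are invented, so check them against the final docs:

{code}
<property>
  <name>yarn.scheduler.capacity.root.dev.auto-create-child-queue.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.dev.leaf-queue-template.accessible-node-labels</name>
  <value>gpu</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.dev.leaf-queue-template.accessible-node-labels.gpu.capacity</name>
  <value>50</value>
</property>
{code}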



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8184) Too many metrics if containerLocalizer/ResourceLocalizationService uses ReadWriteDiskValidator

2018-04-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444916#comment-16444916
 ] 

genericqa commented on YARN-8184:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 16s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 0 unchanged - 149 fixed = 1 total (was 149) 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
59s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 17s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8184 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919894/YARN-8184.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ebcd92732ec8 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7d06806 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/20410/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-serve

[jira] [Resolved] (YARN-8181) Docker container run_time

2018-04-19 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli resolved YARN-8181.
---
Resolution: Invalid

[~sajavadi], please see http://hadoop.apache.org/mailing_lists.html. You can 
send emails to u...@hadoop.apache.org. You can subscribe to the list for other 
related discussions.

Resolving this for now.

> Docker container run_time
> -
>
> Key: YARN-8181
> URL: https://issues.apache.org/jira/browse/YARN-8181
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Seyyed Ahmad Javadi
>Priority: Major
>
> Hi All,
> I want to use the Docker container runtime but could not solve the problem 
> I am facing. I am following the guide below and the NM log is as follows. I 
> cannot see any Docker containers being created. It works when I use the 
> default LCE. Please also find how I submit a job at the end.
> Do you have any guide on how I can make the Docker runtime work?
> Could you please let me know how I can use the LCE binary to make sure my 
> Docker setup is correct?
> I confirmed that "docker run" works fine. I really like this developing 
> feature and would like to contribute to it. Many thanks in advance.
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/DockerContainers.html]
> {code:java}
> NM LOG:
> ...
> 2018-04-19 11:49:24,568 INFO SecurityLogger.org.apache.hadoop.ipc.Server: 
> Auth successful for appattempt_1524151293356_0005_01 (auth:SIMPLE)
> 2018-04-19 11:49:24,580 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Start request for container_1524151293356_0005_01_01 by user ubuntu
> 2018-04-19 11:49:24,584 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Creating a new application reference for app application_1524151293356_0005
> 2018-04-19 11:49:24,584 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=ubuntu    
> IP=130.245.127.176    OPERATION=Start Container Request    
> TARGET=ContainerManageImpl    RESULT=SUCCESS    
> APPID=application_1524151293356_0005    
> CONTAINERID=container_1524151293356_0005_01_01
> 2018-04-19 11:49:24,585 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Application application_1524151293356_0005 transitioned from NEW to INITING
> 2018-04-19 11:49:24,585 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Adding container_1524151293356_0005_01_01 to application 
> application_1524151293356_0005
> 2018-04-19 11:49:24,585 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Application application_1524151293356_0005 transitioned from INITING to 
> RUNNING
> 2018-04-19 11:49:24,588 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1524151293356_0005_01_01 transitioned from NEW to 
> LOCALIZING
> 2018-04-19 11:49:24,588 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event CONTAINER_INIT for appId application_1524151293356_0005
> 2018-04-19 11:49:24,589 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
>  Created localizer for container_1524151293356_0005_01_01
> 2018-04-19 11:49:24,616 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
>  Writing credentials to the nmPrivate file 
> /tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/container_1524151293356_0005_01_01.tokens
> 2018-04-19 11:49:28,090 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1524151293356_0005_01_01 transitioned from 
> LOCALIZING to SCHEDULED
> 2018-04-19 11:49:28,090 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler:
>  Starting container [container_1524151293356_0005_01_01]
> 2018-04-19 11:49:28,212 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1524151293356_0005_01_01 transitioned from SCHEDULED 
> to RUNNING
> 2018-04-19 11:49:28,212 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Starting resource-monitoring for container_1524151293356_0005_01_01
> 2018-04-19 11:49:29,401 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
>  Container container_1524151293356_0005_01_01 succeeded
> 2018-04-19 11:49:29,401 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1524151293356_0005_01_01 transitioned from RUNNING 
> to EX

[jira] [Commented] (YARN-8181) Docker container run_time

2018-04-19 Thread Seyyed Ahmad Javadi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444905#comment-16444905
 ] 

Seyyed Ahmad Javadi commented on YARN-8181:
---

Thanks again, and sure, how can we take the discussion to the user list?

I was not actually sure that a bug report was the correct option, sorry for 
that. After adding the above to container-executor.cfg, I am back where I 
started: containers seem to end very soon. I would very much like to have a 
discussion on how the Dockerfile should look and on the other steps, since I 
could not find a detailed guide for such a cool feature online.

> Docker container run_time
> -
>
> Key: YARN-8181
> URL: https://issues.apache.org/jira/browse/YARN-8181
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Seyyed Ahmad Javadi
>Priority: Major
>
> Hi All,
> I want to use the Docker container runtime but could not solve the problem 
> I am facing. I am following the guide below and the NM log is as follows. I 
> cannot see any Docker containers being created. It works when I use the 
> default LCE. Please also find how I submit a job at the end.
> Do you have any guide on how I can make the Docker runtime work?
> Could you please let me know how I can use the LCE binary to make sure my 
> Docker setup is correct?
> I confirmed that "docker run" works fine. I really like this developing 
> feature and would like to contribute to it. Many thanks in advance.
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/DockerContainers.html]
> {code:java}
> NM LOG:
> ...
> 2018-04-19 11:49:24,568 INFO SecurityLogger.org.apache.hadoop.ipc.Server: 
> Auth successful for appattempt_1524151293356_0005_01 (auth:SIMPLE)
> 2018-04-19 11:49:24,580 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Start request for container_1524151293356_0005_01_01 by user ubuntu
> 2018-04-19 11:49:24,584 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Creating a new application reference for app application_1524151293356_0005
> 2018-04-19 11:49:24,584 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=ubuntu    
> IP=130.245.127.176    OPERATION=Start Container Request    
> TARGET=ContainerManageImpl    RESULT=SUCCESS    
> APPID=application_1524151293356_0005    
> CONTAINERID=container_1524151293356_0005_01_01
> 2018-04-19 11:49:24,585 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Application application_1524151293356_0005 transitioned from NEW to INITING
> 2018-04-19 11:49:24,585 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Adding container_1524151293356_0005_01_01 to application 
> application_1524151293356_0005
> 2018-04-19 11:49:24,585 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Application application_1524151293356_0005 transitioned from INITING to 
> RUNNING
> 2018-04-19 11:49:24,588 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1524151293356_0005_01_01 transitioned from NEW to 
> LOCALIZING
> 2018-04-19 11:49:24,588 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event CONTAINER_INIT for appId application_1524151293356_0005
> 2018-04-19 11:49:24,589 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
>  Created localizer for container_1524151293356_0005_01_01
> 2018-04-19 11:49:24,616 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
>  Writing credentials to the nmPrivate file 
> /tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/container_1524151293356_0005_01_01.tokens
> 2018-04-19 11:49:28,090 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1524151293356_0005_01_01 transitioned from 
> LOCALIZING to SCHEDULED
> 2018-04-19 11:49:28,090 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler:
>  Starting container [container_1524151293356_0005_01_01]
> 2018-04-19 11:49:28,212 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1524151293356_0005_01_01 transitioned from SCHEDULED 
> to RUNNING
> 2018-04-19 11:49:28,212 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Starting resource-monitoring for container_1524151293356_0005_01_01
> 2018-04-19 11:49:29,401 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
>  Container co

[jira] [Commented] (YARN-8151) Yarn RM Epoch should wrap around

2018-04-19 Thread Young Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444893#comment-16444893
 ] 

Young Chen commented on YARN-8151:
--

Thanks [~giovanni.fumarola]. Added a patch with the suggestions fixed.

> Yarn RM Epoch should wrap around
> 
>
> Key: YARN-8151
> URL: https://issues.apache.org/jira/browse/YARN-8151
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Major
> Attachments: YARN-8151.01.patch, YARN-8151.01.patch, 
> YARN-8151.02.patch, YARN-8151.03.patch, YARN-8151.04.patch
>
>
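For context, the behavior in the title amounts to keeping the epoch inside a
fixed range instead of letting it grow without bound. A hedged sketch of the
arithmetic (the base/range parameters and their configuration are invented
here; the actual patch may store and wrap the value differently):

{code:java}
public class EpochCounter {
  private final long base;
  private final long range;
  private long epoch;

  public EpochCounter(long base, long range) {
    this.base = base;
    this.range = range;
    this.epoch = base;
  }

  // Returns the current epoch and advances it, wrapping back to base once
  // base + range is reached instead of overflowing.
  public synchronized long nextEpoch() {
    long current = epoch;
    epoch = base + ((epoch - base + 1) % range);
    return current;
  }
}
{code}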




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8181) Docker container run_time

2018-04-19 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444886#comment-16444886
 ] 

Shane Kumpf commented on YARN-8181:
---

I'm glad to hear that helped.

The error is here:
{code:java}
Invalid docker ro mount 
'/tmp/hadoop-ubuntu/nm-local-dir/filecache:/tmp/hadoop-ubuntu/nm-local-dir/filecache',
 realpath=/tmp/hadoop-ubuntu/nm-local-dir/filecache
Error constructing docker command, docker error code=13, error message='Invalid 
docker read-only mount'{code}
The nm-local-dir, {{/tmp/hadoop-ubuntu/nm-local-dir}}, is missing from 
{{docker.allowed.ro-mounts}} in {{container-executor.cfg}}.
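A sketch of the relevant section of {{container-executor.cfg}}, using the path
from the log above (the exact set of mounts you need is an assumption that
depends on your local directories):

{code}
[docker]
  module.enabled=true
  docker.allowed.ro-mounts=/sys/fs/cgroup,/tmp/hadoop-ubuntu/nm-local-dir
  docker.allowed.rw-mounts=/tmp/hadoop-ubuntu/nm-local-dir
{code}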

Currently, this looks to be configuration related more than a software bug 
(although, it does highlight doc improvements that I will file). If the above 
doesn't resolve the issue, can we take the discussion to the user list instead 
of here in a bug report? Doing so would have the benefit of helping other users 
that are trying out these features, but aren't following the bug reports. :)

> Docker container run_time
> -
>
> Key: YARN-8181
> URL: https://issues.apache.org/jira/browse/YARN-8181
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Seyyed Ahmad Javadi
>Priority: Major
>
> Hi All,
> I want to use the Docker container runtime but could not solve the problem 
> I am facing. I am following the guide below and the NM log is as follows. I 
> cannot see any Docker containers being created. It works when I use the 
> default LCE. Please also find how I submit a job at the end.
> Do you have any guide on how I can make the Docker runtime work?
> Could you please let me know how I can use the LCE binary to make sure my 
> Docker setup is correct?
> I confirmed that "docker run" works fine. I really like this developing 
> feature and would like to contribute to it. Many thanks in advance.
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/DockerContainers.html]
> {code:java}
> NM LOG:
> ...
> 2018-04-19 11:49:24,568 INFO SecurityLogger.org.apache.hadoop.ipc.Server: 
> Auth successful for appattempt_1524151293356_0005_01 (auth:SIMPLE)
> 2018-04-19 11:49:24,580 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Start request for container_1524151293356_0005_01_01 by user ubuntu
> 2018-04-19 11:49:24,584 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Creating a new application reference for app application_1524151293356_0005
> 2018-04-19 11:49:24,584 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=ubuntu    
> IP=130.245.127.176    OPERATION=Start Container Request    
> TARGET=ContainerManageImpl    RESULT=SUCCESS    
> APPID=application_1524151293356_0005    
> CONTAINERID=container_1524151293356_0005_01_01
> 2018-04-19 11:49:24,585 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Application application_1524151293356_0005 transitioned from NEW to INITING
> 2018-04-19 11:49:24,585 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Adding container_1524151293356_0005_01_01 to application 
> application_1524151293356_0005
> 2018-04-19 11:49:24,585 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Application application_1524151293356_0005 transitioned from INITING to 
> RUNNING
> 2018-04-19 11:49:24,588 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1524151293356_0005_01_01 transitioned from NEW to 
> LOCALIZING
> 2018-04-19 11:49:24,588 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event CONTAINER_INIT for appId application_1524151293356_0005
> 2018-04-19 11:49:24,589 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
>  Created localizer for container_1524151293356_0005_01_01
> 2018-04-19 11:49:24,616 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
>  Writing credentials to the nmPrivate file 
> /tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/container_1524151293356_0005_01_01.tokens
> 2018-04-19 11:49:28,090 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1524151293356_0005_01_01 transitioned from 
> LOCALIZING to SCHEDULED
> 2018-04-19 11:49:28,090 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler:
>  Starting container [container_1524151293356_0005_01_01]
> 2018-04-19 11:49:28,212 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container

[jira] [Commented] (YARN-8122) Component health threshold monitor

2018-04-19 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444876#comment-16444876
 ] 

Eric Yang commented on YARN-8122:
-

[~gsaha] Thank you for the patch.  I tried to simulate a cluster with a bad 
docker daemon on one of the node managers.  I see that containers are getting 
relaunched, and the relaunching happens at a steady rate.  When the 
calculation happens, it doesn't take into account how many containers have 
failed and been retried during the container-health-threshold.window.  The 
calculation is only based on the number of currently running containers.  
Hence, the service is reporting healthy instead of unhealthy.  I think a more 
accurate calculation would be health-threshold.percent = (completed + running 
containers) / total launched containers within the health-threshold.window.  
Another, simpler check: total failed containers / total launched containers 
within the container-health-threshold.window should stay below 
1 - health-threshold.percent.
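A worked sketch of the second, simpler check (the method and parameter names
are illustrative; the patch under review may compute this differently):

{code:java}
public class HealthThreshold {
  /**
   * @param launched containers launched inside the health-threshold window
   * @param failed containers that failed inside the same window
   * @param thresholdPercent configured health-threshold.percent, e.g. 90
   */
  public static boolean isHealthy(int launched, int failed,
      int thresholdPercent) {
    if (launched == 0) {
      return true; // nothing launched in the window, nothing to judge
    }
    // The failed ratio must stay at or below 1 - threshold.
    double failedRatio = (double) failed / launched;
    return failedRatio <= 1.0 - thresholdPercent / 100.0;
  }

  public static void main(String[] args) {
    // 10 launches and 3 failures against a 90% threshold: 0.3 > 0.1, so the
    // component is reported unhealthy.
    System.out.println(isHealthy(10, 3, 90));
  }
}
{code}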

Nginx relies on a supervisor to start its processes.  It will not work 
without ENTRY_POINT support.  I cannot get the example to work.  Therefore, I 
think it would be safer to use centos/httpd-24-centos7 with the launch 
command /usr/bin/run-httpd in the example.

> Component health threshold monitor
> --
>
> Key: YARN-8122
> URL: https://issues.apache.org/jira/browse/YARN-8122
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-8122.001.patch, YARN-8122.002.patch, 
> YARN-8122.003.patch, YARN-8122.004.patch, YARN-8122.draft.patch
>
>
> Slider supported component health threshold monitoring with SLIDER-1246. It 
> would be good to have this feature for YARN Service too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8186) [Router] Federation: routing getAppState REST invocations transparently to multiple RMs

2018-04-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444872#comment-16444872
 ] 

Íñigo Goiri commented on YARN-8186:
---

 [^YARN-8186-YARN-7402.v2.patch] looks good.
Let's wait for Yetus to run the tests.

> [Router] Federation: routing getAppState REST invocations transparently to 
> multiple RMs
> ---
>
> Key: YARN-8186
> URL: https://issues.apache.org/jira/browse/YARN-8186
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8186-YARN-7402.v1.patch, 
> YARN-8186-YARN-7402.v2.patch
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> RMWebServicesProtocol requests to the appropriate RM(s) in a federated YARN 
> cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8186) [Router] Federation: routing getAppState REST invocations transparently to multiple RMs

2018-04-19 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444868#comment-16444868
 ] 

Giovanni Matteo Fumarola commented on YARN-8186:


Thanks [~elgoiri] for the fast review.

Pushed v2 with an additional check.

> [Router] Federation: routing getAppState REST invocations transparently to 
> multiple RMs
> ---
>
> Key: YARN-8186
> URL: https://issues.apache.org/jira/browse/YARN-8186
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8186-YARN-7402.v1.patch, 
> YARN-8186-YARN-7402.v2.patch
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> RMWebServicesProtocol requests to the appropriate RM(s) in a federated YARN 
> cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8186) [Router] Federation: routing getAppState REST invocations transparently to multiple RMs

2018-04-19 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-8186:
---
Attachment: YARN-8186-YARN-7402.v2.patch

> [Router] Federation: routing getAppState REST invocations transparently to 
> multiple RMs
> ---
>
> Key: YARN-8186
> URL: https://issues.apache.org/jira/browse/YARN-8186
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8186-YARN-7402.v1.patch, 
> YARN-8186-YARN-7402.v2.patch
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> RMWebServicesProtocol requests to the appropriate RM(s) in a federated YARN 
> cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8168) Add support in Winutils for reporting CPU cores in all CPU groups, and aggregate kernel time, idle time and user time for all CPU groups

2018-04-19 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated YARN-8168:
-
Attachment: YARN-8168.000.patch

> Add support in Winutils for reporting CPU cores in all CPU groups, and 
> aggregate kernel time, idle time and user time for all CPU groups
> 
>
> Key: YARN-8168
> URL: https://issues.apache.org/jira/browse/YARN-8168
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: YARN-8168.000.patch
>
>
> Currently winutils can only report the CPU cores of the CPU group that it's 
> running in, and the cpuTimeMs calculated from kernel time, idle time, and user 
> time is also for that CPU group only, which is incomplete and incorrect for 
> systems with multiple CPU groups (NUMA systems, for example).
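
As a rough sketch of the aggregation being asked for, with hypothetical names 
(the real change belongs in the native winutils code, and Windows reports 
kernel time inclusive of idle time):
{code:java}
// Illustrative only: sum per-group times across all CPU groups before
// deriving cpuTimeMs, instead of reading a single group.
public class CpuGroupAggregator {
  static final class GroupTimes {
    final long kernelMs, idleMs, userMs;
    GroupTimes(long kernelMs, long idleMs, long userMs) {
      this.kernelMs = kernelMs; this.idleMs = idleMs; this.userMs = userMs;
    }
  }

  /** Busy time summed over every CPU group, not just the local one. */
  public static long aggregateCpuTimeMs(GroupTimes[] groups) {
    long cpuTimeMs = 0;
    for (GroupTimes g : groups) {
      // Kernel time includes idle time, so subtract it to get busy time.
      cpuTimeMs += (g.kernelMs - g.idleMs) + g.userMs;
    }
    return cpuTimeMs;
  }
}
{code}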



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8181) Docker container run_time

2018-04-19 Thread Seyyed Ahmad Javadi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444862#comment-16444862
 ] 

Seyyed Ahmad Javadi commented on YARN-8181:
---

Thank you so much Shane,

I did what you suggested and now I am getting another error that I could not 
solve, so I would really appreciate your suggestion on that as well.

Is it fine that my $HADOOP_HOME is /home/ubuntu/hadoop-3.1.0?

I am using Ubuntu and created a new user/group for the Hadoop installation.

container-executor.cfg:
{code:java}
yarn.nodemanager.linux-container-executor.group=ubuntu
min.user.id=0
#feature.tc.enabled=1
#feature.docker.enabled=1
allowed.system.users=ubuntu
# The configs below deal with settings for Docker
[docker]
module.enabled=true
docker.privileged-containers.enabled=true
docker.binary=/usr/bin/docker
docker.allowed.capabilities=SYS_CHROOT,MKNOD,SETFCAP,SETPCAP,FSETID,CHOWN,AUDIT_WRITE,SETGID,NET_RAW,FOWNER,SETUID,DAC_OVERRIDE,KILL,NET_BIND_SERVICE
#  docker.allowed.devices=## comma separated list of devices that can be 
mounted into a container
docker.allowed.networks=bridge,host,none
docker.allowed.ro-mounts=/sys/fs/cgroup
docker.privileged-containers.registries=local
#docker.host-pid-namespace.enabled=false
docker.allowed.rw-mounts=/home/ubuntu/hadoop-3.1.0,/home/ubuntu/hadoop-3.1.0/logs
#docker.privileged-containers.enabled=true
#docker.allowed.volume-drivers=## comma separated list of allowed volume-drivers

# The configs below deal with settings for FPGA resource
#[fpga]
#  module.enabled=## Enable/Disable the FPGA resource handler module. set to 
"true" to enable, disabled by default
#  fpga.major-device-number=## Major device number of FPGA, by default is 246. 
Strongly recommend setting this
#  fpga.allowed-device-minor-numbers=## Comma separated allowed minor device 
numbers, empty means all FPGA devices managed by YARN.


#[docker]
#  module.enabled=true
#  docker.privileged-containers.enabled=true
#  docker.privileged-containers.registries=centos
#  
docker.allowed.capabilities=SYS_CHROOT,MKNOD,SETFCAP,SETPCAP,FSETID,CHOWN,AUDIT_WRITE,SETGID,NET_RAW,FOWNER,SETUID,DAC_OVERRIDE,KILL,NET_BIND_SERVICE
#  docker.allowed.networks=bridge,host,none
#  docker.allowed.ro-mounts=/sys/fs/cgroup
#  docker.allowed.rw-mounts=/var/hadoop/yarn/local-dir,/var/hadoop/yarn/log-dir
{code}
{code:java}
~/hadoop-3.1.0/etc/hadoop$ docker images
REPOSITORY            TAG      IMAGE ID       CREATED       SIZE
local/hadoop-ubuntu   latest   d8335693084b   7 hours ago   2.06GB
hadoop-ubuntu         latest   d8335693084b   7 hours ago   2.06GB
ubuntu                16.04    c9d990395902   7 days ago    113MB
{code}
{code:java}
2018-04-19 17:29:09,413 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth 
successful for appattempt_1524172890453_0002_02 (auth:SIMPLE)
2018-04-19 17:29:09,425 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
 Start request for container_1524172890453_0002_02_01 by user ubuntu
2018-04-19 17:29:09,448 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
 Creating a new application reference for app application_1524172890453_0002
2018-04-19 17:29:09,449 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
 Application application_1524172890453_0002 transitioned from NEW to INITING
2018-04-19 17:29:09,450 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
 Adding container_1524172890453_0002_02_01 to application 
application_1524172890453_0002
2018-04-19 17:29:09,450 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
 Application application_1524172890453_0002 transitioned from INITING to RUNNING
2018-04-19 17:29:09,451 INFO 
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=ubuntu    
IP=130.245.127.176    OPERATION=Start Container Request    
TARGET=ContainerManageImpl    RESULT=SUCCESS    
APPID=application_1524172890453_0002    
CONTAINERID=container_1524172890453_0002_02_01
2018-04-19 17:29:09,454 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Container container_1524172890453_0002_02_01 transitioned from NEW to 
LOCALIZING
2018-04-19 17:29:09,454 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
event CONTAINER_INIT for appId application_1524172890453_0002
2018-04-19 17:29:09,455 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationServic

[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-19 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444863#comment-16444863
 ] 

Eric Badger commented on YARN-8064:
---

bq. Eric Badger I usually clone a separate source tree, and run 
./dev-support/bin/test-patch YARN-XXX.nnn.patch. This generates similar reports 
locally. Findbugs is not as in-depth as the one configured on the Jenkins 
server, but it will catch most of the issues.
Thanks for the tip, [~eyang]! I'll try that out next time.

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Critical
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch, 
> YARN-8064.003.patch, YARN-8064.004.patch, YARN-8064.005.patch, 
> YARN-8064.006.patch, YARN-8064.007.patch, YARN-8064.008.patch
>
>
> Currently all of the Docker command files are being put into 
> {{hadoop.tmp.dir}}, which doesn't get cleaned up. So, eventually all of the 
> inodes will fill up and no more tasks will be able to run.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8137) Parallelize node addition in SLS

2018-04-19 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444860#comment-16444860
 ] 

Íñigo Goiri commented on YARN-8137:
---

[~abmodi], the patch no longer applies cleanly to trunk; I guess it conflicts 
with the one we pushed a couple of days ago.
Do you mind rebasing?

> Parallelize node addition in SLS
> 
>
> Key: YARN-8137
> URL: https://issues.apache.org/jira/browse/YARN-8137
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8137.001.patch, YARN-8137.002.patch
>
>
> Right now, nodes are added sequentially, and it can take a long time if there 
> are a large number of nodes. With this change, nodes will be added in parallel, 
> thus reducing the node addition time.
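
For illustration, a minimal sketch of the idea, with a hypothetical addNode 
helper standing in for the SLS node-registration logic (the actual patch may 
differ):
{code:java}
// Sketch only: add simulated nodes from a thread pool instead of a loop.
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelNodeAdder {
  public void addNodes(List<String> nodeIds, int poolSize)
      throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(poolSize);
    CountDownLatch done = new CountDownLatch(nodeIds.size());
    for (String nodeId : nodeIds) {
      pool.submit(() -> {
        try {
          addNode(nodeId);  // hypothetical: register one NM with the RM
        } finally {
          done.countDown();
        }
      });
    }
    done.await();           // wait until every node has been added
    pool.shutdown();
  }

  private void addNode(String nodeId) { /* hypothetical registration */ }
}
{code}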



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-19 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444855#comment-16444855
 ] 

Eric Badger commented on YARN-8064:
---

The unit test failure is unrelated and is tracked by YARN-7700.

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Critical
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch, 
> YARN-8064.003.patch, YARN-8064.004.patch, YARN-8064.005.patch, 
> YARN-8064.006.patch, YARN-8064.007.patch, YARN-8064.008.patch
>
>
> Currently all of the Docker command files are being put into 
> {{hadoop.tmp.dir}}, which doesn't get cleaned up. So, eventually all of the 
> inodes will fill up and no more tasks will be able to run.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8187) [UI2] clicking on Individual Nodes does not contain breadcrumbs in Nodes Page

2018-04-19 Thread Sumana Sathish (JIRA)
Sumana Sathish created YARN-8187:


 Summary: [UI2] clicking on Individual Nodes does not contain 
breadcrumbs in Nodes Page
 Key: YARN-8187
 URL: https://issues.apache.org/jira/browse/YARN-8187
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
Reporter: Sumana Sathish
Assignee: Zian Chen


1. Click on the 'Nodes' tab in the RM home page.
2. Click on an individual node under 'Node HTTP Address'.
3. No breadcrumbs are shown, e.g. '/Home/Nodes/Node Id/'.
4. Breadcrumbs come back once we click on other tabs like 'List of 
Applications' or 'List of Containers'.





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8186) [Router] Federation: routing getAppState REST invocations transparently to multiple RMs

2018-04-19 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444849#comment-16444849
 ] 

Íñigo Goiri commented on YARN-8186:
---

Thanks [~giovanni.fumarola] for  [^YARN-8186-YARN-7402.v1.patch].
It looks good; one minor improvement: can we do some more checks in addition to 
just {{assertNotNull(responseGet);}}?

> [Router] Federation: routing getAppState REST invocations transparently to 
> multiple RMs
> ---
>
> Key: YARN-8186
> URL: https://issues.apache.org/jira/browse/YARN-8186
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8186-YARN-7402.v1.patch
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> RMWebServicesProtocol requests to the appropriate RM(s) in a federated YARN 
> cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8186) [Router] Federation: routing getAppState REST invocations transparently to multiple RMs

2018-04-19 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-8186:
---
Description: This JIRA tracks the design/implementation of the layer for 
routing RMWebServicesProtocol requests to the appropriate RM(s) in a federated 
YARN cluster.
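
As a rough illustration of the routing decision, a minimal sketch with 
hypothetical names, using a plain map to stand in for the federation state 
store (not the actual Router code):
{code:java}
// Sketch only: resolve the home RM for an application and build the
// per-app REST URL that the Router would forward to.
import java.util.Map;

public class AppStateRouter {
  /** appId -> home RM web address (stand-in for the federation state store). */
  private final Map<String, String> rmWebAddressByApp;

  public AppStateRouter(Map<String, String> rmWebAddressByApp) {
    this.rmWebAddressByApp = rmWebAddressByApp;
  }

  /** Target of GET /ws/v1/cluster/apps/{appid}/state on the home RM. */
  public String resolveGetAppStateUrl(String appId) {
    String rmWebAddress = rmWebAddressByApp.get(appId);
    if (rmWebAddress == null) {
      throw new IllegalArgumentException("Unknown application: " + appId);
    }
    return rmWebAddress + "/ws/v1/cluster/apps/" + appId + "/state";
  }
}
{code}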

> [Router] Federation: routing getAppState REST invocations transparently to 
> multiple RMs
> ---
>
> Key: YARN-8186
> URL: https://issues.apache.org/jira/browse/YARN-8186
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8186-YARN-7402.v1.patch
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> RMWebServicesProtocol requests to the appropriate RM(s) in a federated YARN 
> cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8186) [Router] Federation: routing getAppState REST invocations transparently to multiple RMs

2018-04-19 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-8186:
---
Attachment: YARN-8186-YARN-7402.v1.patch

> [Router] Federation: routing getAppState REST invocations transparently to 
> multiple RMs
> ---
>
> Key: YARN-8186
> URL: https://issues.apache.org/jira/browse/YARN-8186
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8186-YARN-7402.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8186) [Router] Federation: routing getAppState REST invocations transparently to multiple RMs

2018-04-19 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola reassigned YARN-8186:
--

Assignee: Giovanni Matteo Fumarola

> [Router] Federation: routing getAppState REST invocations transparently to 
> multiple RMs
> ---
>
> Key: YARN-8186
> URL: https://issues.apache.org/jira/browse/YARN-8186
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8186-YARN-7402.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8151) Yarn RM Epoch should wrap around

2018-04-19 Thread Young Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Young Chen updated YARN-8151:
-
Attachment: YARN-8151.04.patch

> Yarn RM Epoch should wrap around
> 
>
> Key: YARN-8151
> URL: https://issues.apache.org/jira/browse/YARN-8151
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Major
> Attachments: YARN-8151.01.patch, YARN-8151.01.patch, 
> YARN-8151.02.patch, YARN-8151.03.patch, YARN-8151.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8186) [Router] Federation: routing getAppState REST invocations transparently to multiple RMs

2018-04-19 Thread Giovanni Matteo Fumarola (JIRA)
Giovanni Matteo Fumarola created YARN-8186:
--

 Summary: [Router] Federation: routing getAppState REST invocations 
transparently to multiple RMs
 Key: YARN-8186
 URL: https://issues.apache.org/jira/browse/YARN-8186
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Giovanni Matteo Fumarola






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-7212) [Atsv2] TimelineSchemaCreator fails to create flowrun table causes RegionServer down!

2018-04-19 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli reopened YARN-7212:
---

Reopening and resolving as a duplicate instead.

> [Atsv2] TimelineSchemaCreator fails to create flowrun table causes 
> RegionServer down!
> -
>
> Key: YARN-7212
> URL: https://issues.apache.org/jira/browse/YARN-7212
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Priority: Major
>
> *HBase-2.0* officially supports *hadoop-alpha* compilations, so I was trying 
> to build and test with HBase-2.0. But table schema creation fails and causes 
> the RegionServer to shut down with the following error:
> {noformat}
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.hadoop.hbase.Tag.asList([BII)Ljava/util/List;
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowScanner.getCurrentAggOp(FlowScanner.java:250)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowScanner.nextInternal(FlowScanner.java:226)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowScanner.next(FlowScanner.java:145)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:132)
> at 
> org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:75)
> at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:973)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2252)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2672)
> {noformat}
> Since the HBase-2.0 community is ready to release Hadoop-3.x compatible 
> versions, ATSv2 also needs to support HBase-2.0. For this, we need to take up 
> the task of testing and validating HBase-2.0 issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-7212) [Atsv2] TimelineSchemaCreator fails to create flowrun table causes RegionServer down!

2018-04-19 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli resolved YARN-7212.
---
Resolution: Duplicate

> [Atsv2] TimelineSchemaCreator fails to create flowrun table causes 
> RegionServer down!
> -
>
> Key: YARN-7212
> URL: https://issues.apache.org/jira/browse/YARN-7212
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Priority: Major
>
> *HBase-2.0* officially supports *hadoop-alpha* compilations, so I was trying 
> to build and test with HBase-2.0. But table schema creation fails and causes 
> the RegionServer to shut down with the following error:
> {noformat}
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.hadoop.hbase.Tag.asList([BII)Ljava/util/List;
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowScanner.getCurrentAggOp(FlowScanner.java:250)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowScanner.nextInternal(FlowScanner.java:226)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowScanner.next(FlowScanner.java:145)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:132)
> at 
> org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:75)
> at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:973)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2252)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2672)
> {noformat}
> Since the HBase-2.0 community is ready to release Hadoop-3.x compatible 
> versions, ATSv2 also needs to support HBase-2.0. For this, we need to take up 
> the task of testing and validating HBase-2.0 issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8185) Improve log in class DirectoryCollection

2018-04-19 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-8185:
---
Component/s: nodemanager

> Improve log in class DirectoryCollection
> 
>
> Key: YARN-8185
> URL: https://issues.apache.org/jira/browse/YARN-8185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-8185.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8185) Improve log in class DirectoryCollection

2018-04-19 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-8185:
---
Affects Version/s: 3.1.0

> Improve log in class DirectoryCollection
> 
>
> Key: YARN-8185
> URL: https://issues.apache.org/jira/browse/YARN-8185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-8185.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8185) Improve log in class DirectoryCollection

2018-04-19 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-8185:
---
Attachment: YARN-8185.001.patch

> Improve log in class DirectoryCollection
> 
>
> Key: YARN-8185
> URL: https://issues.apache.org/jira/browse/YARN-8185
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-8185.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8184) Too many metrics if containerLocalizer/ResourceLocalizationService uses ReadWriteDiskValidator

2018-04-19 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-8184:
---
Attachment: YARN-8184.001.patch

> Too many metrics if containerLocalizer/ResourceLocalizationService uses 
> ReadWriteDiskValidator
> --
>
> Key: YARN-8184
> URL: https://issues.apache.org/jira/browse/YARN-8184
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-8184.001.patch
>
>
> ContainerLocalizer or ResourceLocalizationService will use 
> ReadWriteDiskValidator as its disk validator when it downloads files if we 
> configure yarn.nodemanager.disk-validator to ReadWriteDiskValidator's 
> name. In that case, ReadWriteDiskValidator will create a metric item for each 
> directory localized, which is far too many metrics. We should let 
> ContainerLocalizer use only the basic disk validator.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8184) Too many metrics if containerLocalizer/ResourceLocalizationService uses ReadWriteDiskValidator

2018-04-19 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu reassigned YARN-8184:
--

Assignee: Yufei Gu

> Too many metrics if containerLocalizer/ResourceLocalizationService uses 
> ReadWriteDiskValidator
> --
>
> Key: YARN-8184
> URL: https://issues.apache.org/jira/browse/YARN-8184
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-8184.001.patch
>
>
> ContainerLocalizer or ResourceLocalizationService will use 
> ReadWriteDiskValidator as its disk validator when it downloads files if we 
> configure yarn.nodemanager.disk-validator to ReadWriteDiskValidator's 
> name. In that case, ReadWriteDiskValidator will create a metric item for each 
> directory localized, which is far too many metrics. We should let 
> ContainerLocalizer use only the basic disk validator.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8184) Too many metrics if containerLocalizer/ResourceLocalizationService uses ReadWriteDiskValidator

2018-04-19 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-8184:
---
Summary: Too many metrics if containerLocalizer/ResourceLocalizationService 
uses ReadWriteDiskValidator  (was: Too many metrics if containerLocalizer uses 
ReadWriteDiskValidator)

> Too many metrics if containerLocalizer/ResourceLocalizationService uses 
> ReadWriteDiskValidator
> --
>
> Key: YARN-8184
> URL: https://issues.apache.org/jira/browse/YARN-8184
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Yufei Gu
>Priority: Major
>
> ContainerLocalizer will use ReadWriteDiskValidator as its disk validator 
> when it downloads files if we configure yarn.nodemanager.disk-validator to 
> ReadWriteDiskValidator's name. In that case, ReadWriteDiskValidator will 
> create a metric item for each directory localized, which is far too many 
> metrics. We should let ContainerLocalizer use only the basic disk validator.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8184) Too many metrics if containerLocalizer/ResourceLocalizationService uses ReadWriteDiskValidator

2018-04-19 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-8184:
---
Description: ContainerLocalizer or ResourceLocalizationService will use 
ReadWriteDiskValidator as its disk validator when it downloads files if we 
configure yarn.nodemanager.disk-validator to ReadWriteDiskValidator's name. 
In that case, ReadWriteDiskValidator will create a metric item for each 
directory localized, which is far too many metrics. We should let 
ContainerLocalizer use only the basic disk validator.  (was: ContainerLocalizer 
will use ReadWriteDiskValidator as its disk validator when it downloads 
files if we configure yarn.nodemanager.disk-validator to 
ReadWriteDiskValidator's name. In that case, ReadWriteDiskValidator will create 
a metric item for each directory localized, which is far too many metrics. We 
should let ContainerLocalizer use only the basic disk validator.)

> Too many metrics if containerLocalizer/ResourceLocalizationService uses 
> ReadWriteDiskValidator
> --
>
> Key: YARN-8184
> URL: https://issues.apache.org/jira/browse/YARN-8184
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Yufei Gu
>Priority: Major
>
> ContainerLocalizer or ResourceLocalizationService will use 
> ReadWriteDiskValidator as its disk validator when it downloads files if we 
> configure yarn.nodemanager.disk-validator to ReadWriteDiskValidator's 
> name. In that case, ReadWriteDiskValidator will create a metric item for each 
> directory localized, which is far too many metrics. We should let 
> ContainerLocalizer use only the basic disk validator.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444776#comment-16444776
 ] 

genericqa commented on YARN-8064:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 17s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8064 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919869/YARN-8064.008.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5375c1578e8c 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7d06806 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/20407/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20407/testReport/ |
| Max. process+thread count | 327 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop

[jira] [Created] (YARN-8185) Improve log in class DirectoryCollection

2018-04-19 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-8185:
--

 Summary: Improve log in class DirectoryCollection
 Key: YARN-8185
 URL: https://issues.apache.org/jira/browse/YARN-8185
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Yufei Gu
Assignee: Yufei Gu






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8184) Too many metrics if containerLocalizer uses ReadWriteDiskValidator

2018-04-19 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-8184:
--

 Summary: Too many metrics if containerLocalizer uses 
ReadWriteDiskValidator
 Key: YARN-8184
 URL: https://issues.apache.org/jira/browse/YARN-8184
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Reporter: Yufei Gu


ContainerLocalizer will use ReadWriteDiskValidator as its disk validator 
when it downloads files if we configure yarn.nodemanager.disk-validator to 
ReadWriteDiskValidator's name. In that case, ReadWriteDiskValidator will create 
a metric item for each directory localized, which is far too many metrics. We 
should let ContainerLocalizer use only the basic disk validator.
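
To illustrate the problem, a minimal sketch with hypothetical names (not the 
actual ReadWriteDiskValidator code): keeping one metric entry per validated 
directory grows without bound as directories are localized.
{code:java}
// Sketch only: a per-directory metric map grows with every localized dir,
// which is exactly the unbounded metric growth described above.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PerDirectoryMetrics {
  // One entry per directory ever validated -- grows without bound.
  private final Map<String, Long> checkCountByDir = new ConcurrentHashMap<>();

  public void validate(String dir) {
    // A basic validator would only check read/write access here; the
    // problematic variant also records a per-directory metric.
    checkCountByDir.merge(dir, 1L, Long::sum);
  }

  public int metricCount() {
    return checkCountByDir.size(); // one metric item per localized directory
  }
}
{code}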



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8177) Fix documentation for node label support

2018-04-19 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-8177:
---
Attachment: YARN-8177.1.patch

> Fix documentation for node label support 
> -
>
> Key: YARN-8177
> URL: https://issues.apache.org/jira/browse/YARN-8177
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-8177.1.patch
>
>
> Capacity Scheduler Dynamic Queues feature documentation needs to be fixed for 
> node label support with examples.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8151) Yarn RM Epoch should wrap around

2018-04-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444755#comment-16444755
 ] 

genericqa commented on YARN-8151:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
53s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
13s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 66m 
44s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8151 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919848/YARN-8151.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 660f2453c5e4 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precom

[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-19 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444697#comment-16444697
 ] 

Eric Yang commented on YARN-8064:
-

[~ebadger] I usually clone a separate source tree, and run 
./dev-support/bin/test-patch YARN-XXX.nnn.patch.  This generates similar 
reports locally.  Findbugs is not as in-depth as the one configured on the 
Jenkins server, but it will catch most of the issues.

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Critical
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch, 
> YARN-8064.003.patch, YARN-8064.004.patch, YARN-8064.005.patch, 
> YARN-8064.006.patch, YARN-8064.007.patch, YARN-8064.008.patch
>
>
> Currently all of the Docker command files are being put into 
> {{hadoop.tmp.dir}}, which doesn't get cleaned up. So, eventually all of the 
> inodes will fill up and no more tasks will be able to run.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8183) yClient for Kill Application stuck in infinite loop with message "Waiting for Application to be killed"

2018-04-19 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-8183:
---
Affects Version/s: 3.0.0

> yClient for Kill Application stuck in infinite loop with message "Waiting for 
> Application to be killed"
> ---
>
> Key: YARN-8183
> URL: https://issues.apache.org/jira/browse/YARN-8183
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: YARN-8183.1.patch
>
>
> The YARN client gets stuck killing the application, repeatedly printing the 
> following message:
> {code}
> INFO impl.YarnClientImpl: Waiting for application 
> application_1523604760756_0001 to be killed.{code}
> The RM shows the following exception:
> {code}
>  ERROR resourcemanager.ResourceManager (ResourceManager.java:handle(995)) - 
> Error in handling event type APP_UPDATE_SAVED for application application_ID
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1442)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1476)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1474)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.convertAtomicLongMaptoLongMap(RMAppAttemptMetrics.java:212)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:133)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.getRMAppMetrics(RMAppImpl.java:1660)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.appFinished(TimelineServiceV2Publisher.java:178)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.CombinedSystemMetricsPublisher.appFinished(CombinedSystemMetricsPublisher.java:73)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalTransition.transition(RMAppImpl.java:1470)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1408)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1400)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1177)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1164)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:898)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:118)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:993)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:977)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8183) yClient for Kill Application stuck in infinite loop with message "Waiting for Application to be killed"

2018-04-19 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-8183:
---
Attachment: YARN-8183.1.patch

> yClient for Kill Application stuck in infinite loop with message "Waiting for 
> Application to be killed"
> ---
>
> Key: YARN-8183
> URL: https://issues.apache.org/jira/browse/YARN-8183
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: YARN-8183.1.patch
>
>
> The YARN client gets stuck killing the application, repeatedly printing the 
> following message:
> {code}
> INFO impl.YarnClientImpl: Waiting for application 
> application_1523604760756_0001 to be killed.{code}
> The RM shows the following exception:
> {code}
>  ERROR resourcemanager.ResourceManager (ResourceManager.java:handle(995)) - 
> Error in handling event type APP_UPDATE_SAVED for application application_ID
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1442)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1476)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1474)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.convertAtomicLongMaptoLongMap(RMAppAttemptMetrics.java:212)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:133)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.getRMAppMetrics(RMAppImpl.java:1660)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.appFinished(TimelineServiceV2Publisher.java:178)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.CombinedSystemMetricsPublisher.appFinished(CombinedSystemMetricsPublisher.java:73)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalTransition.transition(RMAppImpl.java:1470)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1408)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1400)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1177)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1164)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:898)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:118)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:993)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:977)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8183) yClient for Kill Application stuck in infinite loop with message "Waiting for Application to be killed"

2018-04-19 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444689#comment-16444689
 ] 

Suma Shivaprasad commented on YARN-8183:


This is occurring due to a concurrent update and get on the resourceUsageMap, 
which was added in YARN-6232 to track resource usage by resource profile. 
Attaching a patch that fixes this by changing the maps to ConcurrentHashMap.
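
For illustration, a minimal generic sketch of the failure mode and the fix (not 
the actual RMAppAttemptMetrics code): structurally modifying a HashMap while 
iterating it throws ConcurrentModificationException, whereas 
ConcurrentHashMap's weakly consistent iterators tolerate concurrent updates.
{code:java}
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class IterationRace {
  public static void main(String[] args) {
    Map<String, Long> plain = new HashMap<>();
    plain.put("a", 1L);
    plain.put("b", 2L);
    try {
      for (String key : plain.keySet()) {
        plain.put("c", 3L); // structural change during iteration
      }
    } catch (ConcurrentModificationException e) {
      System.out.println("HashMap iteration failed: " + e);
    }

    Map<String, Long> concurrent = new ConcurrentHashMap<>();
    concurrent.put("a", 1L);
    concurrent.put("b", 2L);
    for (String key : concurrent.keySet()) {
      concurrent.put("c", 3L); // weakly consistent iterator: no exception
    }
    System.out.println("ConcurrentHashMap size: " + concurrent.size());
  }
}
{code}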

> yClient for Kill Application stuck in infinite loop with message "Waiting for 
> Application to be killed"
> ---
>
> Key: YARN-8183
> URL: https://issues.apache.org/jira/browse/YARN-8183
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
>Priority: Critical
>
> The YARN client gets stuck killing the application, repeatedly printing the 
> following message:
> {code}
> INFO impl.YarnClientImpl: Waiting for application 
> application_1523604760756_0001 to be killed.{code}
> The RM shows the following exception:
> {code}
>  ERROR resourcemanager.ResourceManager (ResourceManager.java:handle(995)) - 
> Error in handling event type APP_UPDATE_SAVED for application application_ID
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1442)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1476)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1474)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.convertAtomicLongMaptoLongMap(RMAppAttemptMetrics.java:212)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:133)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.getRMAppMetrics(RMAppImpl.java:1660)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.appFinished(TimelineServiceV2Publisher.java:178)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.CombinedSystemMetricsPublisher.appFinished(CombinedSystemMetricsPublisher.java:73)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalTransition.transition(RMAppImpl.java:1470)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1408)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1400)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1177)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1164)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:898)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:118)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:993)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:977)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
> at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Commented] (YARN-8182) [UI2] Proxy- Clicking on nodes under Nodes HeatMap gives 401 error

2018-04-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444686#comment-16444686
 ] 

genericqa commented on YARN-8182:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
33m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8182 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919857/YARN-8182.001.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux 439f1fcbe884 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7d06806 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 474 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20406/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [UI2] Proxy- Clicking on nodes under Nodes HeatMap gives 401 error
> --
>
> Key: YARN-8182
> URL: https://issues.apache.org/jira/browse/YARN-8182
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-8182.001.patch
>
>
> 1. Click on 'Nodes' Tab in the RM UI
> 2. Click on 'Nodes HeatMap' tab under Nodes
> 3. Click on the nodes available. It gives 401 error






[jira] [Updated] (YARN-8183) yClient for Kill Application stuck in infinite loop with message "Waiting for Application to be killed"

2018-04-19 Thread Sumana Sathish (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sumana Sathish updated YARN-8183:
-
Priority: Critical  (was: Major)

> yClient for Kill Application stuck in infinite loop with message "Waiting for 
> Application to be killed"
> ---
>
> Key: YARN-8183
> URL: https://issues.apache.org/jira/browse/YARN-8183
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
>Priority: Critical
>
> yClient gets stuck killing the application, repeatedly printing the following 
> message:
> {code}
> INFO impl.YarnClientImpl: Waiting for application 
> application_1523604760756_0001 to be killed.{code}
> The RM shows the following exception:
> {code}
>  ERROR resourcemanager.ResourceManager (ResourceManager.java:handle(995)) - 
> Error in handling event type APP_UPDATE_SAVED for application application_ID
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1442)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1476)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1474)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.convertAtomicLongMaptoLongMap(RMAppAttemptMetrics.java:212)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:133)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.getRMAppMetrics(RMAppImpl.java:1660)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.appFinished(TimelineServiceV2Publisher.java:178)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.CombinedSystemMetricsPublisher.appFinished(CombinedSystemMetricsPublisher.java:73)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalTransition.transition(RMAppImpl.java:1470)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1408)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1400)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1177)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1164)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:898)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:118)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:993)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:977)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
> at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Created] (YARN-8183) yClient for Kill Application stuck in infinite loop with message "Waiting for Application to be killed"

2018-04-19 Thread Sumana Sathish (JIRA)
Sumana Sathish created YARN-8183:


 Summary: yClient for Kill Application stuck in infinite loop with 
message "Waiting for Application to be killed"
 Key: YARN-8183
 URL: https://issues.apache.org/jira/browse/YARN-8183
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Reporter: Sumana Sathish
Assignee: Suma Shivaprasad


yClient gets stuck killing the application, repeatedly printing the following 
message:
{code}
INFO impl.YarnClientImpl: Waiting for application 
application_1523604760756_0001 to be killed.{code}

The RM shows the following exception:
{code}
 ERROR resourcemanager.ResourceManager (ResourceManager.java:handle(995)) - 
Error in handling event type APP_UPDATE_SAVED for application application_ID
java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextNode(HashMap.java:1442)
at java.util.HashMap$EntryIterator.next(HashMap.java:1476)
at java.util.HashMap$EntryIterator.next(HashMap.java:1474)
at 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.convertAtomicLongMaptoLongMap(RMAppAttemptMetrics.java:212)
at 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:133)
at 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.getRMAppMetrics(RMAppImpl.java:1660)
at 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.appFinished(TimelineServiceV2Publisher.java:178)
at 
org.apache.hadoop.yarn.server.resourcemanager.metrics.CombinedSystemMetricsPublisher.appFinished(CombinedSystemMetricsPublisher.java:73)
at 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalTransition.transition(RMAppImpl.java:1470)
at 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1408)
at 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1400)
at 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1177)
at 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1164)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
at 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:898)
at 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:118)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:993)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:977)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
at java.lang.Thread.run(Thread.java:748)
{code}






[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-19 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444669#comment-16444669
 ] 

Eric Badger commented on YARN-8064:
---

Alright, patch 008 has to have fixed all of the checkstyle issues, right? I need 
to find a better way of testing this locally. I don't have a good tool for 
diffing checkstyle errors; I just get all of them at once and go find which ones 
look relevant. Lots of noise. If anyone has a better way to do this, please let 
me know. OK, back to our regularly scheduled patch.

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Critical
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch, 
> YARN-8064.003.patch, YARN-8064.004.patch, YARN-8064.005.patch, 
> YARN-8064.006.patch, YARN-8064.007.patch, YARN-8064.008.patch
>
>
> Currently all of the docker command files are being put into 
> {{hadoop.tmp.dir}}, which doesn't get cleaned up. So, eventually all of the 
> inodes will fill up and no more tasks will be able to run






[jira] [Updated] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-19 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-8064:
--
Attachment: YARN-8064.008.patch

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Critical
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch, 
> YARN-8064.003.patch, YARN-8064.004.patch, YARN-8064.005.patch, 
> YARN-8064.006.patch, YARN-8064.007.patch, YARN-8064.008.patch
>
>
> Currently all of the docker command files are being put into 
> {{hadoop.tmp.dir}}, which doesn't get cleaned up. So, eventually all of the 
> inodes will fill up and no more tasks will be able to run






[jira] [Commented] (YARN-7900) [AMRMProxy] AMRMClientRelayer for stateful FederationInterceptor

2018-04-19 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444660#comment-16444660
 ] 

Giovanni Matteo Fumarola commented on YARN-7900:


Thanks, [~botong], for putting this together and for showing me your code on 
some running machines.

I took a quick look at it.
Please add Javadoc for the two constructors in ResourceRequestSet, just to 
highlight the difference between them.
In general, I would add more Javadoc to the public methods touched by this patch 
for easier understanding.
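
As a hypothetical illustration of the request (the actual constructor 
signatures in ResourceRequestSet may differ), the Javadoc could look like:

{code:java}
// Hypothetical sketch only; the signatures are assumptions, not the patch.
public class ResourceRequestSet {
  /**
   * Creates an empty request set.
   */
  public ResourceRequestSet() {
  }

  /**
   * Creates a deep copy of another request set, so that later changes to
   * this set do not affect the original.
   *
   * @param other the set to copy
   */
  public ResourceRequestSet(ResourceRequestSet other) {
    // copy the fields of other into this set
  }
}
{code}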

> [AMRMProxy] AMRMClientRelayer for stateful FederationInterceptor
> 
>
> Key: YARN-7900
> URL: https://issues.apache.org/jira/browse/YARN-7900
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-7900.v1.patch, YARN-7900.v2.patch, 
> YARN-7900.v3.patch, YARN-7900.v4.patch, YARN-7900.v5.patch, YARN-7900.v6.patch
>
>
> Inside stateful FederationInterceptor (YARN-7899), we need a component 
> similar to AMRMClient that remembers all pending (outstands) requests we've 
> sent to YarnRM, auto re-register and do full pending resend when YarnRM fails 
> over and throws ApplicationMasterNotRegisteredException back. This JIRA adds 
> this component as AMRMClientRelayer.






[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444658#comment-16444658
 ] 

genericqa commented on YARN-8064:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
55s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 3 new + 26 unchanged - 0 fixed = 29 total (was 26) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 41s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8064 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919851/YARN-8064.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e2392dc69db8 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c6d7d3e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20404/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/20404/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn

[jira] [Commented] (YARN-8004) Add unit tests for inter queue preemption for dominant resource calculator

2018-04-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444586#comment-16444586
 ] 

Sunil G commented on YARN-8004:
---

+1 Committing tomorrow if no objections.

> Add unit tests for inter queue preemption for dominant resource calculator
> --
>
> Key: YARN-8004
> URL: https://issues.apache.org/jira/browse/YARN-8004
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Zian Chen
>Priority: Critical
> Attachments: YARN-8004.001.patch
>
>







[jira] [Commented] (YARN-8151) Yarn RM Epoch should wrap around

2018-04-19 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444584#comment-16444584
 ] 

Giovanni Matteo Fumarola commented on YARN-8151:


Thanks [~youchen].

1) RM_EPOCH_RANGE should be RM_EPOCH + "range".

2) Add a comment explaining why "Assert.assertEquals(epoch + 3, wrappedEpoch);" 
is the expected result.
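
A hypothetical sketch of the wrap-around behavior under review (the arithmetic 
and names below are assumptions for illustration, not the patch itself):

{code:java}
// Hypothetical sketch: keep the epoch inside [base, base + range)
// instead of letting it grow without bound.
public class EpochWrapSketch {
  static long nextEpoch(long current, long base, long range) {
    return base + ((current - base + 1) % range);
  }

  public static void main(String[] args) {
    long base = 0;
    long range = 10;
    System.out.println(nextEpoch(9, base, range));  // wraps back to 0
    System.out.println(nextEpoch(3, base, range));  // advances to 4
  }
}
{code}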

> Yarn RM Epoch should wrap around
> 
>
> Key: YARN-8151
> URL: https://issues.apache.org/jira/browse/YARN-8151
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Major
> Attachments: YARN-8151.01.patch, YARN-8151.01.patch, 
> YARN-8151.02.patch, YARN-8151.03.patch
>
>







[jira] [Commented] (YARN-6827) [ATS1/1.5] NPE exception while publishing recovering applications into ATS during RM restart.

2018-04-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444580#comment-16444580
 ] 

Hudson commented on YARN-6827:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #14029 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14029/])
YARN-6827. [ATS1/1.5] NPE exception while publishing recovering (sunilg: rev 
7d06806dfdeb3252ac0defe23e8c468eabfa8b5e)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java


> [ATS1/1.5] NPE exception while publishing recovering applications into ATS 
> during RM restart.
> -
>
> Key: YARN-6827
> URL: https://issues.apache.org/jira/browse/YARN-6827
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-6827.01.patch
>
>
> While recovering application, it is observed that NPE exception is thrown as 
> below.
> {noformat}
> 017-07-13 14:08:12,476 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV1Publisher:
>  Error when publishing entity 
> [YARN_APPLICATION,application_1499929227397_0001]
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:178)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV1Publisher.putEntity(TimelineServiceV1Publisher.java:368)
> {noformat}
> This is because, during RM service start in the non-HA case, active services 
> are started first and the ATSv1 services are started later. In the HA case, 
> the transitionToActive event arrives before the ATS services are started.
> This gives the active services enough time to recover applications, which try 
> to publish to ATSv1 while recovering. Since the ATS services are not started 
> yet, this throws an NPE.
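
A minimal, runnable sketch of the startup race described above (the structure 
and names are hypothetical; the committed fix touches ResourceManager.java and 
may differ):

{code:java}
// Hypothetical sketch of the ordering hazard: recovery publishes to the
// timeline service before the timeline client has been started.
public class StartupOrderSketch {
  static Object timelineClient;  // null until the ATS service starts

  static void recoverAndPublish() {
    // In the HA case, transitionToActive triggers recovery first, so
    // this dereference happens while timelineClient is still null.
    System.out.println(timelineClient.toString());  // throws NPE
  }

  public static void main(String[] args) {
    recoverAndPublish();            // active services start first...
    timelineClient = new Object();  // ...ATS client is created too late
  }
}
{code}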






[jira] [Updated] (YARN-8182) [UI2] Proxy- Clicking on nodes under Nodes HeatMap gives 401 error

2018-04-19 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-8182:
--
Attachment: YARN-8182.001.patch

> [UI2] Proxy- Clicking on nodes under Nodes HeatMap gives 401 error
> --
>
> Key: YARN-8182
> URL: https://issues.apache.org/jira/browse/YARN-8182
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-8182.001.patch
>
>
> 1. Click on 'Nodes' Tab in the RM UI
> 2. Click on 'Nodes HeatMap' tab under Nodes
> 3. Click on the nodes available. It gives 401 error






[jira] [Commented] (YARN-8182) [UI2] Proxy- Clicking on nodes under Nodes HeatMap gives 401 error

2018-04-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444575#comment-16444575
 ] 

Sunil G commented on YARN-8182:
---

cc/ [~rohithsharma] for review.

> [UI2] Proxy- Clicking on nodes under Nodes HeatMap gives 401 error
> --
>
> Key: YARN-8182
> URL: https://issues.apache.org/jira/browse/YARN-8182
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-8182.001.patch
>
>
> 1. Click on 'Nodes' Tab in the RM UI
> 2. Click on 'Nodes HeatMap' tab under Nodes
> 3. Click on the nodes available. It gives 401 error






[jira] [Commented] (YARN-8137) Parallelize node addition in SLS

2018-04-19 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444574#comment-16444574
 ] 

Giovanni Matteo Fumarola commented on YARN-8137:


Thanks [~abmodi] for the explanation.

LGTM +1.

> Parallelize node addition in SLS
> 
>
> Key: YARN-8137
> URL: https://issues.apache.org/jira/browse/YARN-8137
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8137.001.patch, YARN-8137.002.patch
>
>
> Right now, nodes are added sequentially and it can take a long time if there 
> are large number of nodes. With this change nodes will be added in parallel 
> and thus reduce the node addition time.
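
A minimal sketch of the idea behind this change, assuming a hypothetical 
addNode helper (the actual SLS patch may be structured differently):

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: submit one registration task per node to a
// thread pool instead of adding the nodes one by one.
public class ParallelNodeAdditionSketch {
  static void addNode(String nodeId) {
    // stand-in for the per-node registration work in SLS
    System.out.println("added " + nodeId);
  }

  public static void main(String[] args) throws InterruptedException {
    List<String> nodes = Arrays.asList("node1", "node2", "node3");
    ExecutorService pool = Executors.newFixedThreadPool(
        Runtime.getRuntime().availableProcessors());
    for (String node : nodes) {
      pool.execute(() -> addNode(node));
    }
    pool.shutdown();
    pool.awaitTermination(10, TimeUnit.MINUTES);
  }
}
{code}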






[jira] [Created] (YARN-8182) [UI2] Proxy- Clicking on nodes under Nodes HeatMap gives 401 error

2018-04-19 Thread Sumana Sathish (JIRA)
Sumana Sathish created YARN-8182:


 Summary: [UI2] Proxy- Clicking on nodes under Nodes HeatMap gives 
401 error
 Key: YARN-8182
 URL: https://issues.apache.org/jira/browse/YARN-8182
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Sumana Sathish
Assignee: Sunil G


1. Click on 'Nodes' Tab in the RM UI
2. Click on 'Nodes HeatMap' tab under Nodes
3. Click on the nodes available. It gives 401 error






[jira] [Commented] (YARN-7974) Allow updating application tracking url after registration

2018-04-19 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1693#comment-1693
 ] 

Jonathan Hung commented on YARN-7974:
-

Hi [~leftnoteasy], do you mind looking at the latest patch? (005) I don't think 
the failed unit tests are related. Thanks!

> Allow updating application tracking url after registration
> --
>
> Key: YARN-7974
> URL: https://issues.apache.org/jira/browse/YARN-7974
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-7974.001.patch, YARN-7974.002.patch, 
> YARN-7974.003.patch, YARN-7974.004.patch, YARN-7974.005.patch
>
>
> Normally an application's tracking url is set on AM registration. We have a 
> use case for updating the tracking url after registration (e.g. the UI is 
> hosted on one of the containers).
> Approach is for AM to update tracking url on heartbeat to RM, and add related 
> API in AMRMClient.
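
A hedged usage sketch from the AM side, assuming the API lands roughly as 
proposed (the method name updateTrackingUrl and the host/URL values are 
assumptions, not a confirmed API):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class TrackingUrlUpdateSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new YarnConfiguration();
    AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
    rmClient.init(conf);
    rmClient.start();
    // Today the tracking URL is fixed at registration time.
    rmClient.registerApplicationMaster("am-host", 0, "http://initial-url");
    // Proposed addition (name assumed): once the UI container is up,
    // push the new URL to the RM on a subsequent heartbeat.
    rmClient.updateTrackingUrl("http://ui-container-host:8080/");
  }
}
{code}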






[jira] [Commented] (YARN-8181) Docker container run_time

2018-04-19 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1691#comment-1691
 ] 

Shane Kumpf commented on YARN-8181:
---

[~sajavadi] - Thanks for the report and your interest in this feature! The 
documentation is available here: 
[http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/DockerContainers.html]

Regarding the behavior above, the container completed successfully and very 
quickly. I expect the image isn't privileged/trusted (and the ENTRYPOINT/CMD in 
your Dockerfile is something like {{bash}}).

As a result of being a non-privileged/untrusted image, the MR launcher script 
is not executed in the container so the PI mapper/reducers never actually run 
here. Instead, whatever is set in the Dockerfile will be executed in the 
container. If the Dockerfile is setup to use a command that will not keep the 
container alive, the container completes very quickly, as you saw.

Can you try the following to add this image as privileged/trusted and rerun the 
pi job?
 # Add {{docker.privileged-containers.registries}} to 
{{container-executor.cfg}} under the {{[docker]}} section with the value 
{{local}} (if the configuration already exists, append {{local}} to the list).
 # Tag the {{hadoop-ubuntu}} image so that it is in the {{local}} namespace: 
{{docker tag hadoop-ubuntu:latest local/hadoop-ubuntu:latest}}.
 # Change {{YARN_CONTAINER_RUNTIME_DOCKER_IMAGE}}'s value to 
{{local/hadoop-ubuntu:latest}}. 

Let me know if that works and I'll open an issue to update the documentation 
with similar pointers.

> Docker container run_time
> -
>
> Key: YARN-8181
> URL: https://issues.apache.org/jira/browse/YARN-8181
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Seyyed Ahmad Javadi
>Priority: Major
>
> Hi All,
> I want to use the Docker container runtime but could not solve the problem I 
> am facing. I am following the guide below, and the NM log is as follows. I 
> cannot see any Docker containers being created. It works when I use the 
> default LCE. Please also find how I submit a job at the end as well.
> Do you have any guide on how I can make the Docker runtime work?
> Could you please let me know how I can use the LCE binary to make sure my 
> Docker setup is correct?
> I confirmed that "docker run" works fine. I really like this developing 
> feature and would like to contribute to it. Many thanks in advance.
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/DockerContainers.html]
> {code:java}
> NM LOG:
> ...
> 2018-04-19 11:49:24,568 INFO SecurityLogger.org.apache.hadoop.ipc.Server: 
> Auth successful for appattempt_1524151293356_0005_01 (auth:SIMPLE)
> 2018-04-19 11:49:24,580 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Start request for container_1524151293356_0005_01_01 by user ubuntu
> 2018-04-19 11:49:24,584 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Creating a new application reference for app application_1524151293356_0005
> 2018-04-19 11:49:24,584 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=ubuntu    
> IP=130.245.127.176    OPERATION=Start Container Request    
> TARGET=ContainerManageImpl    RESULT=SUCCESS    
> APPID=application_1524151293356_0005    
> CONTAINERID=container_1524151293356_0005_01_01
> 2018-04-19 11:49:24,585 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Application application_1524151293356_0005 transitioned from NEW to INITING
> 2018-04-19 11:49:24,585 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Adding container_1524151293356_0005_01_01 to application 
> application_1524151293356_0005
> 2018-04-19 11:49:24,585 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Application application_1524151293356_0005 transitioned from INITING to 
> RUNNING
> 2018-04-19 11:49:24,588 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1524151293356_0005_01_01 transitioned from NEW to 
> LOCALIZING
> 2018-04-19 11:49:24,588 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event CONTAINER_INIT for appId application_1524151293356_0005
> 2018-04-19 11:49:24,589 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
>  Created localizer for container_1524151293356_0005_01_01
> 2018-04-19 11:49:24,616 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
>  Writing credentials to the nmPrivate file 
> /tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/container_

[jira] [Commented] (YARN-8108) RM metrics rest API throws GSSException in kerberized environment

2018-04-19 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1683#comment-1683
 ] 

Eric Yang commented on YARN-8108:
-

There are three registrations of RMAuthenticationFilter, to the cluster, logs, 
and static contexts. There is one registration of SpnegoFilter, to the logs and 
static contexts. The /ws, /proxy, and /app path specs are blanketed by the 
default SpnegoFilter when the embedded proxyserver is enabled (because the 
proxyserver filter is initialized after the cluster context). I tried to reduce 
RMAuthenticationFilter to one registration and discovered that there is still a 
conflict between /proxy and /cluster. I tried to disable SpnegoFilter, but then 
/proxy becomes insecure.

In YARN-1553, webproxy was converted to use HttpServer2.Builder; this change 
picked up webproxy's initSpnego filter.
In YARN-1482, the code was modified to allow WebAppProxy to run inside the RM. 
HADOOP-10075 and HADOOP-10703 were written to apply handlers > context > filter 
> servlet globally. For existing code that has no handler applied, defineFilter 
applies to the context. This all works fine if there is only one context 
enclosing all Hadoop logic. However, if multiple webapps are put on the same 
server, older code that pre-dates HADOOP-10703 may have separate contexts 
initialized with separate AuthenticationFilter instances, which results in a 
"request is a replay" error.

A couple of problems with how the Hadoop code misuses the web application model:
- Servlet code is set up per context. The contexts should be RM, webproxy, and 
timelineserver; today they are cluster, logs, and static.
- YARN servlet logic is written as a Filter.
- Handler logic is written as a Filter.
- Filters are applied to a wildcard path on the assumption that this makes them 
global, which may not be true.

This problem appears to have existed since the creation of HttpServer2, and it 
is getting more complex to manage with the Jetty 9 upgrade. A few JIRAs, like 
YARN-2397, are attempts to band-aid the general code reuse and the specialized 
AuthenticationFilter. Each of the pieces evolved independently, and there were 
no conflicts until everything was put into the RM. One possible solution is to 
rewrite RMAuthenticationFilter and AuthenticationFilter to become an 
AuthenticationHandler that is applied globally. This would be the same as 
turning RMAuthenticationFilter into a global Filter.

I am not 100% sure that RMAuthenticationFilter should oversee all Kerberos 
login activity when the proxyserver is in embedded mode, but I am more inclined 
to make it so after analyzing the code. I am posting here to see if anyone more 
familiar with the YARN code base can shed some light on the best approach to 
address this issue.
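
As a rough illustration of the "global Filter" idea mentioned above (this is a 
sketch of the pattern only, not the proposed patch):

{code:java}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

// Hypothetical sketch: one filter mapped to /* in a single server
// context, so SPNEGO authentication runs exactly once per request
// instead of once per webapp context.
public class GlobalAuthFilterSketch implements Filter {
  @Override
  public void init(FilterConfig filterConfig) {
  }

  @Override
  public void doFilter(ServletRequest request, ServletResponse response,
      FilterChain chain) throws IOException, ServletException {
    // authenticate here (e.g. validate the SPNEGO token), then continue
    chain.doFilter(request, response);
  }

  @Override
  public void destroy() {
  }
}
{code}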




> RM metrics rest API throws GSSException in kerberized environment
> -
>
> Key: YARN-8108
> URL: https://issues.apache.org/jira/browse/YARN-8108
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Kshitij Badani
>Priority: Major
> Attachments: YARN-8108.001.patch
>
>
> Test is trying to pull up metrics data from SHS after kiniting as 'test_user'
> It is throwing GSSException as follows
> {code:java}
> b2b460b80713|RUNNING: curl --silent -k -X GET -D 
> /hwqe/hadoopqe/artifacts/tmp-94845 --negotiate -u : 
> http://rm_host:8088/proxy/application_1518674952153_0070/metrics/json2018-02-15
>  07:15:48,757|INFO|MainThread|machine.py:194 - 
> run()||GUID=fc5a3266-28f8-4eed-bae2-b2b460b80713|Exit Code: 0
> 2018-02-15 07:15:48,758|INFO|MainThread|spark.py:1757 - 
> getMetricsJsonData()|metrics:
> 
> 
> 
> Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> HTTP ERROR 403
> Problem accessing /proxy/application_1518674952153_0070/metrics/json. 
> Reason:
>  GSSException: Failure unspecified at GSS-API level (Mechanism level: 
> Request is a replay (34))
> 
> 
> {code}
> Root cause: the proxyserver on the RM can't be supported in a Kerberos-enabled 
> cluster because AuthenticationFilter is applied twice in the Hadoop code (once 
> in HttpServer2 for the RM, and another instance from AmFilterInitializer for 
> the proxy server). This will require code changes to the 
> hadoop-yarn-server-web-proxy project






[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-19 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1660#comment-1660
 ] 

Eric Badger commented on YARN-8064:
---

Patch 007 addresses checkstyle issues.

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Critical
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch, 
> YARN-8064.003.patch, YARN-8064.004.patch, YARN-8064.005.patch, 
> YARN-8064.006.patch, YARN-8064.007.patch
>
>
> Currently all of the docker command files are being put into 
> {{hadoop.tmp.dir}}, which doesn't get cleaned up. So, eventually all of the 
> inodes will fill up and no more tasks will be able to run






[jira] [Updated] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-19 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-8064:
--
Attachment: YARN-8064.007.patch

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Critical
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch, 
> YARN-8064.003.patch, YARN-8064.004.patch, YARN-8064.005.patch, 
> YARN-8064.006.patch, YARN-8064.007.patch
>
>
> Currently all of the docker command files are being put into 
> {{hadoop.tmp.dir}}, which doesn't get cleaned up. So, eventually all of the 
> inodes will fill up and no more tasks will be able to run






[jira] [Commented] (YARN-8005) Add unit tests for queue priority with dominant resource calculator  

2018-04-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1644#comment-1644
 ] 

Sunil G commented on YARN-8005:
---

[~Zian Chen], are the test case failures related?

> Add unit tests for queue priority with dominant resource calculator  
> -
>
> Key: YARN-8005
> URL: https://issues.apache.org/jira/browse/YARN-8005
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Zian Chen
>Priority: Critical
> Attachments: YARN-8005.001.patch, YARN-8005.002.patch
>
>







[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1640#comment-1640
 ] 

genericqa commented on YARN-8064:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 22 new + 26 unchanged - 0 fixed = 48 total (was 26) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 19s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8064 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919839/YARN-8064.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux eacc21f84702 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c6d7d3e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20403/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/20403/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yar

[jira] [Commented] (YARN-8179) Preemption does not happen due to natural_termination_factor when DRF is used

2018-04-19 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1638#comment-1638
 ] 

Eric Payne commented on YARN-8179:
--

bq. I want to make sure that it doesn't cause unnecessary preemption,
I'm convinced that this patch will not cause unnecessary preemption.

> Preemption does not happen due to natural_termination_factor when DRF is used
> -
>
> Key: YARN-8179
> URL: https://issues.apache.org/jira/browse/YARN-8179
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: kyungwan nam
>Assignee: kyungwan nam
>Priority: Major
> Attachments: YARN-8179.001.patch
>
>
> cluster
> * DominantResourceCalculator
> * QueueA : 50 (capacity) ~ 100 (max capacity)
> * QueueB : 50 (capacity) ~ 50 (max capacity)
> All resources have been allocated to QueueA (all vcores are allocated to 
> QueueA).
> If App1 is submitted to QueueB, the over-utilized QueueA should be preempted.
> But I hit a problem where preemption does not happen, which means the App1 AM 
> cannot be allocated.
> When App1 is submitted, the pending resources requested for the App1 AM would be 
> 
> So the vcores that need to be preempted for QueueB should be 1,
> but the amount can be 0 due to natural_termination_factor (default is 0.2).
> We should guarantee that the resources to preempt do not become 0 even after 
> applying natural_termination_factor.
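
A small, self-contained illustration of the rounding problem described above 
(the 1-vcore and 0.2 values come from the description; the actual preemption 
math in CapacityScheduler is more involved):

{code:java}
// Hypothetical illustration: with one pending vcore and the default
// natural_termination_factor of 0.2, the per-round amount to preempt
// truncates to zero, so preemption never makes progress.
public class NaturalTerminationSketch {
  public static void main(String[] args) {
    double naturalTerminationFactor = 0.2;  // default
    int pendingVcores = 1;                  // what the App1 AM needs
    int toPreempt =
        (int) Math.floor(pendingVcores * naturalTerminationFactor);
    System.out.println(toPreempt);          // prints 0
  }
}
{code}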






[jira] [Updated] (YARN-8151) Yarn RM Epoch should wrap around

2018-04-19 Thread Young Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Young Chen updated YARN-8151:
-
Attachment: YARN-8151.03.patch

> Yarn RM Epoch should wrap around
> 
>
> Key: YARN-8151
> URL: https://issues.apache.org/jira/browse/YARN-8151
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Major
> Attachments: YARN-8151.01.patch, YARN-8151.01.patch, 
> YARN-8151.02.patch, YARN-8151.03.patch
>
>







[jira] [Updated] (YARN-8181) Docker container run_time

2018-04-19 Thread Seyyed Ahmad Javadi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seyyed Ahmad Javadi updated YARN-8181:
--
Description: 
Hi All,

I want to use the Docker container runtime but could not solve the problem I am 
facing. I am following the guide below, and the NM log is as follows. I cannot 
see any Docker containers being created. It works when I use the default LCE. 
Please also find how I submit a job at the end as well.

Do you have any guide on how I can make the Docker runtime work?

Could you please let me know how I can use the LCE binary to make sure my 
Docker setup is correct?

I confirmed that "docker run" works fine. I really like this developing feature 
and would like to contribute to it. Many thanks in advance.

[https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/DockerContainers.html]
{code:java}
NM LOG:
...
2018-04-19 11:49:24,568 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth 
successful for appattempt_1524151293356_0005_01 (auth:SIMPLE)
2018-04-19 11:49:24,580 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
 Start request for container_1524151293356_0005_01_01 by user ubuntu
2018-04-19 11:49:24,584 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
 Creating a new application reference for app application_1524151293356_0005
2018-04-19 11:49:24,584 INFO 
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=ubuntu    
IP=130.245.127.176    OPERATION=Start Container Request    
TARGET=ContainerManageImpl    RESULT=SUCCESS    
APPID=application_1524151293356_0005    
CONTAINERID=container_1524151293356_0005_01_01
2018-04-19 11:49:24,585 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
 Application application_1524151293356_0005 transitioned from NEW to INITING
2018-04-19 11:49:24,585 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
 Adding container_1524151293356_0005_01_01 to application 
application_1524151293356_0005
2018-04-19 11:49:24,585 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
 Application application_1524151293356_0005 transitioned from INITING to RUNNING
2018-04-19 11:49:24,588 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Container container_1524151293356_0005_01_01 transitioned from NEW to 
LOCALIZING
2018-04-19 11:49:24,588 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
event CONTAINER_INIT for appId application_1524151293356_0005
2018-04-19 11:49:24,589 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
 Created localizer for container_1524151293356_0005_01_01
2018-04-19 11:49:24,616 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
 Writing credentials to the nmPrivate file 
/tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/container_1524151293356_0005_01_01.tokens
2018-04-19 11:49:28,090 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Container container_1524151293356_0005_01_01 transitioned from LOCALIZING 
to SCHEDULED
2018-04-19 11:49:28,090 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler:
 Starting container [container_1524151293356_0005_01_01]
2018-04-19 11:49:28,212 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Container container_1524151293356_0005_01_01 transitioned from SCHEDULED 
to RUNNING
2018-04-19 11:49:28,212 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
 Starting resource-monitoring for container_1524151293356_0005_01_01
2018-04-19 11:49:29,401 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
 Container container_1524151293356_0005_01_01 succeeded
2018-04-19 11:49:29,401 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Container container_1524151293356_0005_01_01 transitioned from RUNNING to 
EXITED_WITH_SUCCESS
2018-04-19 11:49:29,401 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
 Cleaning up container container_1524151293356_0005_01_01
2018-04-19 11:49:29,520 INFO 
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Removing 
Docker container : container_1524151293356_0005_01_01
2018-04-19 11:49:34,517 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
 Could not get pid for container_1524151293356_0005_01_01. Waited for 5000 
ms.
2018-04-19 11:49:34,517 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.Containe

[jira] [Created] (YARN-8181) Docker container run_time

2018-04-19 Thread Seyyed Ahmad Javadi (JIRA)
Seyyed Ahmad Javadi created YARN-8181:
-

 Summary: Docker container run_time
 Key: YARN-8181
 URL: https://issues.apache.org/jira/browse/YARN-8181
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Seyyed Ahmad Javadi


Hi All,

I want to use the Docker container runtime, but I have not been able to solve 
the problem I am facing. I am following the guide below, and the NM log is 
shown further down. I cannot see any Docker containers being created, while 
everything works when I use the default LCE. Please also find how I submit a 
job at the end.

Could you please let me know how I can use the LCE binary to verify that my 
Docker setup is correct?
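
For reference, here is a minimal sketch of the settings I believe are relevant, 
based on the guide linked below (the group name and paths are placeholders for 
my setup, not authoritative values):

{code:xml}
<!-- yarn-site.xml (illustrative): enable the LCE and allow the docker runtime -->
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.group</name>
  <value>hadoop</value>
</property>
<property>
  <name>yarn.nodemanager.runtime.linux.allowed-runtimes</name>
  <value>default,docker</value>
</property>
{code}
{code}
# container-executor.cfg (illustrative): the [docker] section must enable the module
yarn.nodemanager.linux-container-executor.group=hadoop
[docker]
module.enabled=true
docker.binary=/usr/bin/docker
{code}

If I understand the docs correctly, the binary itself can also validate its 
setup with {{container-executor --checksetup}}.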

I confirmed that "docker run" works fine on its own. I really like this 
feature under development and would like to contribute to it. Many thanks in 
advance.

https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/DockerContainers.html
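
A submission along the lines of the guide's example (the jar, image name, and 
arguments below are placeholders, not my exact command) selects the Docker 
runtime through environment variables:

{code}
vars="YARN_CONTAINER_RUNTIME_TYPE=docker,YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=hadoop-docker"
hadoop jar hadoop-mapreduce-examples.jar pi \
  -Dyarn.app.mapreduce.am.env=$vars \
  -Dmapreduce.map.env=$vars \
  -Dmapreduce.reduce.env=$vars \
  10 100
{code}

As far as I understand, when YARN_CONTAINER_RUNTIME_TYPE is not set to docker 
in a container's environment, the LCE falls back to the default runtime, which 
would match what I am seeing.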
{code:java}
NM LOG:
...
2018-04-19 11:49:24,568 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth 
successful for appattempt_1524151293356_0005_01 (auth:SIMPLE)
2018-04-19 11:49:24,580 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
 Start request for container_1524151293356_0005_01_01 by user ubuntu
2018-04-19 11:49:24,584 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
 Creating a new application reference for app application_1524151293356_0005
2018-04-19 11:49:24,584 INFO 
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=ubuntu    
IP=130.245.127.176    OPERATION=Start Container Request    
TARGET=ContainerManageImpl    RESULT=SUCCESS    
APPID=application_1524151293356_0005    
CONTAINERID=container_1524151293356_0005_01_01
2018-04-19 11:49:24,585 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
 Application application_1524151293356_0005 transitioned from NEW to INITING
2018-04-19 11:49:24,585 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
 Adding container_1524151293356_0005_01_01 to application 
application_1524151293356_0005
2018-04-19 11:49:24,585 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
 Application application_1524151293356_0005 transitioned from INITING to RUNNING
2018-04-19 11:49:24,588 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Container container_1524151293356_0005_01_01 transitioned from NEW to 
LOCALIZING
2018-04-19 11:49:24,588 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
event CONTAINER_INIT for appId application_1524151293356_0005
2018-04-19 11:49:24,589 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
 Created localizer for container_1524151293356_0005_01_01
2018-04-19 11:49:24,616 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
 Writing credentials to the nmPrivate file 
/tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/container_1524151293356_0005_01_01.tokens
2018-04-19 11:49:28,090 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Container container_1524151293356_0005_01_01 transitioned from LOCALIZING 
to SCHEDULED
2018-04-19 11:49:28,090 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler:
 Starting container [container_1524151293356_0005_01_01]
2018-04-19 11:49:28,212 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Container container_1524151293356_0005_01_01 transitioned from SCHEDULED 
to RUNNING
2018-04-19 11:49:28,212 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
 Starting resource-monitoring for container_1524151293356_0005_01_01
2018-04-19 11:49:29,401 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
 Container container_1524151293356_0005_01_01 succeeded
2018-04-19 11:49:29,401 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Container container_1524151293356_0005_01_01 transitioned from RUNNING to 
EXITED_WITH_SUCCESS
2018-04-19 11:49:29,401 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
 Cleaning up container container_1524151293356_0005_01_01
2018-04-19 11:49:29,520 INFO 
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Removing 
Docker container : container_1524151293356_0005_01_01
2018-04-19 11:49:34,517 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
 Could not get pid for container_1524151293356_0005_01_01. Waited for 5000 
ms.
2018-04-19 11:49:34,517 INFO 
org.apache.hadoop.yarn.server
{code}

[jira] [Commented] (YARN-5888) [UI2] Improve unit tests for new YARN UI

2018-04-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444343#comment-16444343
 ] 

Hudson commented on YARN-5888:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14028 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14028/])
YARN-5888. [UI2] Improve unit tests for new YARN UI. Contributed by (sunilg: 
rev c6d7d3eb059c7539db7d00586e181ec44da13557)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/adapters/yarn-container-log-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/models/yarn-node-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-nodes-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/models/yarn-container-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-queues-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/initializers/hosts-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/models/yarn-container-log-test.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/initializers/jquery-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/adapters/yarn-container-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/index.html
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/serializers/yarn-app-attempt-test.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/helpers/node-name.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/adapters/yarn-rm-node-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-app-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-node-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/models/yarn-rm-node-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/serializers/cluster-info-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/serializers/yarn-node-container-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/integration/components/breadcrumb-bar-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/models/cluster-info-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/models/yarn-user-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/adapters/yarn-node-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/initializers/env-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-queue/capacity-queue.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/helpers/node-name-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/serializers/yarn-app-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/controllers/yarn-app-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/models/yarn-app-attempt-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-apps-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/serializers/yarn-node-app-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/helpers/resolver.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/adapters/yarn-app-attempt-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/serializers/cluster-metric-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/serializers/yarn-container-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/adapters/yarn-node-app-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/serializers/yarn-rm-node-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/adapters/yarn-app-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/models/cluster-metric-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/models/yarn-node-app-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/models/yarn-node-container-test.js

[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-19 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444330#comment-16444330
 ] 

Eric Badger commented on YARN-8064:
---

[~shaneku...@gmail.com], patch 006 addressed your comments. I also fixed some 
checkstyle issues. I think this is ready for another round of review.

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Critical
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch, 
> YARN-8064.003.patch, YARN-8064.004.patch, YARN-8064.005.patch, 
> YARN-8064.006.patch
>
>
> Currently all of the Docker command files are being put into 
> {{hadoop.tmp.dir}}, which doesn't get cleaned up. So, eventually all of the 
> inodes will fill up and no more tasks will be able to run.
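
A minimal sketch of the direction such a fix could take (illustrative only, 
not the actual patch; the class and method names here are hypothetical) is to 
write each command file under the container's own work directory, which the 
NodeManager already deletes during container cleanup, instead of under 
{{hadoop.tmp.dir}}:

{code:java}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class DockerCmdFileSketch {
  /**
   * Hypothetical helper: writes the serialized docker command under the
   * container's work directory so the file is removed by normal container
   * cleanup, rather than accumulating forever under hadoop.tmp.dir.
   */
  static File writeCommandFile(File containerWorkDir, String containerId,
      String serializedDockerCommand) throws IOException {
    File cmdDir = new File(containerWorkDir, "docker");
    if (!cmdDir.isDirectory() && !cmdDir.mkdirs()) {
      throw new IOException("Cannot create " + cmdDir);
    }
    File cmdFile = new File(cmdDir, containerId + ".cmd");
    try (Writer out = new OutputStreamWriter(
        new FileOutputStream(cmdFile), StandardCharsets.UTF_8)) {
      out.write(serializedDockerCommand);
    }
    return cmdFile;
  }
}
{code}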



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


