[jira] [Commented] (YARN-8018) Yarn service: Add support for initiating service upgrade

2018-03-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412438#comment-16412438
 ] 

genericqa commented on YARN-8018:
---------------------------------

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 13s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 138 unchanged - 3 fixed = 140 total (was 141) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 34m 
32s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
46s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8018 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916023/YARN-8018.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 99ee56e26c76 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build 

[jira] [Commented] (YARN-7794) SLSRunner is not loading timeline service jars causing failure

2018-03-23 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412428#comment-16412428
 ] 

Yufei Gu commented on YARN-7794:


Thanks, [~rohithsharma]. Could you help review the patch?

> SLSRunner is not loading timeline service jars causing failure
> --------------------------------------------------------------
>
> Key: YARN-7794
> URL: https://issues.apache.org/jira/browse/YARN-7794
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 3.1.0
>Reporter: Sunil G
>Assignee: Yufei Gu
>Priority: Blocker
> Attachments: YARN-7794.001.patch
>
>
> {code:java}
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollector
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>         ... 13 more
> Exception in thread "pool-2-thread-390" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollector
>         at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:443)
>         at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:321)
>         at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:641){code}
> We are getting this error while running SLS. The new timelineservice jars 
> under share/hadoop/yarn are not loaded into the SLS JVM (verified from the 
> SLSRunner classpath).
> cc/ [~rohithsharma]
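
A quick way to confirm the gap is to probe for the class with the same classpath that SLSRunner uses; this is a hypothetical diagnostic, not part of the attached patch:

{code:java}
// Hypothetical diagnostic (not part of YARN-7794.001.patch): check whether
// the timelineservice classes are visible to the current JVM, mirroring the
// ClassNotFoundException in the stack trace above.
public class TimelineClasspathCheck {
  public static void main(String[] args) {
    String clazz = "org.apache.hadoop.yarn.server.timelineservice"
        + ".collector.TimelineCollector";
    try {
      Class.forName(clazz);
      System.out.println("OK: " + clazz + " is on the classpath");
    } catch (ClassNotFoundException e) {
      System.out.println("MISSING: " + clazz
          + " - the jars under share/hadoop/yarn are not on the classpath");
    }
  }
}
{code}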






[jira] [Commented] (YARN-6495) check docker container's exit code when writing to cgroup task files

2018-03-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412374#comment-16412374
 ] 

genericqa commented on YARN-6495:
---------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
32m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 31s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
22s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-6495 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12864001/YARN-6495.001.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux cc539b76ac8c 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 24f75e0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20072/testReport/ |
| Max. process+thread count | 411 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20072/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> check docker container's exit code when writing to cgroup task files
> ---------------------------------------------------------------------
>
> Key: YARN-6495
> URL: https://issues.apache.org/jira/browse/YARN-6495
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Jaeboo Jeong
>Assignee: Jaeboo Jeong
>Priority: Major
> Attachments: YARN-6495.001.patch
>
>
> If I execute a simple command like date in a docker container, the 
> application fails to complete successfully.
> For example:
> {code}
> $ yarn  jar 
> 

[jira] [Updated] (YARN-8018) Yarn service: Add support for initiating service upgrade

2018-03-23 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8018:

Attachment: YARN-8018.006.patch

> Yarn service: Add support for initiating service upgrade
> --------------------------------------------------------
>
> Key: YARN-8018
> URL: https://issues.apache.org/jira/browse/YARN-8018
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8018.001.patch, YARN-8018.002.patch, 
> YARN-8018.003.patch, YARN-8018.004.patch, YARN-8018.005.patch, 
> YARN-8018.006.patch
>
>
> Add support for initiating service upgrade which includes the following main 
> changes:
>  # Service API to initiate upgrade
>  # Persist service version on hdfs
>  # Start the upgraded version of service
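
As a rough sketch of the contract implied by the three items above, with every name below an assumption for illustration rather than a signature from the attached patches:

{code:java}
// Illustrative sketch only; these names are assumptions, not the API
// introduced by YARN-8018.
interface UpgradeCapableServiceClient {
  // 1. Service API call to initiate the upgrade of a running service,
  // 2. which persists the new service version/spec on HDFS, so that
  // 3. the upgraded version of the service can be started from it.
  void initiateUpgrade(String serviceName, String targetVersion)
      throws java.io.IOException;
}
{code}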






[jira] [Commented] (YARN-8072) RM log is getting flooded with MemoryPlacementConstraintManager info logs

2018-03-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412367#comment-16412367
 ] 

genericqa commented on YARN-8072:
---------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 36s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 61m  
9s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8072 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916012/YARN-8072.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8fee82cd87eb 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 647058e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20070/testReport/ |
| Max. process+thread count | 848 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20070/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   

[jira] [Commented] (YARN-8018) Yarn service: Add support for initiating service upgrade

2018-03-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412362#comment-16412362
 ] 

genericqa commented on YARN-8018:
---------------------------------

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 20s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 136 unchanged - 3 fixed = 139 total (was 139) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 28m 
10s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
18s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8018 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916011/YARN-8018.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 1f0af8bca4f6 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build 

[jira] [Commented] (YARN-7707) [GPG] Policy generator framework

2018-03-23 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412351#comment-16412351
 ] 

Botong Huang commented on YARN-7707:


Committed to YARN-7402. Thanks [~youchen] for the patch and 
[~giovanni.fumarola] for the review! 

> [GPG] Policy generator framework
> --------------------------------
>
> Key: YARN-7707
> URL: https://issues.apache.org/jira/browse/YARN-7707
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Young Chen
>Priority: Major
>  Labels: federation, gpg
> Attachments: YARN-7707-YARN-7402.01.patch, 
> YARN-7707-YARN-7402.02.patch, YARN-7707-YARN-7402.03.patch, 
> YARN-7707-YARN-7402.04.patch, YARN-7707-YARN-7402.05.patch, 
> YARN-7707-YARN-7402.06.patch, YARN-7707-YARN-7402.07.patch, 
> YARN-7707-YARN-7402.08.patch, YARN-7707-YARN-7402.09.patch, 
> YARN-7707-YARN-7402.10.patch, YARN-7707-YARN-7402.11.patch
>
>
> This JIRA tracks the development of a generic framework for querying 
> sub-clusters for metrics, running policies, and updating them in the 
> FederationStateStore.
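
A minimal sketch of the loop such a framework implies, with every name below hypothetical rather than taken from the patches:

{code:java}
// Hypothetical shape of the policy generator loop; none of these types
// are the actual YARN-7707 classes.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class PolicyGeneratorSketch {
  interface MetricsSource {
    Map<String, Long> fetchMetrics(String subClusterId);
  }
  interface GlobalPolicy {
    byte[] generate(Map<String, Map<String, Long>> perSubClusterMetrics);
  }
  interface StateStoreWriter {
    void storePolicy(byte[] policyBytes);
  }

  void runOnce(List<String> subClusters, MetricsSource source,
      GlobalPolicy policy, StateStoreWriter store) {
    Map<String, Map<String, Long>> all = new HashMap<>();
    for (String sc : subClusters) {         // 1. query sub-clusters for metrics
      all.put(sc, source.fetchMetrics(sc));
    }
    byte[] updated = policy.generate(all);  // 2. run the policy
    store.storePolicy(updated);             // 3. update the FederationStateStore
  }
}
{code}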






[jira] [Commented] (YARN-8070) Yarn Service API site doc broken due to unwanted character in YarnServiceAPI.md

2018-03-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412349#comment-16412349
 ] 

Hudson commented on YARN-8070:
------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13873 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13873/])
YARN-8070. Yarn Service API site doc broken due to unwanted character in 
(wangda: rev 24f75e097a67ae90d2ff382bb2f2559caa02f32f)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/YarnServiceAPI.md


> Yarn Service API site doc broken due to unwanted character in 
> YarnServiceAPI.md
> -------------------------------------------------------------------------------
>
> Key: YARN-8070
> URL: https://issues.apache.org/jira/browse/YARN-8070
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: site
>Affects Versions: 3.1.0
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Blocker
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8070.001.patch
>
>
> The YARN Service API HTML page is not rendering properly in the YARN site 
> documentation due to an unnecessary # character in YarnServiceAPI.md. If 
> possible, this should be fixed before we release 3.1.0, since it is the 
> first release for YARN Service.






[jira] [Commented] (YARN-8016) Refine PlacementRule interface and add a app-name queue mapping rule as an example

2018-03-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412348#comment-16412348
 ] 

Hudson commented on YARN-8016:
------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13873 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13873/])
YARN-8016. Refine PlacementRule interface and add a app-name queue (wangda: rev 
a90471b3e65326cc18ed31fe21aef654833b5883)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/QueuePath.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/QueuePlacementRuleUtils.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestPlacementManager.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/QueueMappingEntity.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerQueueMappingFactory.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/AppNameMappingPlacementRule.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/UserGroupMappingPlacementRule.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestAppNameMappingPlacementRule.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/PlacementRule.java


> Refine PlacementRule interface and add a app-name queue mapping rule as an 
> example
> ----------------------------------------------------------------------------------
>
> Key: YARN-8016
> URL: https://issues.apache.org/jira/browse/YARN-8016
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8016.001.patch, YARN-8016.002.patch, 
> YARN-8016.003.patch, YARN-8016.004.patch, YARN-8016.005.patch
>
>
> After YARN-3635/YARN-6689, PlacementRule became a common interface which can 
> be used by the scheduler and dynamically updated by it according to configs. 
> Some work remains:
> - There's no way to initialize a PlacementRule.
> - There's no example of a PlacementRule except the user-group mapping one.
> This JIRA is targeted at refining the PlacementRule interface and adding 
> another PlacementRule example, as sketched below.
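
A minimal sketch of the initializable rule contract those two bullets point at; the names below are illustrative, and the real interface is defined by the committed patch:

{code:java}
// Illustrative sketch; the actual PlacementRule API lives in the
// resourcemanager placement package listed in the commit above.
abstract class PlacementRuleSketch {
  // Initialization hook driven by scheduler configuration, closing the
  // "no way to initialize PlacementRule" gap.
  abstract boolean initialize(Object schedulerContext) throws Exception;

  // Resolve a queue for an application, e.g. by app name for an
  // app-name mapping rule; null lets the next rule in the chain decide.
  abstract String getPlacementForApp(String appName, String user)
      throws Exception;
}
{code}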






[jira] [Updated] (YARN-6434) When setting environment variables, can't use comma for a list of value in key = value pairs.

2018-03-23 Thread Jaeboo Jeong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaeboo Jeong updated YARN-6434:
-------------------------------
Attachment: YARN-6434-trunk.001.patch

> When setting environment variables, can't use comma for a list of value in 
> key = value pairs.
> ---------------------------------------------------------------------------------------------
>
> Key: YARN-6434
> URL: https://issues.apache.org/jira/browse/YARN-6434
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jaeboo Jeong
>Priority: Major
> Attachments: YARN-6434-trunk.001.patch, YARN-6434.001.patch
>
>
> We can set environment variables using yarn.app.mapreduce.am.env, 
> mapreduce.map.env, mapreduce.reduce.env.
> There is no problem if we use key=value pairs like X=Y or X=$Y.
> However, if we want to set a key to a list of values (e.g. X=Y,Z), we can't.
> This is related to YARN-4595.
> The attached patch is based on YARN-3768.
> With the patch, we can set environment variables like below.
> {code}
> mapreduce.map.env="YARN_CONTAINER_RUNTIME_TYPE=docker,YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=hadoop-docker,YARN_CONTAINER_RUNTIME_DOCKER_LOCAL_RESOURCE_MOUNTS=\"/dir1:/targetdir1,/dir2:/targetdir2\""
> {code}
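
The essence of the change is splitting the env string on commas only when they fall outside quotes; a minimal sketch of that behavior (illustrative, not the patch's code):

{code:java}
// Minimal sketch of quote-aware comma splitting (illustrative, not the
// patch's code): splitEnvPairs("A=1,B=\"x,y\",C=2") -> [A=1, B=x,y, C=2]
class EnvVarSplitSketch {
  static java.util.List<String> splitEnvPairs(String env) {
    java.util.List<String> pairs = new java.util.ArrayList<>();
    StringBuilder cur = new StringBuilder();
    boolean inQuotes = false;
    for (int i = 0; i < env.length(); i++) {
      char c = env.charAt(i);
      if (c == '"') {
        inQuotes = !inQuotes;        // quotes protect commas in a value
      } else if (c == ',' && !inQuotes) {
        pairs.add(cur.toString());   // a comma outside quotes ends a pair
        cur.setLength(0);
      } else {
        cur.append(c);
      }
    }
    pairs.add(cur.toString());
    return pairs;
  }
}
{code}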






[jira] [Commented] (YARN-6495) check docker container's exit code when writing to cgroup task files

2018-03-23 Thread Jaeboo Jeong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412333#comment-16412333
 ] 

Jaeboo Jeong commented on YARN-6495:


The problem seems to be that the docker command and the cgroup write are 
executed independently, even though the two tasks are not actually independent. 
I think it would be better to check the docker command's exit code while 
writing to the cgroup tasks file.

> check docker container's exit code when writing to cgroup task files
> ---------------------------------------------------------------------
>
> Key: YARN-6495
> URL: https://issues.apache.org/jira/browse/YARN-6495
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Jaeboo Jeong
>Assignee: Jaeboo Jeong
>Priority: Major
> Attachments: YARN-6495.001.patch
>
>
> If I execute a simple command like date in a docker container, the 
> application fails to complete successfully.
> For example:
> {code}
> $ yarn  jar 
> $HADOOP_HOME/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.1.jar
>  -shell_env YARN_CONTAINER_RUNTIME_TYPE=docker -shell_env 
> YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=hadoop-docker -shell_command "date" -jar 
> $HADOOP_HOME/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.1.jar
>  -num_containers 1 -timeout 360
> …
> 17/04/12 00:16:40 INFO distributedshell.Client: Application did finished 
> unsuccessfully. YarnState=FINISHED, DSFinalStatus=FAILED. Breaking monitoring 
> loop
> 17/04/12 00:16:40 ERROR distributedshell.Client: Application failed to 
> complete successfully
> {code}
> The error log is like below.
> {code}
> ...
> Failed to write pid to file 
> /cgroup_parent/cpu/hadoop-yarn/container_/tasks - No such process
> ...
> {code}
> When writing the pid to the cgroup tasks file, container-executor doesn't 
> check the docker container's status.
> If the container finished very quickly, we can't write the pid to the cgroup 
> tasks file, which by itself is not a problem.
> So container-executor needs to check the docker container's exit code while 
> writing the pid to the cgroup tasks file, as sketched below.
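
container-executor itself is C; for readability, the proposed control flow looks roughly like the Java sketch below (all names illustrative), where a failed write only counts as an error if the container did not exit cleanly:

{code:java}
// Java rendering of the proposed control flow for readability only; the
// real container-executor is C, and the exit-code lookup is an assumption
// (e.g. via docker inspect on the container's State.ExitCode).
class CgroupWriteSketch {
  static boolean writePidToCgroupTasks(java.nio.file.Path tasksFile, long pid,
      java.util.function.LongSupplier dockerExitCode) {
    try {
      java.nio.file.Files.write(tasksFile, (pid + "\n")
          .getBytes(java.nio.charset.StandardCharsets.UTF_8));
      return true;                    // normal case: pid registered in cgroup
    } catch (java.io.IOException e) {
      // "No such process": the container may simply have exited already.
      if (dockerExitCode.getAsLong() == 0) {
        return true;                  // short-lived container finished cleanly
      }
      return false;                   // genuine failure; surface to the NM
    }
  }
}
{code}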






[jira] [Commented] (YARN-8072) RM log is getting flooded with MemoryPlacementConstraintManager info logs

2018-03-23 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412322#comment-16412322
 ] 

Wangda Tan commented on YARN-8072:
----------------------------------

Thanks [~Zian Chen], +1, pending Jenkins.

> RM log is getting flooded with MemoryPlacementConstraintManager info logs
> -------------------------------------------------------------------------
>
> Key: YARN-8072
> URL: https://issues.apache.org/jira/browse/YARN-8072
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-8072.001.patch
>
>
> {quote}Below logs are printed every few seconds or so in RM log
>  
> 2018-03-07 05:31:11,858 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.
> 2018-03-07 05:31:11,858 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.
> 2018-03-07 05:31:11,858 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.
> 2018-03-07 05:31:11,859 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.
> 2018-03-07 05:31:11,859 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.
> 2018-03-07 05:31:11,859 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.{quote}






[jira] [Updated] (YARN-8072) RM log is getting flooded with MemoryPlacementConstraintManager info logs

2018-03-23 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-8072:
-----------------------------
Target Version/s: 3.1.0
Priority: Critical  (was: Major)

> RM log is getting flooded with MemoryPlacementConstraintManager info logs
> -------------------------------------------------------------------------
>
> Key: YARN-8072
> URL: https://issues.apache.org/jira/browse/YARN-8072
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Critical
> Attachments: YARN-8072.001.patch
>
>
> {quote}Below logs are printed every few seconds or so in RM log
>  
> 2018-03-07 05:31:11,858 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.
> 2018-03-07 05:31:11,858 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.
> 2018-03-07 05:31:11,858 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.
> 2018-03-07 05:31:11,859 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.
> 2018-03-07 05:31:11,859 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.
> 2018-03-07 05:31:11,859 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.{quote}






[jira] [Commented] (YARN-8072) RM log is getting flooded with MemoryPlacementConstraintManager info logs

2018-03-23 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412269#comment-16412269
 ] 

Zian Chen commented on YARN-8072:
---------------------------------

Hi [~gsaha], [~leftnoteasy], this issue basically changes the logging level 
from INFO to DEBUG to avoid flooding the RM log. Could you help review the 
patch and give comments? Thanks!
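
For reference, the shape of such a change is a one-liner of this kind (illustrative; the actual class and logger are in the attached patch):

{code:java}
// Illustrative form of the fix: demote the per-lookup message from INFO
// to DEBUG so routine lookups no longer flood the RM log.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class ConstraintLookupLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(ConstraintLookupLogging.class);

  void onUnregisteredApp(String appId) {
    LOG.debug("Application {} is not registered in the Placement"
        + " Constraint Manager.", appId);
  }
}
{code}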

> RM log is getting flooded with MemoryPlacementConstraintManager info logs
> -------------------------------------------------------------------------
>
> Key: YARN-8072
> URL: https://issues.apache.org/jira/browse/YARN-8072
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-8072.001.patch
>
>
> {quote}Below logs are printed every few seconds or so in RM log
>  
> 2018-03-07 05:31:11,858 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.
> 2018-03-07 05:31:11,858 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.
> 2018-03-07 05:31:11,858 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.
> 2018-03-07 05:31:11,859 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.
> 2018-03-07 05:31:11,859 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.
> 2018-03-07 05:31:11,859 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.{quote}






[jira] [Updated] (YARN-8072) RM log is getting flooded with MemoryPlacementConstraintManager info logs

2018-03-23 Thread Zian Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zian Chen updated YARN-8072:

Attachment: YARN-8072.001.patch

> RM log is getting flooded with MemoryPlacementConstraintManager info logs
> -------------------------------------------------------------------------
>
> Key: YARN-8072
> URL: https://issues.apache.org/jira/browse/YARN-8072
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-8072.001.patch
>
>
> {quote}Below logs are printed every few seconds or so in RM log
>  
> 2018-03-07 05:31:11,858 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.
> 2018-03-07 05:31:11,858 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.
> 2018-03-07 05:31:11,858 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.
> 2018-03-07 05:31:11,859 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.
> 2018-03-07 05:31:11,859 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.
> 2018-03-07 05:31:11,859 INFO  constraint.MemoryPlacementConstraintManager 
> (MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
> application_1520376039316_0001 is not registered in the Placement Constraint 
> Manager.{quote}






[jira] [Created] (YARN-8072) RM log is getting flooded with MemoryPlacementConstraintManager info logs

2018-03-23 Thread Zian Chen (JIRA)
Zian Chen created YARN-8072:
----------------------------

 Summary: RM log is getting flooded with 
MemoryPlacementConstraintManager info logs
 Key: YARN-8072
 URL: https://issues.apache.org/jira/browse/YARN-8072
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Zian Chen
Assignee: Zian Chen


{quote}Below logs are printed every few seconds or so in RM log
 
2018-03-07 05:31:11,858 INFO  constraint.MemoryPlacementConstraintManager 
(MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
application_1520376039316_0001 is not registered in the Placement Constraint 
Manager.
2018-03-07 05:31:11,858 INFO  constraint.MemoryPlacementConstraintManager 
(MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
application_1520376039316_0001 is not registered in the Placement Constraint 
Manager.
2018-03-07 05:31:11,858 INFO  constraint.MemoryPlacementConstraintManager 
(MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
application_1520376039316_0001 is not registered in the Placement Constraint 
Manager.
2018-03-07 05:31:11,859 INFO  constraint.MemoryPlacementConstraintManager 
(MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
application_1520376039316_0001 is not registered in the Placement Constraint 
Manager.
2018-03-07 05:31:11,859 INFO  constraint.MemoryPlacementConstraintManager 
(MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
application_1520376039316_0001 is not registered in the Placement Constraint 
Manager.
2018-03-07 05:31:11,859 INFO  constraint.MemoryPlacementConstraintManager 
(MemoryPlacementConstraintManager.java:getConstraint(216)) - Application 
application_1520376039316_0001 is not registered in the Placement Constraint 
Manager.{quote}






[jira] [Updated] (YARN-8018) Yarn service: Add support for initiating service upgrade

2018-03-23 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8018:

Attachment: YARN-8018.005.patch

> Yarn service: Add support for initiating service upgrade
> --------------------------------------------------------
>
> Key: YARN-8018
> URL: https://issues.apache.org/jira/browse/YARN-8018
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8018.001.patch, YARN-8018.002.patch, 
> YARN-8018.003.patch, YARN-8018.004.patch, YARN-8018.005.patch
>
>
> Add support for initiating service upgrade which includes the following main 
> changes:
>  # Service API to initiate upgrade
>  # Persist service version on hdfs
>  # Start the upgraded version of service






[jira] [Commented] (YARN-7767) Excessive logging in scheduler

2018-03-23 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412257#comment-16412257
 ] 

Zian Chen commented on YARN-7767:
---------------------------------

Closed since the fix is already committed by Sunil.

> Excessive logging in scheduler 
> -------------------------------
>
> Key: YARN-7767
> URL: https://issues.apache.org/jira/browse/YARN-7767
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Zian Chen
>Priority: Major
>
> Below logs are printed every few seconds or so in RM log 
> {code}
> 2018-01-17 21:17:57,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:18:12,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:18:27,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:18:42,077 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:18:57,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:19:12,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:19:27,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:19:42,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:19:57,077 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> {code}






[jira] [Resolved] (YARN-7767) Excessive logging in scheduler

2018-03-23 Thread Zian Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zian Chen resolved YARN-7767.
-----------------------------
Resolution: Fixed

> Excessive logging in scheduler 
> -------------------------------
>
> Key: YARN-7767
> URL: https://issues.apache.org/jira/browse/YARN-7767
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Zian Chen
>Priority: Major
>
> Below logs are printed every few seconds or so in RM log 
> {code}
> 2018-01-17 21:17:57,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:18:12,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:18:27,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:18:42,077 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:18:57,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:19:12,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:19:27,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:19:42,076 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> 2018-01-17 21:19:57,077 INFO  
> capacity.QueuePriorityContainerCandidateSelector 
> (QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) 
> - Initializing priority preemption directed graph:
> {code}






[jira] [Commented] (YARN-1151) Ability to configure auxiliary services from HDFS-based JAR files

2018-03-23 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412253#comment-16412253
 ] 

Vinod Kumar Vavilapalli commented on YARN-1151:
-----------------------------------------------

From the POC patch, it looks like the NM will repeatedly download the tar again 
and again on every restart; am I reading that right? We shouldn't be doing 
that. Maybe we should get the file-status, verify the checksum, and skip the 
download if it is the same, exactly like dist-cache. These are some of the 
reasons why we should simply reuse the core ResourceLocalizationService for 
localizing this.

We should also figure out the right behavior if HDFS is down. What should the 
NM do on a fresh start if HDFS is down? What should it do on a restart if HDFS 
is down?
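
The check-before-download idea can be sketched with the public FileSystem checksum API (illustrative; the actual suggestion above is to reuse ResourceLocalizationService, which already handles this):

{code:java}
// Sketch of a dist-cache-style freshness check before re-downloading the
// aux-service tarball; illustrative, not the POC patch's code.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class AuxServiceJarCache {
  /** Returns true when the cached copy still matches the remote file. */
  static boolean upToDate(Configuration conf, Path remote,
      FileChecksum cachedChecksum) throws java.io.IOException {
    FileSystem fs = remote.getFileSystem(conf);
    FileChecksum current = fs.getFileChecksum(remote);
    // Skip the download when the checksum is unchanged; otherwise
    // re-localize, exactly like the dist-cache behavior described above.
    return current != null && current.equals(cachedChecksum);
  }
}
{code}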

> Ability to configure auxiliary services from HDFS-based JAR files
> -----------------------------------------------------------------
>
> Key: YARN-1151
> URL: https://issues.apache.org/jira/browse/YARN-1151
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.1.0-beta, 2.9.0
>Reporter: john lilley
>Assignee: Xuan Gong
>Priority: Major
>  Labels: auxiliary-service, yarn
> Attachments: YARN-1151.1.patch, YARN-1151.branch-2.poc.patch, 
> [YARN-1151] [Design] Configure auxiliary services from HDFS-based JAR 
> files.pdf
>
>
> I would like to install an auxiliary service in Hadoop YARN without actually 
> installing files/services on every node in the system.  Discussions on the 
> user@ list indicate that this is not easily done.  The reason we want an 
> auxiliary service is that our application has some persistent-data components 
> that are not appropriate for HDFS.  In fact, they are somewhat analogous to 
> the mapper output of MapReduce's shuffle, which is what led me to 
> auxiliary-services in the first place.  It would be much easier if we could 
> just place our service's JARs in HDFS.






[jira] [Commented] (YARN-8068) Upgrading apps from Hadoop 2.7 based clients to 2.8+ cause NPE in app timeline publish

2018-03-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412201#comment-16412201
 ] 

genericqa commented on YARN-8068:
---------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 58s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 63m 
49s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}116m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8068 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12915963/YARN-8068.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 41a7781117d9 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 647058e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20067/testReport/ |
| Max. process+thread count | 827 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20067/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (YARN-8018) Yarn service: Add support for initiating service upgrade

2018-03-23 Thread Chandni Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412179#comment-16412179
 ] 

Chandni Singh commented on YARN-8018:
-

{quote}How come we are not using the clusterName as a folder in the created 
Path?
{quote}
My mistake. I will also add a unit test for this which verifies the path.
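
A minimal sketch of what such a test could look like (the {{coreFileSystem}} fixture and the assertions are hypothetical, not the actual patch):
{code:java}
  // Hypothetical sketch: the upgrade dir path should embed both the
  // cluster name and the version.
  @Test
  public void testBuildClusterUpgradeDirPath() {
    Path path = coreFileSystem.buildClusterUpgradeDirPath("my-service", "v2");
    assertTrue(path.toString().contains("my-service"));
    assertTrue(path.toString().endsWith("v2"));
  }
{code}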

> Yarn service: Add support for initiating service upgrade
> 
>
> Key: YARN-8018
> URL: https://issues.apache.org/jira/browse/YARN-8018
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8018.001.patch, YARN-8018.002.patch, 
> YARN-8018.003.patch, YARN-8018.004.patch
>
>
> Add support for initiating service upgrade which includes the following main 
> changes:
>  # Service API to initiate upgrade
>  # Persist service version on hdfs
>  # Start the upgraded version of service



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8071) Provide Spark-like API for setting Environment Variables to enable vars with commas

2018-03-23 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412177#comment-16412177
 ] 

Jason Lowe commented on YARN-8071:
--

Is this a YARN JIRA?  I would think the changes would be in MapReduce to fix 
the handling of mapreduce.map.env (maybe also Common if we have a utility 
method on Configuration to collect properties with a specified prefix).
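
For illustration, a minimal sketch of such a utility (a hypothetical method, not an existing Hadoop API; it assumes only that {{Configuration}} is iterable over its key/value pairs):
{code:java}
// Hypothetical sketch: collect all properties under a prefix, e.g.
// mapreduce.map.env.MOUNTS=/tmp/foo,/tmp/bar becomes MOUNTS=/tmp/foo,/tmp/bar.
public static Map<String, String> getPropsWithPrefix(Configuration conf,
    String prefix) {
  Map<String, String> vars = new HashMap<>();
  for (Map.Entry<String, String> entry : conf) {
    String key = entry.getKey();
    if (key.startsWith(prefix)) {
      vars.put(key.substring(prefix.length()), entry.getValue());
    }
  }
  return vars;
}
{code}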


> Provide Spark-like API for setting Environment Variables to enable vars with 
> commas
> ---
>
> Key: YARN-8071
> URL: https://issues.apache.org/jira/browse/YARN-8071
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
>
> YARN-6830 describes a problem where environment variables that contain commas 
> cannot be specified via {{-Dmapreduce.map.env}}.
> For example:
> {{-Dmapreduce.map.env="MODE=bar,IMAGE_NAME=foo,MOUNTS=/tmp/foo,/tmp/bar"}}
> will set {{MOUNTS}} to {{/tmp/foo}}
> In that Jira, [~aw] suggested that we change the API to provide a way to 
> specify environment variables individually, the same way that Spark does.
> {quote}Rather than fight with a regex why not redefine the API instead?
>  
> -Dmapreduce.map.env.MODE=bar
>  -Dmapreduce.map.env.IMAGE_NAME=foo
>  -Dmapreduce.map.env.MOUNTS=/tmp/foo,/tmp/bar
> ...
> e.g, mapreduce.map.env.[foo]=bar gets turned into foo=bar
> This greatly simplifies the input validation needed and makes it clear what 
> is actually being defined.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6830) Support quoted strings for environment variables

2018-03-23 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412171#comment-16412171
 ] 

Jim Brennan commented on YARN-6830:
---

I filed [YARN-8071] to track the work on [~aw]'s solution.


> Support quoted strings for environment variables
> 
>
> Key: YARN-6830
> URL: https://issues.apache.org/jira/browse/YARN-6830
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Shane Kumpf
>Assignee: Jim Brennan
>Priority: Major
> Attachments: YARN-6830.001.patch, YARN-6830.002.patch, 
> YARN-6830.003.patch, YARN-6830.004.patch
>
>
> There are cases where it is necessary to allow for quoted string literals 
> within environment variables values when passed via the yarn command line 
> interface.
> For example, consider the follow environment variables for a MR map task.
> {{MODE=bar}}
> {{IMAGE_NAME=foo}}
> {{MOUNTS=/tmp/foo,/tmp/bar}}
> When running the MR job, these environment variables are supplied as a comma 
> delimited string.
> {{-Dmapreduce.map.env="MODE=bar,IMAGE_NAME=foo,MOUNTS=/tmp/foo,/tmp/bar"}}
> In this case, {{MOUNTS}} will be parsed and added to the task environment as 
> {{MOUNTS=/tmp/foo}}. Any attempts to quote the embedded comma separated value 
> results in quote characters becoming part of the value, and parsing still 
> breaks down at the comma.
> This issue is to allow for quoting the comma separated value (escaped double 
> or single quote). This was mentioned on YARN-4595 and will impact YARN-5534 
> as well.
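
As a standalone illustration of the parsing problem (a hypothetical snippet, not the actual Apps.setEnvFromInputString code), a naive comma split produces exactly this breakage:
{code:java}
// A comma-delimited env string, split naively on commas:
String envString = "MODE=bar,IMAGE_NAME=foo,MOUNTS=/tmp/foo,/tmp/bar";
for (String kv : envString.split(",")) {
  System.out.println(kv);
}
// Prints:
//   MODE=bar
//   IMAGE_NAME=foo
//   MOUNTS=/tmp/foo   <- value truncated at the embedded comma
//   /tmp/bar          <- stray token with no KEY= part
{code}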



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8070) Yarn Service API site doc broken due to unwanted character in YarnServiceAPI.md

2018-03-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412144#comment-16412144
 ] 

genericqa commented on YARN-8070:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
36m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8070 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12915965/YARN-8070.001.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux b7e9b2d636ab 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 647058e |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 418 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20068/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Yarn Service API site doc broken due to unwanted character in 
> YarnServiceAPI.md
> ---
>
> Key: YARN-8070
> URL: https://issues.apache.org/jira/browse/YARN-8070
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: site
>Affects Versions: 3.1.0
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Blocker
> Attachments: YARN-8070.001.patch
>
>
> The YARN Service API html page is not rendering properly in the yarn site 
> documentation due to unnecessary # character in YarnServiceAPI.md. If 
> possible, this should be fixed before we release 3.1.0 since it is the first 
> release for YARN Service.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8018) Yarn service: Add support for initiating service upgrade

2018-03-23 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412141#comment-16412141
 ] 

Gour Saha commented on YARN-8018:
-

Thanks [~csingh] for the new patch. Please address the checkstyle issues 
identified by Jenkins. A few more comments below.
{quote}None of the existing ServiceState are in present continuous including 
FLEX so I called it UPGRADE. I don't have a strong opinion here. I can change 
it to UPGRADING.
{quote}
We are actually going to rename ServiceState.FLEX to FLEXING (see the last few 
comments in YARN-7781), so let's call it UPGRADING. I know what you are 
thinking: a better (non-present-continuous) name might have been IN_UPGRADE (or 
something even better), but let's avoid using two words for a single enum. Is 
there a one-word, non-present-continuous alternative we can use? If it makes 
you feel better, before STABLE we had the state RUNNING. We moved from RUNNING 
to STABLE not because we wanted to avoid present-continuous words, but because 
we needed a state that means more than just that the containers are RUNNING.
h5. ServiceScheduler.java

Add a space after //
{code:java}
  //This encapsulates the app with methods to upgrade the app.
{code}
There are a few other places where I found no space after //. Can you scan and 
fix those too?
h5. CoreFileSystem.java
{code:java}
  public Path buildClusterUpgradeDirPath(String clusterName, String version) {
Preconditions.checkNotNull(clusterName);
Preconditions.checkNotNull(version);
return new Path(getBaseApplicationPath(),
YarnServiceConstants.UPGRADE_DIR + "/" +
YarnServiceConstants.SERVICES_DIRECTORY + "/" + version);
  }
{code}
How come we are not using the clusterName as a folder in the created Path?
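
For reference, a sketch of what this comment appears to ask for (the exact directory layout is an assumption, not the committed fix):
{code:java}
  public Path buildClusterUpgradeDirPath(String clusterName, String version) {
    Preconditions.checkNotNull(clusterName);
    Preconditions.checkNotNull(version);
    // Include the clusterName so that upgrade dirs of different services
    // cannot collide (layout is illustrative).
    return new Path(getBaseApplicationPath(),
        YarnServiceConstants.SERVICES_DIRECTORY + "/" + clusterName + "/" +
            YarnServiceConstants.UPGRADE_DIR + "/" + version);
  }
{code}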

> Yarn service: Add support for initiating service upgrade
> 
>
> Key: YARN-8018
> URL: https://issues.apache.org/jira/browse/YARN-8018
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8018.001.patch, YARN-8018.002.patch, 
> YARN-8018.003.patch, YARN-8018.004.patch
>
>
> Add support for initiating service upgrade which includes the following main 
> changes:
>  # Service API to initiate upgrade
>  # Persist service version on hdfs
>  # Start the upgraded version of service



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8032) Yarn service should expose failuresValidityInterval to users and use it for launching containers

2018-03-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412112#comment-16412112
 ] 

Hudson commented on YARN-8032:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13872 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13872/])
YARN-8032.  Added ability to configure failure validity interval for (eyang: 
rev 647058efc0c7a4442b3e64b4d743df1a589f26bc)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/Configurations.md
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/conf/YarnServiceConf.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/containerlaunch/AbstractLauncher.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/AbstractProviderService.java


> Yarn service should expose failuresValidityInterval to users and use it for 
> launching containers
> 
>
> Key: YARN-8032
> URL: https://issues.apache.org/jira/browse/YARN-8032
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8032.001.patch, YARN-8032.002.patch, 
> YARN-8032.003.patch
>
>
> With YARN-5015 the support for sliding window retry policy was added. Yarn 
> service should expose it via the api for the users to take advantage of it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8020) when DRF is used, preemption does not trigger due to incorrect idealAssigned

2018-03-23 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412106#comment-16412106
 ] 

Wangda Tan commented on YARN-8020:
--

Thanks [~eepayne]/[~kyungwan nam] for the comments. 

I haven't checked the above comments in much detail yet, but I believe we have 
some issues in the existing DRF preemption logic. I plan to spend some time 
adding unit tests to YARN-8004 in the next several weeks.

> when DRF is used, preemption does not trigger due to incorrect idealAssigned
> 
>
> Key: YARN-8020
> URL: https://issues.apache.org/jira/browse/YARN-8020
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: kyungwan nam
>Priority: Major
>
> I’ve found that Inter-Queue Preemption does not work.
> It happens when DRF is used and an application with a large number of vcores 
> is submitted.
> IMHO, idealAssigned can be set incorrectly by the following code.
> {code}
> // This function "accepts" all the resources it can (pending) and return
> // the unused ones
> Resource offer(Resource avail, ResourceCalculator rc,
> Resource clusterResource, boolean considersReservedResource) {
>   Resource absMaxCapIdealAssignedDelta = Resources.componentwiseMax(
>   Resources.subtract(getMax(), idealAssigned),
>   Resource.newInstance(0, 0));
>   // accepted = min{avail,
>   //   max - assigned,
>   //   current + pending - assigned,
>   //   # Make sure a queue will not get more than max of its
>   //   # used/guaranteed, this is to make sure preemption won't
>   //   # happen if all active queues are beyond their guaranteed
>   //   # This is for leaf queue only.
>   //   max(guaranteed, used) - assigned}
>   // remain = avail - accepted
>   Resource accepted = Resources.min(rc, clusterResource,
>   absMaxCapIdealAssignedDelta,
>   Resources.min(rc, clusterResource, avail, Resources
>   /*
>* When we're using FifoPreemptionSelector (considerReservedResource
>* = false).
>*
>* We should deduct reserved resource from pending to avoid 
> excessive
>* preemption:
>*
>* For example, if an under-utilized queue has used = reserved = 20.
>* Preemption policy will try to preempt 20 containers (which is not
>* satisfied) from different hosts.
>*
>* In FifoPreemptionSelector, there's no guarantee that preempted
>* resource can be used by pending request, so policy will preempt
>* resources repeatedly.
>*/
>   .subtract(Resources.add(getUsed(),
>   (considersReservedResource ? pending : pendingDeductReserved)),
>   idealAssigned)));
> {code}
> let’s say,
> * cluster resource : 
> * idealAssigned(assigned): 
> * avail: 
> * current: 
> * pending: 
> current + pending - assigned: 
> min ( avail, (current + pending - assigned) ) : 
> accepted: 
> as a result, idealAssigned will be , which does not 
> trigger preemption.
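
Purely as a hypothetical illustration of the DRF semantics involved (all numbers below are invented, not the values elided from this description): under DominantResourceCalculator, Resources.min() compares two vectors by dominant share and returns one of them wholesale, rather than taking a per-component minimum.
{code:java}
ResourceCalculator rc = new DominantResourceCalculator();
Resource cluster = Resource.newInstance(100 * 1024, 100); // <100 GB, 100 vcores>
Resource avail   = Resource.newInstance(80 * 1024, 2);    // dominant share 0.80 (memory)
Resource demand  = Resource.newInstance(10 * 1024, 90);   // dominant share 0.90 (vcores)

// 0.80 < 0.90, so min() returns avail = <80 GB, 2 vcores> even though only
// 10 GB of memory is demanded; a componentwise minimum would be
// <10 GB, 2 vcores>. Whole-vector min/max like this is one way idealAssigned
// can drift from what a per-component computation would give.
Resource accepted = Resources.min(rc, cluster, avail, demand);
{code}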



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5590) Add support for increase and decrease of container resources with resource profiles

2018-03-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412075#comment-16412075
 ] 

genericqa commented on YARN-5590:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 32 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 64m 
49s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 29m 
41s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
38s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-5590 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12915943/YARN-5590.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 3607a2c2b324 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6e31a09 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| whitespace | 

[jira] [Commented] (YARN-6629) NPE occurred when container allocation proposal is applied but its resource requests are removed before

2018-03-23 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412067#comment-16412067
 ] 

Wangda Tan commented on YARN-6629:
--

Thanks [~Tao Yang], 

It might be nicer to return a boolean from FiCaSchedulerApp#apply as well,
since currently CS checks:
{code:java}
if (app.accept(cluster, request, updatePending)) {
  app.apply(cluster, request, updatePending);
  LOG.info("Allocation proposal accepted");
  isSuccess = true;
} else {
  LOG.info("Failed to accept allocation proposal");
}
{code}

For this case, it will print "proposal accepted" but silently reject the
proposal, so we can change the check to:
{code:java}
// Sketch: requires apply() to return whether the proposal was applied.
if (app.accept(cluster, request, updatePending)
    && app.apply(cluster, request, updatePending)) {
  LOG.info("Allocation proposal accepted");
  isSuccess = true;
} else {
  LOG.info("Failed to accept allocation proposal");
}
{code} 

> NPE occurred when container allocation proposal is applied but its resource 
> requests are removed before
> ---
>
> Key: YARN-6629
> URL: https://issues.apache.org/jira/browse/YARN-6629
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Attachments: YARN-6629.001.patch, YARN-6629.002.patch, 
> YARN-6629.003.patch, YARN-6629.004.patch
>
>
> I wrote a test case to reproduce another problem on branch-2 and found a new 
> NPE error. Log: 
> {code}
> FATAL event.EventDispatcher (EventDispatcher.java:run(75)) - Error in 
> handling event type NODE_UPDATE to the Event Dispatcher
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.allocate(AppSchedulingInfo.java:446)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:516)
> at 
> org.apache.hadoop.yarn.client.TestNegativePendingResource$1.answer(TestNegativePendingResource.java:225)
> at 
> org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:31)
> at org.mockito.internal.MockHandler.handle(MockHandler.java:97)
> at 
> org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp$$EnhancerByMockitoWithCGLIB$$29eb8afc.apply()
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2396)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.submitResourceCommitRequest(CapacityScheduler.java:2281)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1247)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainerOnSingleNode(CapacityScheduler.java:1236)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1325)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1112)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.nodeUpdate(CapacityScheduler.java:987)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1367)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:143)
> at 
> org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run(EventDispatcher.java:66)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Reproduce this error in chronological order:
> 1. AM started and requested 1 container with schedulerRequestKey#1 : 
> ApplicationMasterService#allocate -->  CapacityScheduler#allocate --> 
> SchedulerApplicationAttempt#updateResourceRequests --> 
> AppSchedulingInfo#updateResourceRequests 
> Added schedulerRequestKey#1 into schedulerKeyToPlacementSets
> 2. Scheduler allocated 1 container for this request and accepted the proposal
> 3. AM removed this request
> ApplicationMasterService#allocate -->  CapacityScheduler#allocate --> 
> SchedulerApplicationAttempt#updateResourceRequests --> 
> AppSchedulingInfo#updateResourceRequests --> 
> AppSchedulingInfo#addToPlacementSets --> 
> AppSchedulingInfo#updatePendingResources
> Removed schedulerRequestKey#1 from schedulerKeyToPlacementSets
> 4. Scheduler applied this proposal
> CapacityScheduler#tryCommit --> FiCaSchedulerApp#apply --> 
> AppSchedulingInfo#allocate 
> Throw NPE when called 
> 

[jira] [Commented] (YARN-8071) Provide Spark-like API for setting Environment Variables to enable vars with commas

2018-03-23 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412063#comment-16412063
 ] 

Jim Brennan commented on YARN-8071:
---

Alternate solutions to this problem (with patches) are provided in the 
following Jiras:

[YARN-8029] - Addresses the specific issue for docker yarn mount paths by 
changing the delimiter for those environment variables from comma to semicolon.

[YARN-6830] - Modifies the regex used by Apps.setEnvFromInputString() to allow 
commas in a quoted string. This requires that users of those variables remove 
the surrounding quotes before using them.

See discussion in [YARN-6830] about the alternatives.

> Provide Spark-like API for setting Environment Variables to enable vars with 
> commas
> ---
>
> Key: YARN-8071
> URL: https://issues.apache.org/jira/browse/YARN-8071
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
>
> YARN-6830 describes a problem where environment variables that contain commas 
> cannot be specified via {{-Dmapreduce.map.env}}.
> For example:
> {{-Dmapreduce.map.env="MODE=bar,IMAGE_NAME=foo,MOUNTS=/tmp/foo,/tmp/bar"}}
> will set {{MOUNTS}} to {{/tmp/foo}}
> In that Jira, [~aw] suggested that we change the API to provide a way to 
> specify environment variables individually, the same way that Spark does.
> {quote}Rather than fight with a regex why not redefine the API instead?
>  
> -Dmapreduce.map.env.MODE=bar
>  -Dmapreduce.map.env.IMAGE_NAME=foo
>  -Dmapreduce.map.env.MOUNTS=/tmp/foo,/tmp/bar
> ...
> e.g, mapreduce.map.env.[foo]=bar gets turned into foo=bar
> This greatly simplifies the input validation needed and makes it clear what 
> is actually being defined.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-03-23 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412060#comment-16412060
 ] 

Wangda Tan commented on YARN-7574:
--

Thanks [~suma.shivaprasad], mind checking the javadoc issue?

> Add support for Node Labels on Auto Created Leaf Queue Template
> ---
>
> Key: YARN-7574
> URL: https://issues.apache.org/jira/browse/YARN-7574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7574.1.patch, YARN-7574.2.patch, YARN-7574.3.patch, 
> YARN-7574.4.patch, YARN-7574.5.patch, YARN-7574.6.patch
>
>
> YARN-7473 adds support for auto created leaf queues to inherit node label 
> capacities from parent queues. However, there is no support in the leaf queue 
> template for allowing different configured capacities for different node labels. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8032) Yarn service should expose failuresValidityInterval to users and use it for launching containers

2018-03-23 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8032:

Affects Version/s: 3.1.0
 Target Version/s: 3.2.0
Fix Version/s: 3.2.0

> Yarn service should expose failuresValidityInterval to users and use it for 
> launching containers
> 
>
> Key: YARN-8032
> URL: https://issues.apache.org/jira/browse/YARN-8032
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8032.001.patch, YARN-8032.002.patch, 
> YARN-8032.003.patch
>
>
> With YARN-5015 the support for sliding window retry policy was added. Yarn 
> service should expose it via the api for the users to take advantage of it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8070) Yarn Service API site doc broken due to unwanted character in YarnServiceAPI.md

2018-03-23 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412055#comment-16412055
 ] 

Wangda Tan commented on YARN-8070:
--

Thanks [~gsaha], will pick it up if we have an RC1. 

+1 to the patch, will commit it shortly.

> Yarn Service API site doc broken due to unwanted character in 
> YarnServiceAPI.md
> ---
>
> Key: YARN-8070
> URL: https://issues.apache.org/jira/browse/YARN-8070
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: site
>Affects Versions: 3.1.0
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Blocker
> Attachments: YARN-8070.001.patch
>
>
> The YARN Service API html page is not rendering properly in the yarn site 
> documentation due to unnecessary # character in YarnServiceAPI.md. If 
> possible, this should be fixed before we release 3.1.0 since it is the first 
> release for YARN Service.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8071) Provide Spark-like API for setting Environment Variables to enable vars with commas

2018-03-23 Thread Jim Brennan (JIRA)
Jim Brennan created YARN-8071:
-

 Summary: Provide Spark-like API for setting Environment Variables 
to enable vars with commas
 Key: YARN-8071
 URL: https://issues.apache.org/jira/browse/YARN-8071
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Affects Versions: 3.0.0
Reporter: Jim Brennan
Assignee: Jim Brennan


YARN-6830 describes a problem where environment variables that contain commas 
cannot be specified via {{-Dmapreduce.map.env}}.

For example:

{{-Dmapreduce.map.env="MODE=bar,IMAGE_NAME=foo,MOUNTS=/tmp/foo,/tmp/bar"}}

will set {{MOUNTS}} to {{/tmp/foo}}

In that Jira, [~aw] suggested that we change the API to provide a way to 
specify environment variables individually, the same way that Spark does.
{quote}Rather than fight with a regex why not redefine the API instead?

 

-Dmapreduce.map.env.MODE=bar
 -Dmapreduce.map.env.IMAGE_NAME=foo
 -Dmapreduce.map.env.MOUNTS=/tmp/foo,/tmp/bar

...

e.g, mapreduce.map.env.[foo]=bar gets turned into foo=bar

This greatly simplifies the input validation needed and makes it clear what is 
actually being defined.
{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-03-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412033#comment-16412033
 ] 

genericqa commented on YARN-7574:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 60 new + 147 unchanged - 21 fixed = 207 total (was 168) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 61m 
51s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7574 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12915947/YARN-7574.6.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 51e94dd470ce 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6e31a09 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20066/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| javadoc | 

[jira] [Commented] (YARN-8032) Yarn service should expose failuresValidityInterval to users and use it for launching containers

2018-03-23 Thread Chandni Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412032#comment-16412032
 ] 

Chandni Singh commented on YARN-8032:
-

Thanks [~eyang] for reviewing and committing this patch.

> Yarn service should expose failuresValidityInterval to users and use it for 
> launching containers
> 
>
> Key: YARN-8032
> URL: https://issues.apache.org/jira/browse/YARN-8032
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8032.001.patch, YARN-8032.002.patch, 
> YARN-8032.003.patch
>
>
> With YARN-5015 the support for sliding window retry policy was added. Yarn 
> service should expose it via the api for the users to take advantage of it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8016) Refine PlacementRule interface and add a app-name queue mapping rule as an example

2018-03-23 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412020#comment-16412020
 ] 

Wangda Tan commented on YARN-8016:
--

+1, thanks [~Zian Chen], will commit the patch shortly if no objections.

> Refine PlacementRule interface and add a app-name queue mapping rule as an 
> example
> --
>
> Key: YARN-8016
> URL: https://issues.apache.org/jira/browse/YARN-8016
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-8016.001.patch, YARN-8016.002.patch, 
> YARN-8016.003.patch, YARN-8016.004.patch, YARN-8016.005.patch
>
>
> After YARN-3635/YARN-6689, PlacementRule becomes a common interface which can 
> be used by the scheduler and dynamically updated by the scheduler according to 
> configs. Some work remains: 
> - There's no way to initialize PlacementRule.
> - No example of PlacementRule except the user-group mapping one.
> This JIRA is targeted at refining the PlacementRule interface and adding 
> another PlacementRule example.
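
Purely as an illustration of the second bullet above (the class and method names below are invented, not the actual PlacementRule API), an app-name mapping rule could look roughly like:
{code:java}
// Invented sketch: map an application to a queue by its name, falling back
// to a default queue. The real PlacementRule signatures differ.
public class AppNameMappingRule {
  private final Map<String, String> appNameToQueue = new HashMap<>();

  public void initialize(Map<String, String> mappings) {
    appNameToQueue.putAll(mappings); // e.g. parsed from scheduler config
  }

  public String getQueue(String appName) {
    return appNameToQueue.getOrDefault(appName, "default");
  }
}
{code}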



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6830) Support quoted strings for environment variables

2018-03-23 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412003#comment-16412003
 ] 

Jim Brennan commented on YARN-6830:
---

[~jlowe], sounds good.  I will file a new Jira for the new approach and link it 
here. I am going to leave this one as-is on the off-chance we decide to come 
back to it at some point.

> Support quoted strings for environment variables
> 
>
> Key: YARN-6830
> URL: https://issues.apache.org/jira/browse/YARN-6830
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Shane Kumpf
>Assignee: Jim Brennan
>Priority: Major
> Attachments: YARN-6830.001.patch, YARN-6830.002.patch, 
> YARN-6830.003.patch, YARN-6830.004.patch
>
>
> There are cases where it is necessary to allow for quoted string literals 
> within environment variables values when passed via the yarn command line 
> interface.
> For example, consider the follow environment variables for a MR map task.
> {{MODE=bar}}
> {{IMAGE_NAME=foo}}
> {{MOUNTS=/tmp/foo,/tmp/bar}}
> When running the MR job, these environment variables are supplied as a comma 
> delimited string.
> {{-Dmapreduce.map.env="MODE=bar,IMAGE_NAME=foo,MOUNTS=/tmp/foo,/tmp/bar"}}
> In this case, {{MOUNTS}} will be parsed and added to the task environment as 
> {{MOUNTS=/tmp/foo}}. Any attempts to quote the embedded comma separated value 
> results in quote characters becoming part of the value, and parsing still 
> breaks down at the comma.
> This issue is to allow for quoting the comma separated value (escaped double 
> or single quote). This was mentioned on YARN-4595 and will impact YARN-5534 
> as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8053) Add hadoop-distcp in exclusion in hbase-server dependencies for timelineservice-hbase packages.

2018-03-23 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412001#comment-16412001
 ] 

Steve Loughran commented on YARN-8053:
--

We have a loop: Hadoop depends on HBase, and HBase depends on Hadoop. So HBase 
is being built against an older version of Hadoop and its transitive 
dependencies (guava). It's less visible if you build everything yourself in 
one go, but the problem exists.

W.r.t. unwinding: does the hbase connector actually need to be built into 
hadoop? Is there a way to create an extra project which pulls in both?

> Add hadoop-distcp in exclusion in hbase-server dependencies for 
> timelineservice-hbase packages.
> ---
>
> Key: YARN-8053
> URL: https://issues.apache.org/jira/browse/YARN-8053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Fix For: 3.1.0, yarn-7055, 3.2.0
>
> Attachments: YARN-8053-YARN-7055.01.patch, YARN-8053.01.patch
>
>
> It is observed that changing the version number of hadoop leads to build 
> failures because of dependency resolution conflicts for HBase-2 compilation. 
> The error below shows that hbase-server has a dependency on hadoop-distcp. 
> We also need to add hadoop-distcp to the exclusion list. 
> {code}
> 07:42:36 2018/03/19 14:42:36 INFO: [ERROR] Failed to execute goal on 
> project hadoop-yarn-server-timelineservice-hbase-client: Could not resolve 
> dependencies for project 
> org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-client:jar:3.3.0-SNAPSHOT:
>  Could not find artifact org.apache.hadoop:hadoop-distcp:jar:3.3.0-SNAPSHOT 
> in public 
> {code}
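
For illustration, the kind of exclusion the summary describes might look like this in the timelineservice-hbase poms (a sketch, not the actual patch):
{code:xml}
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-server</artifactId>
  <exclusions>
    <!-- keep hbase-server's hadoop-distcp (resolved against a different
         hadoop version) off the timelineservice-hbase classpath -->
    <exclusion>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-distcp</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}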



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8062) yarn rmadmin -getGroups returns group from which the user has been removed

2018-03-23 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412000#comment-16412000
 ] 

Wangda Tan commented on YARN-8062:
--

+1, thanks [~sunilg], will commit by this afternoon if no objections.

> yarn rmadmin -getGroups returns group from which the user has been removed
> --
>
> Key: YARN-8062
> URL: https://issues.apache.org/jira/browse/YARN-8062
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-8062.001.patch, YARN-8062.002.patch, 
> YARN-8062.003.patch, YARN-8062.004.patch
>
>
> {code:title= adding group hrt_yarn_rmadmin_test}
> sudo su - -c "groupadd hrt_yarn_rmadmin_test" root
> {code}
> {Code:title=adding user hrt_yarn_rmadmin_test to group hrt_yarn_rmadmin_test}
> sudo su - -c "useradd hrt_yarn_rmadmin_test -g hrt_yarn_rmadmin_test" root
> {Code}
> {Code:title= adding group hrt_yarn_rmadmin_test_group2 }
> sudo su - -c "groupadd hrt_yarn_rmadmin_test_group2" root
> {Code}
> {Code:title=adding user hrt_yarn_rmadmin_test to group 
> hrt_yarn_rmadmin_test_group2}
> sudo su - -c "usermod -a -G hrt_yarn_rmadmin_test_group2 
> hrt_yarn_rmadmin_test" root
> {Code}
> Refresh and getGroups
> {code}
> yarn rmadmin -refreshUserToGroupsMappings
> /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin -getGroups 
> hrt_yarn_rmadmin_test
> hrt_yarn_rmadmin_test : hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2
> {code}
> Delete group hrt_yarn_rmadmin_test_group2 from user hrt_yarn_rmadmin_test  
> and refresh and do getGroups.
> We can still see group hrt_yarn_rmadmin_test_group2
> {code}
> sudo su - -c "gpasswd -d hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2" 
> root
> {code}
> Removing user hrt_yarn_rmadmin_test from group hrt_yarn_rmadmin_test_group2
> {code}
> bash-4.2$  /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin 
> -refreshUserToGroupsMappings
> /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin -getGroups 
> hrt_yarn_rmadmin_test
> hrt_yarn_rmadmin_test : hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8032) Yarn service should expose failuresValidityInterval to users and use it for launching containers

2018-03-23 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411961#comment-16411961
 ] 

Eric Yang commented on YARN-8032:
-

The property needs to be specified at top level instead of at component level.

{code}
{
  "name": "sleeper-service",
  "kerberos_principal" : {
"principal_name" : "hbase/_h...@example.com",
"keytab" : "file:///etc/security/keytabs/hbase.service.keytab"
  },
  "version": "1",
  "configuration": {
"properties": {
  "yarn.service.container-failure.validity-interval-ms": 3
}
  },
  "components" :
  [
{
  "name": "ping",
  "number_of_containers": 2,
  "artifact": {
"id": "hadoop/centos:latest",
"type": "DOCKER"
  },
  "launch_command": "sleep,90",
  "resource": {
"cpus": 1,
"memory": "256"
  },
  "configuration": {
"env": {
},
"properties": {
  "docker.network": "host",
}
  }
}
  ]
}
{code}

The code and documentation seem to be in place.  I will commit this shortly.
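
A usage sketch for context (the file name is hypothetical; assumes the YARN service CLI):
{code}
# Save the spec above as sleeper-service.json, then launch it:
yarn app -launch sleeper-service sleeper-service.json

# Inspect the service; the configured properties appear in the status output:
yarn app -status sleeper-service
{code}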

> Yarn service should expose failuresValidityInterval to users and use it for 
> launching containers
> 
>
> Key: YARN-8032
> URL: https://issues.apache.org/jira/browse/YARN-8032
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8032.001.patch, YARN-8032.002.patch, 
> YARN-8032.003.patch
>
>
> With YARN-5015 the support for sliding window retry policy was added. Yarn 
> service should expose it via the api for the users to take advantage of it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8062) yarn rmadmin -getGroups returns group from which the user has been removed

2018-03-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411927#comment-16411927
 ] 

genericqa commented on YARN-8062:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 36 unchanged - 0 fixed = 37 total (was 36) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 69m 
39s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}128m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8062 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12915933/YARN-8062.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c723168a76ec 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6e31a09 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20064/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20064/testReport/ |
| Max. process+thread count | 894 (vs. 

[jira] [Commented] (YARN-8032) Yarn service should expose failuresValidityInterval to users and use it for launching containers

2018-03-23 Thread Chandni Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411925#comment-16411925
 ] 

Chandni Singh commented on YARN-8032:
-

[~eyang] I can see the property being set in my tests. Here are the log 
statements:
{code}
2018-03-23 19:10:13,076 [pool-6-thread-1] INFO containerlaunch.AbstractLauncher - launcher retry context validity interval 3
2018-03-23 19:10:13,086 [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO impl.NMClientAsyncImpl - Processing Event EventType: START_CONTAINER for Container container_e11_1521831276303_0002_01_02
2018-03-23 19:10:13,147 [pool-6-thread-2] INFO containerlaunch.AbstractLauncher - launcher retry context validity interval 3
{code}




> Yarn service should expose failuresValidityInterval to users and use it for 
> launching containers
> 
>
> Key: YARN-8032
> URL: https://issues.apache.org/jira/browse/YARN-8032
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8032.001.patch, YARN-8032.002.patch, 
> YARN-8032.003.patch
>
>
> With YARN-5015, support for a sliding window retry policy was added. Yarn 
> service should expose it via the API for users to take advantage of it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8070) Yarn Service API site doc broken due to unwanted character in YarnServiceAPI.md

2018-03-23 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411916#comment-16411916
 ] 

Gour Saha commented on YARN-8070:
-

[~leftnoteasy], if possible, please incorporate this patch in the 3.1.0 release.

> Yarn Service API site doc broken due to unwanted character in 
> YarnServiceAPI.md
> ---
>
> Key: YARN-8070
> URL: https://issues.apache.org/jira/browse/YARN-8070
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: site
>Affects Versions: 3.1.0
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Blocker
> Attachments: YARN-8070.001.patch
>
>
> The YARN Service API HTML page is not rendering properly in the YARN site 
> documentation due to an unnecessary # character in YarnServiceAPI.md. If 
> possible, this should be fixed before we release 3.1.0, since it is the first 
> release for YARN Service.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8070) Yarn Service API site doc broken due to unwanted character in YarnServiceAPI.md

2018-03-23 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-8070:

Attachment: YARN-8070.001.patch

> Yarn Service API site doc broken due to unwanted character in 
> YarnServiceAPI.md
> ---
>
> Key: YARN-8070
> URL: https://issues.apache.org/jira/browse/YARN-8070
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: site
>Affects Versions: 3.1.0
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Blocker
> Attachments: YARN-8070.001.patch
>
>
> The YARN Service API HTML page is not rendering properly in the YARN site 
> documentation due to an unnecessary # character in YarnServiceAPI.md. If 
> possible, this should be fixed before we release 3.1.0, since it is the first 
> release for YARN Service.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8070) Yarn Service API site doc broken due to unwanted character in YarnServiceAPI.md

2018-03-23 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha reassigned YARN-8070:
---

Assignee: Gour Saha

> Yarn Service API site doc broken due to unwanted character in 
> YarnServiceAPI.md
> ---
>
> Key: YARN-8070
> URL: https://issues.apache.org/jira/browse/YARN-8070
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: site
>Affects Versions: 3.1.0
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Blocker
>
> The YARN Service API HTML page is not rendering properly in the YARN site 
> documentation due to an unnecessary # character in YarnServiceAPI.md. If 
> possible, this should be fixed before we release 3.1.0, since it is the first 
> release for YARN Service.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8070) Yarn Service API site doc broken due to unwanted character in YarnServiceAPI.md

2018-03-23 Thread Gour Saha (JIRA)
Gour Saha created YARN-8070:
---

 Summary: Yarn Service API site doc broken due to unwanted 
character in YarnServiceAPI.md
 Key: YARN-8070
 URL: https://issues.apache.org/jira/browse/YARN-8070
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: site
Affects Versions: 3.1.0
Reporter: Gour Saha


The YARN Service API HTML page is not rendering properly in the YARN site 
documentation due to an unnecessary # character in YarnServiceAPI.md. If 
possible, this should be fixed before we release 3.1.0, since it is the first 
release for YARN Service.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8066) RM/UI2: doughnut chart mis-rendering

2018-03-23 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411893#comment-16411893
 ] 

Sunil G commented on YARN-8066:
---

This looks browser dependent. cc [~Sreenath], thoughts?

> RM/UI2: doughnut chart mis-rendering
> 
>
> Key: YARN-8066
> URL: https://issues.apache.org/jira/browse/YARN-8066
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Zoltan Haindrich
>Priority: Major
> Attachments: more_applications.png, screenshot.png
>
>
> On the overview page 
> (http://ctr-e138-1518143905142-140659-01-02.hwx.site:8088/ui2/#/cluster-overview), when:
> * there are 2 subqueues under root (llap and default),
> * the llap queue is running 1 application, and
> * there are no other active applications,
> the doughnut chart is rendered incorrectly, as if something has eaten part of 
> it.
> The same thing can be observed when other applications are active: during the 
> refresh animation the incorrect drawing appears a number of times, but at the 
> end it stabilizes into the correct rendering.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8068) Upgrading apps from Hadoop 2.7 based clients to 2.8+ cause NPE in app timeline publish

2018-03-23 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411876#comment-16411876
 ] 

Sunil G commented on YARN-8068:
---

[~leftnoteasy], could you please help check this?

Usually, for apps submitted without any priority, RMAppManager ensures the 
priority is set to 0. However, for apps submitted by pre-2.8 clients, 
submissionContext.getPriority will be null and RMApp will hold a null 
reference for applicationPriority. This can cause issues in ATS event 
publishing, the UI, etc. Hence we can default the priority to 0 in RMAppImpl 
if the submission context doesn't carry an app priority.

cc/ [~rohithsharma]
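
A minimal sketch of that default (the exact placement inside RMAppImpl is an 
assumption):

{code:java}
// Sketch: fall back to priority 0 when the client left it unset, so ATS
// publishing and the UI never dereference a null applicationPriority.
// Priority is org.apache.hadoop.yarn.api.records.Priority.
Priority submitted = submissionContext.getPriority();
this.applicationPriority =
    (submitted != null) ? submitted : Priority.newInstance(0);
{code}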

> Upgrading apps from Hadoop 2.7 based clients to 2.8+ cause NPE in app 
> timeline publish
> --
>
> Key: YARN-8068
> URL: https://issues.apache.org/jira/browse/YARN-8068
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.8.3
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-8068.001.patch
>
>
> TimelineServiceV1Publisher#appCreated will cause an NPE, as we use it like 
> below:
> {code:java}
> entityInfo.put(ApplicationMetricsConstants.APPLICATION_PRIORITY_INFO, 
> app.getApplicationPriority().getPriority());{code}
> We have to handle this case during recovery.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8068) Upgrading apps from Hadoop 2.7 based clients to 2.8+ cause NPE in app timeline publish

2018-03-23 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-8068:
--
Priority: Blocker  (was: Major)

> Upgrading apps from Hadoop 2.7 based clients to 2.8+ cause NPE in app 
> timeline publish
> --
>
> Key: YARN-8068
> URL: https://issues.apache.org/jira/browse/YARN-8068
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.8.3
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
>
> TimelineServiceV1Publisher#appCreated will cause an NPE, as we use it like 
> below:
> {code:java}
> entityInfo.put(ApplicationMetricsConstants.APPLICATION_PRIORITY_INFO, 
> app.getApplicationPriority().getPriority());{code}
> We have to handle this case during recovery.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8068) Upgrading apps from Hadoop 2.7 based clients to 2.8+ cause NPE in app timeline publish

2018-03-23 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-8068:
--
Target Version/s: 3.1.1  (was: 3.1.0)

> Upgrading apps from Hadoop 2.7 based clients to 2.8+ cause NPE in app 
> timeline publish
> --
>
> Key: YARN-8068
> URL: https://issues.apache.org/jira/browse/YARN-8068
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.8.3
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-8068.001.patch
>
>
> TimelineServiceV1Publisher#appCreated will cause an NPE, as we use it like 
> below:
> {code:java}
> entityInfo.put(ApplicationMetricsConstants.APPLICATION_PRIORITY_INFO, 
> app.getApplicationPriority().getPriority());{code}
> We have to handle this case during recovery.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8068) Upgrading apps from Hadoop 2.7 based clients to 2.8+ cause NPE in app timeline publish

2018-03-23 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-8068:
--
Attachment: YARN-8068.001.patch

> Upgrading apps from Hadoop 2.7 based clients to 2.8+ cause NPE in app 
> timeline publish
> --
>
> Key: YARN-8068
> URL: https://issues.apache.org/jira/browse/YARN-8068
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.8.3
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-8068.001.patch
>
>
> TimelineServiceV1Publisher#appCreated will cause an NPE, as we use it like 
> below:
> {code:java}
> entityInfo.put(ApplicationMetricsConstants.APPLICATION_PRIORITY_INFO, 
> app.getApplicationPriority().getPriority());{code}
> We have to handle this case during recovery.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8032) Yarn service should expose failuresValidityInterval to users and use it for launching containers

2018-03-23 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411869#comment-16411869
 ] 

Eric Yang commented on YARN-8032:
-

The value is set to -1 regardless of what value is passed through the CLI 
during my test. I changed the code like this:
{code:java}
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/containerlaunch/AbstractLau
@@ -169,10 +169,12 @@ public ContainerLaunchContext completeContainerLaunch() 
throws IOException {
 return containerLaunchContext;
   }
 
-  public void setRetryContext(int maxRetries, int retryInterval) {
+  public void setRetryContext(int maxRetries, int retryInterval,
+  long failuresValidityInterval) {
+log.info("failure validity interval {}", failuresValidityInterval);
 ContainerRetryContext retryContext = ContainerRetryContext
-.newInstance(ContainerRetryPolicy.RETRY_ON_ALL_ERRORS, null, 
maxRetries,
-retryInterval);
+.newInstance(ContainerRetryPolicy.RETRY_ON_ALL_ERRORS, null,
+maxRetries, retryInterval, failuresValidityInterval);
 containerLaunchContext.setContainerRetryContext(retryContext);
   }
{code}
I think there is a gap: not all properties are set in the AM environment as 
intended.
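
For reference, a minimal sketch of how the five-argument factory targeted by 
this diff is invoked (the concrete values are illustrative, not from the 
patch):

{code:java}
import org.apache.hadoop.yarn.api.records.ContainerRetryContext;
import org.apache.hadoop.yarn.api.records.ContainerRetryPolicy;

// Illustrative values: retry on all errors, at most 10 times; failures older
// than failuresValidityInterval stop counting toward maxRetries.
ContainerRetryContext retryContext = ContainerRetryContext.newInstance(
    ContainerRetryPolicy.RETRY_ON_ALL_ERRORS,
    null,     // errorCodes: no restricted set, retry on any exit code
    10,       // maxRetries
    3000,     // retryInterval
    30000L);  // failuresValidityInterval; -1 disables the sliding window
containerLaunchContext.setContainerRetryContext(retryContext);
{code}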

> Yarn service should expose failuresValidityInterval to users and use it for 
> launching containers
> 
>
> Key: YARN-8032
> URL: https://issues.apache.org/jira/browse/YARN-8032
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8032.001.patch, YARN-8032.002.patch, 
> YARN-8032.003.patch
>
>
> With YARN-5015, support for a sliding window retry policy was added. Yarn 
> service should expose it via the API for users to take advantage of it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8066) RM/UI2: doughnut chart mis-rendering

2018-03-23 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated YARN-8066:
---
Attachment: more_applications.png

> RM/UI2: doughnut chart mis-rendering
> 
>
> Key: YARN-8066
> URL: https://issues.apache.org/jira/browse/YARN-8066
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Zoltan Haindrich
>Priority: Major
> Attachments: more_applications.png, screenshot.png
>
>
> On the overview page 
> (http://ctr-e138-1518143905142-140659-01-02.hwx.site:8088/ui2/#/cluster-overview), when:
> * there are 2 subqueues under root (llap and default),
> * the llap queue is running 1 application, and
> * there are no other active applications,
> the doughnut chart is rendered incorrectly, as if something has eaten part of 
> it.
> The same thing can be observed when other applications are active: during the 
> refresh animation the incorrect drawing appears a number of times, but at the 
> end it stabilizes into the correct rendering.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8041) Federation: Implement multiple interfaces(14 interfaces), routing REST invocations transparently to multiple RMs

2018-03-23 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated YARN-8041:
---
Description: 
Implement routing of the 
getAppStatistics/getAppState/getNodeToLabels/getLabelsOnNode/updateApplicationPriority/getAppQueue/updateAppQueue/getAppTimeout/getAppTimeouts/updateApplicationTimeout/getAppAttempts/getAppAttempt/getContainers/getContainer
REST invocations transparently to multiple RMs.


I think we need to add a new web protocol for the Router, like this:

{code:java}
public interface RouterWebServiceProtocol extends RMWebServiceProtocol {
  // element and parameter types here are assumed; the original listed them
  // without generics
  List<ClusterInfo> getAllSubClusterInfo();
  ClusterInfo getSubClusterInfo(String clusterId);
  SchedulerInfoType getSchedulerInfo(String subClusterId);
}
{code}


This is because the Router needs some additional protocol methods, such as 
getAllSubClusterInfo(), getSubClusterInfo(clusterId), and 
getSchedulerInfo(subClusterId).

  was:
Implement routing of the 
getAppStatistics/getAppState/getNodeToLabels/getLabelsOnNode/updateApplicationPriority/getAppQueue/updateAppQueue/getAppTimeout/getAppTimeouts/updateApplicationTimeout/getAppAttempts/getAppAttempt/getContainers/getContainer
REST invocations transparently to multiple RMs.


I think we need to add a new web protocol for the Router, like this:

{code:java}
public interface RouterWebServiceProtocol extends RMWebServiceProtocol {
  // element and parameter types here are assumed; the original listed them
  // without generics
  List<ClusterInfo> getAllSubClusterInfo();
  ClusterInfo getSubClusterInfo(String clusterId);
  SchedulerInfoType getSchedulerInfo(String subClusterId);
}
{code}


This is because the Router needs some additional protocol methods, such as 
getAllSubClusterInfo(), getSubClusterInfo(clusterId), and 
getSchedulerInfo(subClusterId). If needed, I can do it.


> Federation: Implement multiple interfaces(14 interfaces), routing REST 
> invocations transparently to multiple RMs 
> -
>
> Key: YARN-8041
> URL: https://issues.apache.org/jira/browse/YARN-8041
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: federation
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Yiran Wu
>Priority: Major
>  Labels: patch
> Attachments: YARN-8041.001.patch, YARN-8041.002.patch, 
> YARN-8041.003.patch
>
>
> Implement routing of the 
> getAppStatistics/getAppState/getNodeToLabels/getLabelsOnNode/updateApplicationPriority/getAppQueue/updateAppQueue/getAppTimeout/getAppTimeouts/updateApplicationTimeout/getAppAttempts/getAppAttempt/getContainers/getContainer
> REST invocations transparently to multiple RMs.
> I think we need to add a new web protocol for the Router, like this:
> {code:java}
> public interface RouterWebServiceProtocol extends RMWebServiceProtocol {
>   List<ClusterInfo> getAllSubClusterInfo();
>   ClusterInfo getSubClusterInfo(String clusterId);
>   SchedulerInfoType getSchedulerInfo(String subClusterId);
> }
> {code}
> This is because the Router needs some additional protocol methods, such as 
> getAllSubClusterInfo(), getSubClusterInfo(clusterId), and 
> getSchedulerInfo(subClusterId).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8069) Clean up example hostnames

2018-03-23 Thread Billie Rinaldi (JIRA)
Billie Rinaldi created YARN-8069:


 Summary: Clean up example hostnames
 Key: YARN-8069
 URL: https://issues.apache.org/jira/browse/YARN-8069
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Billie Rinaldi
Assignee: Billie Rinaldi


The hostnames used in the documentation and registry DNS testing could use some 
cleaning up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8068) Upgrading apps from Hadoop 2.7 based clients to 2.8+ cause NPE in app timeline publish

2018-03-23 Thread Sunil G (JIRA)
Sunil G created YARN-8068:
-

 Summary: Upgrading apps from Hadoop 2.7 based clients to 2.8+ 
cause NPE in app timeline publish
 Key: YARN-8068
 URL: https://issues.apache.org/jira/browse/YARN-8068
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Affects Versions: 2.8.3
Reporter: Sunil G
Assignee: Sunil G


TimelineServiceV1Publisher#appCreated will cause an NPE, as we use it like 
below:
{code:java}
entityInfo.put(ApplicationMetricsConstants.APPLICATION_PRIORITY_INFO, 
app.getApplicationPriority().getPriority());{code}
We have to handle this case during recovery.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8032) Yarn service should expose failuresValidityInterval to users and use it for launching containers

2018-03-23 Thread Chandni Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411806#comment-16411806
 ] 

Chandni Singh commented on YARN-8032:
-

[~eyang]
{quote}For passing properties from the CLI or REST to the backend, the settings 
need to be forwarded from the hadoop-yarn-service-api project's ApiServer.java 
to the hadoop-yarn-service-core project's ServiceClient.java. updateLifetime is 
an example of forwarding a setting to YARN. The submitApp method in 
ServiceClient contains an example of passing a parameter to the 
submissionContext. You might need to add logic in the submitApp method to 
bridge the gap.
{quote}
Extra logic should not be needed. There are similar existing configurations 
that are used to populate the launch context. Their value is read from 
the service object. Please see the existing code below in 
AbstractProviderService:
{code:java}
// By default retry forever every 30 seconds
launcher.setRetryContext(
    YarnServiceConf.getInt(CONTAINER_RETRY_MAX, -1,
        service.getConfiguration(), yarnConf),
    YarnServiceConf.getInt(CONTAINER_RETRY_INTERVAL, 30,
        service.getConfiguration(), yarnConf));
{code}
I will share my cluster with you with these changes. 
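
The new property should follow the same pattern. A minimal sketch, assuming a 
CONTAINER_FAILURES_VALIDITY_INTERVAL key and a YarnServiceConf.getLong 
counterpart to the getInt calls above (both names are assumptions, not 
confirmed API):

{code:java}
// Sketch only: CONTAINER_FAILURES_VALIDITY_INTERVAL and getLong are assumed,
// modeled on the getInt calls above; -1 disables the sliding window.
launcher.setRetryContext(
    YarnServiceConf.getInt(CONTAINER_RETRY_MAX, -1,
        service.getConfiguration(), yarnConf),
    YarnServiceConf.getInt(CONTAINER_RETRY_INTERVAL, 30,
        service.getConfiguration(), yarnConf),
    YarnServiceConf.getLong(CONTAINER_FAILURES_VALIDITY_INTERVAL, -1,
        service.getConfiguration(), yarnConf));
{code}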

> Yarn service should expose failuresValidityInterval to users and use it for 
> launching containers
> 
>
> Key: YARN-8032
> URL: https://issues.apache.org/jira/browse/YARN-8032
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8032.001.patch, YARN-8032.002.patch, 
> YARN-8032.003.patch
>
>
> With YARN-5015, support for a sliding window retry policy was added. Yarn 
> service should expose it via the API for users to take advantage of it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-03-23 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411802#comment-16411802
 ] 

Suma Shivaprasad commented on YARN-7574:


Thanks [~leftnoteasy]. Fixed the review comments and a UT failure.

> Add support for Node Labels on Auto Created Leaf Queue Template
> ---
>
> Key: YARN-7574
> URL: https://issues.apache.org/jira/browse/YARN-7574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7574.1.patch, YARN-7574.2.patch, YARN-7574.3.patch, 
> YARN-7574.4.patch, YARN-7574.5.patch, YARN-7574.6.patch
>
>
> YARN-7473 adds support for auto created leaf queues to inherit node label 
> capacities from parent queues. However, there is no support in the leaf queue 
> template for configuring different capacities for different node labels. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-03-23 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7574:
---
Attachment: YARN-7574.6.patch

> Add support for Node Labels on Auto Created Leaf Queue Template
> ---
>
> Key: YARN-7574
> URL: https://issues.apache.org/jira/browse/YARN-7574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7574.1.patch, YARN-7574.2.patch, YARN-7574.3.patch, 
> YARN-7574.4.patch, YARN-7574.5.patch, YARN-7574.6.patch
>
>
> YARN-7473 adds support for auto created leaf queues to inherit node label 
> capacities from parent queues. However, there is no support in the leaf queue 
> template for configuring different capacities for different node labels. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5590) Add support for increase and decrease of container resources with resource profiles

2018-03-23 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411743#comment-16411743
 ] 

Manikandan R commented on YARN-5590:


Fixed checkstyle and whitespace issues.

> Add support for increase and decrease of container resources with resource 
> profiles
> ---
>
> Key: YARN-5590
> URL: https://issues.apache.org/jira/browse/YARN-5590
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-5590.001.patch, YARN-5590.002.patch, 
> YARN-5590.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5590) Add support for increase and decrease of container resources with resource profiles

2018-03-23 Thread Manikandan R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-5590:
---
Attachment: YARN-5590.003.patch

> Add support for increase and decrease of container resources with resource 
> profiles
> ---
>
> Key: YARN-5590
> URL: https://issues.apache.org/jira/browse/YARN-5590
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-5590.001.patch, YARN-5590.002.patch, 
> YARN-5590.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8032) Yarn service should expose failuresValidityInterval to users and use it for launching containers

2018-03-23 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411724#comment-16411724
 ] 

Eric Yang commented on YARN-8032:
-

[~csingh] I added a debug statement to see the value being set by the 
AbstractLauncher class, and it shows up as:

{code}
2018-03-23 16:54:50,855 [pool-6-thread-2] INFO  
containerlaunch.AbstractLauncher - failure validity interval -1
{code}

This is instead of the 30 seconds that I passed from the CLI.

For passing properties from the CLI or REST to the backend, the settings need 
to be forwarded from the hadoop-yarn-service-api project's ApiServer.java to 
the hadoop-yarn-service-core project's ServiceClient.java.  {{updateLifetime}} 
is an example of forwarding a setting to YARN.  The submitApp method in 
ServiceClient contains an example of passing a parameter to the 
submissionContext.  You might need to add logic in the submitApp method to 
bridge the gap.

> Yarn service should expose failuresValidityInterval to users and use it for 
> launching containers
> 
>
> Key: YARN-8032
> URL: https://issues.apache.org/jira/browse/YARN-8032
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8032.001.patch, YARN-8032.002.patch, 
> YARN-8032.003.patch
>
>
> With YARN-5015, support for a sliding window retry policy was added. Yarn 
> service should expose it via the API for users to take advantage of it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8067) RM/UI2: queues views ; unintended scrollbars

2018-03-23 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated YARN-8067:
---
Attachment: screenshot.png

> RM/UI2: queues views ; unintended scrollbars
> 
>
> Key: YARN-8067
> URL: https://issues.apache.org/jira/browse/YARN-8067
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Priority: Major
> Attachments: screenshot.png
>
>
> I see horizontal/vertical scrollbars; they don't seem to be useful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8067) RM/UI2: queues views ; unintended scrollbars

2018-03-23 Thread Zoltan Haindrich (JIRA)
Zoltan Haindrich created YARN-8067:
--

 Summary: RM/UI2: queues views ; unintended scrollbars
 Key: YARN-8067
 URL: https://issues.apache.org/jira/browse/YARN-8067
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Zoltan Haindrich


I see horizontal/vertical scrollbars; they don't seem to be useful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8066) RM/UI2: doughnut chart mis-rendering

2018-03-23 Thread Zoltan Haindrich (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411688#comment-16411688
 ] 

Zoltan Haindrich commented on YARN-8066:


This might be environment dependent:

* linux/debian9/amd64
* chromium 64.0.3282.119-1~deb9u1

Firefox seems to be doing fine.

> RM/UI2: doughnut chart mis-rendering
> 
>
> Key: YARN-8066
> URL: https://issues.apache.org/jira/browse/YARN-8066
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Zoltan Haindrich
>Priority: Major
> Attachments: screenshot.png
>
>
> On the overview page 
> (http://ctr-e138-1518143905142-140659-01-02.hwx.site:8088/ui2/#/cluster-overview), when:
> * there are 2 subqueues under root (llap and default),
> * the llap queue is running 1 application, and
> * there are no other active applications,
> the doughnut chart is rendered incorrectly, as if something has eaten part of 
> it.
> The same thing can be observed when other applications are active: during the 
> refresh animation the incorrect drawing appears a number of times, but at the 
> end it stabilizes into the correct rendering.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8066) RM/UI2: doughnut chart mis-rendering

2018-03-23 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated YARN-8066:
---
Attachment: screenshot.png

> RM/UI2: doughnut chart mis-rendering
> 
>
> Key: YARN-8066
> URL: https://issues.apache.org/jira/browse/YARN-8066
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Zoltan Haindrich
>Priority: Major
> Attachments: screenshot.png
>
>
> On the overview page 
> (http://ctr-e138-1518143905142-140659-01-02.hwx.site:8088/ui2/#/cluster-overview), when:
> * there are 2 subqueues under root (llap and default),
> * the llap queue is running 1 application, and
> * there are no other active applications,
> the doughnut chart is rendered incorrectly, as if something has eaten part of 
> it.
> The same thing can be observed when other applications are active: during the 
> refresh animation the incorrect drawing appears a number of times, but at the 
> end it stabilizes into the correct rendering.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8066) RM/UI2: doughnut chart mis-rendering

2018-03-23 Thread Zoltan Haindrich (JIRA)
Zoltan Haindrich created YARN-8066:
--

 Summary: RM/UI2: doughnut chart mis-rendering
 Key: YARN-8066
 URL: https://issues.apache.org/jira/browse/YARN-8066
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
Reporter: Zoltan Haindrich


On the overview page 
(http://ctr-e138-1518143905142-140659-01-02.hwx.site:8088/ui2/#/cluster-overview), when:

* there are 2 subqueues under root (llap and default),
* the llap queue is running 1 application, and
* there are no other active applications,

the doughnut chart is rendered incorrectly, as if something has eaten part of 
it.

The same thing can be observed when other applications are active: during the 
refresh animation the incorrect drawing appears a number of times, but at the 
end it stabilizes into the correct rendering.








--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8062) yarn rmadmin -getGroups returns group from which the user has been removed

2018-03-23 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411650#comment-16411650
 ] 

Sunil G edited comment on YARN-8062 at 3/23/18 4:29 PM:


In the latest patch, we will still do the refresh from RM#init, but with a 
conf object loaded only from core-site.xml. When the complete conf object was 
used, some extra config disabled the refresh option.

Test cases are passing with this patch in a real cluster.

Thanks, Rohith. Could you please help review the latest patch, [~rohithsharma] 
[~leftnoteasy]? Thanks for the feedback.


was (Author: sunilg):
In the latest patch, we will still do the refresh from RM#init, but with a 
conf object loaded only from core-site.xml. When the complete conf object was 
used, some extra config disabled the refresh option.

Test cases are passing with this patch in a real cluster.

[~rohithsharma] [~leftnoteasy], could you please help review? Thanks for the 
feedback.

> yarn rmadmin -getGroups returns group from which the user has been removed
> --
>
> Key: YARN-8062
> URL: https://issues.apache.org/jira/browse/YARN-8062
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-8062.001.patch, YARN-8062.002.patch, 
> YARN-8062.003.patch, YARN-8062.004.patch
>
>
> {code:title= adding group hrt_yarn_rmadmin_test}
> sudo su - -c "groupadd hrt_yarn_rmadmin_test" root
> {code}
> {Code:title=adding user hrt_yarn_rmadmin_test to group hrt_yarn_rmadmin_test}
> sudo su - -c "useradd hrt_yarn_rmadmin_test -g hrt_yarn_rmadmin_test" root
> {Code}
> {Code:title= adding group hrt_yarn_rmadmin_test_group2 }
> sudo su - -c "groupadd hrt_yarn_rmadmin_test_group2" root
> {Code}
> {Code:title=adding user hrt_yarn_rmadmin_test to group 
> hrt_yarn_rmadmin_test_group2}
> sudo su - -c "usermod -a -G hrt_yarn_rmadmin_test_group2 
> hrt_yarn_rmadmin_test" root
> {Code}
> Refresh and getGroups
> {code}
> yarn rmadmin -refreshUserToGroupsMappings
> /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin -getGroups 
> hrt_yarn_rmadmin_test
> hrt_yarn_rmadmin_test : hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2
> {code}
> Delete group hrt_yarn_rmadmin_test_group2 from user hrt_yarn_rmadmin_test  
> and refresh and do getGroups.
> We can still see group hrt_yarn_rmadmin_test_group2
> {code}
> sudo su - -c "gpasswd -d hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2" 
> root
> {code}
> Removing user hrt_yarn_rmadmin_test from group hrt_yarn_rmadmin_test_group2
> {code}
> bash-4.2$  /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin 
> -refreshUserToGroupsMappings
> /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin -getGroups 
> hrt_yarn_rmadmin_test
> hrt_yarn_rmadmin_test : hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8062) yarn rmadmin -getGroups returns group from which the user has been removed

2018-03-23 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411650#comment-16411650
 ] 

Sunil G commented on YARN-8062:
---

In the latest patch, we will still do the refresh from RM#init, but with a 
conf object loaded only from core-site.xml. When the complete conf object was 
used, some extra config disabled the refresh option.

Test cases are passing with this patch in a real cluster.

[~rohithsharma] [~leftnoteasy], could you please help review? Thanks for the 
feedback.
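
As an illustration, a minimal sketch of the core-site-only refresh described 
above (the exact placement in the RM init path is an assumption):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.Groups;

// Load only core-site.xml, so extra *-site.xml settings cannot disable the
// refresh, then refresh the cached user-to-groups mapping.
Configuration coreConf = new Configuration(false);
coreConf.addResource("core-site.xml");
Groups.getUserToGroupsMappingService(coreConf).refresh();
{code}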

> yarn rmadmin -getGroups returns group from which the user has been removed
> --
>
> Key: YARN-8062
> URL: https://issues.apache.org/jira/browse/YARN-8062
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-8062.001.patch, YARN-8062.002.patch, 
> YARN-8062.003.patch, YARN-8062.004.patch
>
>
> {code:title= adding group hrt_yarn_rmadmin_test}
> sudo su - -c "groupadd hrt_yarn_rmadmin_test" root
> {code}
> {Code:title=adding user hrt_yarn_rmadmin_test to group hrt_yarn_rmadmin_test}
> sudo su - -c "useradd hrt_yarn_rmadmin_test -g hrt_yarn_rmadmin_test" root
> {Code}
> {Code:title= adding group hrt_yarn_rmadmin_test_group2 }
> sudo su - -c "groupadd hrt_yarn_rmadmin_test_group2" root
> {Code}
> {Code:title=adding user hrt_yarn_rmadmin_test to group 
> hrt_yarn_rmadmin_test_group2}
> sudo su - -c "usermod -a -G hrt_yarn_rmadmin_test_group2 
> hrt_yarn_rmadmin_test" root
> {Code}
> Refresh and getGroups
> {code}
> yarn rmadmin -refreshUserToGroupsMappings
> /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin -getGroups 
> hrt_yarn_rmadmin_test
> hrt_yarn_rmadmin_test : hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2
> {code}
> Delete group hrt_yarn_rmadmin_test_group2 from user hrt_yarn_rmadmin_test  
> and refresh and do getGroups.
> We can still see group hrt_yarn_rmadmin_test_group2
> {code}
> sudo su - -c "gpasswd -d hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2" 
> root
> {code}
> Removing user hrt_yarn_rmadmin_test from group hrt_yarn_rmadmin_test_group2
> {code}
> bash-4.2$  /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin 
> -refreshUserToGroupsMappings
> /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin -getGroups 
> hrt_yarn_rmadmin_test
> hrt_yarn_rmadmin_test : hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8062) yarn rmadmin -getGroups returns group from which the user has been removed

2018-03-23 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-8062:
--
Attachment: YARN-8062.004.patch

> yarn rmadmin -getGroups returns group from which the user has been removed
> --
>
> Key: YARN-8062
> URL: https://issues.apache.org/jira/browse/YARN-8062
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-8062.001.patch, YARN-8062.002.patch, 
> YARN-8062.003.patch, YARN-8062.004.patch
>
>
> {code:title= adding group hrt_yarn_rmadmin_test}
> sudo su - -c "groupadd hrt_yarn_rmadmin_test" root
> {code}
> {Code:title=adding user hrt_yarn_rmadmin_test to group hrt_yarn_rmadmin_test}
> sudo su - -c "useradd hrt_yarn_rmadmin_test -g hrt_yarn_rmadmin_test" root
> {Code}
> {Code:title= adding group hrt_yarn_rmadmin_test_group2 }
> sudo su - -c "groupadd hrt_yarn_rmadmin_test_group2" root
> {Code}
> {Code:title=adding user hrt_yarn_rmadmin_test to group 
> hrt_yarn_rmadmin_test_group2}
> sudo su - -c "usermod -a -G hrt_yarn_rmadmin_test_group2 
> hrt_yarn_rmadmin_test" root
> {Code}
> Refresh and getGroups
> {code}
> yarn rmadmin -refreshUserToGroupsMappings
> /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin -getGroups 
> hrt_yarn_rmadmin_test
> hrt_yarn_rmadmin_test : hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2
> {code}
> Delete group hrt_yarn_rmadmin_test_group2 from user hrt_yarn_rmadmin_test  
> and refresh and do getGroups.
> We can still see group hrt_yarn_rmadmin_test_group2
> {code}
> sudo su - -c "gpasswd -d hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2" 
> root
> {code}
> Removing user hrt_yarn_rmadmin_test from group hrt_yarn_rmadmin_test_group2
> {code}
> bash-4.2$  /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin 
> -refreshUserToGroupsMappings
> /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin -getGroups 
> hrt_yarn_rmadmin_test
> hrt_yarn_rmadmin_test : hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-03-23 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411594#comment-16411594
 ] 

Eric Yang commented on YARN-7221:
-

[~ebadger] The unit test failure is not related to this patch.  Can you review 
again?  Do we still need the 
{{yarn.nodemanager.runtime.linux.docker.privileged-containers.acl}} ACL check 
when this is implemented?  It seems redundant.

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch, 
> YARN-7221.006.patch, YARN-7221.007.patch, YARN-7221.008.patch, 
> YARN-7221.009.patch, YARN-7221.010.patch
>
>
> When a docker container is running with privileges, the majority use case is 
> to have some program start as root and then drop privileges to another user, 
> e.g. httpd starting privileged to bind to port 80, then dropping privileges 
> to the www user.  
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run a privileged container.  
> # We should remove --user=uid:gid for privileged containers.  
>  
> Docker can be launched with both the --privileged=true and --user=uid:gid 
> flags.  With this parameter combination, the user will not have access to 
> become the root user: all docker exec commands drop to the uid:gid user 
> instead of being granted privileges.  A user can gain root privileges if the 
> container file system contains files that give the user extra power, but this 
> type of image is considered dangerous.  A non-privileged user can launch a 
> container with special bits to acquire the same level of root power.  Hence, 
> we lose control of which images should be run with --privileged, and who has 
> sudo rights to use privileged container images.  As a result, we should check 
> for sudo access and then decide to parameterize --privileged=true OR 
> --user=uid:gid.  This will avoid leading developers down the wrong path.
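
A schematic sketch of the proposed decision (every name here is a hypothetical 
illustration, not the actual container runtime code):

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical: grant --privileged only to sudo-capable users; everyone else
// is pinned to --user=uid:gid so docker exec cannot escalate to root.
static List<String> privilegeArgs(boolean privilegedRequested,
    boolean userHasSudo, int uid, int gid) {
  List<String> args = new ArrayList<>();
  if (privilegedRequested && userHasSudo) {
    args.add("--privileged=true");
  } else {
    args.add("--user=" + uid + ":" + gid);
  }
  return args;
}
{code}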



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6830) Support quoted strings for environment variables

2018-03-23 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411546#comment-16411546
 ] 

Jason Lowe commented on YARN-6830:
--

Yes, the downside with the key-prefix approach is that it has to be implemented 
in each instance that wants to benefit.  Configuration could have a helper 
method, like collectPropertiesByPrefix(String keyPrefix), that returns a 
Map<String, String>.  That method would return the collection of key,value pairs 
for all properties in the config that start with the specified prefix.  Each 
instance that needs to move to this approach can leverage that helper method to 
find all the properties with a particular prefix, e.g.: "mapreduce.map.env." 
for the MapReduce case or "yarn.nodemanager.admin.env." for the YARN NM case.  
I don't see a way to automatically fix existing use-cases without risking 
backward compatibility problems because we would have to change the semantics 
of the existing methods.  This approach avoids that by adding new methods, but 
code has to be updated to use the new methods.
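
A minimal sketch of that helper, assuming it lives on Configuration and strips 
the prefix from the returned keys (both are assumptions on my part):

{code:java}
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;

// Collect every property whose key starts with keyPrefix, e.g. for
// "mapreduce.map.env." return {"MODE" -> "bar", "IMAGE_NAME" -> "foo", ...}.
public static Map<String, String> collectPropertiesByPrefix(
    Configuration conf, String keyPrefix) {
  Map<String, String> matches = new HashMap<>();
  for (Map.Entry<String, String> entry : conf) {  // Configuration is iterable
    String key = entry.getKey();
    if (key.startsWith(keyPrefix)) {
      matches.put(key.substring(keyPrefix.length()), entry.getValue());
    }
  }
  return matches;
}
{code}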

> Support quoted strings for environment variables
> 
>
> Key: YARN-6830
> URL: https://issues.apache.org/jira/browse/YARN-6830
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Shane Kumpf
>Assignee: Jim Brennan
>Priority: Major
> Attachments: YARN-6830.001.patch, YARN-6830.002.patch, 
> YARN-6830.003.patch, YARN-6830.004.patch
>
>
> There are cases where it is necessary to allow for quoted string literals 
> within environment variables values when passed via the yarn command line 
> interface.
> For example, consider the follow environment variables for a MR map task.
> {{MODE=bar}}
> {{IMAGE_NAME=foo}}
> {{MOUNTS=/tmp/foo,/tmp/bar}}
> When running the MR job, these environment variables are supplied as a comma 
> delimited string.
> {{-Dmapreduce.map.env="MODE=bar,IMAGE_NAME=foo,MOUNTS=/tmp/foo,/tmp/bar"}}
> In this case, {{MOUNTS}} will be parsed and added to the task environment as 
> {{MOUNTS=/tmp/foo}}. Any attempts to quote the embedded comma separated value 
> results in quote characters becoming part of the value, and parsing still 
> breaks down at the comma.
> This issue is to allow for quoting the comma separated value (escaped double 
> or single quote). This was mentioned on YARN-4595 and will impact YARN-5534 
> as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8053) Add hadoop-distcp in exclusion in hbase-server dependencies for timelineservice-hbase packages.

2018-03-23 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411412#comment-16411412
 ] 

Haibo Chen commented on YARN-8053:
--

[~ste...@apache.org] Can you elaborate on what you mean by unwind this 
dependency by having {{hadoop-yarn-server-timelineservice-hbase}} be something 
which is downstream of both?

> Add hadoop-distcp in exclusion in hbase-server dependencies for 
> timelineservice-hbase packages.
> ---
>
> Key: YARN-8053
> URL: https://issues.apache.org/jira/browse/YARN-8053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Fix For: 3.1.0, yarn-7055, 3.2.0
>
> Attachments: YARN-8053-YARN-7055.01.patch, YARN-8053.01.patch
>
>
> It is observed that changing the version number of hadoop leads to build 
> failures because of dependency resolution conflicts in the HBase-2 compilation. 
> We see the error below, which tells us that hbase-server has a dependency on 
> hadoop-distcp. We also need to add hadoop-distcp to the exclusion list. 
> {code}
> 07:42:36 2018/03/19 14:42:36 INFO: [ERROR] Failed to execute goal on 
> project hadoop-yarn-server-timelineservice-hbase-client: Could not resolve 
> dependencies for project 
> org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-client:jar:3.3.0-SNAPSHOT:
>  Could not find artifact org.apache.hadoop:hadoop-distcp:jar:3.3.0-SNAPSHOT 
> in public 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8053) Add hadoop-distcp in exclusion in hbase-server dependencies for timelineservice-hbase packages.

2018-03-23 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411374#comment-16411374
 ] 

Steve Loughran commented on YARN-8053:
--

we seem to be fighting a losing battle by attempting to pull in a version of 
hbase built with an older version of hadoop. As well as hadoop artifacts 
getting in, we can't safely upgrade things. Is there any way to unwind this 
dependency by having {{hadoop-yarn-server-timelineservice-hbase}} be something 
which is downstream of both, because loops shouldn't be found in DAGs?

> Add hadoop-distcp in exclusion in hbase-server dependencies for 
> timelineservice-hbase packages.
> ---
>
> Key: YARN-8053
> URL: https://issues.apache.org/jira/browse/YARN-8053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Fix For: 3.1.0, yarn-7055, 3.2.0
>
> Attachments: YARN-8053-YARN-7055.01.patch, YARN-8053.01.patch
>
>
> It is observed that changing the version number of hadoop leads to build 
> failures because of dependency resolution conflicts in the HBase-2 compilation. 
> We see the error below, which tells us that hbase-server has a dependency on 
> hadoop-distcp. We also need to add hadoop-distcp to the exclusion list. 
> {code}
> 07:42:36 2018/03/19 14:42:36 INFO: [ERROR] Failed to execute goal on 
> project hadoop-yarn-server-timelineservice-hbase-client: Could not resolve 
> dependencies for project 
> org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-client:jar:3.3.0-SNAPSHOT:
>  Could not find artifact org.apache.hadoop:hadoop-distcp:jar:3.3.0-SNAPSHOT 
> in public 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8062) yarn rmadmin -getGroups returns group from which the user has been removed

2018-03-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411349#comment-16411349
 ] 

genericqa commented on YARN-8062:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 67 unchanged - 0 fixed = 68 total (was 67) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 22s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMAdminService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8062 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12915848/YARN-8062.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7eafb0f9d0ec 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 75fc05f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20063/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Assigned] (YARN-8054) Improve robustness of the LocalDirsHandlerService MonitoringTimerTask thread

2018-03-23 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla reassigned YARN-8054:
-

Assignee: Jonathan Eagles  (was: Jason Lowe)

> Improve robustness of the LocalDirsHandlerService MonitoringTimerTask thread
> 
>
> Key: YARN-8054
> URL: https://issues.apache.org/jira/browse/YARN-8054
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Major
> Fix For: 2.10.0, 2.9.1, 2.8.4, 3.0.2, 3.1.1
>
> Attachments: YARN-8054.001.patch, YARN-8054.002.patch
>
>
> The DeprecatedRawLocalFileStatus#loadPermissionInfo can throw a 
> RuntimeException which can kill the MonitoringTimerTask thread. This can 
> leave the node in a bad state where all NM local directories are marked "bad" 
> and there is no automatic recovery. In the case below the error was "too many 
> open files", but it could be any of a number of other recoverable states.
> {noformat}
> 2018-03-18 02:37:42,960 [DiskHealthMonitor-Timer] ERROR 
> yarn.YarnUncaughtExceptionHandler: Thread 
> Thread[DiskHealthMonitor-Timer,5,main] threw an Exception.
> java.lang.RuntimeException: Error while running command to get file 
> permissions : java.io.IOException: Cannot run program "ls": error=24, Too 
> many open files
> at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:942)
> at org.apache.hadoop.util.Shell.run(Shell.java:898)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1213)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:1307)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:1289)
> at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1078)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:697)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:672)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.checkLocalDir(ResourceLocalizationService.java:1556)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.checkAndInitializeLocalDirs(ResourceLocalizationService.java:1521)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$1.onDirsChanged(ResourceLocalizationService.java:271)
> at 
> org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.checkDirs(DirectoryCollection.java:381)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.checkDirs(LocalDirsHandlerService.java:449)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.access$500(LocalDirsHandlerService.java:52)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService$MonitoringTimerTask.run(LocalDirsHandlerService.java:166)
> at java.util.TimerThread.mainLoop(Timer.java:555)
> at java.util.TimerThread.run(Timer.java:505)
> Caused by: java.io.IOException: error=24, Too many open files
> at java.lang.UNIXProcess.forkAndExec(Native Method)
> at java.lang.UNIXProcess.<init>(UNIXProcess.java:247)
> at java.lang.ProcessImpl.start(ProcessImpl.java:134)
> at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
> ... 17 more
> at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:737)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:672)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.checkLocalDir(ResourceLocalizationService.java:1556)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.checkAndInitializeLocalDirs(ResourceLocalizationService.java:1521)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$1.onDirsChanged(ResourceLocalizationService.java:271)
> at 
> org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.checkDirs(DirectoryCollection.java:381)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.checkDirs(LocalDirsHandlerService.java:449)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.access$500(LocalDirsHandlerService.java:52)
> at 
> 
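The thrust of the fix: the monitoring task must survive a RuntimeException from 
a disk check. A minimal sketch of that kind of guard, assuming a simple 
catch-and-log strategy (illustrative only, not the actual YARN-8054 patch; 
names follow the trace above):
{code:java}
import java.util.Timer;
import java.util.TimerTask;

// Hypothetical sketch: catching Throwable inside run() keeps the timer
// thread alive, so a transient failure (e.g. "too many open files") does
// not permanently disable disk health monitoring.
class DiskHealthMonitorSketch extends TimerTask {
  @Override
  public void run() {
    try {
      checkDirs(); // may throw a RuntimeException from loadPermissionInfo
    } catch (Throwable t) {
      // Log and return; the next scheduled run retries the check.
      System.err.println("Disk check failed, will retry: " + t);
    }
  }

  private void checkDirs() {
    // Disk health checks elided in this sketch.
  }

  public static void main(String[] args) {
    // Re-run the check every two minutes, as a disk health monitor would.
    new Timer("DiskHealthMonitor-Timer")
        .schedule(new DiskHealthMonitorSketch(), 0, 2 * 60 * 1000);
  }
}
{code}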

[jira] [Commented] (YARN-6629) NPE occurred when container allocation proposal is applied but its resource requests are removed before

2018-03-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411284#comment-16411284
 ] 

genericqa commented on YARN-6629:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 23s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}128m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.constraint.TestPlacementProcessor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-6629 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12915834/YARN-6629.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 129d0be4d0f6 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 75fc05f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/20062/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20062/testReport/ |
| Max. process+thread count | 812 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (YARN-8065) Application is failing when AM node is stopped

2018-03-23 Thread Bilwa S T (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411077#comment-16411077
 ] 

Bilwa S T commented on YARN-8065:
-

Blacklisting of nodes is not happening in the following scenarios:
 # RMAppAttempt is in ALLOCATED and a LAUNCH_FAILED event comes when the NM is down.
 # RMAppAttempt is in LAUNCHED and an EXPIRE event comes when the NM is down.

In both these cases the AppAttempt goes to FINAL_SAVING and eventually to the 
FINAL state before the CONTAINER_FINISHED event is triggered by RMContainerImpl, 
and in the FINAL state the CONTAINER_FINISHED event is ignored.
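A minimal sketch of the failure mode described above, using a simplified state 
machine (illustrative only, not RM code):
{code:java}
// Illustrative sketch: once the attempt reaches FINAL, a late
// CONTAINER_FINISHED event is dropped, so the blacklisting that would have
// happened in its handler never runs.
enum AttemptState { ALLOCATED, LAUNCHED, FINAL_SAVING, FINAL }

class AttemptSketch {
  private AttemptState state = AttemptState.LAUNCHED;

  void handle(String event) {
    if (state == AttemptState.FINAL) {
      return; // event ignored in FINAL; the AM node is never blacklisted
    }
    if ("EXPIRE".equals(event)) {
      state = AttemptState.FINAL_SAVING; // NM is down, attempt expires
      state = AttemptState.FINAL;        // finalized before the late event
    } else if ("CONTAINER_FINISHED".equals(event)) {
      blacklistNode();
    }
  }

  private void blacklistNode() { /* add the AM node to the blacklist */ }

  public static void main(String[] args) {
    AttemptSketch a = new AttemptSketch();
    a.handle("EXPIRE");             // attempt finishes first
    a.handle("CONTAINER_FINISHED"); // arrives too late, silently dropped
  }
}
{code}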

> Application is failing when AM node is stopped
> --
>
> Key: YARN-8065
> URL: https://issues.apache.org/jira/browse/YARN-8065
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Fix For: 3.0.0
>
>
> Configure yarn.scheduler.capacity.schedule-asynchronously.enable as *true* 
> and yarn.resourcemanager.nodemanagers.heartbeat-interval-ms as *6*. Run an 
> application and make the *AM node* go down. The application will fail.
> If the same node is picked to launch the AM attempt again, the application 
> can fail. More likely to occur with a smaller number of nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8062) yarn rmadmin -getGroups returns group from which the user has been removed

2018-03-23 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411044#comment-16411044
 ] 

Sunil G commented on YARN-8062:
---

Fixing test case.

> yarn rmadmin -getGroups returns group from which the user has been removed
> --
>
> Key: YARN-8062
> URL: https://issues.apache.org/jira/browse/YARN-8062
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-8062.001.patch, YARN-8062.002.patch, 
> YARN-8062.003.patch
>
>
> {code:title=adding group hrt_yarn_rmadmin_test}
> sudo su - -c "groupadd hrt_yarn_rmadmin_test" root
> {code}
> {code:title=adding user hrt_yarn_rmadmin_test to group hrt_yarn_rmadmin_test}
> sudo su - -c "useradd hrt_yarn_rmadmin_test -g hrt_yarn_rmadmin_test" root
> {code}
> {code:title=adding group hrt_yarn_rmadmin_test_group2}
> sudo su - -c "groupadd hrt_yarn_rmadmin_test_group2" root
> {code}
> {code:title=adding user hrt_yarn_rmadmin_test to group 
> hrt_yarn_rmadmin_test_group2}
> sudo su - -c "usermod -a -G hrt_yarn_rmadmin_test_group2 
> hrt_yarn_rmadmin_test" root
> {code}
> Refresh and getGroups
> {code}
> yarn rmadmin -refreshUserToGroupsMappings
> /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin -getGroups 
> hrt_yarn_rmadmin_test
> hrt_yarn_rmadmin_test : hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2
> {code}
> Delete group hrt_yarn_rmadmin_test_group2 from user hrt_yarn_rmadmin_test, 
> then refresh and run getGroups. 
> We can still see group hrt_yarn_rmadmin_test_group2:
> {code}
> sudo su - -c "gpasswd -d hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2" 
> root
> {code}
> Removing user hrt_yarn_rmadmin_test from group hrt_yarn_rmadmin_test_group2
> {code}
> bash-4.2$  /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin 
> -refreshUserToGroupsMappings
> /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin -getGroups 
> hrt_yarn_rmadmin_test
> hrt_yarn_rmadmin_test : hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2
> {code}
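The reproduction above points at a stale cache: if the refresh does not 
invalidate the cached user-to-groups entry, getGroups keeps serving the removed 
group. A minimal sketch of that mechanism (hypothetical, not the actual Hadoop 
Groups implementation):
{code:java}
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: a groups lookup backed by a cache. If refresh()
// forgets to clear the cache, a group removed at the OS level is still
// returned by getGroups().
class GroupsCacheSketch {
  private final Map<String, List<String>> cache = new HashMap<>();

  List<String> getGroups(String user) {
    // Served from cache when present, so stale entries survive.
    return cache.computeIfAbsent(user, this::loadFromOs);
  }

  void refresh() {
    cache.clear(); // without this, stale groups keep being returned
  }

  private List<String> loadFromOs(String user) {
    // Placeholder for the real lookup (e.g. shelling out to `id -Gn`).
    return Arrays.asList("hrt_yarn_rmadmin_test");
  }
}
{code}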



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8062) yarn rmadmin -getGroups returns group from which the user has been removed

2018-03-23 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-8062:
--
Attachment: YARN-8062.003.patch

> yarn rmadmin -getGroups returns group from which the user has been removed
> --
>
> Key: YARN-8062
> URL: https://issues.apache.org/jira/browse/YARN-8062
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-8062.001.patch, YARN-8062.002.patch, 
> YARN-8062.003.patch
>
>
> {code:title=adding group hrt_yarn_rmadmin_test}
> sudo su - -c "groupadd hrt_yarn_rmadmin_test" root
> {code}
> {code:title=adding user hrt_yarn_rmadmin_test to group hrt_yarn_rmadmin_test}
> sudo su - -c "useradd hrt_yarn_rmadmin_test -g hrt_yarn_rmadmin_test" root
> {code}
> {code:title=adding group hrt_yarn_rmadmin_test_group2}
> sudo su - -c "groupadd hrt_yarn_rmadmin_test_group2" root
> {code}
> {code:title=adding user hrt_yarn_rmadmin_test to group 
> hrt_yarn_rmadmin_test_group2}
> sudo su - -c "usermod -a -G hrt_yarn_rmadmin_test_group2 
> hrt_yarn_rmadmin_test" root
> {code}
> Refresh and getGroups
> {code}
> yarn rmadmin -refreshUserToGroupsMappings
> /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin -getGroups 
> hrt_yarn_rmadmin_test
> hrt_yarn_rmadmin_test : hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2
> {code}
> Delete group hrt_yarn_rmadmin_test_group2 from user hrt_yarn_rmadmin_test, 
> then refresh and run getGroups. 
> We can still see group hrt_yarn_rmadmin_test_group2:
> {code}
> sudo su - -c "gpasswd -d hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2" 
> root
> {code}
> Removing user hrt_yarn_rmadmin_test from group hrt_yarn_rmadmin_test_group2
> {code}
> bash-4.2$  /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin 
> -refreshUserToGroupsMappings
> /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin -getGroups 
> hrt_yarn_rmadmin_test
> hrt_yarn_rmadmin_test : hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8065) Application is failing when AM node is stopped

2018-03-23 Thread Bilwa S T (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-8065:

Description: 
Configure yarn.scheduler.capacity.schedule-asynchronously.enable as *true* and 
yarn.resourcemanager.nodemanagers.heartbeat-interval-ms as *6*. Run an 
application and make the *AM node* go down. The application will fail.

If the same node is picked to launch the AM attempt again, the application can 
fail. More likely to occur with a smaller number of nodes.

  was:
Configure yarn.scheduler.capacity.schedule-asynchronously.enable as true and 
yarn.resourcemanager.nodemanagers.heartbeat-interval-ms as 6.

Run an application and make the AM node go down. The application will fail.


> Application is failing when AM node is stopped
> --
>
> Key: YARN-8065
> URL: https://issues.apache.org/jira/browse/YARN-8065
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Fix For: 3.0.0
>
>
> Configure yarn.scheduler.capacity.schedule-asynchronously.enable as *true* 
> and yarn.resourcemanager.nodemanagers.heartbeat-interval-ms as *6*. Run an 
> application and make the *AM node* go down. The application will fail.
> If the same node is picked to launch the AM attempt again, the application 
> can fail. More likely to occur with a smaller number of nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8065) Application is failing when AM node is stopped

2018-03-23 Thread Bilwa S T (JIRA)
Bilwa S T created YARN-8065:
---

 Summary: Application is failing when AM node is stopped
 Key: YARN-8065
 URL: https://issues.apache.org/jira/browse/YARN-8065
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Bilwa S T
Assignee: Bilwa S T
 Fix For: 3.0.0


Configure yarn.scheduler.capacity.schedule-asynchronously.enable as true and 
yarn.resourcemanager.nodemanagers.heartbeat-interval-ms as 6.

Run an application and make the AM node go down. The application will fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8041) Federation: Implement multiple interfaces(14 interfaces), routing REST invocations transparently to multiple RMs

2018-03-23 Thread leiqiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410962#comment-16410962
 ] 

leiqiang edited comment on YARN-8041 at 3/23/18 8:11 AM:
-

I think we should add RouterWebServiceProtocol.


was (Author: leiqiang):
+1 LGTM

I think we should add RouterWebServiceProtocol.

> Federation: Implement multiple interfaces(14 interfaces), routing REST 
> invocations transparently to multiple RMs 
> -
>
> Key: YARN-8041
> URL: https://issues.apache.org/jira/browse/YARN-8041
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: federation
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Yiran Wu
>Priority: Major
>  Labels: patch
> Attachments: YARN-8041.001.patch, YARN-8041.002.patch, 
> YARN-8041.003.patch
>
>
> Implement routing 
> getAppStatistics/getAppState/getNodeToLabels/getLabelsOnNode/updateApplicationPriority/getAppQueue/updateAppQueue/getAppTimeout/getAppTimeouts/updateApplicationTimeout/getAppAttempts/getAppAttempt/getContainers/getContainer
>  REST invocations transparently to multiple RMs 
> I think we need to add a new Web Protocol for the Router, like this:
> {code:java}
> public interface RouterWebServiceProtocol extends RMWebServiceProtocol {
>   List<SubClusterInfo> getAllSubClusterInfo();
>   ClusterInfo getSubClusterInfo(String clusterId);
>   SchedulerInfoType getSchedulerInfo(String subClusterId);
> }
> {code}
> because the Router needs some extra protocol methods, such as 
> getAllSubClusterInfo(): List<SubClusterInfo>, getSubClusterInfo(clusterId): 
> ClusterInfo, and getSchedulerInfo(subClusterId): SchedulerInfo. If needed, I 
> can do it.
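For context, the fan-out such a Router protocol implies could look like the 
sketch below, where every sub-cluster RM is queried and the answers are merged 
for the caller (all names are illustrative, not actual Router code):
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of transparent routing: call each sub-cluster RM in
// parallel and merge the results into a single answer.
class RouterFanOutSketch {
  interface RmClient { String getClusterInfo(); }

  List<String> getAllSubClusterInfo(Map<String, RmClient> subClusters)
      throws Exception {
    ExecutorService pool =
        Executors.newFixedThreadPool(Math.max(1, subClusters.size()));
    List<Future<String>> futures = new ArrayList<>();
    for (RmClient rm : subClusters.values()) {
      futures.add(pool.submit(rm::getClusterInfo)); // one call per RM
    }
    List<String> merged = new ArrayList<>();
    for (Future<String> f : futures) {
      merged.add(f.get()); // the caller sees one combined result
    }
    pool.shutdown();
    return merged;
  }
}
{code}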



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8041) Federation: Implement multiple interfaces(14 interfaces), routing REST invocations transparently to multiple RMs

2018-03-23 Thread leiqiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410962#comment-16410962
 ] 

leiqiang commented on YARN-8041:


+1 LGTM

I think we should add RouterWebServiceProtocol.

> Federation: Implement multiple interfaces(14 interfaces), routing REST 
> invocations transparently to multiple RMs 
> -
>
> Key: YARN-8041
> URL: https://issues.apache.org/jira/browse/YARN-8041
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: federation
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Yiran Wu
>Priority: Major
>  Labels: patch
> Attachments: YARN-8041.001.patch, YARN-8041.002.patch, 
> YARN-8041.003.patch
>
>
> Implement routing 
> getAppStatistics/getAppState/getNodeToLabels/getLabelsOnNode/updateApplicationPriority/getAppQueue/updateAppQueue/getAppTimeout/getAppTimeouts/updateApplicationTimeout/getAppAttempts/getAppAttempt/getContainers/getContainer
>  REST invocations transparently to multiple RMs 
> I think we need to add a new Web Protocol for the Router, like this:
> {code:java}
> public interface RouterWebServiceProtocol extends RMWebServiceProtocol {
>   List<SubClusterInfo> getAllSubClusterInfo();
>   ClusterInfo getSubClusterInfo(String clusterId);
>   SchedulerInfoType getSchedulerInfo(String subClusterId);
> }
> {code}
> because the Router needs some extra protocol methods, such as 
> getAllSubClusterInfo(): List<SubClusterInfo>, getSubClusterInfo(clusterId): 
> ClusterInfo, and getSchedulerInfo(subClusterId): SchedulerInfo. If needed, I 
> can do it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6629) NPE occurred when container allocation proposal is applied but its resource requests are removed before

2018-03-23 Thread Tao Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-6629:
---
Attachment: YARN-6629.004.patch

> NPE occurred when container allocation proposal is applied but its resource 
> requests are removed before
> ---
>
> Key: YARN-6629
> URL: https://issues.apache.org/jira/browse/YARN-6629
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Attachments: YARN-6629.001.patch, YARN-6629.002.patch, 
> YARN-6629.003.patch, YARN-6629.004.patch
>
>
> I wrote a test case to reproduce another problem on branch-2 and found a new 
> NPE error; log: 
> {code}
> FATAL event.EventDispatcher (EventDispatcher.java:run(75)) - Error in 
> handling event type NODE_UPDATE to the Event Dispatcher
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.allocate(AppSchedulingInfo.java:446)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:516)
> at 
> org.apache.hadoop.yarn.client.TestNegativePendingResource$1.answer(TestNegativePendingResource.java:225)
> at 
> org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:31)
> at org.mockito.internal.MockHandler.handle(MockHandler.java:97)
> at 
> org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp$$EnhancerByMockitoWithCGLIB$$29eb8afc.apply()
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2396)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.submitResourceCommitRequest(CapacityScheduler.java:2281)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1247)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainerOnSingleNode(CapacityScheduler.java:1236)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1325)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1112)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.nodeUpdate(CapacityScheduler.java:987)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1367)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:143)
> at 
> org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run(EventDispatcher.java:66)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Reproduce this error in chronological order:
> 1. AM started and requested 1 container with schedulerRequestKey#1 : 
> ApplicationMasterService#allocate -->  CapacityScheduler#allocate --> 
> SchedulerApplicationAttempt#updateResourceRequests --> 
> AppSchedulingInfo#updateResourceRequests 
> Added schedulerRequestKey#1 into schedulerKeyToPlacementSets
> 2. Scheduler allocated 1 container for this request and accepted the proposal
> 3. AM removed this request
> ApplicationMasterService#allocate -->  CapacityScheduler#allocate --> 
> SchedulerApplicationAttempt#updateResourceRequests --> 
> AppSchedulingInfo#updateResourceRequests --> 
> AppSchedulingInfo#addToPlacementSets --> 
> AppSchedulingInfo#updatePendingResources
> Removed schedulerRequestKey#1 from schedulerKeyToPlacementSets
> 4. Scheduler applied this proposal
> CapacityScheduler#tryCommit --> FiCaSchedulerApp#apply --> 
> AppSchedulingInfo#allocate 
> Throws NPE when calling 
> schedulerKeyToPlacementSets.get(schedulerRequestKey).allocate(schedulerKey, 
> type, node);
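The race boils down to a map lookup that returns null once step 3 has removed 
the key. A stripped-down sketch of steps 1, 3 and 4 (illustrative only):
{code:java}
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the NPE: the proposal is applied (step 4) after
// the AM has removed the request (step 3), so the lookup returns null.
class ProposalRaceSketch {
  private final Map<Integer, Runnable> schedulerKeyToPlacementSets =
      new HashMap<>();

  void apply(int schedulerRequestKey) {
    // Without a sanity check, this NPEs when the key was removed earlier.
    schedulerKeyToPlacementSets.get(schedulerRequestKey).run();
  }

  public static void main(String[] args) {
    ProposalRaceSketch s = new ProposalRaceSketch();
    s.schedulerKeyToPlacementSets.put(1, () -> { }); // 1. request added
    s.schedulerKeyToPlacementSets.remove(1);         // 3. AM removed it
    s.apply(1);                                      // 4. NullPointerException
  }
}
{code}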



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6629) NPE occurred when container allocation proposal is applied but its resource requests are removed before

2018-03-23 Thread Tao Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410935#comment-16410935
 ] 

Tao Yang commented on YARN-6629:


Thanks [~leftnoteasy] for your comment.

I have checked the other callers of AppSchedulingInfo#allocate, including 
FSAppAttempt#assignContainers, FifoAppAttempt#allocate and 
ContainerUpdateContext#cancelPreviousRequest; they make sure that the request 
wasn't removed before calling AppSchedulingInfo#allocate. So yes, I agree with 
your suggestion.

We can add the same check logic used in FifoAppAttempt#allocate and 
FSAppAttempt#assignContainers to FiCaSchedulerApp#apply, like this:
{code:java}
// Required sanity check - AM can call 'allocate' to update resource
// request without locking the scheduler, hence we need to check
if (getOutstandingAsksCount(schedulerContainer.getSchedulerRequestKey())
<= 0) {
  return;
}
{code}
I'll upload a new patch after a while. 

> NPE occurred when container allocation proposal is applied but its resource 
> requests are removed before
> ---
>
> Key: YARN-6629
> URL: https://issues.apache.org/jira/browse/YARN-6629
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Attachments: YARN-6629.001.patch, YARN-6629.002.patch, 
> YARN-6629.003.patch
>
>
> I wrote a test case to reproduce another problem on branch-2 and found a new 
> NPE error; log: 
> {code}
> FATAL event.EventDispatcher (EventDispatcher.java:run(75)) - Error in 
> handling event type NODE_UPDATE to the Event Dispatcher
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.allocate(AppSchedulingInfo.java:446)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:516)
> at 
> org.apache.hadoop.yarn.client.TestNegativePendingResource$1.answer(TestNegativePendingResource.java:225)
> at 
> org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:31)
> at org.mockito.internal.MockHandler.handle(MockHandler.java:97)
> at 
> org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp$$EnhancerByMockitoWithCGLIB$$29eb8afc.apply()
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2396)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.submitResourceCommitRequest(CapacityScheduler.java:2281)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1247)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainerOnSingleNode(CapacityScheduler.java:1236)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1325)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1112)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.nodeUpdate(CapacityScheduler.java:987)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1367)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:143)
> at 
> org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run(EventDispatcher.java:66)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Reproduce this error in chronological order:
> 1. AM started and requested 1 container with schedulerRequestKey#1 : 
> ApplicationMasterService#allocate -->  CapacityScheduler#allocate --> 
> SchedulerApplicationAttempt#updateResourceRequests --> 
> AppSchedulingInfo#updateResourceRequests 
> Added schedulerRequestKey#1 into schedulerKeyToPlacementSets
> 2. Scheduler allocated 1 container for this request and accepted the proposal
> 3. AM removed this request
> ApplicationMasterService#allocate -->  CapacityScheduler#allocate --> 
> SchedulerApplicationAttempt#updateResourceRequests --> 
> AppSchedulingInfo#updateResourceRequests --> 
> AppSchedulingInfo#addToPlacementSets --> 
> AppSchedulingInfo#updatePendingResources
> Removed schedulerRequestKey#1 from schedulerKeyToPlacementSets

[jira] [Commented] (YARN-8062) yarn rmadmin -getGroups returns group from which the user has been removed

2018-03-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410922#comment-16410922
 ] 

genericqa commented on YARN-8062:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 36 unchanged - 0 fixed = 37 total (was 36) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 37s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 27s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMAdminService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8062 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12915815/YARN-8062.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b3fc3ffc1322 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 22c5ddb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20061/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Commented] (YARN-7986) ATSv2 REST API queries do not return results for uppercase application tags

2018-03-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410909#comment-16410909
 ] 

Hudson commented on YARN-7986:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13870 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13870/])
YARN-7986. ATSv2 REST API queries do not return results for uppercase 
(rohithsharmaks: rev 75fc05f369929db768b767d79351bca8c13ad9ba)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md


> ATSv2 REST API queries do not return results for uppercase application tags
> ---
>
> Key: YARN-7986
> URL: https://issues.apache.org/jira/browse/YARN-7986
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Charan Hebri
>Assignee: Charan Hebri
>Priority: Critical
> Fix For: 2.10.0, 3.0.2, 3.2.0, 3.1.1
>
> Attachments: YARN-7986.001.patch
>
>
> When applications are submitted to YARN with application tags, the tags are 
> converted to lowercase. This can be seen on the old/new UI. But using the 
> original tags in ATSv2 REST API queries does not return results, as the 
> queries expect the URL to carry the tags in lowercase. 
> This is additional work for the client because each tag needs to be 
> lowercased before running a query.
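A small sketch of the client-side workaround the description mentions: 
lowercase every tag before it goes into the query URL (the endpoint and 
parameter name below are illustrative, not a documented ATSv2 URL):
{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

// Sketch: lowercase application tags before building the ATSv2 query,
// since the server only matches lowercase tags.
class TagQuerySketch {
  static String buildQuery(List<String> tags) {
    String joined = tags.stream()
        .map(t -> t.toLowerCase(Locale.ROOT))
        .collect(Collectors.joining(","));
    return "/ws/v2/timeline/apps?applicationtags=" + joined; // illustrative
  }

  public static void main(String[] args) {
    System.out.println(buildQuery(Arrays.asList("MyTag", "PROD")));
  }
}
{code}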



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8063) DistributedShellTimelinePlugin wrongly check for entityId instead of entityType

2018-03-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410908#comment-16410908
 ] 

Hudson commented on YARN-8063:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13870 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13870/])
YARN-8063. DistributedShellTimelinePlugin wrongly check for entityId (sunilg: 
rev 22c5ddb7c4fb48d5bf5a7456d0b1b27d48c2a485)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/DistributedShellTimelinePlugin.java


> DistributedShellTimelinePlugin wrongly check for entityId instead of 
> entityType
> ---
>
> Key: YARN-8063
> URL: https://issues.apache.org/jira/browse/YARN-8063
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Fix For: 2.10.0, 3.0.2, 3.2.0, 3.1.1
>
> Attachments: YARN-8063.01.patch
>
>
> DistributedShellTimelinePlugin#getTimelineEntityGroupId compares with entityId 
> rather than entityType. This causes getTimelineEntityGroupId to fail.  
> {code}
>  public Set<TimelineEntityGroupId> getTimelineEntityGroupId(String entityId,
>   String entityType) {
> if (ApplicationMaster.DSEntity.DS_CONTAINER.toString().equals(entityId)) {
>   ContainerId containerId = ContainerId.fromString(entityId);
>   ApplicationId appId = containerId.getApplicationAttemptId()
>   .getApplicationId();
>   return toEntityGroupId(appId.toString());
> }
> return null;
>   }
> {code}
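For comparison, the corrected check the summary implies compares the entity 
type instead; a sketch assuming the rest of the method stays unchanged:
{code:java}
// Sketch of the implied fix: match on entityType; entityId carries the
// container id and is only parsed after the type check passes.
public Set<TimelineEntityGroupId> getTimelineEntityGroupId(String entityId,
    String entityType) {
  if (ApplicationMaster.DSEntity.DS_CONTAINER.toString().equals(entityType)) {
    ContainerId containerId = ContainerId.fromString(entityId);
    ApplicationId appId = containerId.getApplicationAttemptId()
        .getApplicationId();
    return toEntityGroupId(appId.toString());
  }
  return null;
}
{code}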



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7581) HBase filters are not constructed correctly in ATSv2

2018-03-23 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410869#comment-16410869
 ] 

Rohith Sharma K S commented on YARN-7581:
-

Committing the branch-2 patch shortly.

> HBase filters are not constructed correctly in ATSv2
> 
>
> Key: YARN-7581
> URL: https://issues.apache.org/jira/browse/YARN-7581
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Fix For: 3.1.0, yarn-7055, 3.2.0
>
> Attachments: YARN-7581-YARN-7055.04.patch, 
> YARN-7581-branch-2.05.patch, YARN-7581.00.patch, YARN-7581.01.patch, 
> YARN-7581.02.patch, YARN-7581.03.patch, YARN-7581.04.patch, YARN-7581.05.patch
>
>
> Post YARN-7346,
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters() and 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters() 
> start to fail when hbase.profile is set to 2.0.
> *Error Message*
>  [ERROR] Failures:
>  [ERROR] 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters:1266 
> expected:<2> but was:<0>
>  [ERROR] 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters:1523 
> expected:<1> but was:<0>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8062) yarn rmadmin -getGroups returns group from which the user has been removed

2018-03-23 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410853#comment-16410853
 ] 

Rohith Sharma K S commented on YARN-8062:
-

+1 lgtm

> yarn rmadmin -getGroups returns group from which the user has been removed
> --
>
> Key: YARN-8062
> URL: https://issues.apache.org/jira/browse/YARN-8062
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-8062.001.patch, YARN-8062.002.patch
>
>
> {code:title=adding group hrt_yarn_rmadmin_test}
> sudo su - -c "groupadd hrt_yarn_rmadmin_test" root
> {code}
> {code:title=adding user hrt_yarn_rmadmin_test to group hrt_yarn_rmadmin_test}
> sudo su - -c "useradd hrt_yarn_rmadmin_test -g hrt_yarn_rmadmin_test" root
> {code}
> {code:title=adding group hrt_yarn_rmadmin_test_group2}
> sudo su - -c "groupadd hrt_yarn_rmadmin_test_group2" root
> {code}
> {code:title=adding user hrt_yarn_rmadmin_test to group 
> hrt_yarn_rmadmin_test_group2}
> sudo su - -c "usermod -a -G hrt_yarn_rmadmin_test_group2 
> hrt_yarn_rmadmin_test" root
> {code}
> Refresh and getGroups
> {code}
> yarn rmadmin -refreshUserToGroupsMappings
> /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin -getGroups 
> hrt_yarn_rmadmin_test
> hrt_yarn_rmadmin_test : hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2
> {code}
> Delete group hrt_yarn_rmadmin_test_group2 from user hrt_yarn_rmadmin_test, 
> then refresh and run getGroups. 
> We can still see group hrt_yarn_rmadmin_test_group2:
> {code}
> sudo su - -c "gpasswd -d hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2" 
> root
> {code}
> Removing user hrt_yarn_rmadmin_test from group hrt_yarn_rmadmin_test_group2
> {code}
> bash-4.2$  /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin 
> -refreshUserToGroupsMappings
> /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin -getGroups 
> hrt_yarn_rmadmin_test
> hrt_yarn_rmadmin_test : hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org