[jira] [Updated] (YARN-7786) NullPointerException while launching ApplicationMaster

2018-02-22 Thread lujie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated YARN-7786:

Issue Type: Bug  (was: Improvement)

> NullPointerException while launching ApplicationMaster
> --
>
> Key: YARN-7786
> URL: https://issues.apache.org/jira/browse/YARN-7786
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: lujie
>Assignee: lujie
>Priority: Minor
> Attachments: YARN-7786.patch, YARN-7786_1.patch, YARN-7786_2.patch, 
> YARN-7786_3.patch, resourcemanager.log
>
>
> Before the ApplicationMaster is launched, send a kill command to the job; a 
> NullPointerException then appears:
> {code}
> 2017-11-25 21:27:25,333 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Error 
> launching appattempt_1511616410268_0001_01. Got exception: 
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.setupTokens(AMLauncher.java:205)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.createAMContainerLaunchContext(AMLauncher.java:193)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:112)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:304)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Comment Edited] (YARN-7929) SLS supports setting container execution

2018-02-22 Thread Jiandan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374021#comment-16374021
 ] 

Jiandan Yang  edited comment on YARN-7929 at 2/23/18 7:09 AM:
--

Hi [~yochen], thanks for your attention. I did encounter a merge failure when I 
pulled the latest code in my local development environment. I will upload a new 
patch based on the latest code.

Adding a "water level" to the NMSimulator simulates actual resource utilization; 
scheduling OPPORTUNISTIC containers through the central RM needs actual node 
utilization, according to the design doc in YARN-1011.


was (Author: yangjiandan):
Hi [~yochen], thanks for your attention. I did encounter the issue of merging 
failed when I pull latest code in my local develop environment. I will upload a 
new patch based latest code.

"water level" to the NMSimulator simulates actual resource utilization, the 
scheduling of OPPORTUNISTIC containers through the central RM need actual node 
utilization according to design doc in YARN-1011.

> SLS supports setting container execution
> 
>
> Key: YARN-7929
> URL: https://issues.apache.org/jira/browse/YARN-7929
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: YARN-7929.001.patch
>
>
> SLS currently supports three trace types, SYNTH, SLS, and RUMEN, but the trace 
> file cannot set the execution type of a container.
>  This JIRA will introduce an execution type in SLS to enable better simulation. 
> This will help performance testing with regard to Opportunistic 
> Containers.
>  RUMEN uses the default execution type GUARANTEED.
>  SYNTH sets the execution type via the fields map_execution_type and 
> reduce_execution_type.
>  SLS sets the execution type via the field container.execution_type.
>  For compatibility, GUARANTEED is used as the default value when the above 
> fields are not set in the trace file.
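A minimal sketch of the compatibility default described above (an assumption about how a trace field might be handled, not the attached patch): an execution type read from a parsed trace entry falls back to GUARANTEED when the container.execution_type field is absent.

{code:java}
import java.util.Map;

public class ExecutionTypeDefaultSketch {
  enum ExecutionType { GUARANTEED, OPPORTUNISTIC }

  // Read the execution type from a parsed SLS trace entry; default to GUARANTEED
  // when "container.execution_type" is not present, as the description proposes.
  static ExecutionType executionTypeOf(Map<String, String> traceEntry) {
    String value = traceEntry.get("container.execution_type");
    return value == null ? ExecutionType.GUARANTEED : ExecutionType.valueOf(value);
  }

  public static void main(String[] args) {
    System.out.println(executionTypeOf(Map.of()));  // GUARANTEED
    System.out.println(executionTypeOf(
        Map.of("container.execution_type", "OPPORTUNISTIC")));  // OPPORTUNISTIC
  }
}
{code}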






[jira] [Commented] (YARN-7929) SLS supports setting container execution

2018-02-22 Thread Jiandan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374021#comment-16374021
 ] 

Jiandan Yang  commented on YARN-7929:
-

Hi [~yochen], thanks for your attention. I did encounter a merge failure when I 
pulled the latest code in my local development environment. I will upload a new 
patch based on the latest code.

Adding a "water level" to the NMSimulator simulates actual resource utilization; 
scheduling OPPORTUNISTIC containers through the central RM needs actual node 
utilization, according to the design doc in YARN-1011.

> SLS supports setting container execution
> 
>
> Key: YARN-7929
> URL: https://issues.apache.org/jira/browse/YARN-7929
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: YARN-7929.001.patch
>
>
> SLS currently supports three trace types, SYNTH, SLS, and RUMEN, but the trace 
> file cannot set the execution type of a container.
>  This JIRA will introduce an execution type in SLS to enable better simulation. 
> This will help performance testing with regard to Opportunistic 
> Containers.
>  RUMEN uses the default execution type GUARANTEED.
>  SYNTH sets the execution type via the fields map_execution_type and 
> reduce_execution_type.
>  SLS sets the execution type via the field container.execution_type.
>  For compatibility, GUARANTEED is used as the default value when the above 
> fields are not set in the trace file.






[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release

2018-02-22 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374000#comment-16374000
 ] 

Rohith Sharma K S commented on YARN-7346:
-

[~haibochen] To me, we should retain the 05 patch itself; the findbugs and javadoc 
errors are irrelevant. As I see it, Yetus is trying to find a findbug.xml file 
which doesn't exist. I think we should go ahead and commit the 05 patch.

> Fix compilation errors against hbase2 beta release
> --
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7346.00.patch, YARN-7346.01.patch, 
> YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, 
> YARN-7346.04-incremental.patch, YARN-7346.04.patch, YARN-7346.05.patch, 
> YARN-7346.06.patch, YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, 
> YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.






[jira] [Commented] (YARN-7949) ArtifactsId should not be a compulsory field for new service

2018-02-22 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373958#comment-16373958
 ] 

Sunil G commented on YARN-7949:
---

Looks good. Committing shortly.

> ArtifactsId should not be a compulsory field for new service
> 
>
> Key: YARN-7949
> URL: https://issues.apache.org/jira/browse/YARN-7949
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Yesha Vora
>Priority: Major
> Attachments: YARN-7949.001.patch
>
>
> 1) Click on New Service 
> 2) Create a component
> The Create Component page has Artifacts Id as a compulsory entry. Some YARN 
> service examples, such as sleeper.json, do not need to provide an artifacts id.
> {code:java|title=sleeper.json}
> {
>   "name": "sleeper-service",
>   "components" :
>   [
> {
>   "name": "sleeper",
>   "number_of_containers": 2,
>   "launch_command": "sleep 90",
>   "resource": {
> "cpus": 1,
> "memory": "256"
>   }
> }
>   ]
> }{code}
> Thus, artifactsId should not be a compulsory field.






[jira] [Commented] (YARN-7957) Yarn service delete option disappears after stopping application

2018-02-22 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373930#comment-16373930
 ] 

Sunil G commented on YARN-7957:
---

Thanks [~gsaha].

I'll check this API and see how we can integrate it.

> Yarn service delete option disappears after stopping application
> 
>
> Key: YARN-7957
> URL: https://issues.apache.org/jira/browse/YARN-7957
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-7957.01.patch
>
>
> Steps:
> 1) Launch yarn service
> 2) Go to service page and click on Setting button->"Stop Service". The 
> application will be stopped.
> 3) Refresh page
> Here, the Setting button disappears. Thus, the user cannot delete the service 
> from the UI after stopping the application.
> Expected behavior:
> The Setting button should be present on the UI page after the application is 
> stopped. If the application is stopped, the Setting button should only have the 
> "Delete Service" action available.






[jira] [Commented] (YARN-7921) Transform a PlacementConstraint to a string expression

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373913#comment-16373913
 ] 

genericqa commented on YARN-7921:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
19s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
7s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7921 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911660/YARN-7921.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e703c0ba7410 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 514794e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Commented] (YARN-7925) Some NPE errors caused a display errors when setting node labels

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373887#comment-16373887
 ] 

genericqa commented on YARN-7925:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 39 unchanged - 0 fixed = 40 total (was 39) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 63m 
47s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7925 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911655/YARN-7925.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c80505fbc6a9 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 95904f6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19790/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19790/testReport/ |
| Max. process+thread count | 873 (vs. 

[jira] [Commented] (YARN-7955) Calling stop on an already stopped service says "Successfully stopped service"

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373888#comment-16373888
 ] 

genericqa commented on YARN-7955:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
49s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m  9s{color} 
| {color:red} hadoop-yarn-services-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7955 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911658/YARN-7955.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 05ed96d3c903 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 514794e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Commented] (YARN-7921) Transform a PlacementConstraint to a string expression

2018-02-22 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373841#comment-16373841
 ] 

Weiwei Yang commented on YARN-7921:
---

Hi [~kkaranasos], could you please help to review this patch? Thanks

> Transform a PlacementConstraint to a string expression
> --
>
> Key: YARN-7921
> URL: https://issues.apache.org/jira/browse/YARN-7921
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7921.001.patch, YARN-7921.002.patch
>
>
> Purpose:
> Make placement constraints viewable in the UI or logs, e.g. print an app's 
> placement constraint on the RM app page. This helps users use constraints and 
> analyze placement issues more easily.
> Proposal:
> Like what was added for DS, toString is the reverse of 
> {{PlacementConstraintParser}}: it transforms a PlacementConstraint into a 
> string using the same syntax. E.g.
> {code}
> AbstractConstraint constraintExpr = targetIn(NODE, allocationTag("hbase-m"));
> constraintExpr.toString();
> // This prints: IN,NODE,hbase-m
> {code}






[jira] [Updated] (YARN-7921) Transform a PlacementConstraint to a string expression

2018-02-22 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7921:
--
Attachment: YARN-7921.002.patch

> Transform a PlacementConstraint to a string expression
> --
>
> Key: YARN-7921
> URL: https://issues.apache.org/jira/browse/YARN-7921
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7921.001.patch, YARN-7921.002.patch
>
>
> Purpose:
> Make placement constraints viewable in the UI or logs, e.g. print an app's 
> placement constraint on the RM app page. This helps users use constraints and 
> analyze placement issues more easily.
> Proposal:
> Like what was added for DS, toString is the reverse of 
> {{PlacementConstraintParser}}: it transforms a PlacementConstraint into a 
> string using the same syntax. E.g.
> {code}
> AbstractConstraint constraintExpr = targetIn(NODE, allocationTag("hbase-m"));
> constraintExpr.toString();
> // This prints: IN,NODE,hbase-m
> {code}






[jira] [Commented] (YARN-7934) [GQ] Refactor preemption calculators to allow overriding for Federation Global Algos

2018-02-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373831#comment-16373831
 ] 

Hudson commented on YARN-7934:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13702 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13702/])
YARN-7934. [GQ] Refactor preemption calculators to allow overriding for (carlo 
curino: rev 514794e1a5a39ca61de3981d53a05547ae17f5e4)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/PreemptableResourceCalculator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/AbstractPreemptableResourceCalculator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/AbstractPreemptionEntity.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ResourceInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TempQueuePerPartition.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionContext.java


> [GQ] Refactor preemption calculators to allow overriding for Federation 
> Global Algos
> 
>
> Key: YARN-7934
> URL: https://issues.apache.org/jira/browse/YARN-7934
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7934.v1.patch, YARN-7934.v2.patch, 
> YARN-7934.v3.patch, YARN-7934.v4.patch
>
>
> This Jira tracks minimal changes in the capacity scheduler preemption 
> mechanics that allow for sub-classing and overriding of certain behaviors, 
> which we use to implement federation global algorithms, e.g., in YARN-7403.
>  






[jira] [Updated] (YARN-7955) Calling stop on an already stopped service says "Successfully stopped service"

2018-02-22 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-7955:

Attachment: YARN-7955.002.patch

> Calling stop on an already stopped service says "Successfully stopped service"
> --
>
> Key: YARN-7955
> URL: https://issues.apache.org/jira/browse/YARN-7955
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-7955.001.patch, YARN-7955.002.patch
>
>
> If you invoke "yarn app -stop " on an already stopped service, 
> it confusingly responds with the message "Successfully stopped service 
> ". It should say "Service is already stopped".
> The same is seen with the REST API PUT request with data \{ "state": 
> "STOPPED"}: the response is 200 OK, with diagnostics carrying the same message 
> "Successfully stopped service ". It should return 400 Bad 
> Request with the message "Service is already stopped".
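A minimal sketch of the expected behavior described above (an illustration only, not the attached patch or the actual Service API classes; the service name and enum are hypothetical): stopping an already-stopped service is rejected instead of being reported as a success, which for the REST API would map to 400 Bad Request.

{code:java}
public class StopServiceSketch {
  enum ServiceState { STARTED, STOPPED }

  // Reject a stop request when the service is already stopped.
  static String stop(String name, ServiceState current) {
    if (current == ServiceState.STOPPED) {
      throw new IllegalStateException("Service " + name + " is already stopped");
    }
    return "Successfully stopped service " + name;
  }

  public static void main(String[] args) {
    System.out.println(stop("my-service", ServiceState.STARTED));
    try {
      stop("my-service", ServiceState.STOPPED);
    } catch (IllegalStateException e) {
      // A REST front end would translate this into a 400 Bad Request response.
      System.out.println("400 Bad Request: " + e.getMessage());
    }
  }
}
{code}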






[jira] [Updated] (YARN-6858) Attribute Manager to store and provide node attributes in RM

2018-02-22 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6858:
--
Summary: Attribute Manager to store and provide node attributes in RM  
(was: Attribute Manager to store and provide the attributes in RM)

> Attribute Manager to store and provide node attributes in RM
> 
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, 
> YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, 
> YARN-6858-YARN-3409.006.patch, YARN-6858-YARN-3409.007.patch, 
> YARN-6858-YARN-3409.008.patch, YARN-6858-YARN-3409.009.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.






[jira] [Updated] (YARN-7925) Some NPE errors caused a display errors when setting node labels

2018-02-22 Thread Jinjiang Ling (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinjiang Ling updated YARN-7925:

Attachment: YARN-7925.002.patch

> Some NPE errors caused a display errors when setting node labels
> 
>
> Key: YARN-7925
> URL: https://issues.apache.org/jira/browse/YARN-7925
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Jinjiang Ling
>Assignee: Jinjiang Ling
>Priority: Blocker
> Attachments: DisplayError.png, YARN-7925.001.patch, 
> YARN-7925.002.patch
>
>
> I'm trying to use node labels with the latest Hadoop (3.1.0-SNAPSHOT). But 
> when I add a new node label and assign a NodeManager to it, it sometimes 
> causes a display error.
> !DisplayError.png|width=573,height=188!
> Then I found that *when no queue can access the label*, this error 
> happens.
> After checking the log, I found some NPE errors.
> {quote}..
>  Caused by: java.lang.NullPointerException
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo.toString(ResourceInfo.java:73)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:160)
>  ..
> {quote}
>  
>  
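A minimal illustration of the kind of guard that would avoid this NPE (a hypothetical sketch, not the attached patch; the real ResourceInfo wraps a YARN Resource object): if the wrapped resource can be null because no queue has access to the label, toString falls back to a default instead of dereferencing it.

{code:java}
public class ResourceInfoToStringSketch {
  // Stand-in for the wrapped org.apache.hadoop.yarn.api.records.Resource,
  // which may be null when no queue can access the node label.
  private final Object resource;

  ResourceInfoToStringSketch(Object resource) {
    this.resource = resource;
  }

  @Override
  public String toString() {
    // Guard against a null resource instead of throwing a NullPointerException.
    return resource == null ? "<memory:0, vCores:0>" : resource.toString();
  }

  public static void main(String[] args) {
    System.out.println(new ResourceInfoToStringSketch(null));  // prints the default, no NPE
  }
}
{code}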






[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-22 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373794#comment-16373794
 ] 

Sunil G commented on YARN-6858:
---

Thanks [~Naganarasimha]. Looks good. Committing.

[~bibinchundatt], is it good to go?

 

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, 
> YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, 
> YARN-6858-YARN-3409.006.patch, YARN-6858-YARN-3409.007.patch, 
> YARN-6858-YARN-3409.008.patch, YARN-6858-YARN-3409.009.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.






[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373788#comment-16373788
 ] 

genericqa commented on YARN-6858:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
51s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
6s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 4 new + 115 unchanged - 1 fixed = 119 total (was 116) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
1s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
12s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 33s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodeLabels |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6858 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911623/YARN-6858-YARN-3409.009.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  

[jira] [Comment Edited] (YARN-5910) Support for multi-cluster delegation tokens

2018-02-22 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373772#comment-16373772
 ] 

Greg Senia edited comment on YARN-5910 at 2/23/18 1:21 AM:
---

[~jlowe] I was able to resolve my issue with the following:

 

Example/Details: 

*Distcp with Kerberos between Secure Clusters without Cross-realm 
Authentication*

Two clusters with the realms: *Source Realm (PROD.HDP.EXAMPLE.COM) and 
Destination Realm (MODL.HDP.EXAMPLE.COM)*

Data moves from the *Source Cluster (hdfs://prod) to the Destination Cluster 
(hdfs://modl)*

A *one-way cross-realm* trust exists between *Source Realm 
(PROD.HDP.EXAMPLE.COM) and Active Directory (NT.EXAMPLE.COM), and between 
Destination Realm (MODL.HDP.EXAMPLE.COM) and Active Directory (NT.EXAMPLE.COM).*

Both the Source Cluster (prod) and Destination Cluster (modl) are running a 
Hadoop Distribution with the following patches: 

https://issues.apache.org/jira/browse/HDFS-7546 

https://issues.apache.org/jira/browse/YARN-3021

We set the mapreduce.job.hdfs-servers.token-renewal.exclude property, which 
apparently instructs the ResourceManagers on either cluster whether to skip or 
perform delegation token renewal for NameNode hosts, and we also set the 
dfs.namenode.kerberos.principal.pattern property to * to allow distcp 
irrespective of the principal patterns of the source and destination clusters.

*Example of Command that works:*

hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
-Dmapreduce.job.hdfs-servers.token-renewal.exclude=modl hdfs:///public_data 
hdfs://modl/public_data/gss_test2


was (Author: gss2002):
[~jlowe] I was able to resolve my issue with the following:

 

Kerberos Distcp between Secure Clusters (without Cross-realm Authentication)
Two clusters with the realms: SOURCE (PROD.HDP.EXAMPLE.COM) and DESTINATION 
(MODL.HDP.EXAMPLE.COM)
Data needs to move between SOURCE (hdfs://prod) to DESTINATION (hdfs://modl)
Trust exists between SOURCE (PROD.HDP.EXAMPLE.COM and Active Directory 
(NT.EXAMPLE.COM), and DESTINATION (MODL.HDP.EXAMPLE.COM) and Active Directory 
(NT.EXAMPLE.COM).
Both SOURCE (prod) and DESTINATION (modl) clusters are running a Hadoop Distro 
with the following Patches: https://issues.apache.org/jira/browse/HDFS-7546 and 
https://issues.apache.org/jira/browse/YARN-3021

Set mapreduce.job.hdfs-servers.token-renewal.exclude property to instruct 
ResourceManagers on either cluster to skip or perform delegation token renewal 
for NameNode hosts and set the dfs.namenode.kerberos.principal.pattern property 
to * to allow distcp irrespective of the principal patterns of the source and 
destination clusters

Example of Command that works:

hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
-Dmapreduce.job.hdfs-servers.token-renewal.exclude=modl hdfs:///public_data 
hdfs://modl/public_data/gss_test2

> Support for multi-cluster delegation tokens
> ---
>
> Key: YARN-5910
> URL: https://issues.apache.org/jira/browse/YARN-5910
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: security
>Reporter: Clay B.
>Assignee: Jian He
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: YARN-5910.01.patch, YARN-5910.2.patch, 
> YARN-5910.3.patch, YARN-5910.4.patch, YARN-5910.5.patch, YARN-5910.6.patch, 
> YARN-5910.7.patch
>
>
> As an administrator running many secure (kerberized) clusters, some of which 
> have peer clusters managed by other teams, I am looking for a way to run jobs 
> which may require services running on other clusters. Particular cases where 
> this rears itself are running something as core as a distcp between two 
> kerberized clusters (e.g. {{hadoop --config /home/user292/conf/ distcp 
> hdfs://LOCALCLUSTER/user/user292/test.out 
> hdfs://REMOTECLUSTER/user/user292/test.out.result}}).
> Thanks to YARN-3021, one can run for a while, but if the delegation token for 
> the remote cluster needs renewal the job will fail[1]. One can pre-configure 
> their {{hdfs-site.xml}} loaded by the YARN RM to know of all possible HDFSes 
> available but that requires coordination that is not always feasible, 
> especially as a cluster's peers grow into the tens of clusters or across 
> management teams. Ideally, one could have core systems configured this way 
> but jobs could also specify their own handling of tokens and management when 
> needed?
> [1]: Example stack trace when the RM is unaware of a remote service:
> 
> {code}
> 2016-03-23 14:59:50,528 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer:
>  application_1458441356031_3317 found existing hdfs token Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:REMOTECLUSTER, Ident: 
> (HDFS_DELEGATION_TOKEN token
>  10927 for user292)
> 2016-03-23 14:59:50,557 WARN 

[jira] [Commented] (YARN-5910) Support for multi-cluster delegation tokens

2018-02-22 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373772#comment-16373772
 ] 

Greg Senia commented on YARN-5910:
--

[~jlowe] I was able to resolve my issue with the following:

 

Kerberos Distcp between Secure Clusters (without Cross-realm Authentication)
Two clusters with the realms: SOURCE (PROD.HDP.EXAMPLE.COM) and DESTINATION 
(MODL.HDP.EXAMPLE.COM)
Data needs to move from SOURCE (hdfs://prod) to DESTINATION (hdfs://modl)
Trust exists between SOURCE (PROD.HDP.EXAMPLE.COM) and Active Directory 
(NT.EXAMPLE.COM), and between DESTINATION (MODL.HDP.EXAMPLE.COM) and Active 
Directory (NT.EXAMPLE.COM).
Both SOURCE (prod) and DESTINATION (modl) clusters are running a Hadoop Distro 
with the following Patches: https://issues.apache.org/jira/browse/HDFS-7546 and 
https://issues.apache.org/jira/browse/YARN-3021

Set the mapreduce.job.hdfs-servers.token-renewal.exclude property to instruct 
the ResourceManagers on either cluster whether to skip or perform delegation 
token renewal for NameNode hosts, and set the 
dfs.namenode.kerberos.principal.pattern property to * to allow distcp 
irrespective of the principal patterns of the source and destination clusters.

Example of Command that works:

hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
-Dmapreduce.job.hdfs-servers.token-renewal.exclude=modl hdfs:///public_data 
hdfs://modl/public_data/gss_test2

> Support for multi-cluster delegation tokens
> ---
>
> Key: YARN-5910
> URL: https://issues.apache.org/jira/browse/YARN-5910
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: security
>Reporter: Clay B.
>Assignee: Jian He
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: YARN-5910.01.patch, YARN-5910.2.patch, 
> YARN-5910.3.patch, YARN-5910.4.patch, YARN-5910.5.patch, YARN-5910.6.patch, 
> YARN-5910.7.patch
>
>
> As an administrator running many secure (kerberized) clusters, some of which 
> have peer clusters managed by other teams, I am looking for a way to run jobs 
> which may require services running on other clusters. Particular cases where 
> this rears itself are running something as core as a distcp between two 
> kerberized clusters (e.g. {{hadoop --config /home/user292/conf/ distcp 
> hdfs://LOCALCLUSTER/user/user292/test.out 
> hdfs://REMOTECLUSTER/user/user292/test.out.result}}).
> Thanks to YARN-3021, one can run for a while, but if the delegation token for 
> the remote cluster needs renewal the job will fail[1]. One can pre-configure 
> their {{hdfs-site.xml}} loaded by the YARN RM to know of all possible HDFSes 
> available but that requires coordination that is not always feasible, 
> especially as a cluster's peers grow into the tens of clusters or across 
> management teams. Ideally, one could have core systems configured this way 
> but jobs could also specify their own handling of tokens and management when 
> needed?
> [1]: Example stack trace when the RM is unaware of a remote service:
> 
> {code}
> 2016-03-23 14:59:50,528 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer:
>  application_1458441356031_3317 found existing hdfs token Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:REMOTECLUSTER, Ident: 
> (HDFS_DELEGATION_TOKEN token
>  10927 for user292)
> 2016-03-23 14:59:50,557 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer:
>  Unable to add the application to the delegation token renewer.
> java.io.IOException: Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, 
> Service: ha-hdfs:REMOTECLUSTER, Ident: (HDFS_DELEGATION_TOKEN token 10927 for 
> user292)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:427)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$700(DelegationTokenRenewer.java:78)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.handleDTRenewerAppSubmitEvent(DelegationTokenRenewer.java:781)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:762)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: java.io.IOException: Unable to map logical nameservice URI 
> 'hdfs://REMOTECLUSTER' to a NameNode. Local configuration does not have a 
> failover proxy provider configured.
> at org.apache.hadoop.hdfs.DFSClient$Renewer.getNNProxy(DFSClient.java:1164)
> at 

[jira] [Created] (YARN-7964) Yarn Scheduler Load Simulator (SLS): MetricsLogRunnable stops working when there are too many jobs needed to load from sls

2018-02-22 Thread Wei Chen (JIRA)
Wei Chen created YARN-7964:
--

 Summary: Yarn Scheduler Load Simulator (SLS): MetricsLogRunnable 
stops working when there are too many jobs needed to load from sls
 Key: YARN-7964
 URL: https://issues.apache.org/jira/browse/YARN-7964
 Project: Hadoop YARN
  Issue Type: Bug
  Components: scheduler-load-simulator
Affects Versions: 3.0.0, 2.7.5
 Environment: I am running sls on a linux server (ubuntu-16.04). The 
hadoop version is 3.0.0
Reporter: Wei Chen


Hi, I am using SLS to simulate a large-scale cluster, which consists of more 
than 100 nodes and runs more than 4k jobs. I found that MetricsLogRunnable 
(which periodically flushes real-time metrics to a file) stops working if SLS 
takes too long to load the SLS trace file.

More specifically, the exception is thrown here, in the method String 
generateRealTimeTrackingMetrics() in SLSWebApp.java:
{code:java}
for (String queue : wrapper.getQueueSet()) {
..
}
{code}
 
The exception is reported as:
{code}
2018-02-22 17:13:59,450 INFO sls.SLSRunner: newly creaed job: 6263127055conainer size: 10queue: default
java.lang.NullPointerException
at org.apache.hadoop.yarn.sls.web.SLSWebApp.generateRealTimeTrackingMetrics(SLSWebApp.java:438)
at org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler$MetricsLogRunnable.run(SLSCapacityScheduler.java:724)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

 

So wrapper.getQueueSet() returns null, which causes the exception.

After further analyzing the source code, we noticed the following in SLSRunner.java:
{code:java}
public void start() throws Exception {
  // start resource manager
  startRM();
  // start node managers
  startNM();
  // start application masters
  startAM();
  // set queue & tracked apps information
  ((SchedulerWrapper) rm.getResourceScheduler())
      .setQueueSet(this.queueAppNumMap.keySet());
  ((SchedulerWrapper) rm.getResourceScheduler())
      .setTrackedAppSet(this.trackedApps);
  // print out simulation info
  printSimulationInfo();
  // blocked until all nodes RUNNING
  waitForNodesRunning();
  // starting the runner once everything is ready to go,
  runner.start();
}
{code}

As you can see, the queue set for tracking is set by 
((SchedulerWrapper) rm.getResourceScheduler()).setQueueSet(this.queueAppNumMap.keySet()), 
which is done after RM, NM, and AM initialization. Before the queue set is set, 
the MetricsLogRunnable has already been launched. That is why the queue set is 
still null and causes the NullPointerException.
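A small, self-contained sketch of the race described above (an illustration, not the SLS code itself; the class and field names are hypothetical): a periodic metrics task starts before the queue set is published, so guarding against the not-yet-initialized set lets the task skip a round instead of throwing a NullPointerException.

{code:java}
import java.util.Set;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class QueueSetRaceSketch {
  // Published late, like setQueueSet(...) being called after RM/NM/AM startup.
  private volatile Set<String> queueSet;

  String generateMetrics() {
    Set<String> queues = queueSet;
    if (queues == null) {
      return "";  // queue set not registered yet; skip this round
    }
    return String.join(",", queues);
  }

  public static void main(String[] args) throws Exception {
    QueueSetRaceSketch sketch = new QueueSetRaceSketch();
    ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
    // Starts immediately, like MetricsLogRunnable.
    pool.scheduleAtFixedRate(
        () -> System.out.println("metrics: [" + sketch.generateMetrics() + "]"),
        0, 100, TimeUnit.MILLISECONDS);
    Thread.sleep(350);                    // simulate a long trace-loading phase
    sketch.queueSet = Set.of("default");  // queue set published only after loading
    Thread.sleep(250);
    pool.shutdown();
  }
}
{code}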
 






[jira] [Created] (YARN-7963) TestServiceAM and TestServiceMonitor test cases are hanging

2018-02-22 Thread Eric Yang (JIRA)
Eric Yang created YARN-7963:
---

 Summary: TestServiceAM and TestServiceMonitor test cases are 
hanging
 Key: YARN-7963
 URL: https://issues.apache.org/jira/browse/YARN-7963
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-native-services
Affects Versions: 3.1.0
Reporter: Eric Yang


There is a regression from merging YARN-6592 that prevents the YARN services 
test cases from working. The unit tests hang while contacting the embedded 
ZooKeeper instances.






[jira] [Commented] (YARN-5910) Support for multi-cluster delegation tokens

2018-02-22 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373672#comment-16373672
 ] 

Jason Lowe commented on YARN-5910:
--

I think the problem here is that the token renewer, which is the 
ResourceManager, has no way to authenticate with the remote cluster from which 
the token was obtained.  So that leaves us stuck with a token the user can get 
with their credentials but YARN cannot keep active because the RM's credentials 
are worthless to the other cluster.  The RM always attempts to renew the 
delegation tokens of an application that was submitted to both ensure validity 
and guarantee that it will be able to keep the tokens renewed while the app 
either waits to be scheduled or while it is running.  This is to fail fast 
rather than have apps run for hours then have their credentials expire because 
the RM could not renew them.

The easiest fix is to establish valid credentials for the RM to the other 
cluster.  If you don't want to expose the full trust between the two clusters, 
one workaround is to create yet another realm where the RM principals exist and 
there's a one-way trust from that new realm to each Hadoop cluster's realm that 
wishes to communicate cross-realm like this.

> Support for multi-cluster delegation tokens
> ---
>
> Key: YARN-5910
> URL: https://issues.apache.org/jira/browse/YARN-5910
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: security
>Reporter: Clay B.
>Assignee: Jian He
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: YARN-5910.01.patch, YARN-5910.2.patch, 
> YARN-5910.3.patch, YARN-5910.4.patch, YARN-5910.5.patch, YARN-5910.6.patch, 
> YARN-5910.7.patch
>
>
> As an administrator running many secure (kerberized) clusters, some which 
> have peer clusters managed by other teams, I am looking for a way to run jobs 
> which may require services running on other clusters. Particular cases where 
> this rears itself are running something as core as a distcp between two 
> kerberized clusters (e.g. {{hadoop --config /home/user292/conf/ distcp 
> hdfs://LOCALCLUSTER/user/user292/test.out 
> hdfs://REMOTECLUSTER/user/user292/test.out.result}}).
> Thanks to YARN-3021, one can run for a while, but if the delegation token for 
> the remote cluster needs renewal the job will fail [1]. One can pre-configure 
> their {{hdfs-site.xml}} loaded by the YARN RM to know of all possible HDFSes 
> available but that requires coordination that is not always feasible, 
> especially as a cluster's peers grow into the tens of clusters or across 
> management teams. Ideally, one could have core systems configured this way 
> but jobs could also specify their own handling of tokens and management when 
> needed?
> [1]: Example stack trace when the RM is unaware of a remote service:
> 
> {code}
> 2016-03-23 14:59:50,528 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer:
>  application_1458441356031_3317 found existing hdfs token Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:REMOTECLUSTER, Ident: 
> (HDFS_DELEGATION_TOKEN token
>  10927 for user292)
> 2016-03-23 14:59:50,557 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer:
>  Unable to add the application to the delegation token renewer.
> java.io.IOException: Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, 
> Service: ha-hdfs:REMOTECLUSTER, Ident: (HDFS_DELEGATION_TOKEN token 10927 for 
> user292)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:427)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$700(DelegationTokenRenewer.java:78)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.handleDTRenewerAppSubmitEvent(DelegationTokenRenewer.java:781)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:762)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: java.io.IOException: Unable to map logical nameservice URI 
> 'hdfs://REMOTECLUSTER' to a NameNode. Local configuration does not have a 
> failover proxy provider configured.
> at org.apache.hadoop.hdfs.DFSClient$Renewer.getNNProxy(DFSClient.java:1164)
> at org.apache.hadoop.hdfs.DFSClient$Renewer.renew(DFSClient.java:1128)
> at org.apache.hadoop.security.token.Token.renew(Token.java:377)
> at 
> 

[jira] [Commented] (YARN-5714) ContainerExecutor does not order environment map

2018-02-22 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373644#comment-16373644
 ] 

Jason Lowe commented on YARN-5714:
--

+1 lgtm.  I'll commit this tomorrow if there are no objections.

> ContainerExecutor does not order environment map
> 
>
> Key: YARN-5714
> URL: https://issues.apache.org/jira/browse/YARN-5714
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.4.1, 2.5.2, 2.7.3, 2.6.4, 3.0.0-alpha1
> Environment: all (linux and windows alike)
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: oct16-medium
> Attachments: YARN-5714.001.patch, YARN-5714.002.patch, 
> YARN-5714.003.patch, YARN-5714.004.patch, YARN-5714.005.patch, 
> YARN-5714.006.patch, YARN-5714.007.patch, YARN-5714.008.patch, 
> YARN-5714.009.patch
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When dumping the launch container script, environment variables are dumped 
> based on the order internally used by the map implementation (hash based). It 
> does not take into consideration that some env variables may refer to each 
> other, and so that some env variables must be declared before those 
> referencing them.
> In my case, I ended up having LD_LIBRARY_PATH, which depends on 
> HADOOP_COMMON_HOME, dumped before HADOOP_COMMON_HOME. Thus it had a 
> wrong value and so native libraries weren't loaded. Jobs were running, but not 
> at their best efficiency. This is just one use case falling into that bug, but 
> I'm sure others may happen as well.
> I already have a patch running in my production environment; I estimate 
> 5 days for packaging the patch in the right fashion for JIRA plus trying my 
> best to add tests.
> Note: the patch is not OS aware, with a default empty implementation. I will 
> only implement the Unix version in a first release. I'm not used to Windows env 
> variable syntax, so it will take me more time/research for it.
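
To illustrate the ordering problem described above, here is a self-contained 
sketch (not the attached patch) of ordering a variable map so that anything 
referenced via $NAME or ${NAME} is emitted before the variables that reference 
it:

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

final class EnvOrderSketch {
  // Matches $NAME or ${NAME} inside a variable's value.
  private static final Pattern REF =
      Pattern.compile("\\$\\{?([A-Za-z_][A-Za-z0-9_]*)\\}?");

  // Returns the keys of 'env' ordered so referenced variables come first.
  static List<String> dependencyOrder(Map<String, String> env) {
    List<String> ordered = new ArrayList<>();
    Set<String> visited = new HashSet<>();
    for (String key : env.keySet()) {
      visit(key, env, visited, ordered);
    }
    return ordered;
  }

  private static void visit(String key, Map<String, String> env,
      Set<String> visited, List<String> ordered) {
    if (!env.containsKey(key) || !visited.add(key)) {
      return; // external variable, already handled, or part of a cycle
    }
    Matcher m = REF.matcher(env.get(key));
    while (m.find()) {
      visit(m.group(1), env, visited, ordered); // dependencies first
    }
    ordered.add(key);
  }

  public static void main(String[] args) {
    Map<String, String> env = new HashMap<>();
    env.put("LD_LIBRARY_PATH", "${HADOOP_COMMON_HOME}/lib/native");
    env.put("HADOOP_COMMON_HOME", "/opt/hadoop");
    // Prints [HADOOP_COMMON_HOME, LD_LIBRARY_PATH] regardless of map order.
    System.out.println(dependencyOrder(env));
  }
}
{code}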



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7962) Race Condition When Stopping DelegationTokenRenewer

2018-02-22 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373587#comment-16373587
 ] 

BELUGA BEHR commented on YARN-7962:
---

{quote}pool size = 0, active threads = 0, queued tasks = 0{quote}

My guess is that the pool size is 0 because the pool has already been shut down.

> Race Condition When Stopping DelegationTokenRenewer
> ---
>
> Key: YARN-7962
> URL: https://issues.apache.org/jira/browse/YARN-7962
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Priority: Minor
>
> [https://github.com/apache/hadoop/blob/69fa81679f59378fd19a2c65db8019393d7c05a2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java]
> {code:java}
>   private ThreadPoolExecutor renewerService;
>   private void processDelegationTokenRenewerEvent(
>   DelegationTokenRenewerEvent evt) {
> serviceStateLock.readLock().lock();
> try {
>   if (isServiceStarted) {
> renewerService.execute(new DelegationTokenRenewerRunnable(evt));
>   } else {
> pendingEventQueue.add(evt);
>   }
> } finally {
>   serviceStateLock.readLock().unlock();
> }
>   }
>   @Override
>   protected void serviceStop() {
> if (renewalTimer != null) {
>   renewalTimer.cancel();
> }
> appTokens.clear();
> allTokens.clear();
> this.renewerService.shutdown();
> {code}
> {code:java}
> 2018-02-21 11:18:16,253  FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: 
> Error in dispatcher thread
> java.util.concurrent.RejectedExecutionException: Task 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable@39bddaf2
>  rejected from java.util.concurrent.ThreadPoolExecutor@5f71637b[Terminated, 
> pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 15487]
>   at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
>   at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.processDelegationTokenRenewerEvent(DelegationTokenRenewer.java:196)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.applicationFinished(DelegationTokenRenewer.java:734)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.finishApplication(RMAppManager.java:199)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.handle(RMAppManager.java:424)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.handle(RMAppManager.java:65)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:177)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> What I think is going on here is that the {{serviceStop}} method is not 
> setting the {{isServiceStarted}} flag to 'false'.
> Please update so that the {{serviceStop}} method grabs the 
> {{serviceStateLock}} and sets {{isServiceStarted}} to _false_, before 
> shutting down the {{renewerService}} thread pool, to avoid this condition.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7962) Race Condition When Stopping DelegationTokenRenewer

2018-02-22 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created YARN-7962:
-

 Summary: Race Condition When Stopping DelegationTokenRenewer
 Key: YARN-7962
 URL: https://issues.apache.org/jira/browse/YARN-7962
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 3.0.0
Reporter: BELUGA BEHR


[https://github.com/apache/hadoop/blob/69fa81679f59378fd19a2c65db8019393d7c05a2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java]
{code:java}
  private ThreadPoolExecutor renewerService;

  private void processDelegationTokenRenewerEvent(
  DelegationTokenRenewerEvent evt) {
serviceStateLock.readLock().lock();
try {
  if (isServiceStarted) {
renewerService.execute(new DelegationTokenRenewerRunnable(evt));
  } else {
pendingEventQueue.add(evt);
  }
} finally {
  serviceStateLock.readLock().unlock();
}
  }

  @Override
  protected void serviceStop() {
if (renewalTimer != null) {
  renewalTimer.cancel();
}
appTokens.clear();
allTokens.clear();
this.renewerService.shutdown();
{code}
{code:java}
2018-02-21 11:18:16,253  FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: 
Error in dispatcher thread
java.util.concurrent.RejectedExecutionException: Task 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable@39bddaf2
 rejected from java.util.concurrent.ThreadPoolExecutor@5f71637b[Terminated, 
pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 15487]
at 
java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
at 
java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.processDelegationTokenRenewerEvent(DelegationTokenRenewer.java:196)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.applicationFinished(DelegationTokenRenewer.java:734)
at 
org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.finishApplication(RMAppManager.java:199)
at 
org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.handle(RMAppManager.java:424)
at 
org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.handle(RMAppManager.java:65)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:177)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109)
at java.lang.Thread.run(Thread.java:745)
{code}
What I think is going on here is that the {{serviceStop}} method is not setting 
the {{isServiceStarted}} flag to 'false'.

Please update so that the {{serviceStop}} method grabs the {{serviceStateLock}} 
and sets {{isServiceStarted}} to _false_, before shutting down the 
{{renewerService}} thread pool, to avoid this condition.
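
A minimal sketch of that proposal (illustrative only, not a reviewed patch), 
assuming {{serviceStateLock}} is the read-write lock implied by the 
{{readLock()}} usage above:

{code:java}
  @Override
  protected void serviceStop() {
    if (renewalTimer != null) {
      renewalTimer.cancel();
    }
    appTokens.clear();
    allTokens.clear();
    // Flip the flag under the write lock first, so that a concurrent
    // processDelegationTokenRenewerEvent() falls back to pendingEventQueue
    // instead of submitting to a terminated pool.
    serviceStateLock.writeLock().lock();
    try {
      isServiceStarted = false;
    } finally {
      serviceStateLock.writeLock().unlock();
    }
    this.renewerService.shutdown();
  }
{code}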



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-22 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-6858:

Attachment: YARN-6858-YARN-3409.009.patch

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, 
> YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, 
> YARN-6858-YARN-3409.006.patch, YARN-6858-YARN-3409.007.patch, 
> YARN-6858-YARN-3409.008.patch, YARN-6858-YARN-3409.009.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7512) Support service upgrade via YARN Service API and CLI

2018-02-22 Thread Chandni Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373578#comment-16373578
 ] 

Chandni Singh commented on YARN-7512:
-

[~gsaha] and I worked on the design of the in-place upgrade of long-running 
services. The design is attached to this JIRA. Thanks to [~jianhe], 
[~billie.rinaldi] and [~eyang] for reviewing it.

 

> Support service upgrade via YARN Service API and CLI
> 
>
> Key: YARN-7512
> URL: https://issues.apache.org/jira/browse/YARN-7512
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Chandni Singh
>Priority: Major
> Fix For: yarn-native-services
>
> Attachments: _In-Place Upgrade of Long-Running Applications in 
> YARN.pdf
>
>
> YARN Service API and CLI needs to support service (and containers) upgrade in 
> line with what Slider supported in SLIDER-787 
> (http://slider.incubator.apache.org/docs/slider_specs/application_pkg_upgrade.html)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-22 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-6858:

Attachment: (was: YARN-6858-YARN-3409.009.patch)

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, 
> YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, 
> YARN-6858-YARN-3409.006.patch, YARN-6858-YARN-3409.007.patch, 
> YARN-6858-YARN-3409.008.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-22 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373575#comment-16373575
 ] 

Naganarasimha G R commented on YARN-6858:
-

Thanks [~bibinchundatt], I have addressed your comment in the latest patch.

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, 
> YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, 
> YARN-6858-YARN-3409.006.patch, YARN-6858-YARN-3409.007.patch, 
> YARN-6858-YARN-3409.008.patch, YARN-6858-YARN-3409.009.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7512) Support service upgrade via YARN Service API and CLI

2018-02-22 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-7512:

Attachment: _In-Place Upgrade of Long-Running Applications in YARN.pdf

> Support service upgrade via YARN Service API and CLI
> 
>
> Key: YARN-7512
> URL: https://issues.apache.org/jira/browse/YARN-7512
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Chandni Singh
>Priority: Major
> Fix For: yarn-native-services
>
> Attachments: _In-Place Upgrade of Long-Running Applications in 
> YARN.pdf
>
>
> YARN Service API and CLI needs to support service (and containers) upgrade in 
> line with what Slider supported in SLIDER-787 
> (http://slider.incubator.apache.org/docs/slider_specs/application_pkg_upgrade.html)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-22 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-6858:

Attachment: YARN-6858-YARN-3409.009.patch

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, 
> YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, 
> YARN-6858-YARN-3409.006.patch, YARN-6858-YARN-3409.007.patch, 
> YARN-6858-YARN-3409.008.patch, YARN-6858-YARN-3409.009.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7953) [GQ] Data structures for federation global queues calculations

2018-02-22 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373541#comment-16373541
 ] 

Konstantinos Karanasos commented on YARN-7953:
--

Thanks for working on this and for splitting the patches, [~curino].

The data structures make sense. Some comments below:

In {{FedQueue}}:
 * Should idealalloc be part of the FedQueue? Maybe targetAlloc? But still not 
sure if it should be part of this class.
 * Should we abstract out the creation of the JAXB context, the toString, and 
the reading from JSON in a Util class, given we are using it in various 
classes? JAXB context and toString are currently in FedQueue and 
FederationGlobalView, the reading is in the test classes. The loadFile method 
from the test classes can be put in there too.
 * Split propagate into two methods, one for the top-down traversal and one 
for the bottom-up.
 * It is not clear what mergeQueues does. Does it take multiple subcluster 
queues and combine them into a global queue?
 * Is recursiveSetOfTotCap used? I guess it could also be used in propagate 
(especially after we split it)?

In {{FederationGlobalView}}:
 * Does this class need a resource calculator?
 * Shouldn't we have a method that builds the global view inside this class?

Nits (all but the first are in FedQueue):
 * Is the org.ojalgo dependency needed in the pom?
 * Better javadoc for class (explain GPG and global policies a bit)
 * FedQueue -> FederatedQueue or FederationQueue
 * scope -> subcluster?
 * getChildrenByName -> getChildByName

 

> [GQ] Data structures for federation global queues calculations
> --
>
> Key: YARN-7953
> URL: https://issues.apache.org/jira/browse/YARN-7953
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7953.v1.patch
>
>
> This Jira tracks data structures and helper classes used by the core 
> algorithms of YARN-7402 umbrella Jira (currently YARN-7403, and YARN-7834).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7942) Yarn ServiceClient does not not delete znode from secure ZooKeeper

2018-02-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373494#comment-16373494
 ] 

Hudson commented on YARN-7942:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13701 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13701/])
YARN-7942. Add check for JAAS configuration for Yarn Service.
(eyang: rev 95904f6b3ccd1d167088086472eabdd85b2d148d)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/RegistrySecurity.java


> Yarn ServiceClient does not not delete znode from secure ZooKeeper
> --
>
> Key: YARN-7942
> URL: https://issues.apache.org/jira/browse/YARN-7942
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: YARN-7942.001.patch, YARN-7942.002.patch, 
> YARN-7942.003.patch
>
>
> Even with sasl:rm:cdrwa set on the ZK node (from the registry system accounts 
> property), the RM fails to remove the node with the below error. Also, the 
> destroy call succeeds.
> {code}
> 2018-02-16 15:49:29,691 WARN  client.ServiceClient 
> (ServiceClient.java:actionDestroy(470)) - Error deleting registry entry 
> /users/hbase/services/yarn-service/hbase-app-test
> org.apache.hadoop.registry.client.exceptions.NoPathPermissionsException: 
> `/registry/users/hbase/services/yarn-service/hbase-app-test': Not authorized 
> to access path; ACLs: [null ACL]: KeeperErrorCode = NoAuth for 
> /registry/users/hbase/services/yarn-service/hbase-app-test
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:412)
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:390)
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.zkDelete(CuratorService.java:722)
> at 
> org.apache.hadoop.registry.client.impl.zk.RegistryOperationsService.delete(RegistryOperationsService.java:162)
> at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionDestroy(ServiceClient.java:462)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:253)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:243)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.stopService(ApiServer.java:243)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.deleteService(ApiServer.java:223)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
> at 
> 

[jira] [Commented] (YARN-7836) YARN Service component update PUT API should not use component name from JSON body

2018-02-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373493#comment-16373493
 ] 

Hudson commented on YARN-7836:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13701 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13701/])
YARN-7836.  Added error check for updating service components.   
(eyang: rev 190969006d4a7f9ef86d67bba472f7dc5642668a)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/TestApiServer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/webapp/ApiServer.java


> YARN Service component update PUT API should not use component name from JSON 
> body
> --
>
> Key: YARN-7836
> URL: https://issues.apache.org/jira/browse/YARN-7836
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, yarn-native-services
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7836.001.patch, YARN-7836.002.patch, 
> YARN-7836.003.patch
>
>
> The YARN Service PUT API for component update should not use component name 
> from the JSON body. The component update PUT URI is as follows -
>  http://localhost:9191/app/v1/services/{service-name}/components/{component-name}
> e.g. [http://localhost:9191/app/v1/services/hello-world/components/hello]
> The component name is already in the URI, hence the JSON body expected should 
> be only -
> {noformat}
> {
> "number_of_containers": 3
> }
> {noformat}
> It should not expect the name attribute in the JSON body. In fact, if the 
> JSON body contains a name attribute with a value other than the component 
> name in the path param, we should send a 400 bad request saying they 
> do not match. If they are the same, it should be okay and we can process the 
> request.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7836) YARN Service component update PUT API should not use component name from JSON body

2018-02-22 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373495#comment-16373495
 ] 

Gour Saha commented on YARN-7836:
-

Thank you [~eyang]

> YARN Service component update PUT API should not use component name from JSON 
> body
> --
>
> Key: YARN-7836
> URL: https://issues.apache.org/jira/browse/YARN-7836
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, yarn-native-services
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7836.001.patch, YARN-7836.002.patch, 
> YARN-7836.003.patch
>
>
> The YARN Service PUT API for component update should not use component name 
> from the JSON body. The component update PUT URI is as follows -
>  http://localhost:9191/app/v1/services/{service-name}/components/{component-name}
> e.g. [http://localhost:9191/app/v1/services/hello-world/components/hello]
> The component name is already in the URI, hence the JSON body expected should 
> be only -
> {noformat}
> {
> "number_of_containers": 3
> }
> {noformat}
> It should not expect the name attribute in the JSON body. In fact, if the 
> JSON body contains a name attribute with a value other than the component 
> name in the path param, we should send a 400 bad request saying they 
> do not match. If they are the same, it should be okay and we can process the 
> request.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7836) YARN Service component update PUT API should not use component name from JSON body

2018-02-22 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7836:

Fix Version/s: 3.1.0

> YARN Service component update PUT API should not use component name from JSON 
> body
> --
>
> Key: YARN-7836
> URL: https://issues.apache.org/jira/browse/YARN-7836
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, yarn-native-services
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7836.001.patch, YARN-7836.002.patch, 
> YARN-7836.003.patch
>
>
> The YARN Service PUT API for component update should not use component name 
> from the JSON body. The component update PUT URI is as follows -
>  http://localhost:9191/app/v1/services/{service-name}/components/{component-name}
> e.g. [http://localhost:9191/app/v1/services/hello-world/components/hello]
> The component name is already in the URI, hence the JSON body expected should 
> be only -
> {noformat}
> {
> "number_of_containers": 3
> }
> {noformat}
> It should not expect the name attribute in the JSON body. In fact, if the 
> JSON body contains a name attribute with a value other than the component 
> name in the path param, we should send a 400 bad request saying they 
> do not match. If they are the same, it should be okay and we can process the 
> request.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7707) [GPG] Policy generator framework

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373455#comment-16373455
 ] 

genericqa commented on YARN-7707:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-7402 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
10s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
4s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
27s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
16s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-7402 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
5s{color} | {color:green} YARN-7402 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
4s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
6s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-yarn-server-globalpolicygenerator in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7707 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911599/YARN-7707-YARN-7402.06.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  

[jira] [Commented] (YARN-7836) YARN Service component update PUT API should not use component name from JSON body

2018-02-22 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373450#comment-16373450
 ] 

Eric Yang commented on YARN-7836:
-

+1 the patch works as advertised.

> YARN Service component update PUT API should not use component name from JSON 
> body
> --
>
> Key: YARN-7836
> URL: https://issues.apache.org/jira/browse/YARN-7836
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, yarn-native-services
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-7836.001.patch, YARN-7836.002.patch, 
> YARN-7836.003.patch
>
>
> The YARN Service PUT API for component update should not use component name 
> from the JSON body. The component update PUT URI is as follows -
>  http://localhost:9191/app/v1/services/{service-name}/components/{component-name}
> e.g. [http://localhost:9191/app/v1/services/hello-world/components/hello]
> The component name is already in the URI, hence the JSON body expected should 
> be only -
> {noformat}
> {
> "number_of_containers": 3
> }
> {noformat}
> It should not expect the name attribute in the JSON body. In fact, if the 
> JSON body contains a name attribute with a value other than the component 
> name in the path param, we should send a 400 bad request saying they 
> do not match. If they are the same, it should be okay and we can process the 
> request.
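
For illustration, with the endpoint and payload taken from the description 
above (host and port are whatever the API server listens on), the expected 
request would look roughly like:

{code}
curl -X PUT -H "Content-Type: application/json" \
  -d '{"number_of_containers": 3}' \
  http://localhost:9191/app/v1/services/hello-world/components/hello
{code}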



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5714) ContainerExecutor does not order environment map

2018-02-22 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373444#comment-16373444
 ] 

Jim Brennan commented on YARN-5714:
---

This patch is ready for review.

> ContainerExecutor does not order environment map
> 
>
> Key: YARN-5714
> URL: https://issues.apache.org/jira/browse/YARN-5714
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.4.1, 2.5.2, 2.7.3, 2.6.4, 3.0.0-alpha1
> Environment: all (linux and windows alike)
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: oct16-medium
> Attachments: YARN-5714.001.patch, YARN-5714.002.patch, 
> YARN-5714.003.patch, YARN-5714.004.patch, YARN-5714.005.patch, 
> YARN-5714.006.patch, YARN-5714.007.patch, YARN-5714.008.patch, 
> YARN-5714.009.patch
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When dumping the launch container script, environment variables are dumped 
> based on the order internally used by the map implementation (hash based). It 
> does not take into consideration that some env variables may refer to each 
> other, and so that some env variables must be declared before those 
> referencing them.
> In my case, I ended up having LD_LIBRARY_PATH, which depends on 
> HADOOP_COMMON_HOME, dumped before HADOOP_COMMON_HOME. Thus it had a 
> wrong value and so native libraries weren't loaded. Jobs were running, but not 
> at their best efficiency. This is just one use case falling into that bug, but 
> I'm sure others may happen as well.
> I already have a patch running in my production environment; I estimate 
> 5 days for packaging the patch in the right fashion for JIRA plus trying my 
> best to add tests.
> Note: the patch is not OS aware, with a default empty implementation. I will 
> only implement the Unix version in a first release. I'm not used to Windows env 
> variable syntax, so it will take me more time/research for it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373441#comment-16373441
 ] 

genericqa commented on YARN-7346:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 1s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 56s{color} | {color:orange} root: The patch generated 3 new + 1 unchanged - 
0 fixed = 4 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
11s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  9s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} 
patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-2
 no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-2/target/findbugsXml.xml)
 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
27s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-server
 generated 107 new + 0 unchanged - 0 fixed = 107 total (was 0) 

[jira] [Commented] (YARN-7942) Yarn ServiceClient does not not delete znode from secure ZooKeeper

2018-02-22 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373435#comment-16373435
 ] 

Eric Yang commented on YARN-7942:
-

+1 Tested the following scenarios:

1.  Started YARN without JAAS configuration, RM logs error message about the 
misconfiguration for secure ZooKeeper access.
2.  Started YARN with useTicketCache=true, YARN service gets destroyed 
correctly.
3.  Started YARN with useKeytab=true, and principal, YARN service gets 
destroyed correctly.
4.  Started a test service without service keytab, Application logs error 
message about missing keytab.
5.  Started a test service with service keytab, application flex/stop/destroy 
works fine.
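
For reference, a rough sketch of the kind of JAAS entry exercised by scenario 3 
above (the keytab variant; scenario 2 would set useTicketCache=true instead). 
The section name, keytab path, and principal are placeholders; the exact 
settings depend on the deployment:

{code}
// Placeholder values only; adjust the keytab path and principal for the cluster.
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/security/keytabs/rm.service.keytab"
  principal="rm/rm-host.example.com@EXAMPLE.COM"
  useTicketCache=false;
};
{code}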

> Yarn ServiceClient does not not delete znode from secure ZooKeeper
> --
>
> Key: YARN-7942
> URL: https://issues.apache.org/jira/browse/YARN-7942
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Blocker
> Attachments: YARN-7942.001.patch, YARN-7942.002.patch, 
> YARN-7942.003.patch
>
>
> Even with sasl:rm:cdrwa set on the ZK node (from the registry system accounts 
> property), the RM fails to remove the node with the below error. Also, the 
> destroy call succeeds.
> {code}
> 2018-02-16 15:49:29,691 WARN  client.ServiceClient 
> (ServiceClient.java:actionDestroy(470)) - Error deleting registry entry 
> /users/hbase/services/yarn-service/hbase-app-test
> org.apache.hadoop.registry.client.exceptions.NoPathPermissionsException: 
> `/registry/users/hbase/services/yarn-service/hbase-app-test': Not authorized 
> to access path; ACLs: [null ACL]: KeeperErrorCode = NoAuth for 
> /registry/users/hbase/services/yarn-service/hbase-app-test
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:412)
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:390)
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.zkDelete(CuratorService.java:722)
> at 
> org.apache.hadoop.registry.client.impl.zk.RegistryOperationsService.delete(RegistryOperationsService.java:162)
> at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionDestroy(ServiceClient.java:462)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:253)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:243)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.stopService(ApiServer.java:243)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.deleteService(ApiServer.java:223)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
> at 
> 

[jira] [Commented] (YARN-7708) [GPG] Load based policy generator

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373428#comment-16373428
 ] 

genericqa commented on YARN-7708:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-7402 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
23s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
54s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
14s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-7402 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} YARN-7402 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 59s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 16 new + 226 unchanged - 0 fixed = 242 total (was 226) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
43s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
15s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
13s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-server-globalpolicygenerator in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 

[jira] [Commented] (YARN-6528) [PERF/TEST] Add JMX metrics for Plan Follower and Agent Placement and Plan Operations

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373378#comment-16373378
 ] 

genericqa commented on YARN-6528:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 413 unchanged - 4 fixed = 414 total (was 417) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 65m 
29s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6528 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911586/YARN-6528.v010.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9bff01f1feca 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3132709 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19784/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19784/testReport/ |
| Max. process+thread count | 870 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-7942) Yarn ServiceClient does not not delete znode from secure ZooKeeper

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373373#comment-16373373
 ] 

genericqa commented on YARN-7942:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry: The patch generated 13 
new + 103 unchanged - 6 fixed = 116 total (was 109) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
59s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7942 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911598/YARN-7942.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 70f977741485 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3132709 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19788/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19788/testReport/ |
| Max. process+thread count | 326 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (YARN-7942) Yarn ServiceClient does not not delete znode from secure ZooKeeper

2018-02-22 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373304#comment-16373304
 ] 

Billie Rinaldi commented on YARN-7942:
--

Patch 003 attempts to clarify the cases we're handling, improves the logging in
the case where the jaas config is already specified, and throws an exception if
SASL is configured for the registry but neither keytab+principal nor a jaas
config is specified.
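
For reference, a minimal sketch of that decision flow (class, method, and
parameter names below are assumptions for illustration, not code from the
patch):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Illustrative sketch only; names and structure are assumptions, not the patch itself. */
public final class RegistrySecuritySketch {
  private static final Logger LOG = LoggerFactory.getLogger(RegistrySecuritySketch.class);

  public static void validate(boolean registrySaslEnabled, String keytab,
      String principal, String existingJaasConfig) {
    if (!registrySaslEnabled) {
      return; // nothing to enforce when the registry is not secured
    }
    if (existingJaasConfig != null) {
      // Case 1: a jaas config is already specified -- log which one is in use.
      LOG.info("Registry SASL enabled; using existing JAAS configuration {}",
          existingJaasConfig);
    } else if (keytab != null && principal != null) {
      // Case 2: keytab + principal supplied -- a JAAS login context would be built from them.
      LOG.info("Registry SASL enabled; using keytab {} for principal {}", keytab, principal);
    } else {
      // Case 3: SASL is on but neither keytab+principal nor a jaas config is specified.
      throw new IllegalStateException("Registry SASL is enabled but neither "
          + "keytab+principal nor a JAAS config is specified");
    }
  }
}
{code}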

> Yarn ServiceClient does not not delete znode from secure ZooKeeper
> --
>
> Key: YARN-7942
> URL: https://issues.apache.org/jira/browse/YARN-7942
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Blocker
> Attachments: YARN-7942.001.patch, YARN-7942.002.patch, 
> YARN-7942.003.patch
>
>
> Even with sasl:rm:cdrwa set on the ZK node (from the registry system accounts 
> property), the RM fails to remove the node with the below error. Also, the 
> destroy call succeeds.
> {code}
> 2018-02-16 15:49:29,691 WARN  client.ServiceClient 
> (ServiceClient.java:actionDestroy(470)) - Error deleting registry entry 
> /users/hbase/services/yarn-service/hbase-app-test
> org.apache.hadoop.registry.client.exceptions.NoPathPermissionsException: 
> `/registry/users/hbase/services/yarn-service/hbase-app-test': Not authorized 
> to access path; ACLs: [null ACL]: KeeperErrorCode = NoAuth for 
> /registry/users/hbase/services/yarn-service/hbase-app-test
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:412)
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:390)
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.zkDelete(CuratorService.java:722)
> at 
> org.apache.hadoop.registry.client.impl.zk.RegistryOperationsService.delete(RegistryOperationsService.java:162)
> at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionDestroy(ServiceClient.java:462)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:253)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:243)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.stopService(ApiServer.java:243)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.deleteService(ApiServer.java:223)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
>

[jira] [Updated] (YARN-7707) [GPG] Policy generator framework

2018-02-22 Thread Young Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Young Chen updated YARN-7707:
-
Attachment: YARN-7707-YARN-7402.06.patch

> [GPG] Policy generator framework
> 
>
> Key: YARN-7707
> URL: https://issues.apache.org/jira/browse/YARN-7707
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Young Chen
>Priority: Major
>  Labels: federation, gpg
> Attachments: YARN-7707-YARN-7402.01.patch, 
> YARN-7707-YARN-7402.02.patch, YARN-7707-YARN-7402.03.patch, 
> YARN-7707-YARN-7402.04.patch, YARN-7707-YARN-7402.05.patch, 
> YARN-7707-YARN-7402.06.patch
>
>
> This JIRA tracks the development of a generic framework for querying 
> sub-clusters for metrics, running policies, and updating them in the 
> FederationStateStore.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7708) [GPG] Load based policy generator

2018-02-22 Thread Young Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Young Chen updated YARN-7708:
-
Attachment: YARN-7708-YARN-7402.02.cumulative.patch

> [GPG] Load based policy generator
> -
>
> Key: YARN-7708
> URL: https://issues.apache.org/jira/browse/YARN-7708
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Young Chen
>Priority: Major
> Attachments: YARN-7708-YARN-7402.01.cumulative.patch, 
> YARN-7708-YARN-7402.02.cumulative.patch
>
>
> This policy reads load from the "pendingQueueLength" metrics and scales it
> into a set of weights that influence the AMRMProxy and Router behaviors.
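
As a rough sketch of that idea, the inverse-load weighting below is an
illustrative assumption (not the actual policy logic in the attached patch):

{code:java}
import java.util.HashMap;
import java.util.Map;

/** Hedged sketch: turning per-sub-cluster pendingQueueLength into routing weights. */
public final class LoadBasedWeightsSketch {

  public static Map<String, Float> weights(Map<String, Long> pendingQueueLength) {
    Map<String, Float> raw = new HashMap<>();
    float total = 0f;
    for (Map.Entry<String, Long> e : pendingQueueLength.entrySet()) {
      // Less pending work -> larger weight; the +1 avoids division by zero.
      float w = 1.0f / (e.getValue() + 1);
      raw.put(e.getKey(), w);
      total += w;
    }
    Map<String, Float> normalized = new HashMap<>();
    for (Map.Entry<String, Float> e : raw.entrySet()) {
      normalized.put(e.getKey(), e.getValue() / total); // weights sum to 1
    }
    return normalized;
  }

  public static void main(String[] args) {
    Map<String, Long> pending = new HashMap<>();
    pending.put("subcluster-1", 10L);
    pending.put("subcluster-2", 100L);
    System.out.println(weights(pending)); // subcluster-1 gets the larger share
  }
}
{code}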



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7942) Yarn ServiceClient does not not delete znode from secure ZooKeeper

2018-02-22 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-7942:
-
Attachment: YARN-7942.003.patch

> Yarn ServiceClient does not not delete znode from secure ZooKeeper
> --
>
> Key: YARN-7942
> URL: https://issues.apache.org/jira/browse/YARN-7942
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Blocker
> Attachments: YARN-7942.001.patch, YARN-7942.002.patch, 
> YARN-7942.003.patch
>
>
> Even with sasl:rm:cdrwa set on the ZK node (from the registry system accounts 
> property), the RM fails to remove the node with the below error. Also, the 
> destroy call succeeds.
> {code}
> 2018-02-16 15:49:29,691 WARN  client.ServiceClient 
> (ServiceClient.java:actionDestroy(470)) - Error deleting registry entry 
> /users/hbase/services/yarn-service/hbase-app-test
> org.apache.hadoop.registry.client.exceptions.NoPathPermissionsException: 
> `/registry/users/hbase/services/yarn-service/hbase-app-test': Not authorized 
> to access path; ACLs: [null ACL]: KeeperErrorCode = NoAuth for 
> /registry/users/hbase/services/yarn-service/hbase-app-test
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:412)
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:390)
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.zkDelete(CuratorService.java:722)
> at 
> org.apache.hadoop.registry.client.impl.zk.RegistryOperationsService.delete(RegistryOperationsService.java:162)
> at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionDestroy(ServiceClient.java:462)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:253)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:243)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.stopService(ApiServer.java:243)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.deleteService(ApiServer.java:223)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
>   

[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release

2018-02-22 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373287#comment-16373287
 ] 

Haibo Chen commented on YARN-7346:
--

Uploaded a new patch that tries to address the javadoc/findbugs issue.

> Fix compilation errors against hbase2 beta release
> --
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7346.00.patch, YARN-7346.01.patch, 
> YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, 
> YARN-7346.04-incremental.patch, YARN-7346.04.patch, YARN-7346.05.patch, 
> YARN-7346.06.patch, YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, 
> YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7346) Fix compilation errors against hbase2 beta release

2018-02-22 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7346:
-
Attachment: YARN-7346.06.patch

> Fix compilation errors against hbase2 beta release
> --
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7346.00.patch, YARN-7346.01.patch, 
> YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, 
> YARN-7346.04-incremental.patch, YARN-7346.04.patch, YARN-7346.05.patch, 
> YARN-7346.06.patch, YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, 
> YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6528) [PERF/TEST] Add JMX metrics for Plan Follower and Agent Placement and Plan Operations

2018-02-22 Thread Xiaohua (Victor) Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaohua (Victor) Liang updated YARN-6528:
-
Attachment: YARN-6528.v010.patch

> [PERF/TEST] Add JMX metrics for Plan Follower and Agent Placement and Plan 
> Operations
> -
>
> Key: YARN-6528
> URL: https://issues.apache.org/jira/browse/YARN-6528
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sean Po
>Assignee: Xiaohua (Victor) Liang
>Priority: Major
> Attachments: YARN-6528.v001.patch, YARN-6528.v002.patch, 
> YARN-6528.v003.patch, YARN-6528.v004.patch, YARN-6528.v005.patch, 
> YARN-6528.v006.patch, YARN-6528.v007.patch, YARN-6528.v008.patch, 
> YARN-6528.v009.patch, YARN-6528.v010.patch
>
>
> YARN-1051 introduced a ReservationSytem that enables the YARN RM to handle 
> time explicitly, i.e. users can now "reserve" capacity ahead of time which is 
> predictably allocated to them. In order to understand in finer detail the 
> performance of Rayon, YARN-6528 proposes to include JMX metrics in the Plan 
> Follower, Agent Placement and Plan Operations components of Rayon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7961) Improve status response when yarn application is destroyed

2018-02-22 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha reassigned YARN-7961:
---

Assignee: Gour Saha

> Improve status response when yarn application is destroyed
> --
>
> Key: YARN-7961
> URL: https://issues.apache.org/jira/browse/YARN-7961
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Gour Saha
>Priority: Major
>
> Yarn should provide some way to figure out if yarn service is destroyed.
> If yarn service application is stopped, "yarn app -status " shows 
> that service is Stopped. 
> After destroying yarn service, "yarn app -status " returns 404
> {code}
> [hdpuser@cn005 sleeper]$ yarn app -status yesha-sleeper
> WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
> YARN_LOG_DIR.
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 18/02/16 11:02:30 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 18/02/16 11:02:31 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.xx:8050
> 18/02/16 11:02:31 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.x:10200
> 18/02/16 11:02:31 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.x:8050
> 18/02/16 11:02:31 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.x:10200
> 18/02/16 11:02:31 INFO util.log: Logging initialized @2075ms
> yesha-sleeper Failed : HTTP error code : 404
> {code}
> Yarn should be able to notify the user whether a certain app was destroyed or
> never created. An HTTP 404 error does not explicitly provide this information.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7961) Improve status response when yarn application is destroyed

2018-02-22 Thread Yesha Vora (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yesha Vora updated YARN-7961:
-
Issue Type: Sub-task  (was: Bug)
Parent: YARN-7054

> Improve status response when yarn application is destroyed
> --
>
> Key: YARN-7961
> URL: https://issues.apache.org/jira/browse/YARN-7961
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Priority: Major
>
> Yarn should provide some way to figure out if yarn service is destroyed.
> If yarn service application is stopped, "yarn app -status " shows 
> that service is Stopped. 
> After destroying yarn service, "yarn app -status " returns 404
> {code}
> [hdpuser@cn005 sleeper]$ yarn app -status yesha-sleeper
> WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
> YARN_LOG_DIR.
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 18/02/16 11:02:30 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 18/02/16 11:02:31 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.xx:8050
> 18/02/16 11:02:31 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.x:10200
> 18/02/16 11:02:31 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.x:8050
> 18/02/16 11:02:31 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.x:10200
> 18/02/16 11:02:31 INFO util.log: Logging initialized @2075ms
> yesha-sleeper Failed : HTTP error code : 404
> {code}
> Yarn should be able to notify the user whether a certain app was destroyed or
> never created. An HTTP 404 error does not explicitly provide this information.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7961) Improve status response when yarn application is destroyed

2018-02-22 Thread Yesha Vora (JIRA)
Yesha Vora created YARN-7961:


 Summary: Improve status response when yarn application is destroyed
 Key: YARN-7961
 URL: https://issues.apache.org/jira/browse/YARN-7961
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-native-services
Affects Versions: 3.1.0
Reporter: Yesha Vora


Yarn should provide some way to figure out if yarn service is destroyed.

If yarn service application is stopped, "yarn app -status " shows that 
service is Stopped. 

After destroying yarn service, "yarn app -status " returns 404
{code}
[hdpuser@cn005 sleeper]$ yarn app -status yesha-sleeper

WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
YARN_LOG_DIR.

WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
YARN_LOGFILE.

WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
YARN_PID_DIR.

WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.

18/02/16 11:02:30 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable

18/02/16 11:02:31 INFO client.RMProxy: Connecting to ResourceManager at 
xxx/xx.xx.xx.xx:8050

18/02/16 11:02:31 INFO client.AHSProxy: Connecting to Application History 
server at xxx/xx.xx.xx.x:10200

18/02/16 11:02:31 INFO client.RMProxy: Connecting to ResourceManager at 
xxx/xx.xx.xx.x:8050

18/02/16 11:02:31 INFO client.AHSProxy: Connecting to Application History 
server at xxx/xx.xx.xx.x:10200

18/02/16 11:02:31 INFO util.log: Logging initialized @2075ms

yesha-sleeper Failed : HTTP error code : 404
{code}
Yarn should be able to notify the user whether a certain app was destroyed or
never created. An HTTP 404 error does not explicitly provide this information.
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7935) Expose container's hostname to applications running within the docker container

2018-02-22 Thread Mridul Muralidharan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373202#comment-16373202
 ] 

Mridul Muralidharan commented on YARN-7935:
---

Hi [~tgraves],

   The initial expectation was that there would be no change needed in client
applications with the introduction of Docker support in YARN.
Unfortunately, in our experiments, we find the need to make application changes
- the scope of which is still not known.

For example, with 'host' networking mode there is actually no change required
and things work seamlessly! (and it gives us the package isolation we so badly
require)
But with bridge or overlay networking, YARN-7935 will be required (among other
things - which is still under investigation).

From the Spark side of things, I am still unsure what other changes are needed,
and we are still investigating there too.

We use "--hostname" not just to specify where to bind to, but also what to
report back to the driver.
The driver uses this for communication as well as locality computation
(HOST_LOCAL and RACK_LOCAL).
Hence it is not a simple case of replacing --hostname with CONTAINER_HOSTNAME
(if available) - that would break HOST locality computation.


I wanted to create a jira in Spark once the scope was more concrete, but I can
do that sooner if it helps us collaborate better.
If you are doing some experiments, it would be great to touch base!
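
To make that distinction concrete, here is a small hedged sketch of how an
application might separate the bind address from the locality/reporting
hostname once CONTAINER_HOSTNAME is exposed (an illustration of the idea only,
not Spark's or YARN's actual code):

{code:java}
/** Hedged illustration: picking separate bind and reporting hostnames from the env. */
public final class HostnameSelectionSketch {
  public static void main(String[] args) {
    // CONTAINER_HOSTNAME is the env proposed by this JIRA; NM_HOST already exists at launch.
    String containerHostname = System.getenv("CONTAINER_HOSTNAME");
    String nodeManagerHost = System.getenv("NM_HOST");

    // Under bridge/overlay networking the process has to bind to the container's own hostname...
    String bindHost = containerHostname != null ? containerHostname : nodeManagerHost;

    // ...but locality computation still needs the physical node's hostname, so blindly
    // reporting the container hostname back to the driver would break HOST locality.
    String localityHost = nodeManagerHost;

    System.out.println("bind to " + bindHost + ", report locality against " + localityHost);
  }
}
{code}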


> Expose container's hostname to applications running within the docker 
> container
> ---
>
> Key: YARN-7935
> URL: https://issues.apache.org/jira/browse/YARN-7935
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7935.1.patch, YARN-7935.2.patch
>
>
> Some applications (like Spark) have a need to bind to the container's
> hostname, which is different from the NodeManager's hostname (NM_HOST, which
> is available as an env during container launch) when launched through the
> Docker runtime. The container's hostname can be exposed to applications via
> an env CONTAINER_HOSTNAME. Another potential candidate is the container's IP,
> but this can be addressed in a separate jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7957) Yarn service delete option disappears after stopping application

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373152#comment-16373152
 ] 

genericqa commented on YARN-7957:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
25m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7957 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911580/YARN-7957.01.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux d2b86b2181f7 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3132709 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 410 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19783/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Yarn service delete option disappears after stopping application
> 
>
> Key: YARN-7957
> URL: https://issues.apache.org/jira/browse/YARN-7957
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-7957.01.patch
>
>
> Steps:
> 1) Launch yarn service
> 2) Go to service page and click on Setting button->"Stop Service". The 
> application will be stopped.
> 3) Refresh page
> Here, setting button disappears. Thus, user can not delete service from UI 
> after stopping application
> Expected behavior:
> Setting button should be present on UI page after application is stopped. If 
> application is stopped, setting button should only have "Delete Service" 
> action available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5714) ContainerExecutor does not order environment map

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373138#comment-16373138
 ] 

genericqa commented on YARN-5714:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 51s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
13s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-5714 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911571/YARN-5714.009.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e68537d19250 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3132709 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19782/testReport/ |
| Max. process+thread count | 407 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19782/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ContainerExecutor does not order environment map
> 

[jira] [Commented] (YARN-5714) ContainerExecutor does not order environment map

2018-02-22 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373113#comment-16373113
 ] 

Jim Brennan commented on YARN-5714:
---

I believe this patch is now ready for review.

 

> ContainerExecutor does not order environment map
> 
>
> Key: YARN-5714
> URL: https://issues.apache.org/jira/browse/YARN-5714
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.4.1, 2.5.2, 2.7.3, 2.6.4, 3.0.0-alpha1
> Environment: all (linux and windows alike)
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: oct16-medium
> Attachments: YARN-5714.001.patch, YARN-5714.002.patch, 
> YARN-5714.003.patch, YARN-5714.004.patch, YARN-5714.005.patch, 
> YARN-5714.006.patch, YARN-5714.007.patch, YARN-5714.008.patch, 
> YARN-5714.009.patch
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When dumping the launch container script, environment variables are dumped
> based on the order internally used by the map implementation (hash based). It
> does not take into consideration that some env variables may refer to each
> other, and so some env variables must be declared before those referencing
> them.
> In my case, I ended up having LD_LIBRARY_PATH, which depended on
> HADOOP_COMMON_HOME, being dumped before HADOOP_COMMON_HOME. Thus it had a
> wrong value and so native libraries weren't loaded. Jobs were running, but
> not at their best efficiency. This is just one use case falling into that
> bug, but I'm sure others may happen as well.
> I already have a patch running in my production environment; I estimate about
> 5 days for packaging the patch in the right fashion for JIRA and doing my
> best to add tests.
> Note: the patch is not OS aware, with a default empty implementation. I will
> only implement the unix version in a 1st release. I'm not used to windows env
> variable syntax, so it will take me more time/research for it.
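
The ordering problem described above amounts to a topological sort of the
environment map by variable references. A hedged sketch of that idea (not the
attached patch, and ignoring the OS-specific syntax differences it mentions):

{code:java}
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Hedged sketch: emit env vars so that referenced variables are declared first. */
public final class EnvOrderingSketch {
  private static final Pattern VAR_REF = Pattern.compile("\\$\\{?([A-Za-z_][A-Za-z0-9_]*)\\}?");

  public static List<Map.Entry<String, String>> ordered(Map<String, String> env) {
    List<Map.Entry<String, String>> out = new ArrayList<>();
    Set<String> emitted = new LinkedHashSet<>();
    for (String name : env.keySet()) {
      visit(name, env, emitted, new LinkedHashSet<String>(), out);
    }
    return out;
  }

  private static void visit(String name, Map<String, String> env, Set<String> emitted,
      Set<String> inProgress, List<Map.Entry<String, String>> out) {
    if (emitted.contains(name) || !env.containsKey(name) || !inProgress.add(name)) {
      return; // already emitted, defined elsewhere, or part of a cycle
    }
    Matcher m = VAR_REF.matcher(env.get(name));
    while (m.find()) {
      visit(m.group(1), env, emitted, inProgress, out); // declare dependencies first
    }
    inProgress.remove(name);
    emitted.add(name);
    out.add(new AbstractMap.SimpleEntry<>(name, env.get(name)));
  }

  public static void main(String[] args) {
    Map<String, String> env = new LinkedHashMap<>();
    env.put("LD_LIBRARY_PATH", "${HADOOP_COMMON_HOME}/lib/native");
    env.put("HADOOP_COMMON_HOME", "/opt/hadoop");
    // HADOOP_COMMON_HOME is emitted before LD_LIBRARY_PATH, unlike a hash-order dump.
    for (Map.Entry<String, String> e : ordered(env)) {
      System.out.println("export " + e.getKey() + "=\"" + e.getValue() + "\"");
    }
  }
}
{code}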



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7957) Yarn service delete option disappears after stopping application

2018-02-22 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373110#comment-16373110
 ] 

Gour Saha commented on YARN-7957:
-

[~sunilg] in addition to checking the states FAILED/FINISHED/KILLED, can we
also hit the YARN service REST API (running on the RM host/port)? If the app is
deleted, the REST API will return 404. If the app is stopped (not running) but
not deleted, it will return a 200 with a JSON body that has "state":"STOPPED"
in it. In fact, we don't need to check the YARN app states at all and can just
rely on the REST API response.
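
A hedged sketch of that check from a client's point of view (the endpoint path
below is an assumption for illustration; only the 404 vs. 200/STOPPED behavior
is taken from the comment above):

{code:java}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

/** Hedged sketch: deciding whether to offer "Delete Service" from the service REST API. */
public final class ServiceStateProbeSketch {

  public static boolean shouldOfferDelete(String rmHostPort, String serviceName)
      throws IOException {
    // The exact endpoint path is an assumption for illustration.
    URL url = new URL("http://" + rmHostPort + "/app/v1/services/" + serviceName);
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    int status = conn.getResponseCode();
    if (status == 404) {
      return false; // already deleted (or never created) -- nothing left to delete
    }
    if (status == 200) {
      try (Scanner sc = new Scanner(conn.getInputStream(), "UTF-8").useDelimiter("\\A")) {
        String body = sc.hasNext() ? sc.next() : "";
        return body.contains("\"state\":\"STOPPED\""); // stopped but not deleted
      }
    }
    return false; // running or transient states: keep the existing stop action instead
  }
}
{code}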

> Yarn service delete option disappears after stopping application
> 
>
> Key: YARN-7957
> URL: https://issues.apache.org/jira/browse/YARN-7957
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-7957.01.patch
>
>
> Steps:
> 1) Launch yarn service
> 2) Go to service page and click on Setting button->"Stop Service". The 
> application will be stopped.
> 3) Refresh page
> Here, setting button disappears. Thus, user can not delete service from UI 
> after stopping application
> Expected behavior:
> Setting button should be present on UI page after application is stopped. If 
> application is stopped, setting button should only have "Delete Service" 
> action available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7875) AttributeStore for store and recover attributes

2018-02-22 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373079#comment-16373079
 ] 

Bibin A Chundatt commented on YARN-7875:


Thank you [~cheersyang]

On second thought, the abstraction and additional refactoring don't add much
value.
Refactoring could also impact backward compatibility, IIUC.

This needs further discussion in YARN-7757. The {{NodeLabelsProvider}} signature
is changed in YARN-7757, which could cause older {{NodeLabelsProvider}}
implementations to fail.

Attaching patch 001 for the {{FileSystemNodeAttributeStore}} implementation. The
patch is based on YARN-6858; the manager integration will be done after that
jira gets committed.


> AttributeStore for store and recover attributes
> ---
>
> Key: YARN-7875
> URL: https://issues.apache.org/jira/browse/YARN-7875
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-7875-WIP.patch, YARN-7875-YARN-3409.001.patch
>
>
> Similar to NodeLabelStore need to support NodeAttributeStore for persisting 
> attributes mapping to Nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7957) Yarn service delete option disappears after stopping application

2018-02-22 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373085#comment-16373085
 ] 

Sunil G commented on YARN-7957:
---

Currently, from the YARN app states we can tell whether an app is stopped by
checking for the states FAILED/FINISHED/KILLED.

If an app is in any of these states, we can show "Delete Service" again. But
once we delete a service, YARN does not give that info, hence the Delete button
will still be there. [~yeshavora], do you think this behavior is fine for now?

Updated a patch for reference. cc [~gsaha] [~leftnoteasy]

> Yarn service delete option disappears after stopping application
> 
>
> Key: YARN-7957
> URL: https://issues.apache.org/jira/browse/YARN-7957
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-7957.01.patch
>
>
> Steps:
> 1) Launch yarn service
> 2) Go to service page and click on Setting button->"Stop Service". The 
> application will be stopped.
> 3) Refresh page
> Here, setting button disappears. Thus, user can not delete service from UI 
> after stopping application
> Expected behavior:
> Setting button should be present on UI page after application is stopped. If 
> application is stopped, setting button should only have "Delete Service" 
> action available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7957) Yarn service delete option disappears after stopping application

2018-02-22 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7957:
--
Attachment: YARN-7957.01.patch

> Yarn service delete option disappears after stopping application
> 
>
> Key: YARN-7957
> URL: https://issues.apache.org/jira/browse/YARN-7957
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-7957.01.patch
>
>
> Steps:
> 1) Launch yarn service
> 2) Go to service page and click on Setting button->"Stop Service". The 
> application will be stopped.
> 3) Refresh page
> Here, setting button disappears. Thus, user can not delete service from UI 
> after stopping application
> Expected behavior:
> Setting button should be present on UI page after application is stopped. If 
> application is stopped, setting button should only have "Delete Service" 
> action available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7957) Yarn service delete option disappears after stopping application

2018-02-22 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G reassigned YARN-7957:
-

Assignee: Sunil G

> Yarn service delete option disappears after stopping application
> 
>
> Key: YARN-7957
> URL: https://issues.apache.org/jira/browse/YARN-7957
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Sunil G
>Priority: Critical
>
> Steps:
> 1) Launch yarn service
> 2) Go to service page and click on Setting button->"Stop Service". The 
> application will be stopped.
> 3) Refresh page
> Here, setting button disappears. Thus, user can not delete service from UI 
> after stopping application
> Expected behavior:
> Setting button should be present on UI page after application is stopped. If 
> application is stopped, setting button should only have "Delete Service" 
> action available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7909) YARN service REST API returns charset=null when kerberos enabled

2018-02-22 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7909:

Fix Version/s: 3.1.0

> YARN service REST API returns charset=null when kerberos enabled
> 
>
> Key: YARN-7909
> URL: https://issues.apache.org/jira/browse/YARN-7909
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7909.001.patch
>
>
> When kerberos is enabled, the response header will reply with:
> {code:java}
> Content-Type: application/json;charset=null{code}
> This appears to be somehow AuthenticationFilter is trigger charset to become 
> undefined.
> The same problem is not observed when simple security is enabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5714) ContainerExecutor does not order environment map

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373075#comment-16373075
 ] 

genericqa commented on YARN-5714:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
33s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-5714 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911570/YARN-5714.008.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9189ba0b9983 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3132709 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19781/testReport/ |
| Max. process+thread count | 410 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19781/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ContainerExecutor does not order environment map
> 

[jira] [Assigned] (YARN-7942) Yarn ServiceClient does not delete znode from secure ZooKeeper

2018-02-22 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reassigned YARN-7942:
---

Assignee: Billie Rinaldi  (was: Eric Yang)

> Yarn ServiceClient does not delete znode from secure ZooKeeper
> --
>
> Key: YARN-7942
> URL: https://issues.apache.org/jira/browse/YARN-7942
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Blocker
> Attachments: YARN-7942.001.patch, YARN-7942.002.patch
>
>
> Even with sasl:rm:cdrwa set on the ZK node (from the registry system accounts 
> property), the RM fails to remove the node with the error below. Despite the 
> failure, the destroy call still reports success.
> {code}
> 2018-02-16 15:49:29,691 WARN  client.ServiceClient 
> (ServiceClient.java:actionDestroy(470)) - Error deleting registry entry 
> /users/hbase/services/yarn-service/hbase-app-test
> org.apache.hadoop.registry.client.exceptions.NoPathPermissionsException: 
> `/registry/users/hbase/services/yarn-service/hbase-app-test': Not authorized 
> to access path; ACLs: [null ACL]: KeeperErrorCode = NoAuth for 
> /registry/users/hbase/services/yarn-service/hbase-app-test
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:412)
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:390)
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.zkDelete(CuratorService.java:722)
> at 
> org.apache.hadoop.registry.client.impl.zk.RegistryOperationsService.delete(RegistryOperationsService.java:162)
> at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionDestroy(ServiceClient.java:462)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:253)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:243)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.stopService(ApiServer.java:243)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.deleteService(ApiServer.java:223)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> 
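
For diagnosing NoAuth failures like the one above, a small standalone Curator 
client can be used to inspect (and, where appropriate, adjust) the ACLs actually 
present on the znode. This is only a hedged diagnostic sketch; the quorum and 
principal are placeholders, not values taken from this cluster:
{code:java}
import java.util.Arrays;
import java.util.List;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;

public class RegistryAclCheck {
  public static void main(String[] args) throws Exception {
    // Placeholder quorum; point this at the registry's ZooKeeper ensemble.
    CuratorFramework zk = CuratorFrameworkFactory.newClient(
        "zk1.example.com:2181", new ExponentialBackoffRetry(1000, 3));
    zk.start();

    String path = "/registry/users/hbase/services/yarn-service/hbase-app-test";

    // NoAuth on delete usually means the caller's SASL identity is missing here.
    List<ACL> acls = zk.getACL().forPath(path);
    for (ACL acl : acls) {
      System.out.println(acl.getId() + " -> perms=" + acl.getPerms());
    }

    // Granting the RM principal full permissions would look roughly like this
    // (placeholder principal; the session must itself be authorized to setACL).
    ACL rmAcl = new ACL(ZooDefs.Perms.ALL, new Id("sasl", "rm/host@EXAMPLE.COM"));
    zk.setACL().withACL(Arrays.asList(rmAcl)).forPath(path);

    zk.close();
  }
}
{code}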

[jira] [Commented] (YARN-7781) Update YARN-Services-Examples.md to be in sync with the latest code

2018-02-22 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373064#comment-16373064
 ] 

Gour Saha commented on YARN-7781:
-

bq. With the new multi-component flex call, I think the service json needs 
state FLEX
[~jianhe] I think if you just add "state": "FLEX" to the JSON for "PUT URL - 
http://localhost:8088/app/v1/services/hello-world" in 
YARN-Services-Examples.md, it will complete the patch. But I have a suggestion 
here - to rename the state from FLEX to FLEXING, similar to what we have in 
ComponentState.java.
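
For illustration, a client-side sketch of the multi-component flex PUT carrying 
the proposed state field is shown below. The payload mirrors the suggestion 
above and is an assumption, not the final documented request:
{code:java}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class FlexServiceExample {
  public static void main(String[] args) throws Exception {
    // Hypothetical payload: "state": "FLEX" alongside the component list.
    String json = "{\n"
        + "  \"state\" : \"FLEX\",\n"
        + "  \"components\" : [ {\n"
        + "    \"name\" : \"hello\",\n"
        + "    \"number_of_containers\" : 3\n"
        + "  } ]\n"
        + "}";

    URL url = new URL("http://localhost:8088/app/v1/services/hello-world");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    conn.setRequestProperty("Content-Type", "application/json");
    try (OutputStream out = conn.getOutputStream()) {
      out.write(json.getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("HTTP " + conn.getResponseCode());
    conn.disconnect();
  }
}
{code}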

> Update YARN-Services-Examples.md to be in sync with the latest code
> ---
>
> Key: YARN-7781
> URL: https://issues.apache.org/jira/browse/YARN-7781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7781.01.patch, YARN-7781.02.patch, 
> YARN-7781.03.patch
>
>
> Update YARN-Services-Examples.md to make the following additions/changes:
> 1. Add an additional URL and PUT Request JSON to support flex:
> Update to flex up/down the no of containers (instances) of a component of a 
> service
> PUT URL – http://localhost:8088/app/v1/services/hello-world
> PUT Request JSON
> {code}
> {
>   "components" : [ {
> "name" : "hello",
> "number_of_containers" : 3
>   } ]
> }
> {code}
> 2. Modify all occurrences of /ws/ to /app/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7875) AttributeStore for store and recover attributes

2018-02-22 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-7875:
---
Attachment: YARN-7875-YARN-3409.001.patch

> AttributeStore for store and recover attributes
> ---
>
> Key: YARN-7875
> URL: https://issues.apache.org/jira/browse/YARN-7875
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-7875-WIP.patch, YARN-7875-YARN-3409.001.patch
>
>
> Similar to NodeLabelStore need to support NodeAttributeStore for persisting 
> attributes mapping to Nodes.
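
As a rough sketch of the shape such a store could take (names and signatures 
are illustrative only, mirroring the NodeLabelStore pattern rather than the 
eventual patch):
{code:java}
import java.io.Closeable;
import java.io.IOException;
import java.util.Map;
import java.util.Set;

// Illustrative only: a minimal persist/recover contract for node-to-attribute
// mappings, analogous in spirit to the existing node label store.
public interface NodeAttributeStoreSketch extends Closeable {

  /** Persist an add/replace/remove of attributes for the given nodes. */
  void persistNodeAttributes(Map<String, Set<String>> nodeToAttributes,
      String operation) throws IOException;

  /** Replay the persisted mappings into the attribute manager on RM restart. */
  void recover() throws IOException;
}
{code}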



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7945) Java Doc error in UnmanagedAMPoolManager for branch-2

2018-02-22 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373053#comment-16373053
 ] 

Jason Lowe commented on YARN-7945:
--

Thanks for updating the patch!  +1 lgtm.  Committing this.

> Java Doc error in UnmanagedAMPoolManager for branch-2
> -
>
> Key: YARN-7945
> URL: https://issues.apache.org/jira/browse/YARN-7945
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.10.0, 2.9.1
>Reporter: Rohith Sharma K S
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-7945-branch-2.001.patch, 
> YARN-7945-branch-2.002.patch
>
>
> In branch-2, I see a javadoc error while building the package.
> {code}
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:151:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> [ERROR]   ^
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:204:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5714) ContainerExecutor does not order environment map

2018-02-22 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373045#comment-16373045
 ] 

Jim Brennan commented on YARN-5714:
---

The original version of patch.008 did not include the checkstyle fixes.  I 
replaced it, but apparently too late for the genericqa tests.

Put up patch.009 to trigger another round of tests.

> ContainerExecutor does not order environment map
> 
>
> Key: YARN-5714
> URL: https://issues.apache.org/jira/browse/YARN-5714
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.4.1, 2.5.2, 2.7.3, 2.6.4, 3.0.0-alpha1
> Environment: all (linux and windows alike)
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: oct16-medium
> Attachments: YARN-5714.001.patch, YARN-5714.002.patch, 
> YARN-5714.003.patch, YARN-5714.004.patch, YARN-5714.005.patch, 
> YARN-5714.006.patch, YARN-5714.007.patch, YARN-5714.008.patch, 
> YARN-5714.009.patch
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When dumping the launch container script, environment variables are dumped 
> based on the order internally used by the map implementation (hash based). It 
> does not take into consideration that some env variables may refer to each 
> other, so some env variables must be declared before those referencing them.
> In my case, I ended up having LD_LIBRARY_PATH, which depends on 
> HADOOP_COMMON_HOME, dumped before HADOOP_COMMON_HOME. Thus it had a wrong 
> value and so native libraries weren't loaded. Jobs were running, but not at 
> their best efficiency. This is just one use case falling into that bug, but 
> I'm sure others may happen as well.
> I already have a patch running in my production environment; I just estimate 
> 5 days for packaging the patch in the right fashion for JIRA plus my best 
> effort to add tests.
> Note: the patch is not OS aware, with a default empty implementation. I will 
> only implement the unix version in a 1st release. I'm not used to Windows env 
> variable syntax, so it will take me more time/research for it.
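
A minimal sketch of the dependency-aware ordering described above (this is not 
the actual ContainerExecutor code, just an illustration): variables referenced 
via $NAME or ${NAME} are emitted before the variables that reference them.
{code:java}
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class EnvOrderingSketch {
  private static final Pattern REF =
      Pattern.compile("\\$\\{?([A-Za-z_][A-Za-z0-9_]*)");

  /** Returns the entries ordered so that referenced variables come first. */
  static Map<String, String> order(Map<String, String> env) {
    Map<String, String> ordered = new LinkedHashMap<>();
    for (String name : env.keySet()) {
      visit(name, env, ordered, new LinkedHashSet<>());
    }
    return ordered;
  }

  private static void visit(String name, Map<String, String> env,
      Map<String, String> ordered, Set<String> inProgress) {
    if (ordered.containsKey(name) || !inProgress.add(name)) {
      return; // already emitted, or a cycle: keep the existing order
    }
    Matcher m = REF.matcher(env.get(name));
    while (m.find()) {
      String ref = m.group(1);
      if (env.containsKey(ref) && !ref.equals(name)) {
        visit(ref, env, ordered, inProgress); // emit dependencies first
      }
    }
    ordered.put(name, env.get(name));
  }

  public static void main(String[] args) {
    Map<String, String> env = new LinkedHashMap<>();
    env.put("LD_LIBRARY_PATH", "${HADOOP_COMMON_HOME}/lib/native");
    env.put("HADOOP_COMMON_HOME", "/opt/hadoop");
    // HADOOP_COMMON_HOME is printed before LD_LIBRARY_PATH regardless of
    // the map's internal ordering.
    order(env).forEach((k, v) -> System.out.println("export " + k + "=\"" + v + "\""));
  }
}
{code}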



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5714) ContainerExecutor does not order environment map

2018-02-22 Thread Jim Brennan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated YARN-5714:
--
Attachment: YARN-5714.009.patch

> ContainerExecutor does not order environment map
> 
>
> Key: YARN-5714
> URL: https://issues.apache.org/jira/browse/YARN-5714
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.4.1, 2.5.2, 2.7.3, 2.6.4, 3.0.0-alpha1
> Environment: all (linux and windows alike)
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: oct16-medium
> Attachments: YARN-5714.001.patch, YARN-5714.002.patch, 
> YARN-5714.003.patch, YARN-5714.004.patch, YARN-5714.005.patch, 
> YARN-5714.006.patch, YARN-5714.007.patch, YARN-5714.008.patch, 
> YARN-5714.009.patch
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When dumping the launch container script, environment variables are dumped 
> based on the order internally used by the map implementation (hash based). It 
> does not take into consideration that some env variables may refer to each 
> other, so some env variables must be declared before those referencing them.
> In my case, I ended up having LD_LIBRARY_PATH, which depends on 
> HADOOP_COMMON_HOME, dumped before HADOOP_COMMON_HOME. Thus it had a wrong 
> value and so native libraries weren't loaded. Jobs were running, but not at 
> their best efficiency. This is just one use case falling into that bug, but 
> I'm sure others may happen as well.
> I already have a patch running in my production environment; I just estimate 
> 5 days for packaging the patch in the right fashion for JIRA plus my best 
> effort to add tests.
> Note: the patch is not OS aware, with a default empty implementation. I will 
> only implement the unix version in a 1st release. I'm not used to Windows env 
> variable syntax, so it will take me more time/research for it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7935) Expose container's hostname to applications running within the docker container

2018-02-22 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373039#comment-16373039
 ] 

Thomas Graves commented on YARN-7935:
-

[~mridulm80] what is the Spark JIRA for this?  If this goes in, it will still 
have to grab this from the env to pass in to the executorRunnable.

> Expose container's hostname to applications running within the docker 
> container
> ---
>
> Key: YARN-7935
> URL: https://issues.apache.org/jira/browse/YARN-7935
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7935.1.patch, YARN-7935.2.patch
>
>
> Some applications (like Spark) need to bind to the container's hostname, which 
> is different from the NodeManager's hostname (NM_HOST, available as an env var 
> during container launch) when launched through the Docker runtime. The 
> container's hostname can be exposed to applications via an env var 
> CONTAINER_HOSTNAME. Another potential candidate is the container's IP, but 
> this can be addressed in a separate jira.
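
On the application side, reading the proposed variable with a fallback to the 
existing NM_HOST could look like the hedged sketch below (CONTAINER_HOSTNAME is 
the name proposed in this issue, not yet a published API):
{code:java}
public class ContainerHostname {
  public static void main(String[] args) {
    // CONTAINER_HOSTNAME is the env var proposed here; NM_HOST is the existing
    // variable set by the NodeManager at container launch.
    String host = System.getenv("CONTAINER_HOSTNAME");
    if (host == null || host.isEmpty()) {
      host = System.getenv("NM_HOST");
    }
    System.out.println("Binding to host: " + host);
  }
}
{code}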



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5714) ContainerExecutor does not order environment map

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373032#comment-16373032
 ] 

genericqa commented on YARN-5714:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 8 new + 149 unchanged - 0 fixed = 157 total (was 149) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 40s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
19s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-5714 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911567/YARN-5714.008.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 51f188ad2d27 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3132709 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19780/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19780/testReport/ |
| Max. process+thread count | 409 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 

[jira] [Comment Edited] (YARN-5910) Support for multi-cluster delegation tokens

2018-02-22 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373013#comment-16373013
 ] 

Greg Senia edited comment on YARN-5910 at 2/22/18 4:27 PM:
---

[~jlowe], [~jianhe] or [~clayb], do you know if this Jira solves the problem I 
have below, where I attempt to run distcp as my user from Active Directory? In 
this scenario each Hadoop cluster has its own KDC that has a one-way trust to 
Active Directory.

In the case below I attempt to run distcp as my Active Directory user and it 
will not run. hadoop fs -cp hdfs://tech/public_data 
hdfs://unit/public_data/gss_test works with no problem. So I am wondering if 
this Jira solves this problem of a cluster attempting to get a delegation token.

Regarding @clayb's questions from above, I guess my question is: does this 
solve #2?

I see there are two issues here I was hoping to solve:
 1. A remote cluster's services are needed (e.g. as a data source to this job)
 2. A remote cluster does not trust this cluster's YARN principal

 

Distcp between clusters that are NOT allowed to trust each other's Kerberos 
realms:
 hadoop distcp hdfs://tech/public_data hdfs://unit/public_data/gss_test

RM log from the distcp run:
 2018-02-22 11:11:45,677 INFO resourcemanager.ClientRMService 
(ClientRMService.java:submitApplication(588)) - Application with id 112 
submitted by user gss2002
 2018-02-22 11:11:45,677 INFO resourcemanager.RMAuditLogger 
(RMAuditLogger.java:logSuccess(170)) - USER=gss2002 IP=10.70.40.255 
OPERATION=Submit Application Request TARGET=ClientRMService RESULT=SUCCESS 
APPID=application_1519238638021_0112 CALLERCONTEXT=CLI
 2018-02-22 11:11:45,679 INFO security.DelegationTokenRenewer 
(DelegationTokenRenewer.java:handleAppSubmitEvent(449)) - 
application_1519238638021_0112 found existing hdfs token Kind: 
HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
token 102956 for gss2002)
 2018-02-22 11:11:54,526 WARN ipc.Client (Client.java:run(717)) - Couldn't 
setup connection for rm/ha21t52mn.tech.hdp.example@tech.hdp.example.com to 
ha21d51nn.unit.hdp.example.com/10.69.81.6:8020
 javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Fail to create 
credential. (63) - No service creds)]
 at 
com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
 at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:414)
 at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:597)
 at org.apache.hadoop.ipc.Client$Connection.access$2100(Client.java:399)
 at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:768)
 at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:764)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1740)
 at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:764)
 at org.apache.hadoop.ipc.Client$Connection.access$3300(Client.java:399)
 at org.apache.hadoop.ipc.Client.getConnection(Client.java:1626)
 at org.apache.hadoop.ipc.Client.call(Client.java:1457)
 at org.apache.hadoop.ipc.Client.call(Client.java:1404)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
 at com.sun.proxy.$Proxy93.renewDelegationToken(Unknown Source)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewDelegationToken(ClientNamenodeProtocolTranslatorPB.java:993)
 at sun.reflect.GeneratedMethodAccessor214.invoke(Unknown Source)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
 at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
 at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
 at com.sun.proxy.$Proxy94.renewDelegationToken(Unknown Source)
 at org.apache.hadoop.hdfs.DFSClient$Renewer.renew(DFSClient.java:1141)
 at org.apache.hadoop.security.token.Token.renew(Token.java:414)
 at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:597)
 at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:594)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1740)
 at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.renewToken(DelegationTokenRenewer.java:592)
 at 

[jira] [Commented] (YARN-5910) Support for multi-cluster delegation tokens

2018-02-22 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373013#comment-16373013
 ] 

Greg Senia commented on YARN-5910:
--

@jlowe @jianhe or @clayb, do you know if this Jira solves the problem I have 
below, where I attempt to run distcp as my user from Active Directory? In this 
scenario each Hadoop cluster has its own KDC that has a one-way trust to Active 
Directory.

In the case below I attempt to run distcp as my Active Directory user and it 
will not run. hadoop fs -cp hdfs://tech/public_data 
hdfs://unit/public_data/gss_test works with no problem. So I am wondering if 
this Jira solves this problem of a cluster attempting to get a delegation token.

Regarding @clayb's questions from above, I guess my question is: does this 
solve #2?

I see there are two issues here I was hoping to solve:
1. A remote cluster's services are needed (e.g. as a data source to this job)
2. A remote cluster does not trust this cluster's YARN principal

 

Distcp between clusters that are NOT allowed to trust each other's Kerberos 
realms:
hadoop distcp hdfs://tech/public_data hdfs://unit/public_data/gss_test

RM log from the distcp run:
2018-02-22 11:11:45,677 INFO resourcemanager.ClientRMService 
(ClientRMService.java:submitApplication(588)) - Application with id 112 
submitted by user gss2002
2018-02-22 11:11:45,677 INFO resourcemanager.RMAuditLogger 
(RMAuditLogger.java:logSuccess(170)) - USER=gss2002 IP=10.70.40.255 
OPERATION=Submit Application Request TARGET=ClientRMService RESULT=SUCCESS 
APPID=application_1519238638021_0112 CALLERCONTEXT=CLI
2018-02-22 11:11:45,679 INFO security.DelegationTokenRenewer 
(DelegationTokenRenewer.java:handleAppSubmitEvent(449)) - 
application_1519238638021_0112 found existing hdfs token Kind: 
HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
token 102956 for gss2002)
2018-02-22 11:11:54,526 WARN ipc.Client (Client.java:run(717)) - Couldn't setup 
connection for rm/ha21t52mn.tech.hdp.example@tech.hdp.example.com to 
ha21d51nn.unit.hdp.example.com/10.69.81.6:8020
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Fail to create credential. (63) 
- No service creds)]
 at 
com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
 at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:414)
 at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:597)
 at org.apache.hadoop.ipc.Client$Connection.access$2100(Client.java:399)
 at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:768)
 at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:764)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1740)
 at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:764)
 at org.apache.hadoop.ipc.Client$Connection.access$3300(Client.java:399)
 at org.apache.hadoop.ipc.Client.getConnection(Client.java:1626)
 at org.apache.hadoop.ipc.Client.call(Client.java:1457)
 at org.apache.hadoop.ipc.Client.call(Client.java:1404)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
 at com.sun.proxy.$Proxy93.renewDelegationToken(Unknown Source)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewDelegationToken(ClientNamenodeProtocolTranslatorPB.java:993)
 at sun.reflect.GeneratedMethodAccessor214.invoke(Unknown Source)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
 at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
 at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
 at com.sun.proxy.$Proxy94.renewDelegationToken(Unknown Source)
 at org.apache.hadoop.hdfs.DFSClient$Renewer.renew(DFSClient.java:1141)
 at org.apache.hadoop.security.token.Token.renew(Token.java:414)
 at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:597)
 at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:594)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1740)
 at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.renewToken(DelegationTokenRenewer.java:592)
 at 

[jira] [Commented] (YARN-7781) Update YARN-Services-Examples.md to be in sync with the latest code

2018-02-22 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373006#comment-16373006
 ] 

Gour Saha commented on YARN-7781:
-

bq. Also it looks like with the single component flex, we are not currently 
filling the component name from the path into the json.

Since YARN-7836 has a patch and takes care of this issue, we don't need to do 
anything here.

> Update YARN-Services-Examples.md to be in sync with the latest code
> ---
>
> Key: YARN-7781
> URL: https://issues.apache.org/jira/browse/YARN-7781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7781.01.patch, YARN-7781.02.patch, 
> YARN-7781.03.patch
>
>
> Update YARN-Services-Examples.md to make the following additions/changes:
> 1. Add an additional URL and PUT Request JSON to support flex:
> Update to flex up/down the no of containers (instances) of a component of a 
> service
> PUT URL – http://localhost:8088/app/v1/services/hello-world
> PUT Request JSON
> {code}
> {
>   "components" : [ {
> "name" : "hello",
> "number_of_containers" : 3
>   } ]
> }
> {code}
> 2. Modify all occurrences of /ws/ to /app/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7960) Add no-new-privileges flag to docker run

2018-02-22 Thread Eric Badger (JIRA)
Eric Badger created YARN-7960:
-

 Summary: Add no-new-privileges flag to docker run
 Key: YARN-7960
 URL: https://issues.apache.org/jira/browse/YARN-7960
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Eric Badger


Minimally, this should be used for unprivileged containers. It's a cheap way to 
add an extra layer of security to the Docker model. For privileged containers, 
it might be appropriate to omit this flag.

https://github.com/moby/moby/pull/20727
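
For reference, a hedged sketch of a plain docker invocation with the flag 
(outside of YARN, just to show where the option sits; YARN's container runtime 
builds its own command line):
{code:java}
import java.util.Arrays;
import java.util.List;

public class NoNewPrivilegesExample {
  public static void main(String[] args) throws Exception {
    // Illustrative docker command only: the security option prevents the
    // containerized process from gaining new privileges (e.g. via setuid).
    List<String> cmd = Arrays.asList(
        "docker", "run", "--rm",
        "--security-opt", "no-new-privileges",
        "busybox", "sh", "-c", "id");
    Process p = new ProcessBuilder(cmd).inheritIO().start();
    System.exit(p.waitFor());
  }
}
{code}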



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5714) ContainerExecutor does not order environment map

2018-02-22 Thread Jim Brennan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated YARN-5714:
--
Attachment: YARN-5714.008.patch

> ContainerExecutor does not order environment map
> 
>
> Key: YARN-5714
> URL: https://issues.apache.org/jira/browse/YARN-5714
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.4.1, 2.5.2, 2.7.3, 2.6.4, 3.0.0-alpha1
> Environment: all (linux and windows alike)
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: oct16-medium
> Attachments: YARN-5714.001.patch, YARN-5714.002.patch, 
> YARN-5714.003.patch, YARN-5714.004.patch, YARN-5714.005.patch, 
> YARN-5714.006.patch, YARN-5714.007.patch, YARN-5714.008.patch
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When dumping the launch container script, environment variables are dumped 
> based on the order internally used by the map implementation (hash based). It 
> does not take into consideration that some env variables may refer to each 
> other, so some env variables must be declared before those referencing them.
> In my case, I ended up having LD_LIBRARY_PATH, which depends on 
> HADOOP_COMMON_HOME, dumped before HADOOP_COMMON_HOME. Thus it had a wrong 
> value and so native libraries weren't loaded. Jobs were running, but not at 
> their best efficiency. This is just one use case falling into that bug, but 
> I'm sure others may happen as well.
> I already have a patch running in my production environment; I just estimate 
> 5 days for packaging the patch in the right fashion for JIRA plus my best 
> effort to add tests.
> Note: the patch is not OS aware, with a default empty implementation. I will 
> only implement the unix version in a 1st release. I'm not used to Windows env 
> variable syntax, so it will take me more time/research for it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5714) ContainerExecutor does not order environment map

2018-02-22 Thread Jim Brennan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated YARN-5714:
--
Attachment: (was: YARN-5714.008.patch)

> ContainerExecutor does not order environment map
> 
>
> Key: YARN-5714
> URL: https://issues.apache.org/jira/browse/YARN-5714
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.4.1, 2.5.2, 2.7.3, 2.6.4, 3.0.0-alpha1
> Environment: all (linux and windows alike)
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: oct16-medium
> Attachments: YARN-5714.001.patch, YARN-5714.002.patch, 
> YARN-5714.003.patch, YARN-5714.004.patch, YARN-5714.005.patch, 
> YARN-5714.006.patch, YARN-5714.007.patch
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When dumping the launch container script, environment variables are dumped 
> based on the order internally used by the map implementation (hash based). It 
> does not take into consideration that some env variables may refer to each 
> other, so some env variables must be declared before those referencing them.
> In my case, I ended up having LD_LIBRARY_PATH, which depends on 
> HADOOP_COMMON_HOME, dumped before HADOOP_COMMON_HOME. Thus it had a wrong 
> value and so native libraries weren't loaded. Jobs were running, but not at 
> their best efficiency. This is just one use case falling into that bug, but 
> I'm sure others may happen as well.
> I already have a patch running in my production environment; I just estimate 
> 5 days for packaging the patch in the right fashion for JIRA plus my best 
> effort to add tests.
> Note: the patch is not OS aware, with a default empty implementation. I will 
> only implement the unix version in a 1st release. I'm not used to Windows env 
> variable syntax, so it will take me more time/research for it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5714) ContainerExecutor does not order environment map

2018-02-22 Thread Jim Brennan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated YARN-5714:
--
Attachment: YARN-5714.008.patch

> ContainerExecutor does not order environment map
> 
>
> Key: YARN-5714
> URL: https://issues.apache.org/jira/browse/YARN-5714
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.4.1, 2.5.2, 2.7.3, 2.6.4, 3.0.0-alpha1
> Environment: all (linux and windows alike)
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: oct16-medium
> Attachments: YARN-5714.001.patch, YARN-5714.002.patch, 
> YARN-5714.003.patch, YARN-5714.004.patch, YARN-5714.005.patch, 
> YARN-5714.006.patch, YARN-5714.007.patch, YARN-5714.008.patch
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When dumping the launch container script, environment variables are dumped 
> based on the order internally used by the map implementation (hash based). It 
> does not take into consideration that some env variables may refer to each 
> other, so some env variables must be declared before those referencing them.
> In my case, I ended up having LD_LIBRARY_PATH, which depends on 
> HADOOP_COMMON_HOME, dumped before HADOOP_COMMON_HOME. Thus it had a wrong 
> value and so native libraries weren't loaded. Jobs were running, but not at 
> their best efficiency. This is just one use case falling into that bug, but 
> I'm sure others may happen as well.
> I already have a patch running in my production environment; I just estimate 
> 5 days for packaging the patch in the right fashion for JIRA plus my best 
> effort to add tests.
> Note: the patch is not OS aware, with a default empty implementation. I will 
> only implement the unix version in a 1st release. I'm not used to Windows env 
> variable syntax, so it will take me more time/research for it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5714) ContainerExecutor does not order environment map

2018-02-22 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372939#comment-16372939
 ] 

Jim Brennan commented on YARN-5714:
---

Uploaded a new patch that addresses the checkstyle issues.

> ContainerExecutor does not order environment map
> 
>
> Key: YARN-5714
> URL: https://issues.apache.org/jira/browse/YARN-5714
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.4.1, 2.5.2, 2.7.3, 2.6.4, 3.0.0-alpha1
> Environment: all (linux and windows alike)
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: oct16-medium
> Attachments: YARN-5714.001.patch, YARN-5714.002.patch, 
> YARN-5714.003.patch, YARN-5714.004.patch, YARN-5714.005.patch, 
> YARN-5714.006.patch, YARN-5714.007.patch, YARN-5714.008.patch
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When dumping the launch container script, environment variables are dumped 
> based on the order internally used by the map implementation (hash based). It 
> does not take into consideration that some env variables may refer to each 
> other, so some env variables must be declared before those referencing them.
> In my case, I ended up having LD_LIBRARY_PATH, which depends on 
> HADOOP_COMMON_HOME, dumped before HADOOP_COMMON_HOME. Thus it had a wrong 
> value and so native libraries weren't loaded. Jobs were running, but not at 
> their best efficiency. This is just one use case falling into that bug, but 
> I'm sure others may happen as well.
> I already have a patch running in my production environment; I just estimate 
> 5 days for packaging the patch in the right fashion for JIRA plus my best 
> effort to add tests.
> Note: the patch is not OS aware, with a default empty implementation. I will 
> only implement the unix version in a 1st release. I'm not used to Windows env 
> variable syntax, so it will take me more time/research for it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-22 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372899#comment-16372899
 ] 

Bibin A Chundatt edited comment on YARN-6858 at 2/22/18 3:02 PM:
-

[~Naganarasimha]
Minor comment from my side

# The store event is fired on each iteration over nodeAttributeMapping; it 
should be moved outside the loop:
{code}
if (null != dispatcher) {
  dispatcher.getEventHandler()
      .handle(new NodeAttributesStoreEvent(nodeAttributeMapping, op));
}
{code}




was (Author: bibinchundatt):
[~Naganarasimha]
Minor comment from my side
{code}
{code}

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, 
> YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, 
> YARN-6858-YARN-3409.006.patch, YARN-6858-YARN-3409.007.patch, 
> YARN-6858-YARN-3409.008.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-22 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372899#comment-16372899
 ] 

Bibin A Chundatt commented on YARN-6858:


[~Naganarasimha]
Minor comment from my side
{code}
{code}

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, 
> YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, 
> YARN-6858-YARN-3409.006.patch, YARN-6858-YARN-3409.007.patch, 
> YARN-6858-YARN-3409.008.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7835) [Atsv2] Race condition in NM while publishing events if second attempt launched on same node

2018-02-22 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372827#comment-16372827
 ] 

Haibo Chen commented on YARN-7835:
--

+1 LGTM

> [Atsv2] Race condition in NM while publishing events if second attempt 
> launched on same node
> 
>
> Key: YARN-7835
> URL: https://issues.apache.org/jira/browse/YARN-7835
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-7835.001.patch, YARN-7835.002.patch, 
> YARN-7835.003.patch, YARN-7835.004.patch
>
>
> A race condition is observed: if the master container is killed for some 
> reason and a new attempt is launched on the same node, NMTimelinePublisher 
> doesn't add a timelineClient. But once the completed container for the 1st 
> attempt arrives, NMTimelinePublisher removes the timelineClient.
> This causes all subsequent event publishing from the different client to fail 
> with the exception "Application is not found".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7933) [atsv2 read acls] Add TimelineWriter#writeDomain

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372762#comment-16372762
 ] 

genericqa commented on YARN-7933:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
10s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 17 new + 4 unchanged - 0 fixed = 21 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 16 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 59s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
43s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
15s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
1s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-client in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 |
|  |  Dead store to response in 

[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-22 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372754#comment-16372754
 ] 

Sunil G commented on YARN-6858:
---

+1 Committing later today if no objections.

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, 
> YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, 
> YARN-6858-YARN-3409.006.patch, YARN-6858-YARN-3409.007.patch, 
> YARN-6858-YARN-3409.008.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7921) Transform a PlacementConstraint to a string expression

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372720#comment-16372720
 ] 

genericqa commented on YARN-7921:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
4s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 30s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
16s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7921 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911532/YARN-7921.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c76b0b5ae6c2 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3132709 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Commented] (YARN-7959) Add .vm extension to PlacementConstraints.md to ensure proper filtering

2018-02-22 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372718#comment-16372718
 ] 

Weiwei Yang commented on YARN-7959:
---

+[~leftnoteasy]

This can be verified by running:

{{mvn clean site -Preleasedocs; mvn site:stage 
-DstagingDirectory=/tmp/hadoop-site}}
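
For illustration only (a hypothetical doc fragment, not a quote from the actual page), a line such as

{code}
This document applies to Apache Hadoop ${project.version}.
{code}

in PlacementConstraints.md.vm is rendered in the generated site with the concrete version string (e.g. 3.1.0-SNAPSHOT) substituted, because the .vm extension tells maven-site-plugin to run the page through Velocity filtering; with the plain .md extension, the literal text ${project.version} would appear in the published docs.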

> Add .vm extension to PlacementConstraints.md to ensure proper filtering
> ---
>
> Key: YARN-7959
> URL: https://issues.apache.org/jira/browse/YARN-7959
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: YARN-7959.001.patch
>
>
> Rename PlacementConstraints.md to PlacementConstraints.md.vm to make sure 
> ${project.version} is automatically substituted while generating the site 
> docs. For more info, please see 
> [mvn-site-plugin|https://maven.apache.org/plugins/maven-site-plugin/examples/creating-content.html#Filtering].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7959) Add .vm extension to PlacementConstraints.md to ensure proper filtering

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372709#comment-16372709
 ] 

genericqa commented on YARN-7959:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 46s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7959 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911535/YARN-7959.001.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 16f1c874f365 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3132709 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 408 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19778/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add .vm extension to PlacementConstraints.md to ensure proper filtering
> ---
>
> Key: YARN-7959
> URL: https://issues.apache.org/jira/browse/YARN-7959
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: YARN-7959.001.patch
>
>
> Rename PlacementConstraints.md to PlacementConstraints.md.vm to make sure 
> ${project.version} is automatically substituted while generating the site 
> docs. For more info, please see 
> [mvn-site-plugin|https://maven.apache.org/plugins/maven-site-plugin/examples/creating-content.html#Filtering].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7933) [atsv2 read acls] Add TimelineWriter#writeDomain

2018-02-22 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-7933:

Attachment: YARN-7933.01.patch

> [atsv2 read acls] Add TimelineWriter#writeDomain 
> -
>
> Key: YARN-7933
> URL: https://issues.apache.org/jira/browse/YARN-7933
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-7933.01.patch
>
>
>  
> Add an API TimelineWriter#writeDomain for writing the domain info 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7959) Add .vm extension to PlacementConstraints.md to ensure proper filtering

2018-02-22 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7959:
--
Attachment: YARN-7959.001.patch

> Add .vm extension to PlacementConstraints.md to ensure proper filtering
> ---
>
> Key: YARN-7959
> URL: https://issues.apache.org/jira/browse/YARN-7959
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: YARN-7959.001.patch
>
>
> Rename PlacementConstraints.md to PlacementConstraints.md.vm to make sure 
> ${project.version} is automatically substituted while generating the site 
> docs. For more info, please see 
> [mvn-site-plugin|https://maven.apache.org/plugins/maven-site-plugin/examples/creating-content.html#Filtering].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7959) Add .vm extension to PlacementConstraints.md to ensure proper filtering

2018-02-22 Thread Weiwei Yang (JIRA)
Weiwei Yang created YARN-7959:
-

 Summary: Add .vm extension to PlacementConstraints.md to ensure 
proper filtering
 Key: YARN-7959
 URL: https://issues.apache.org/jira/browse/YARN-7959
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: documentation
Reporter: Weiwei Yang
Assignee: Weiwei Yang


Rename PlacementConstraints.md to PlacementConstraints.md.vm to make sure 
${project.version} is automatically substituted while generating the site docs. 
For more info, please see 
[mvn-site-plugin|https://maven.apache.org/plugins/maven-site-plugin/examples/creating-content.html#Filtering].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7921) Transform a PlacementConstraint to a string expression

2018-02-22 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7921:
--
Attachment: YARN-7921.001.patch

> Transform a PlacementConstraint to a string expression
> --
>
> Key: YARN-7921
> URL: https://issues.apache.org/jira/browse/YARN-7921
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7921.001.patch
>
>
> Purpose:
> Make placement constraints viewable in the UI or logs, e.g. print an app's 
> placement constraints on the RM app page. This helps users work with 
> constraints and analyze placement issues more easily.
> Proposal:
> Like what was added for DS, toString is the reverse of 
> {{PlacementConstraintParser}}: it transforms a PlacementConstraint into a 
> string using the same syntax. E.g.
> {code}
> AbstractConstraint constraintExpr = targetIn(NODE, allocationTag("hbase-m"));
> constraintExpr.toString();
> // This prints: IN,NODE,hbase-m
> {code}
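
To make the round trip concrete, here is a minimal, self-contained Java sketch. The type and formatting below are hypothetical, not the actual org.apache.hadoop.yarn.api.resource classes or the patch's implementation; they only illustrate the idea that toString() emits the same comma-separated syntax the expression parser accepts.

{code}
// Hypothetical sketch: a tiny target constraint whose toString() emits the
// parser-compatible form "OP,SCOPE,tag1,tag2,...".
public class TargetConstraintSketch {

  enum Op { IN, NOTIN }

  private final Op op;               // e.g. IN
  private final String scope;        // e.g. "NODE" or "RACK"
  private final String[] targetTags; // e.g. {"hbase-m"}

  TargetConstraintSketch(Op op, String scope, String... targetTags) {
    this.op = op;
    this.scope = scope;
    this.targetTags = targetTags;
  }

  @Override
  public String toString() {
    // Same shape the parser understands, so the transformation is reversible.
    return op + "," + scope + "," + String.join(",", targetTags);
  }

  public static void main(String[] args) {
    TargetConstraintSketch c =
        new TargetConstraintSketch(Op.IN, "NODE", "hbase-m");
    System.out.println(c); // prints: IN,NODE,hbase-m
  }
}
{code}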



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372635#comment-16372635
 ] 

genericqa commented on YARN-6858:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
31s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
30s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 58s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 5 new + 115 unchanged - 1 fixed = 120 total (was 116) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
15s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
46s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 55s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}159m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodeLabels |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6858 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911512/YARN-6858-YARN-3409.008.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  

[jira] [Commented] (YARN-5028) RMStateStore should trim down app state for completed applications

2018-02-22 Thread Gergo Repas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372608#comment-16372608
 ] 

Gergo Repas commented on YARN-5028:
---

[~yufeigu] Thanks for the commit and reviews; I don't need this change in 
branch-2.
Thanks [~snemeth] and [~rohithsharma] for the reviews as well.

> RMStateStore should trim down app state for completed applications
> --
>
> Key: YARN-5028
> URL: https://issues.apache.org/jira/browse/YARN-5028
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Gergo Repas
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-5028.000.patch, YARN-5028.001.patch, 
> YARN-5028.002.patch, YARN-5028.003.patch, YARN-5028.004.patch, 
> YARN-5028.005.patch, YARN-5028.006.patch, YARN-5028.007.patch
>
>
> RMStateStore stores enough information to recover applications in case of a 
> restart. The store also retains this information for completed applications 
> to serve their status to REST, WebUI, Java and CLI clients. We don't need all 
> the information we store today to serve application status; for instance, we 
> don't need the {{ApplicationSubmissionContext}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7738) CapacityScheduler: Support refresh maximum allocation for multiple resource types

2018-02-22 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372543#comment-16372543
 ] 

Yufei Gu commented on YARN-7738:


Thanks [~leftnoteasy]. If that is so, is it reasonable to make sure 
resource-types.xml is loaded before any scheduler refresh, so that we don't 
need to reload the maximum allocation any more?

> CapacityScheduler: Support refresh maximum allocation for multiple resource 
> types
> -
>
> Key: YARN-7738
> URL: https://issues.apache.org/jira/browse/YARN-7738
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: YARN-7738.001.patch, YARN-7738.002.patch, 
> YARN-7738.003.patch, YARN-7738.004.patch
>
>
> Currently CapacityScheduler fails to refresh maximum allocation for multiple 
> resource types.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7920) Simplify configuration for PlacementConstraints

2018-02-22 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372529#comment-16372529
 ] 

Weiwei Yang commented on YARN-7920:
---

Hi [~leftnoteasy]

I think we should keep the docs consistent; if you run a grep you'll see that 
currently only {{PlacementConstraints.md}} does not follow this convention. I 
can provide a quick fix to rename it back and get this into 3.1.0. Are you OK 
with that?

 

> Simplify configuration for PlacementConstraints
> ---
>
> Key: YARN-7920
> URL: https://issues.apache.org/jira/browse/YARN-7920
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: YARN-7920.001.patch, YARN-7920.002.patch, 
> YARN-7920.003.patch, YARN-7920.004.patch, YARN-7920.005.patch, 
> YARN-7920.006.patch
>
>
> Currently it is very confusing to have the two configs in two different files 
> (yarn-site.xml and capacity-scheduler.xml). 
>  
> Maybe a better approach is: delete scheduling-request.allowed in CS, and 
> update the placement-constraints configs in yarn-site.xml a bit: 
>  
> - Remove placement-constraints.enabled and add a new 
> placement-constraints.handler, which defaults to none; the other acceptable 
> values are a. external-processor (since "algorithm" is too generic to me) and 
> b. scheduler (see the yarn-site.xml sketch below). 
> - Add a new PlacementProcessor that just passes the SchedulingRequest to the 
> scheduler without any modifications.
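
For illustration, a yarn-site.xml sketch of what the proposed handler switch might look like. The property name and values here are assumptions based on the proposal above, not necessarily what the final patch uses.

{code}
<property>
  <!-- Assumed name and values per the proposal above:
       none (default), external-processor, or scheduler. -->
  <name>yarn.resourcemanager.placement-constraints.handler</name>
  <value>external-processor</value>
</property>
{code}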



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7920) Simplify configuration for PlacementConstraints

2018-02-22 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372527#comment-16372527
 ] 

Wangda Tan commented on YARN-7920:
--

[~cheersyang], I think it should be fine; users should know that 
${project.version} needs to be properly substituted.

> Simplify configuration for PlacementConstraints
> ---
>
> Key: YARN-7920
> URL: https://issues.apache.org/jira/browse/YARN-7920
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: YARN-7920.001.patch, YARN-7920.002.patch, 
> YARN-7920.003.patch, YARN-7920.004.patch, YARN-7920.005.patch, 
> YARN-7920.006.patch
>
>
> Currently it is very confusing to have the two configs in two different files 
> (yarn-site.xml and capacity-scheduler.xml). 
>  
> Maybe a better approach is: delete scheduling-request.allowed in CS, and 
> update the placement-constraints configs in yarn-site.xml a bit: 
>  
> - Remove placement-constraints.enabled and add a new 
> placement-constraints.handler, which defaults to none; the other acceptable 
> values are a. external-processor (since "algorithm" is too generic to me) and 
> b. scheduler. 
> - Add a new PlacementProcessor that just passes the SchedulingRequest to the 
> scheduler without any modifications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org