[jira] [Created] (YARN-7243) Moving logging APIs over to slf4j in hadoop-yarn-server-resourcemanager

2017-09-21 Thread Yeliang Cang (JIRA)
Yeliang Cang created YARN-7243:
--

 Summary: Moving logging APIs over to slf4j in 
hadoop-yarn-server-resourcemanager
 Key: YARN-7243
 URL: https://issues.apache.org/jira/browse/YARN-7243
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Yeliang Cang
Assignee: Yeliang Cang
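
For context, a minimal sketch of the change this family of sub-tasks makes, assuming the usual commons-logging-to-slf4j migration pattern (the class name is real; the method body is illustrative):

{code}
// Before: commons-logging
// import org.apache.commons.logging.Log;
// import org.apache.commons.logging.LogFactory;
// private static final Log LOG = LogFactory.getLog(ResourceManager.class);

// After: slf4j
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ResourceManager {
  private static final Logger LOG =
      LoggerFactory.getLogger(ResourceManager.class);

  void logStart(String hostPort) {
    // slf4j {} placeholders defer message construction until the level is
    // enabled, replacing explicit LOG.isDebugEnabled() guards.
    LOG.debug("Starting ResourceManager at {}", hostPort);
  }
}
{code}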









[jira] [Updated] (YARN-6871) Add additional deSelects params in getAppReport

2017-09-21 Thread Tanuj Nayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanuj Nayak updated YARN-6871:
--
Attachment: YARN-6871.008.patch

> Add additional deSelects params in getAppReport
> ---
>
> Key: YARN-6871
> URL: https://issues.apache.org/jira/browse/YARN-6871
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager, router
>Reporter: Giovanni Matteo Fumarola
>Assignee: Tanuj Nayak
> Attachments: YARN-6871.002.patch, YARN-6871.003.patch, 
> YARN-6871.004.patch, YARN-6871.005.patch, YARN-6871.006.patch, 
> YARN-6871.007.patch, YARN-6871.008.patch, YARN-6871.proto.patch
>
>
> This JIRA tracks the effort to add additional deSelect params to 
> GetAppReport to make it lighter and faster.
> With the current one we are facing a scalability issue: e.g. with ~500 
> applications running, the AppReport can reach up to 300MB in size due to 
> the {{ResourceRequest}} list in each {{AppInfo}}.
> With the filter, the YARN RM will return the result faster, use fewer 
> compute cycles to create the report, and improve both RM and client 
> performance.
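
To illustrate, a hedged sketch of how a client would ask for the lighter report; {{resourceRequests}} is the deSelect value introduced by YARN-6280, while the host/port and any further deSelect values are placeholders:

{code}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LightAppReport {
  public static void main(String[] args) throws Exception {
    // Omitting the per-app ResourceRequest list is what keeps the report
    // small; without the filter, ~500 running apps can reach ~300MB.
    HttpRequest req = HttpRequest.newBuilder(URI.create(
        "http://rm-host:8088/ws/v1/cluster/apps?deSelects=resourceRequests"))
        .GET().build();
    HttpResponse<String> resp = HttpClient.newHttpClient()
        .send(req, HttpResponse.BodyHandlers.ofString());
    System.out.println(resp.body());
  }
}
{code}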






[jira] [Commented] (YARN-6916) Moving logging APIs over to slf4j in hadoop-yarn-server-common

2017-09-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16176007#comment-16176007
 ] 

Hadoop QA commented on YARN-6916:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 0 new + 110 unchanged - 3 fixed = 110 total (was 113) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
50s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-6916 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12887852/YARN-6916.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux aa8c33a39981 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c71d137 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17584/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17584/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Moving logging APIs over to slf4j in hadoop-yarn-server-common
> --
>
> Key: YARN-6916
> URL: https://issues.apache.org/jira/browse/YARN-6916
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Rep

[jira] [Commented] (YARN-7237) Cleanup usages of ResourceProfiles

2017-09-21 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16176004#comment-16176004
 ] 

Daniel Templeton commented on YARN-7237:


I agree with you.  The overrides and the maximums are awkward.  I'll review 
when I get a chance.

> Cleanup usages of ResourceProfiles
> --
>
> Key: YARN-7237
> URL: https://issues.apache.org/jira/browse/YARN-7237
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7237.001.patch, YARN-7237.002.patch, 
> YARN-7237.003.patch
>
>
> While doing tests, we hit a couple of issues:
> 1) {{ProfileCapability#getProfileCapabilityOverride}} overwrites whatever is 
> specified in resource-profiles.json whenever a value is >= 0, which differs 
> from the javadoc of {{ProfileCapability}}:
> bq. For example, if you have a resource profile "small" that maps to <4096M, 
> 2 cores, 1 gpu> and you set the capability override to <8192M, 0 cores, 0 
> gpu>, then the actual resource allocation on the ResourceManager will be 
> <8192M, 2 cores, 1 gpu>
> To me, the correct behavior is to overwrite only when a value is > 0, 
> because resource values default to 0. For example, assume we have a profile 
> {{"a" = (mem=3, vcore=5, res_1=7)}} and create a capability override 
> (capability = new resource(8)). The final result should be 
> (mem=8, vcore=5, res_1=7), not (mem=8, vcore=0, res_1=0).
> 2) ResourceProfileManager currently loads the minimum/maximum profiles from 
> the config file (resource-profiles.json). To me this is not correct, because 
> the minimum/maximum allocation for each resource type is already specified 
> in {{resource-types.xml}}. We should always use 
> {{ResourceUtils#getResourceTypesMinimum/MaximumAllocation}} to read them 
> from resource-types.xml and yarn-site.xml. These values will be added to the 
> profiles so clients can get these configs.
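
A minimal sketch of the merge rule argued for in 1), not the actual YARN implementation, with resources modeled as plain maps:

{code}
import java.util.HashMap;
import java.util.Map;

public class ProfileMergeSketch {
  static Map<String, Long> merge(Map<String, Long> profile,
                                 Map<String, Long> override) {
    Map<String, Long> result = new HashMap<>(profile);
    // Override fields left at the default of 0 keep the profile value; a
    // ">= 0" rule would wrongly reset vcore and res_1 to 0.
    override.forEach((name, value) -> {
      if (value > 0) {
        result.put(name, value);
      }
    });
    return result;
  }

  public static void main(String[] args) {
    Map<String, Long> profileA = Map.of("mem", 3L, "vcore", 5L, "res_1", 7L);
    Map<String, Long> override = Map.of("mem", 8L, "vcore", 0L, "res_1", 0L);
    System.out.println(merge(profileA, override)); // mem=8, vcore=5, res_1=7
  }
}
{code}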






[jira] [Commented] (YARN-6504) Add support for resource profiles in MapReduce

2017-09-21 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16176002#comment-16176002
 ] 

Daniel Templeton commented on YARN-6504:


Can we maybe get around the MR override capability by setting the defaults in 
the MR configuration to -1?
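
A hedged sketch of the idea: with a -1 sentinel default, "not set by the user" is distinguishable from a real value, so the profile can win. The property name is real; the resolution logic is illustrative, not MR's actual code.

{code}
import org.apache.hadoop.conf.Configuration;

public class SentinelDefaultSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    long profileMemMb = 4096;  // value coming from the resource profile
    // With a concrete default (e.g. 1024) the profile could never win;
    // with -1, an unset property is detectable.
    long userMemMb = conf.getLong("mapreduce.map.memory.mb", -1L);
    long effectiveMemMb = userMemMb > 0 ? userMemMb : profileMemMb;
    System.out.println(effectiveMemMb);  // 4096 -> the profile value wins
  }
}
{code}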

> Add support for resource profiles in MapReduce
> --
>
> Key: YARN-6504
> URL: https://issues.apache.org/jira/browse/YARN-6504
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-6504-YARN-3926.001.patch, 
> YARN-6504.YARN-3926.002.patch, YARN-6504.YARN-3926.003.patch
>
>







[jira] [Commented] (YARN-6943) Update Yarn to YARN in documentation

2017-09-21 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175948#comment-16175948
 ] 

Akira Ajisaka commented on YARN-6943:
-

Would you undo the following changes as well?
{code}
-* Please use the Yarn and Bower command-line tools to add new dependencies. 
And the tool version must be same as those defined in Prerequisites section.
+* Please use the YARN and Bower command-line tools to add new dependencies. 
And the tool version must be same as those defined in Prerequisites section.
{code}
{code}
-  
+  
{code}
The following should be "YARN UI has replaced NPM with Yarn package manager. 
And hence Yarn would be used to manage dependencies defined in package.json."
{code}
-Yarn UI has replaced NPM with Yarn package manager. And hence Yarn would be 
used to manage dependencies defined in package.json.
+YARN UI has replaced NPM with YARN package manager. And hence YARN would be 
used to manage dependencies defined in package.json.
{code}
In addition, would you undo the changes under the 
{{hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/}} directory?

> Update Yarn to YARN in documentation
> 
>
> Key: YARN-6943
> URL: https://issues.apache.org/jira/browse/YARN-6943
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Chetna Chaudhari
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6943-1.patch, YARN-6943-2.patch
>
>
> Based on the discussion with [~templedf] in YARN-6757, the official casing 
> of YARN is YARN, not Yarn, so we should update all the md files.






[jira] [Commented] (YARN-6623) Add support to turn off launching privileged containers in the container-executor

2017-09-21 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175908#comment-16175908
 ] 

Junping Du commented on YARN-6623:
--

bq. I plan to commit the patch to trunk/branch-3.0 tomorrow if there are no 
objections.
I think branch-2 also needs this, as all Docker-related patches should also 
land on branch-2. CC [~asuresh].

> Add support to turn off launching privileged containers in the 
> container-executor
> -
>
> Key: YARN-6623
> URL: https://issues.apache.org/jira/browse/YARN-6623
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Blocker
> Attachments: YARN-6623.001.patch, YARN-6623.002.patch, 
> YARN-6623.003.patch, YARN-6623.004.patch, YARN-6623.005.patch, 
> YARN-6623.006.patch, YARN-6623.007.patch, YARN-6623.008.patch, 
> YARN-6623.009.patch, YARN-6623.010.patch
>
>
> Currently, launching privileged containers is controlled by the NM. We should 
> add a flag to the container-executor.cfg allowing admins to disable launching 
> privileged containers at the container-executor level.
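
For illustration, a hedged sketch of such a container-executor.cfg; the [docker] section and the privileged-containers knob below mirror what eventually shipped with this feature, but the exact key names may differ from the patch under review:

{code}
yarn.nodemanager.linux-container-executor.group=hadoop

[docker]
  module.enabled=true
  # Refuse privileged containers at the container-executor level,
  # regardless of what the NM is configured to allow.
  docker.privileged-containers.enabled=false
{code}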






[jira] [Commented] (YARN-3661) Basic Federation UI

2017-09-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175801#comment-16175801
 ] 

Hadoop QA commented on YARN-3661:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 11s{color} 
| {color:red} hadoop-yarn-server-router in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.router.webapp.TestRouterWebServicesREST |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-3661 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888412/YARN-3661-011.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b8ada91ea802 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bfd1a72 |
| Default Java | 1.8.0_144 |

[jira] [Assigned] (YARN-7201) Add an apache httpd example YARN service

2017-09-21 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He reassigned YARN-7201:
-

Assignee: Billie Rinaldi

> Add an apache httpd example YARN service
> 
>
> Key: YARN-7201
> URL: https://issues.apache.org/jira/browse/YARN-7201
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Billie Rinaldi
> Attachments: YARN-7201.yarn-native-services.001.patch, 
> YARN-7201.yarn-native-services.002.patch, 
> YARN-7201.yarn-native-services.003.patch, 
> YARN-7201.yarn-native-services.004.patch, 
> YARN-7201.yarn-native-services.005.patch, 
> YARN-7201.yarn-native-services.006.patch, 
> YARN-7201.yarn-native-services.007.patch, 
> YARN-7201.yarn-native-services.008.patch
>
>
> Add an Apache httpd example service






[jira] [Commented] (YARN-6595) [API] Add Placement Constraints at the application level

2017-09-21 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175795#comment-16175795
 ] 

Arun Suresh commented on YARN-6595:
---

[~kkaranasos], do give the latest patch a quick look.

> [API] Add Placement Constraints at the application level
> 
>
> Key: YARN-6595
> URL: https://issues.apache.org/jira/browse/YARN-6595
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Arun Suresh
> Attachments: YARN-6595-YARN-6592.001.patch
>
>
> This JIRA allows placement constraints to be specified at the application 
> level.
> This will be used for placement constraints between different components of 
> the application.
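
As a sketch of what an application-level constraint could look like, written against the PlacementConstraints builder that later shipped from the YARN-6592 branch (the exact shape in this patch may differ; the "hbase-m" tag is illustrative):

{code}
import java.util.Map;
import java.util.Set;
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import org.apache.hadoop.yarn.api.resource.PlacementConstraints;

public class AppLevelConstraintSketch {
  public static void main(String[] args) {
    // Anti-affinity: containers tagged "hbase-m" must not share a node.
    PlacementConstraint antiAffinity = PlacementConstraints.build(
        PlacementConstraints.targetNotIn(
            PlacementConstraints.NODE,
            PlacementConstraints.PlacementTargets.allocationTag("hbase-m")));
    // Registered once for the whole application, keyed by allocation tags,
    // instead of being repeated on every individual resource request.
    Map<Set<String>, PlacementConstraint> appConstraints =
        Map.of(Set.of("hbase-m"), antiAffinity);
    System.out.println(appConstraints);
  }
}
{code}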






[jira] [Reopened] (YARN-6691) Update YARN daemon startup/shutdown scripts to include Router service

2017-09-21 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan reopened YARN-6691:
--

[~giovanni.fumarola], can you kindly provide a patch for branch-2?

> Update YARN daemon startup/shutdown scripts to include Router service
> -
>
> Key: YARN-6691
> URL: https://issues.apache.org/jira/browse/YARN-6691
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6691-YARN-2915.v1.patch, 
> YARN-6691-YARN-2915.v2.patch, YARN-6691-YARN-2915.v3.patch, 
> YARN-6691-YARN-2915.v4.patch
>
>
> YARN-5410 introduced a new YARN service, the Router. This JIRA proposes 
> updating the YARN daemon startup/shutdown scripts to include the Router 
> service.
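
After the script changes, starting and stopping the Router looks like the sketch below (the trunk-style {{yarn --daemon}} form; branch-2 would go through the older yarn-daemon.sh wrapper instead):

{code}
yarn --daemon start router
yarn --daemon stop router
{code}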






[jira] [Updated] (YARN-6691) Update YARN daemon startup/shutdown scripts to include Router service

2017-09-21 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6691:
-
Parent Issue: YARN-5597  (was: YARN-2915)

> Update YARN daemon startup/shutdown scripts to include Router service
> -
>
> Key: YARN-6691
> URL: https://issues.apache.org/jira/browse/YARN-6691
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6691-YARN-2915.v1.patch, 
> YARN-6691-YARN-2915.v2.patch, YARN-6691-YARN-2915.v3.patch, 
> YARN-6691-YARN-2915.v4.patch
>
>
> YARN-5410 introduced a new YARN service, the Router. This JIRA proposes 
> updating the YARN daemon startup/shutdown scripts to include the Router 
> service.






[jira] [Updated] (YARN-3661) Basic Federation UI

2017-09-21 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-3661:
-
Parent Issue: YARN-5597  (was: YARN-2915)

> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch, 
> YARN-3661-002.patch, YARN-3661-003.patch, YARN-3661-004.patch, 
> YARN-3661-005.patch, YARN-3661-006.patch, YARN-3661-007.patch, 
> YARN-3661-008.patch, YARN-3661-009.patch, YARN-3661-010.patch, 
> YARN-3661-011.patch
>
>
> The UI provided by each RM gives a correct "local" view of what is 
> running in its sub-cluster. In the context of federation we need new 
> UIs that can track load, jobs, and users across sub-clusters.






[jira] [Updated] (YARN-7095) Federation: routing getNode/getNodes/getMetrics REST invocations transparently to multiple RMs

2017-09-21 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-7095:
-
Parent Issue: YARN-2915  (was: YARN-5597)

> Federation: routing getNode/getNodes/getMetrics REST invocations 
> transparently to multiple RMs
> --
>
> Key: YARN-7095
> URL: https://issues.apache.org/jira/browse/YARN-7095
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-7095.v1-rebase.patch, YARN-7095.v2.patch, 
> YARN-7095.v3.patch, YARN-7095.v4.patch
>
>







[jira] [Updated] (YARN-6923) Metrics for Federation Router

2017-09-21 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6923:
-
Parent Issue: YARN-2915  (was: YARN-5597)

> Metrics for Federation Router
> -
>
> Key: YARN-6923
> URL: https://issues.apache.org/jira/browse/YARN-6923
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6923.v1.patch, YARN-6923.v2.patch
>
>
> This JIRA proposes adding metrics for the Federation Router.






[jira] [Updated] (YARN-7010) Federation: routing REST invocations transparently to multiple RMs (part 2 - getApps)

2017-09-21 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-7010:
-
Parent Issue: YARN-2915  (was: YARN-5597)

> Federation: routing REST invocations transparently to multiple RMs (part 2 - 
> getApps)
> -
>
> Key: YARN-7010
> URL: https://issues.apache.org/jira/browse/YARN-7010
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-7010.v0.patch, YARN-7010.v1.patch, 
> YARN-7010.v2.patch, YARN-7010.v3.patch, YARN-7010.v4.patch, YARN-7010.v5.patch
>
>







[jira] [Updated] (YARN-5603) Metrics for Federation StateStore

2017-09-21 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5603:
-
Parent Issue: YARN-2915  (was: YARN-5597)

> Metrics for Federation StateStore
> -
>
> Key: YARN-5603
> URL: https://issues.apache.org/jira/browse/YARN-5603
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Subru Krishnan
>Assignee: Ellen Hui
> Fix For: 3.0.0-beta1
>
> Attachments: 0001-Add-Federation-Client-metrics.patch, 
> YARN-5603.001.patch, YARN-5603.002.patch, YARN-5603.003.patch
>
>
> This JIRA proposes adding metrics for the Federation StateStore.






[jira] [Updated] (YARN-6900) ZooKeeper based implementation of the FederationStateStore

2017-09-21 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6900:
-
Target Version/s:   (was: 3.0.0-beta1)

> ZooKeeper based implementation of the FederationStateStore
> --
>
> Key: YARN-6900
> URL: https://issues.apache.org/jira/browse/YARN-6900
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation, nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Íñigo Goiri
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6900-002.patch, YARN-6900-003.patch, 
> YARN-6900-004.patch, YARN-6900-005.patch, YARN-6900-006.patch, 
> YARN-6900-007.patch, YARN-6900-008.patch, YARN-6900-009.patch, 
> YARN-6900-010.patch, YARN-6900-011.patch, YARN-6900-YARN-2915-000.patch, 
> YARN-6900-YARN-2915-001.patch
>
>
> YARN-5408 defines the unified {{FederationStateStore}} API. Currently we 
> only support SQL-based stores; this JIRA tracks adding a ZooKeeper-based 
> implementation to simplify deployment, as ZooKeeper is already widely used 
> for the {{RMStateStore}}.
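
A hedged sketch of selecting the implementation in yarn-site.xml, by analogy with how the {{RMStateStore}} is chosen; the property and class names below match the federation store wiring but may differ in detail from this patch:

{code}
<property>
  <name>yarn.federation.state-store.class</name>
  <value>org.apache.hadoop.yarn.server.federation.store.impl.ZookeeperFederationStateStore</value>
</property>
{code}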






[jira] [Updated] (YARN-6900) ZooKeeper based implementation of the FederationStateStore

2017-09-21 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6900:
-
Parent Issue: YARN-2915  (was: YARN-5597)

> ZooKeeper based implementation of the FederationStateStore
> --
>
> Key: YARN-6900
> URL: https://issues.apache.org/jira/browse/YARN-6900
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation, nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Íñigo Goiri
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6900-002.patch, YARN-6900-003.patch, 
> YARN-6900-004.patch, YARN-6900-005.patch, YARN-6900-006.patch, 
> YARN-6900-007.patch, YARN-6900-008.patch, YARN-6900-009.patch, 
> YARN-6900-010.patch, YARN-6900-011.patch, YARN-6900-YARN-2915-000.patch, 
> YARN-6900-YARN-2915-001.patch
>
>
> YARN-5408 defines the unified {{FederationStateStore}} API. Currently we 
> only support SQL-based stores; this JIRA tracks adding a ZooKeeper-based 
> implementation to simplify deployment, as ZooKeeper is already widely used 
> for the {{RMStateStore}}.






[jira] [Updated] (YARN-6996) Change javax.cache library implementation from JSR107 to Apache Geronimo

2017-09-21 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6996:
-
Fix Version/s: 2.9.0

> Change javax.cache library implementation from JSR107 to Apache Geronimo
> 
>
> Key: YARN-6996
> URL: https://issues.apache.org/jira/browse/YARN-6996
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6996.001.patch
>
>
> With YARN Federation, we added YARN-3672, which adds the following 
> dependency:
> {noformat}
> <dependency>
>   <groupId>javax.cache</groupId>
>   <artifactId>cache-api</artifactId>
> </dependency>
> {noformat}
> This third-party library has some murky license history, as documented in 
> this [really long comment 
> thread|https://github.com/jsr107/jsr107spec/issues/333].  The summary of the 
> thread is that "the library is officially APL (take our word for it), but 
> there hasn't been a subsequent release with the license file change".
> LEGAL-325 has been filed to discuss the validity of this license for Apache.
> Before we get to the final Hadoop 3 release, I'm wondering if anyone else has 
> concerns about using this library.  Just from looking at the various javax 
> Maven artifacts in our pom.xml files, I see a lot of other javax.* library 
> entries (although we may not ship the .jars if they're part of the Java 
> runtime).
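
A hedged sketch of the swap this patch makes: replacing the murky-licensed JSR107 artifact with Apache Geronimo's ASF-licensed javax.cache spec jar (the version below is illustrative):

{code}
<!-- was: javax.cache : cache-api -->
<dependency>
  <groupId>org.apache.geronimo.specs</groupId>
  <artifactId>geronimo-jcache_1.0_spec</artifactId>
  <version>1.0-alpha-1</version>
</dependency>
{code}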






[jira] [Updated] (YARN-6896) Federation: routing REST invocations transparently to multiple RMs (part 1 - basic execution)

2017-09-21 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6896:
-
Parent Issue: YARN-2915  (was: YARN-5597)

> Federation: routing REST invocations transparently to multiple RMs (part 1 - 
> basic execution)
> -
>
> Key: YARN-6896
> URL: https://issues.apache.org/jira/browse/YARN-6896
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6896.proto.patch, YARN-6896.v1.patch, 
> YARN-6896.v2.patch, YARN-6896.v3.patch, YARN-6896.v4.patch
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> RMWebServicesProtocol requests to the appropriate RM(s) in a federated YARN 
> cluster.






[jira] [Updated] (YARN-6970) Add PoolInitializationException as retriable exception in FederationFacade

2017-09-21 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6970:
-
Parent Issue: YARN-2915  (was: YARN-5597)

> Add PoolInitializationException as retriable exception in FederationFacade
> --
>
> Key: YARN-6970
> URL: https://issues.apache.org/jira/browse/YARN-6970
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6970.v1.patch
>
>
> During execution we found that HikariCP can throw a 
> PoolInitializationException. It should be treated as a retriable exception 
> in {{FederationStateStoreFacade}}.
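
A hedged, illustrative sketch of what "retriable" means here, using Hadoop's RetryPolicies rather than the facade's actual code; the Hikari exception class is real:

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;
import com.zaxxer.hikari.pool.HikariPool.PoolInitializationException;

public class RetriableSketch {
  public static void main(String[] args) {
    Map<Class<? extends Exception>, RetryPolicy> exceptionToPolicy =
        new HashMap<>();
    // Pool startup hiccups are transient, so retry with a short sleep
    // instead of failing the state-store call outright.
    exceptionToPolicy.put(PoolInitializationException.class,
        RetryPolicies.retryUpToMaximumCountWithFixedSleep(
            3, 100, TimeUnit.MILLISECONDS));
    RetryPolicy policy = RetryPolicies.retryByException(
        RetryPolicies.TRY_ONCE_THEN_FAIL, exceptionToPolicy);
    System.out.println(policy);
  }
}
{code}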






[jira] [Comment Edited] (YARN-3661) Basic Federation UI

2017-09-21 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175771#comment-16175771
 ] 

Carlo Curino edited comment on YARN-3661 at 9/22/17 1:29 AM:
-

Thanks, [~elgoiri].

+1 pending:
# YETUS
# one "*" missing for Application Submitted
# the application "STATE", which in my test cluster still shows up scrambled



was (Author: curino):
+1 pending YETUS

> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch, 
> YARN-3661-002.patch, YARN-3661-003.patch, YARN-3661-004.patch, 
> YARN-3661-005.patch, YARN-3661-006.patch, YARN-3661-007.patch, 
> YARN-3661-008.patch, YARN-3661-009.patch, YARN-3661-010.patch, 
> YARN-3661-011.patch
>
>
> The UI provided by each RM gives a correct "local" view of what is 
> running in its sub-cluster. In the context of federation we need new 
> UIs that can track load, jobs, and users across sub-clusters.






[jira] [Commented] (YARN-7237) Cleanup usages of ResourceProfiles

2017-09-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175777#comment-16175777
 ] 

Hadoop QA commented on YARN-7237:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 59s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 23 new + 185 unchanged - 1 fixed = 208 total (was 186) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
40s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
47s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m  4s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m  
7s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7237 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888384/YARN-7237.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 53ff72d4604a 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |

[jira] [Commented] (YARN-3661) Basic Federation UI

2017-09-21 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175771#comment-16175771
 ] 

Carlo Curino commented on YARN-3661:


+1 pending YETUS

> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch, 
> YARN-3661-002.patch, YARN-3661-003.patch, YARN-3661-004.patch, 
> YARN-3661-005.patch, YARN-3661-006.patch, YARN-3661-007.patch, 
> YARN-3661-008.patch, YARN-3661-009.patch, YARN-3661-010.patch, 
> YARN-3661-011.patch
>
>
> The UI provided by each RM gives a correct "local" view of what is 
> running in its sub-cluster. In the context of federation we need new 
> UIs that can track load, jobs, and users across sub-clusters.






[jira] [Updated] (YARN-3661) Basic Federation UI

2017-09-21 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated YARN-3661:
--
Attachment: YARN-3661-011.patch

> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch, 
> YARN-3661-002.patch, YARN-3661-003.patch, YARN-3661-004.patch, 
> YARN-3661-005.patch, YARN-3661-006.patch, YARN-3661-007.patch, 
> YARN-3661-008.patch, YARN-3661-009.patch, YARN-3661-010.patch, 
> YARN-3661-011.patch
>
>
> The UI provided by each RM gives a correct "local" view of what is 
> running in its sub-cluster. In the context of federation we need new 
> UIs that can track load, jobs, and users across sub-clusters.






[jira] [Commented] (YARN-3661) Basic Federation UI

2017-09-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175768#comment-16175768
 ] 

Hadoop QA commented on YARN-3661:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m 
57s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  3m 57s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} 
|
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
18s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 18s{color} 
| {color:red} hadoop-yarn-server-router in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
9s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-3661 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888405/YARN-3661-010.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 919c5c2ba5bf 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |

[jira] [Commented] (YARN-7045) Remove FSLeafQueue#addAppSchedulable

2017-09-21 Thread Sen Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175765#comment-16175765
 ] 

Sen Zhao commented on YARN-7045:


Thanks for your review and commit, [~yufeigu]

> Remove FSLeafQueue#addAppSchedulable
> 
>
> Key: YARN-7045
> URL: https://issues.apache.org/jira/browse/YARN-7045
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Sen Zhao
>  Labels: newbie++
> Fix For: 2.9.0, 3.1.0
>
> Attachments: YARN-7045.001.patch, YARN-7045.002.patch
>
>
> It is only used for tests, and it is not necessary now that we have the 
> {{addApp}} method.






[jira] [Updated] (YARN-2280) Resource manager web service fields are not accessible

2017-09-21 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-2280:
-
Fix Version/s: 2.9.0

> Resource manager web service fields are not accessible
> --
>
> Key: YARN-2280
> URL: https://issues.apache.org/jira/browse/YARN-2280
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.4.0, 2.4.1
>Reporter: Krisztian Horvath
>Assignee: Krisztian Horvath
>Priority: Trivial
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: YARN-2280.patch
>
>
> Using the resource manager's REST API 
> (org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices), some 
> REST calls return a class whose fields are not accessible after 
> unmarshalling, for example SchedulerTypeInfo -> schedulerInfo. When using 
> the same classes on the client side, these fields are only accessible via 
> reflection.
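
A minimal sketch of the kind of fix implied: expose the unmarshalled field through a public getter so client-side code doesn't need reflection (class and field mirror the SchedulerTypeInfo example above; the actual patch may differ):

{code}
public class SchedulerTypeInfo {
  protected SchedulerInfo schedulerInfo;  // populated by JAXB unmarshalling

  // Without an accessor, client code can only reach the field reflectively.
  public SchedulerInfo getSchedulerInfo() {
    return schedulerInfo;
  }
}

class SchedulerInfo {
}
{code}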






[jira] [Commented] (YARN-7237) Cleanup usages of ResourceProfiles

2017-09-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175760#comment-16175760
 ] 

Hadoop QA commented on YARN-7237:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
19s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m 
40s{color} | {color:red} hadoop-yarn in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 58s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 19 new + 186 unchanged - 1 fixed = 205 total (was 187) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
47s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
57s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 22s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
49s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7237 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888380/YARN-7237.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fbb4ab2f842d 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |

[jira] [Commented] (YARN-6623) Add support to turn off launching privileged containers in the container-executor

2017-09-21 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175755#comment-16175755
 ] 

Wangda Tan commented on YARN-6623:
--

Thanks [~vvasudev] and all the other folks for the hard work to finish and 
review the patch. I just checked the latest patch: it looks fine, and all the 
newly added config knobs are properly designed. So I'm +1 on the latest patch; 
it looks like it needs a rebase on top of YARN-7034.

Are there any other comments? I plan to commit the patch to trunk/branch-3.0 
tomorrow if there are no objections.

> Add support to turn off launching privileged containers in the 
> container-executor
> -
>
> Key: YARN-6623
> URL: https://issues.apache.org/jira/browse/YARN-6623
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Blocker
> Attachments: YARN-6623.001.patch, YARN-6623.002.patch, 
> YARN-6623.003.patch, YARN-6623.004.patch, YARN-6623.005.patch, 
> YARN-6623.006.patch, YARN-6623.007.patch, YARN-6623.008.patch, 
> YARN-6623.009.patch, YARN-6623.010.patch
>
>
> Currently, launching privileged containers is controlled by the NM. We should 
> add a flag to the container-executor.cfg allowing admins to disable launching 
> privileged containers at the container-executor level.
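
For illustration, such a knob could look roughly like this in 
container-executor.cfg (section and key names are illustrative, not taken 
from the patch):

{noformat}
[docker]
  # refuse to launch privileged containers regardless of NM-level settings
  docker.privileged-containers.enabled=false
{noformat}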






[jira] [Commented] (YARN-7009) TestNMClient.testNMClientNoCleanupOnStop is flaky by design

2017-09-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175737#comment-16175737
 ] 

Hadoop QA commented on YARN-7009:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 25m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
4m  5s{color} | {color:orange} root: The patch generated 8 new + 305 unchanged 
- 2 fixed = 313 total (was 307) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 14m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
43s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 24m 42s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m  9s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m 41s{color} 
| {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  2m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}271m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.nodemanager.TestNodeManagerResync |
|   | hadoop.yarn.server.nodemanager.TestNodeStatusUpdater |
|   | 
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
 |
|   | hadoop.yarn.client.TestApplicationClientProtocolOnHA |
|   | hadoop.yarn.sls.TestReservationSystemInvariants |
|   | hadoop.yarn.sls.TestSLSRunner |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
|
|   | 
org.apache.hadoop.yarn.client.api.impl.TestOpportunisticContainerAlloca

[jira] [Updated] (YARN-3661) Basic Federation UI

2017-09-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated YARN-3661:
--
Attachment: YARN-3661-010.patch

> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch, 
> YARN-3661-002.patch, YARN-3661-003.patch, YARN-3661-004.patch, 
> YARN-3661-005.patch, YARN-3661-006.patch, YARN-3661-007.patch, 
> YARN-3661-008.patch, YARN-3661-009.patch, YARN-3661-010.patch
>
>
> The UIs provided by each RM give a correct "local" view of what is 
> running in a sub-cluster. In the context of federation, we need new 
> UIs that can track load, jobs, and users across sub-clusters.






[jira] [Updated] (YARN-6991) "Kill application" button does not show error if other user tries to kill the application for secure cluster

2017-09-21 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-6991:
---
Attachment: YARN-6550.branch-2.001.patch

Uploading a patch for branch-2.

> "Kill application" button does not show error if other user tries to kill the 
> application for secure cluster
> 
>
> Key: YARN-6991
> URL: https://issues.apache.org/jira/browse/YARN-6991
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6550.branch-2.001.patch, YARN-6991.001.patch, 
> YARN-6991.002.patch, YARN-6991.003.patch
>
>
> 1. Submit an application as user 1
> 2. Log into the RM UI as user 2
> 3. Kill the application submitted by user 1
> 4. Even though the application does not get killed, no error/info dialog 
> box is shown to tell the user that they do not have permission to kill 
> another user's application






[jira] [Commented] (YARN-6550) Capture launch_container.sh logs

2017-09-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175671#comment-16175671
 ] 

Hadoop QA commented on YARN-6550:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 30 new + 140 unchanged - 1 fixed = 170 total (was 141) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m  
8s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-6550 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888385/YARN-6550.011.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 83bfa42067e3 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bfd1a72 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17580/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17580/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17580/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  

[jira] [Commented] (YARN-6142) Support rolling upgrade between 2.x and 3.x

2017-09-21 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175664#comment-16175664
 ] 

Ray Chiang commented on YARN-6142:
--

Quick summary.  Filed YARN-7219 for the only follow-up issue found while 
investigating the protobuf changes.

> Support rolling upgrade between 2.x and 3.x
> ---
>
> Key: YARN-6142
> URL: https://issues.apache.org/jira/browse/YARN-6142
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: rolling upgrade
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Ray Chiang
>Priority: Blocker
>
> Counterpart JIRA to HDFS-11096. We need to:
> * examine YARN and MR's  JACC report for binary and source incompatibilities
> * run the [PB 
> differ|https://issues.apache.org/jira/browse/HDFS-11096?focusedCommentId=15816405&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15816405]
>  that Sean wrote for HDFS-11096 for the YARN PBs.
> * sanity test some rolling upgrades between 2.x and 3.x. Ideally these are 
> automated and something we can run upstream.






[jira] [Commented] (YARN-6142) Support rolling upgrade between 2.x and 3.x

2017-09-21 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175663#comment-16175663
 ] 

Ray Chiang commented on YARN-6142:
--

Security.proto
* Two new messages

yarn_protos.proto
* ResourceUtilizationProto
** YARN-4293.  2.8.0 and later.
* ContainerStateProto added 2 new enum values
** YARN-4597.  2.8.0 and later.
* ContainerProto added 3 optional fields
** execution_type
*** YARN-5127.  2.9.0 and later.
** allocation_request_id
*** YARN-4887.  2.9.0 and later.
** version
*** YARN-5221.  2.8.0 and later.
* FinalApplicationStatusProto added 1 new enum value
** YARN-4207.  2.8.0 and later.
* ApplicationResourceUsageReportProto added 4 optional fields
** queue_usage_percentage
*** YARN-4285.  2.8.0 and later.
** cluster_usage_percentage
*** YARN-4285.  2.8.0 and later.
** preempted_memory_seconds
*** YARN-4218.  2.8.0 and later.
** preempted_vcore_seconds
*** YARN-4218.  2.8.0 and later.
* ApplicationReportProto added 6 optional fields
** log_aggregation_status
*** YARN-1402.  2.8.0 and later.
** unmanaged_application
*** YARN-3543.  2.8.0 and later.
** priority
*** YARN-3948.  2.8.0 and later.
** appNodeLabelExpression
*** YARN-3717.  2.8.0 and later.
** amNodeLabelExpression
*** YARN-3717.  2.8.0 and later.
** appTimeouts
*** YARN-5965.  2.9.0 and later.
* New message AppTimeoutsMapProto
** YARN-5965.  2.9.0 and later.
* New message ApplicationTimeoutProto
** YARN-5965.  2.9.0 and later.
* New enum LogAggregationStatusProto
** YARN-1402.  2.8.0 and later.
* ApplicationAttemptReportProto added 2 optional fields
** start_time
*** YARN-3451.  2.8.0 and later.
** finish_time
*** YARN-3451.  2.8.0 and later.
* NodeStateProto added 2 new enum values
** NS_DECOMMISSIONING
*** YARN-3225.  2.8.0 and later.
** NS_SHUTDOWN
*** YARN-41.  2.8.0 and later.
* NodeReportProto added 2 optional fields
** containers_utilization
*** YARN-4293.  2.8.0 and later.
** node_utilization
*** YARN-4293.  2.8.0 and later.
* New message NodeLabelProto
* New enum ContainerTypeProto
* New enum ExecutionTypeProto
* ResourceRequestProto added 2 optional fields
** execution_type_request
*** YARN-5180.  2.9.0 and later.
** allocation_request_id
*** YARN-4888.  2.9.0 and later.
* New message ExecutionTypeRequestProto
** YARN-4888.  2.9.0 and later.
* ApplicationSubmissionContextProto changed 1 field to repeated, added 1 
optional field
** am_container_resource_request
*** YARN-6050.  2.9.0 and later.
** application_timeouts
*** YARN-4205.  2.9.0 and later.
* New enum ApplicationTimeoutTypeProto
** YARN-4205.  2.9.0 and later.
* New message ApplicationTimeoutMapProto
** YARN-4205.  2.9.0 and later.
* New message ApplicationUpdateTimeoutMapProto
** YARN-4205.  2.9.0 and later.
* LogAggregationContextProto added 2 optional fields
** log_aggregation_policy_class_name
*** YARN-221.  2.8.0 and later.
** log_aggregation_policy_parameters
*** YARN-221.  2.8.0 and later.
* YarnClusterMetricsProto added 5 optional fields
** All fields YARN-3348
* QueueStateProto added 1 new enum value
** YARN-5756.  2.9.0 and later.
* New message QueueStatisticsProto
** YARN-3348.  2.8.0 and later.
* QueueInfoProto added 3 optional fields
** queueStatistics
** preemptionDisabled
** queueConfigurationsMap
* New message QueueConfigurationsProto
** YARN-6164.  2.9.0 and later.
* New message QueueConfigurationsMapProto
** YARN-6164.  2.9.0 and later.
* New enum SignalContainerCommandProto
** YARN-1897.  2.8.0 and later.
* ReservationDefinitionProto added 2 optional fields
** recurrence_expression
*** YARN-5327.  2.9.0 and later.
** priority
*** YARN-5384.  2.9.0 and later.
* New message ResourceAllocationRequestProto
** YARN-4340.  2.8.0 and later.
* New message ReservationAllocationStateProto
** YARN-4340.  2.8.0 and later.
* ContainerLaunchContextProto added 2 optional fields
** container_retry_context
*** YARN-3998.  2.9.0 and later.
** tokens_conf
*** YARN-5910.  2.9.0 and later.
* ContainerStatusProto added 3 new optional fields
** capability
*** YARN-3866.  2.8.0 and later.
** executionType
*** YARN-2882.  2.9.0 and later.
** container_attributes
*** YARN-5430.  2.9.0 and later.
* Message ContainerResourceIncreaseRequestProto moved to 
yarn_service_protos.proto
** YARN-3866.  2.8.0 and later.
*** Still in 2.8.x
* Message ContainerResourceIncreaseProto moved to yarn_service_protos.proto
** YARN-3866.  2.8.0 and later.
*** Still in 2.8.x
* Message ContainerResourceDecreaseProto moved to yarn_service_protos.proto
** YARN-3866.  2.8.0 and later.
*** Still in 2.8.x
* New message ContainerRetryContextProto
** YARN-3998.  2.9.0 and later.

yarn_server_common_service_protos.proto
* New message RemoteNodeProto
* New message RegisterDistributedSchedulingAMResponseProto
* New message DistributedSchedulingAllocateResponseProto
* New message DistributedSchedulingAllocateRequestProto
* New message NodeLabelsProto
* RegisterNodeManagerRequestProto added 2 optional fields
* RegisterNodeManage

[jira] [Assigned] (YARN-7219) Fix AllocateRequestProto difference between branch-2/branch-2.8 and trunk

2017-09-21 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang reassigned YARN-7219:


Assignee: Ray Chiang

> Fix AllocateRequestProto difference between branch-2/branch-2.8 and trunk
> -
>
> Key: YARN-7219
> URL: https://issues.apache.org/jira/browse/YARN-7219
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Critical
>
> For yarn_service_protos.proto, we have the following code in
> (branch-2.8.0, branch-2.8, branch-2)
> {noformat}
> message AllocateRequestProto {
>   repeated ResourceRequestProto ask = 1;
>   repeated ContainerIdProto release = 2;
>   optional ResourceBlacklistRequestProto blacklist_request = 3;
>   optional int32 response_id = 4;
>   optional float progress = 5;
>   repeated ContainerResourceIncreaseRequestProto increase_request = 6;
>   repeated UpdateContainerRequestProto update_requests = 7;
> }
> {noformat}
> For yarn_service_protos.proto, we have the following code in
> (trunk)
> {noformat}
> message AllocateRequestProto {
>   repeated ResourceRequestProto ask = 1;
>   repeated ContainerIdProto release = 2;
>   optional ResourceBlacklistRequestProto blacklist_request = 3;
>   optional int32 response_id = 4;
>   optional float progress = 5;
>   repeated UpdateContainerRequestProto update_requests = 6;
> }
> {noformat}
> Notes
> * YARN-3866 was the original JIRA for container resizing.
> * YARN-5221 is what introduced the incompatible change: field number 6 was 
> repurposed from {{increase_request}} to {{update_requests}}, so the two 
> branches disagree on that field's message type on the wire.
> * In branch-2/branch-2.8/branch-2.8.0, this protobuf change was undone by 
> "Addendum patch to YARN-3866: fix incompatible API change."
> * There was a similar API fix done in YARN-6071.






[jira] [Commented] (YARN-7226) Whitelisted variables do not support delayed variable expansion

2017-09-21 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175635#comment-16175635
 ] 

Sidharta Seethana commented on YARN-7226:
-

{quote}
As I understand it, a whitelisted variable is simply an environment variable 
that will be propagated from the nodemanager's environment to the container's 
environment unless the container has specified its own value for that variable.
{quote}

[~jlowe],
Using docker containers introduces a new way in which environment variables 
for a container can be specified: through the docker image itself, which is a 
fairly common scenario. For example, an image used for running MR tasks could 
specify its own JAVA_HOME that should be used instead of the JAVA_HOME 
specified in the nodemanager's environment. Since {{launch_container.sh}} runs 
inside the docker container, using the specified docker image, using 
{{putEnvIfAbsent}} for whitelisted env vars doesn't do the right thing in this 
case.
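
For what it's worth, a minimal sketch of the alternative (names and values 
hypothetical, not the actual NM code): rather than resolving the whitelisted 
value eagerly with {{putEnvIfAbsent}}, emit a shell-level default so a value 
baked into the docker image wins when present:

{code}
// Hypothetical sketch: defer whitelisted-variable resolution to the shell
// inside the container, so an image-provided value takes precedence.
public class WhitelistEnvSketch {
  static String exportLine(String var, String nmValue) {
    // ${VAR:-default} expands to $VAR if already set (e.g. by the image),
    // otherwise to the value inherited from the NM's environment.
    return "export " + var + "=\"${" + var + ":-" + nmValue + "}\"";
  }

  public static void main(String[] args) {
    // prints: export JAVA_HOME="${JAVA_HOME:-/usr/lib/jvm/default}"
    System.out.println(exportLine("JAVA_HOME", "/usr/lib/jvm/default"));
  }
}
{code}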

> Whitelisted variables do not support delayed variable expansion
> ---
>
> Key: YARN-7226
> URL: https://issues.apache.org/jira/browse/YARN-7226
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: YARN-7226.001.patch, YARN-7226.002.patch
>
>
> The nodemanager supports a configurable list of environment variables, via 
> yarn.nodemanager.env-whitelist, that will be propagated to the container's 
> environment unless those variables were specified in the container launch 
> context.  Unfortunately the handling of these whitelisted variables prevents 
> using delayed variable expansion.  For example, if a user shipped their own 
> version of hadoop with their job via the distributed cache and specified:
> {noformat}
> HADOOP_COMMON_HOME={{PWD}}/my-private-hadoop/
> {noformat}
>  as part of their job, the variable will be set as the *literal* string:
> {noformat}
> $PWD/my-private-hadoop/
> {noformat}
> rather than having $PWD expand to the container's current directory as it 
> does for any other, non-whitelisted variable being set to the same value.






[jira] [Commented] (YARN-4266) Allow users to enter containers as UID:GID pair instead of by username

2017-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175626#comment-16175626
 ] 

Hudson commented on YARN-4266:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12941 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12941/])
YARN-4266. Allow users to enter containers as UID:GID pair instead of by 
(jlowe: rev bfd1a72ba8fbb06da73fede2a85e0b544d6ab43f)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/TestDockerContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRunCommand.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java


> Allow users to enter containers as UID:GID pair instead of by username
> --
>
> Key: YARN-4266
> URL: https://issues.apache.org/jira/browse/YARN-4266
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: luhuichun
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-4266.001.patch, YARN-4266.001.patch, 
> YARN-4266.002.patch, YARN-4266.003.patch, YARN-4266.004.patch, 
> YARN-4266.005.patch, YARN-4266.006.patch, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v2.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v3.pdf, 
> YARN-4266-branch-2.8.001.patch
>
>
> Docker provides a mechanism (the --user switch) that enables us to specify 
> the user the container processes should run as. We use this mechanism today 
> when launching docker containers. In non-secure mode, we run the docker 
> container based on 
> `yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user`, and in 
> secure mode, as the submitting user. However, this mechanism breaks down with 
> a large number of 'pre-created' images which don't necessarily have the users 
> available within the image. Examples of such images include shared images 
> that need to be used by multiple users. We need a way to allow a 
> pre-defined set of users to run containers based on existing images without 
> using the --user switch. There are some implications of disabling this user 
> squashing that we'll need to work through: log aggregation, artifact 
> deletion, etc.
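
For reference, the switch in question is Docker's own {{--user}} flag, which 
can take a UID:GID pair, e.g.:

{noformat}
# runs the container processes as uid 1000 / gid 1000, without requiring
# that user to exist inside the image
docker run --user 1000:1000 <image> <command>
{noformat}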






[jira] [Commented] (YARN-6570) No logs were found for running application, running container

2017-09-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175616#comment-16175616
 ] 

Hadoop QA commented on YARN-6570:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
51s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
14s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} branch-2.8 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
52s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in branch-2.8 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} branch-2.8 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
53s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:c2d96dd |
| JIRA Issue | YARN-6570 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12887966/YARN-6570-branch-2.8.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux fcb44ab3eaf2 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2.8 / 1b3503e |
| Default Java | 1.7.0_151 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/17577/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17577/testReport/ |
| modules | C: hadoop-yarn-pr

[jira] [Commented] (YARN-7201) Add an apache httpd example YARN service

2017-09-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175608#comment-16175608
 ] 

Hadoop QA commented on YARN-7201:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
57s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7201 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888381/YARN-7201.yarn-native-services.008.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 636d4fa92a0e 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 3ae5a47 |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/17578/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17578/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add an apache httpd example YARN service
> 
>
> Key: YARN-7201
> URL: https://issues.apache.org/jira/browse/YARN-7201
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
> Attachments: YARN-7201.yarn-native-services.001.patch, 
> YARN-7201.yarn-native-services.002.patch, 
> YARN-7201.yarn-native-services.003.patch, 
> YARN-7201.yarn-native-services.004.patch, 
> YARN-7201.yarn-native-services.005.patch, 
> YARN-7201.yarn-native-services.006.patch, 
> YARN-7201.yarn-native-services.007.patch, 
> YARN-7201.yarn-native-services.008.patch
>
>
> Add an apache httpd example service






[jira] [Updated] (YARN-7237) Cleanup usages of ResourceProfiles

2017-09-21 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7237:
-
Attachment: YARN-7237.003.patch

Added a minor cleanup for RPMImpl: removed resourceTypeInfo since it is not 
used anywhere.

> Cleanup usages of ResourceProfiles
> --
>
> Key: YARN-7237
> URL: https://issues.apache.org/jira/browse/YARN-7237
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7237.001.patch, YARN-7237.002.patch, 
> YARN-7237.003.patch
>
>
> While doing tests, I found a couple of issues:
> 1) When using {{ProfileCapability#getProfileCapabilityOverride}}, it 
> overwrites whatever is specified in resource-profiles.json whenever the 
> override value is >= 0, which differs from the javadocs of 
> {{ProfileCapability}}:
> bq. For example, if you have a resource profile "small" that maps to <4096M, 
> 2 cores, 1 gpu> and you set the capability override to <8192M, 0 cores, 0 
> gpu>, then the actual resource allocation on the ResourceManager will be 
> <8192M, 2 cores, 1 gpu>
> To me, the correct behavior is to overwrite only when the value is > 0. The 
> reason is that resource values default to 0. For example, assume we have a 
> profile {{"a" = (mem=3, vcore=5, res_1=7)}} and create a capability override 
> {{capability = new Resource(8)}}. The final result should be 
> (mem=8, vcore=5, res_1=7), not (mem=8, vcore=0, res_1=0).
> 2) ResourceProfileManager currently loads the minimum/maximum profile from a 
> config file (resource-profiles.json). To me this is not correct, because the 
> minimum/maximum allocation for each resource type is already specified in 
> {{resource-types.xml}}. We should always use 
> {{ResourceUtils#getResourceTypesMinimum/MaximumAllocation}} to read them from 
> resource-types.xml and yarn-site.xml. These values will be added to profiles 
> so clients can get these configs.
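
To make the proposed semantics concrete, here is a minimal standalone sketch 
(hypothetical helper code, not the actual RM implementation):

{code}
// Hypothetical sketch of the override semantics argued for above:
// only override values strictly greater than 0 replace the profile's values.
import java.util.HashMap;
import java.util.Map;

public class ProfileOverrideSketch {
  static Map<String, Long> merge(Map<String, Long> profile,
                                 Map<String, Long> override) {
    Map<String, Long> result = new HashMap<>(profile);
    for (Map.Entry<String, Long> e : override.entrySet()) {
      if (e.getValue() > 0) { // 0 means "not set": keep the profile's value
        result.put(e.getKey(), e.getValue());
      }
    }
    return result;
  }

  public static void main(String[] args) {
    Map<String, Long> profile = new HashMap<>();
    profile.put("mem", 3L);
    profile.put("vcore", 5L);
    profile.put("res_1", 7L);

    Map<String, Long> override = new HashMap<>();
    override.put("mem", 8L);   // only memory was set on the override
    override.put("vcore", 0L);
    override.put("res_1", 0L);

    // yields mem=8, vcore=5, res_1=7 rather than mem=8, vcore=0, res_1=0
    System.out.println(merge(profile, override));
  }
}
{code}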






[jira] [Updated] (YARN-6550) Capture launch_container.sh logs

2017-09-21 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-6550:
---
Attachment: YARN-6550.011.patch

Thanks for the review, [~wangda]. Updated the patch with the comments addressed.

> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, 
> YARN-6550.005.patch, YARN-6550.006.patch, YARN-6550.007.patch, 
> YARN-6550.008.patch, YARN-6550.009.patch, YARN-6550.010.patch, 
> YARN-6550.011.patch, YARN-6550.011.patch, YARN-6550.patch
>
>
> launch_container.sh, which is generated by the NM, does a bunch of things 
> (like creating links, etc.) while launching a process. No logs are captured 
> until {{exec}} is called. We need to capture all failures of 
> launch_container.sh for easier troubleshooting.
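
As a sketch of the general idea (file names and wiring are hypothetical): run 
the script through a shell that redirects the script's own output, so anything 
it prints before {{exec}} ends up next to the container logs.

{code}
// Hypothetical sketch: wrap the launch script so its pre-exec output is
// captured into dedicated files in the container's log directory.
import java.util.Arrays;
import java.util.List;

public class LaunchCaptureSketch {
  static List<String> wrap(String script, String logDir) {
    // the outer shell applies the redirections before handing off to the
    // script, so everything the script prints is appended to these files
    return Arrays.asList("/bin/bash", "-c",
        "exec /bin/bash \"" + script + "\""
            + " 1>>\"" + logDir + "/prelaunch.out\""
            + " 2>>\"" + logDir + "/prelaunch.err\"");
  }

  public static void main(String[] args) {
    System.out.println(wrap("launch_container.sh", "/var/log/container"));
  }
}
{code}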






[jira] [Commented] (YARN-7201) Add an apache httpd example YARN service

2017-09-21 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175597#comment-16175597
 ] 

Eric Yang commented on YARN-7201:
-

+1 for patch 008.

> Add an apache httpd example YARN service
> 
>
> Key: YARN-7201
> URL: https://issues.apache.org/jira/browse/YARN-7201
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
> Attachments: YARN-7201.yarn-native-services.001.patch, 
> YARN-7201.yarn-native-services.002.patch, 
> YARN-7201.yarn-native-services.003.patch, 
> YARN-7201.yarn-native-services.004.patch, 
> YARN-7201.yarn-native-services.005.patch, 
> YARN-7201.yarn-native-services.006.patch, 
> YARN-7201.yarn-native-services.007.patch, 
> YARN-7201.yarn-native-services.008.patch
>
>
> Add an apache httpd example service






[jira] [Updated] (YARN-7201) Add an apache httpd example YARN service

2017-09-21 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-7201:
-
Attachment: YARN-7201.yarn-native-services.008.patch

Here's a new patch that expands the httpd example to include 2 httpd instances 
and an additional httpd proxy instance.

> Add an apache httpd example YARN service
> 
>
> Key: YARN-7201
> URL: https://issues.apache.org/jira/browse/YARN-7201
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
> Attachments: YARN-7201.yarn-native-services.001.patch, 
> YARN-7201.yarn-native-services.002.patch, 
> YARN-7201.yarn-native-services.003.patch, 
> YARN-7201.yarn-native-services.004.patch, 
> YARN-7201.yarn-native-services.005.patch, 
> YARN-7201.yarn-native-services.006.patch, 
> YARN-7201.yarn-native-services.007.patch, 
> YARN-7201.yarn-native-services.008.patch
>
>
> Add an apache httpd example service






[jira] [Updated] (YARN-7237) Cleanup usages of ResourceProfiles

2017-09-21 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7237:
-
Attachment: YARN-7237.002.patch

Attached ver.2 patch.

> Cleanup usages of ResourceProfiles
> --
>
> Key: YARN-7237
> URL: https://issues.apache.org/jira/browse/YARN-7237
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7237.001.patch, YARN-7237.002.patch
>
>
> While doing tests, I found a couple of issues:
> 1) When using {{ProfileCapability#getProfileCapabilityOverride}}, it 
> overwrites whatever is specified in resource-profiles.json whenever the 
> override value is >= 0, which differs from the javadocs of 
> {{ProfileCapability}}:
> bq. For example, if you have a resource profile "small" that maps to <4096M, 
> 2 cores, 1 gpu> and you set the capability override to <8192M, 0 cores, 0 
> gpu>, then the actual resource allocation on the ResourceManager will be 
> <8192M, 2 cores, 1 gpu>
> To me, the correct behavior is to overwrite only when the value is > 0. The 
> reason is that resource values default to 0. For example, assume we have a 
> profile {{"a" = (mem=3, vcore=5, res_1=7)}} and create a capability override 
> {{capability = new Resource(8)}}. The final result should be 
> (mem=8, vcore=5, res_1=7), not (mem=8, vcore=0, res_1=0).
> 2) ResourceProfileManager currently loads the minimum/maximum profile from a 
> config file (resource-profiles.json). To me this is not correct, because the 
> minimum/maximum allocation for each resource type is already specified in 
> {{resource-types.xml}}. We should always use 
> {{ResourceUtils#getResourceTypesMinimum/MaximumAllocation}} to read them from 
> resource-types.xml and yarn-site.xml. These values will be added to profiles 
> so clients can get these configs.






[jira] [Commented] (YARN-6626) Embed REST API service into RM

2017-09-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175576#comment-16175576
 ] 

Hadoop QA commented on YARN-6626:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
1s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
13s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
43s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
9s{color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
13s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
yarn-native-services has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  1s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 4 new + 215 unchanged - 0 fixed = 219 total (was 215) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
16s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 37s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 50s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
21s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Nullcheck of RMWebApp.rm at line 68 of value previously dereferenced in 
org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebApp.setup()  

[jira] [Created] (YARN-7242) Support specifying values of different resource types in DistributedShell for easier testing

2017-09-21 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-7242:


 Summary: Support specifying values of different resource types in 
DistributedShell for easier testing
 Key: YARN-7242
 URL: https://issues.apache.org/jira/browse/YARN-7242
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wangda Tan


Currently, DS supports specifying a resource profile; it would be better to 
also allow users to directly specify resource keys/values from the command 
line.
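
Something along these lines, perhaps; the {{-container_resources}} flag below 
is purely hypothetical and only illustrates the intent (the other flags are 
existing DistributedShell options):

{noformat}
hadoop jar hadoop-yarn-applications-distributedshell-*.jar \
  org.apache.hadoop.yarn.applications.distributedshell.Client \
  -jar hadoop-yarn-applications-distributedshell-*.jar \
  -shell_command date \
  -container_resources memory-mb=3072,vcores=1
{noformat}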






[jira] [Updated] (YARN-7242) Support specifying values of different resource types in DistributedShell for easier testing

2017-09-21 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7242:
-
Labels: newbie  (was: )

> Support specifying values of different resource types in DistributedShell 
> for easier testing
> -
>
> Key: YARN-7242
> URL: https://issues.apache.org/jira/browse/YARN-7242
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>  Labels: newbie
>
> Currently, DS supports specifying a resource profile; it would be better to 
> also allow users to directly specify resource keys/values from the command line.






[jira] [Commented] (YARN-6570) No logs were found for running application, running container

2017-09-21 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175566#comment-16175566
 ] 

Junping Du commented on YARN-6570:
--

The failed tests pass locally. Triggering another Jenkins run.

> No logs were found for running application, running container
> -
>
> Key: YARN-6570
> URL: https://issues.apache.org/jira/browse/YARN-6570
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Junping Du
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-beta1, 3.1.0
>
> Attachments: YARN-6570-branch-2.8.001.patch, 
> YARN-6570-branch-2.8.002.patch, YARN-6570.poc.patch, YARN-6570-v2.patch, 
> YARN-6570-v3.patch
>
>
> 1. Obtain the running containers of a running application from the following CLI:
>  yarn container -list appattempt
> 2. Could not fetch logs:
> {code}
> Can not find any log file matching the pattern: ALL for the container
> {code}






[jira] [Commented] (YARN-6953) Clean up ResourceUtils.setMinimumAllocationForMandatoryResources() and setMaximumAllocationForMandatoryResources()

2017-09-21 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175558#comment-16175558
 ] 

Wangda Tan commented on YARN-6953:
--

[~maniraj...@gmail.com], could you rebase the patch to the latest trunk?

> Clean up ResourceUtils.setMinimumAllocationForMandatoryResources() and 
> setMaximumAllocationForMandatoryResources()
> --
>
> Key: YARN-6953
> URL: https://issues.apache.org/jira/browse/YARN-6953
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6953-YARN-3926.001.patch, 
> YARN-6953-YARN-3926.002.patch, YARN-6953-YARN-3926.003.patch, 
> YARN-6953-YARN-3926.004.patch, YARN-6953-YARN-3926.005.patch, 
> YARN-6953-YARN-3926.006.patch, YARN-6953-YARN-3926-WIP.patch
>
>
> The {{setMinimumAllocationForMandatoryResources()}} and 
> {{setMaximumAllocationForMandatoryResources()}} methods are quite convoluted. 
>  They'd be much simpler if they just handled CPU and memory manually instead 
> of trying to be clever about doing it in a loop.  There are also issues, such 
> as the log warning always talking about memory or the last element of the 
> inner array being a copy of the first element.
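
A rough sketch of the suggested shape (illustrative only, assuming the 
standard YarnConfiguration constants; not the actual patch):

{code}
// Hypothetical sketch: read the two mandatory resources explicitly instead
// of driving both through a generic loop over resource descriptors.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class MandatoryResourceSketch {
  static void setMinimumAllocation(Configuration conf) {
    long minMemoryMb = conf.getLong(
        YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB,
        YarnConfiguration.DEFAULT_RM_SCHEDULER_MINIMUM_ALLOCATION_MB);
    int minVcores = conf.getInt(
        YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_VCORES,
        YarnConfiguration.DEFAULT_RM_SCHEDULER_MINIMUM_ALLOCATION_VCORES);
    // apply the two values to the memory and vcores ResourceInformation
    // entries directly: warnings can then name the correct resource, and
    // there is no inner array to get out of sync.
  }
}
{code}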






[jira] [Commented] (YARN-6991) "Kill application" button does not show error if other user tries to kill the application for secure cluster

2017-09-21 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175546#comment-16175546
 ] 

Wangda Tan commented on YARN-6991:
--

Also, I forgot to mention: [~suma.shivaprasad], should we backport this patch 
to branch-2.8/branch-2? If so, could you provide patches for those branches?

> "Kill application" button does not show error if other user tries to kill the 
> application for secure cluster
> 
>
> Key: YARN-6991
> URL: https://issues.apache.org/jira/browse/YARN-6991
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6991.001.patch, YARN-6991.002.patch, 
> YARN-6991.003.patch
>
>
> 1. Submit an application as user 1
> 2. Log into the RM UI as user 2
> 3. Kill the application submitted by user 1
> 4. Even though the application does not get killed, no error/info dialog 
> box is shown to tell the user that they do not have permission to kill 
> another user's application






[jira] [Commented] (YARN-7046) Add closing logic to configuration store

2017-09-21 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175545#comment-16175545
 ] 

Wangda Tan commented on YARN-7046:
--

+1, committing.

> Add closing logic to configuration store
> 
>
> Key: YARN-7046
> URL: https://issues.apache.org/jira/browse/YARN-7046
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-7046-YARN-5734.001.patch, 
> YARN-7046-YARN-5734.002.patch
>
>







[jira] [Created] (YARN-7241) Merge YARN-5734 to trunk/branch-2

2017-09-21 Thread Jonathan Hung (JIRA)
Jonathan Hung created YARN-7241:
---

 Summary: Merge YARN-5734 to trunk/branch-2
 Key: YARN-7241
 URL: https://issues.apache.org/jira/browse/YARN-7241
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jonathan Hung
Assignee: Jonathan Hung


Ticket for running Jenkins pre-commit on the full diff.






[jira] [Created] (YARN-7240) Add more states and transitions to stabilize the NM Container state machine

2017-09-21 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-7240:
-

 Summary: Add more states and transitions to stabilize the NM 
Container state machine
 Key: YARN-7240
 URL: https://issues.apache.org/jira/browse/YARN-7240
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Arun Suresh
Assignee: kartheek muthyala


There seem to be a few intermediate states that can be added to improve the 
stability of the NM container state machine.

For example:
* The REINITIALIZING state should probably be split into REINITIALIZING and 
REINITIALIZING_AWAITING_KILL. 
* Container updates are currently handled in the ContainerScheduler, but it 
would probably be better to have them plumbed through the container state 
machine as a new state, say UPDATING, plus a new container event (see the 
sketch below).

The plan is also to add some extra tests to try to cover every transition.
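
For illustration, a rough, self-contained sketch of the proposed direction 
using YARN's StateMachineFactory. The states and events below are stand-ins 
for the proposed names from this JIRA, not existing identifiers in 
ContainerImpl:
{code}
import org.apache.hadoop.yarn.state.StateMachine;
import org.apache.hadoop.yarn.state.StateMachineFactory;

public class ContainerStateSketch {
  // Proposed/illustrative states and events only.
  enum S { RUNNING, REINITIALIZING, REINITIALIZING_AWAITING_KILL, UPDATING }
  enum E { REINITIALIZE, PROCESS_KILLED, UPDATE, UPDATE_DONE }

  static class Op { }  // stand-in for ContainerImpl
  static class Ev { }  // stand-in for ContainerEvent

  private static final StateMachineFactory<Op, S, E, Ev> FACTORY =
      new StateMachineFactory<Op, S, E, Ev>(S.RUNNING)
          // Resource re-localization happens in REINITIALIZING...
          .addTransition(S.RUNNING, S.REINITIALIZING, E.REINITIALIZE)
          // ...then a dedicated state waits for the old process to exit.
          .addTransition(S.REINITIALIZING, S.REINITIALIZING_AWAITING_KILL,
              E.PROCESS_KILLED)
          // Container updates plumbed through the state machine as UPDATING.
          .addTransition(S.RUNNING, S.UPDATING, E.UPDATE)
          .addTransition(S.UPDATING, S.RUNNING, E.UPDATE_DONE)
          .installTopology();

  public static void main(String[] args) {
    StateMachine<S, E, Ev> sm = FACTORY.make(new Op());
    sm.doTransition(E.REINITIALIZE, new Ev());
    System.out.println(sm.getCurrentState());  // REINITIALIZING
  }
}
{code}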






[jira] [Commented] (YARN-7237) Cleanup usages of ResourceProfiles

2017-09-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175488#comment-16175488
 ] 

Hadoop QA commented on YARN-7237:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
10s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  4s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 18 new + 169 unchanged - 1 fixed = 187 total (was 170) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
43s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
45s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 21s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 31m 33s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
28s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestApplicationMasterService |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
| Timed out junit tests | org.apache.hadoop.yarn.client.api.impl.TestAMRMClient 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7237 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888352/YARN-7237.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4d062538a9d7 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_

[jira] [Commented] (YARN-7034) DefaultLinuxContainerRuntime and DockerLinuxContainerRuntime sends client environment variables to container-executor

2017-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175467#comment-16175467
 ] 

Hudson commented on YARN-7034:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12940 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12940/])
YARN-7034. DefaultLinuxContainerRuntime and DockerLinuxContainerRuntime 
(junping_du: rev e5e1851d803bf8d8b96fec1b5c0058014e9329d0)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/PrivilegedOperationExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestLinuxContainerExecutorWithMocks.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DefaultLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/TestDockerContainerRuntime.java


> DefaultLinuxContainerRuntime and DockerLinuxContainerRuntime sends client 
> environment variables to container-executor
> -
>
> Key: YARN-7034
> URL: https://issues.apache.org/jira/browse/YARN-7034
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Blocker
> Attachments: YARN-7034.000.patch, YARN-7034.001.patch, 
> YARN-7034.002.patch, YARN-7034.003.patch, YARN-7034.004.patch, 
> YARN-7034.005.patch, YARN-7034.006.patch, YARN-7034.branch-2.000.patch, 
> YARN-7034.branch-2.004.patch, YARN-7034.branch-2.005.patch, 
> YARN-7034.branch-2.006.patch, YARN-7034.branch-2.8.000.patch, 
> YARN-7034.branch-2.8.004.patch, YARN-7034.branch-2.8.005.patch, 
> YARN-7034.branch-2.8.006.patch
>
>
> This behavior is unnecessary since nothing from the environment is actually 
> used right now. One option is to whitelist these variables before passing 
> them (see the sketch below). Are there any known use cases that would 
> justify this?
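
As a minimal sketch of the whitelist option (the allowed variable names below 
are illustrative assumptions, not what any patch uses):
{code}
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch only: forward an allow-listed subset of the client environment
// instead of the whole map.
final class EnvWhitelistSketch {
  private static final Set<String> ALLOWED = new HashSet<>(
      Arrays.asList("JAVA_HOME", "HADOOP_YARN_HOME", "LANG"));  // assumed

  static Map<String, String> filterEnv(Map<String, String> clientEnv) {
    Map<String, String> safe = new HashMap<>();
    for (Map.Entry<String, String> e : clientEnv.entrySet()) {
      if (ALLOWED.contains(e.getKey())) {
        safe.put(e.getKey(), e.getValue());
      }
    }
    return safe;
  }
}
{code}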






[jira] [Comment Edited] (YARN-6962) Add support for updateContainers when allocating using FederationInterceptor

2017-09-21 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16172158#comment-16172158
 ] 

Botong Huang edited comment on YARN-6962 at 9/21/17 8:58 PM:
-

testContainerUpdateExecTypeOpportunisticToGuaranteed failure is not related and 
is being handled in YARN-7196. 


was (Author: botong):
testContainerUpdateExecTypeOpportunisticToGuaranteed failure is not related, 
and being handled in Yarn-7196. 

> Add support for updateContainers when allocating using FederationInterceptor
> 
>
> Key: YARN-6962
> URL: https://issues.apache.org/jira/browse/YARN-6962
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-6962.v1.patch, YARN-6962.v2.patch
>
>
> Container update is introduced in YARN-5221. Federation Interceptor needs to 
> support it when splitting (merging) the allocate request (response).






[jira] [Commented] (YARN-7201) Add an apache httpd example YARN service

2017-09-21 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175438#comment-16175438
 ] 

Eric Yang commented on YARN-7201:
-

+1 for patch 007 to reflect the current state of the code base.

> Add an apache httpd example YARN service
> 
>
> Key: YARN-7201
> URL: https://issues.apache.org/jira/browse/YARN-7201
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
> Attachments: YARN-7201.yarn-native-services.001.patch, 
> YARN-7201.yarn-native-services.002.patch, 
> YARN-7201.yarn-native-services.003.patch, 
> YARN-7201.yarn-native-services.004.patch, 
> YARN-7201.yarn-native-services.005.patch, 
> YARN-7201.yarn-native-services.006.patch, 
> YARN-7201.yarn-native-services.007.patch
>
>
> Add an apache httpd example service






[jira] [Updated] (YARN-6626) Embed REST API service into RM

2017-09-21 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-6626:

Attachment: YARN-6626.yarn-native-services.002.patch

Added the ability to load the ApiServer into the RM as a web filter using 
reflection. This is done to avoid a Maven cyclic dependency on the 
hadoop-yarn-services-api module. A rough sketch of the idea is below.
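
This sketch assumes a hypothetical class name; the real class and filter 
registration are in the attached patch:
{code}
// Sketch only: load the services REST API filter by name so the RM module
// needs no compile-time dependency on hadoop-yarn-services-api.
private void maybeLoadApiServerFilter() {
  // Hypothetical placeholder class name.
  final String apiServerClass =
      "org.apache.hadoop.yarn.service.webapp.ApiServer";
  try {
    Class<?> clazz = Class.forName(apiServerClass);
    javax.servlet.Filter filter =
        (javax.servlet.Filter) clazz.getDeclaredConstructor().newInstance();
    // ... register 'filter' on the RM web app under /services/v1/* ...
  } catch (ClassNotFoundException e) {
    // services-api jar not on the classpath: run without the embedded API.
  } catch (ReflectiveOperationException e) {
    throw new RuntimeException("Could not load " + apiServerClass, e);
  }
}
{code}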

> Embed REST API service into RM
> --
>
> Key: YARN-6626
> URL: https://issues.apache.org/jira/browse/YARN-6626
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-6626.yarn-native-services.001.patch, 
> YARN-6626.yarn-native-services.002.patch
>
>
> As of now the deployment model of the Native Services REST API service is 
> standalone. There are several cross-cutting solutions that can be inherited 
> for free (kerberos, HA, ACLs, trusted proxy support, etc.) by the REST API 
> service if it is embedded into the RM process. In fact we can expose the REST 
> API via the same port as RM UI (8088 default). The URI path 
> /services/v1/applications will distinguish the REST API calls from other RM 
> APIs.






[jira] [Commented] (YARN-6550) Capture launch_container.sh logs

2017-09-21 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175426#comment-16175426
 ] 

Wangda Tan commented on YARN-6550:
--

Thanks [~suma.shivaprasad], 

+1 to skipping the stack-trace printout for a non-zero exit code.

In general the patch LGTM; only one naming suggestion in the generated script, 
for readability:
- rename {{export STDOUT}} to {{PRELAUNCH_OUT}}, and {{STDERR}} to 
{{PRELAUNCH_ERR}} (illustrated below)
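
For illustration, the generated redirection lines would then read roughly as 
follows. This is a sketch only; the prelaunch.out/prelaunch.err file names and 
the exact lines emitted by the NM's script builder are assumptions here:
{code}
// Sketch of what the NM could emit into launch_container.sh after the rename.
static String prelaunchRedirects(String containerLogDir) {
  return "export PRELAUNCH_OUT=\"" + containerLogDir + "/prelaunch.out\"\n"
      + "exec >\"${PRELAUNCH_OUT}\"\n"
      + "export PRELAUNCH_ERR=\"" + containerLogDir + "/prelaunch.err\"\n"
      + "exec 2>\"${PRELAUNCH_ERR}\"\n";
}
{code}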

> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, 
> YARN-6550.005.patch, YARN-6550.006.patch, YARN-6550.007.patch, 
> YARN-6550.008.patch, YARN-6550.009.patch, YARN-6550.010.patch, 
> YARN-6550.011.patch, YARN-6550.patch
>
>
> launch_container.sh, which is generated by the NM, does a bunch of things 
> (like creating links, etc.) while launching a process. No logs are captured 
> until {{exec}} is called. We need to capture all failures of 
> launch_container.sh for easier troubleshooting.






[jira] [Commented] (YARN-6623) Add support to turn off launching privileged containers in the container-executor

2017-09-21 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175416#comment-16175416
 ] 

Eric Badger commented on YARN-6623:
---

Given that docker is alpha in 2.8.2, I agree with [~vinodkv] that we should not 
backport this to 2.8.2, as it makes significant changes to the 
container-executor. Nobody should seriously attempt to run docker on 2.8.2 
without backporting most, if not all, of what's in 2.9 and doing a large 
security review of their configurations and setup. So, in my mind, some docker 
code may be "in" 2.8.2, but it really isn't supported at all until 2.9+. The 
only reason I can think of to put this in 2.8.2 would be to get the changes to 
container-executor, but it's not obvious to me why we would need those. 

> Add support to turn off launching privileged containers in the 
> container-executor
> -
>
> Key: YARN-6623
> URL: https://issues.apache.org/jira/browse/YARN-6623
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Blocker
> Attachments: YARN-6623.001.patch, YARN-6623.002.patch, 
> YARN-6623.003.patch, YARN-6623.004.patch, YARN-6623.005.patch, 
> YARN-6623.006.patch, YARN-6623.007.patch, YARN-6623.008.patch, 
> YARN-6623.009.patch, YARN-6623.010.patch
>
>
> Currently, launching privileged containers is controlled by the NM. We should 
> add a flag to the container-executor.cfg allowing admins to disable launching 
> privileged containers at the container-executor level.






[jira] [Commented] (YARN-5195) RM intermittently crashed with NPE while handling APP_ATTEMPT_REMOVED event when async-scheduling enabled in CapacityScheduler

2017-09-21 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175407#comment-16175407
 ] 

Jonathan Hung commented on YARN-5195:
-

Thanks [~jlowe] for verifying! (Also for the review/commit.)

> RM intermittently crashed with NPE while handling APP_ATTEMPT_REMOVED event 
> when async-scheduling enabled in CapacityScheduler
> --
>
> Key: YARN-5195
> URL: https://issues.apache.org/jira/browse/YARN-5195
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Karam Singh
>Assignee: sandflee
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: YARN-5195.01.patch, YARN-5195.02.patch, 
> YARN-5195.03.patch, YARN-5195-branch-2.7.001.patch, 
> YARN-5195-branch-2.8.001.patch, YARN-5195-branch-2.8.001.patch
>
>
> While running gridmix experiments one time came across incident where RM went 
> down with following exception
> {noformat}
> 2016-05-28 15:45:24,459 [ResourceManager Event Processor] FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type APP_ATTEMPT_REMOVED to the scheduler
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.completedContainer(LeafQueue.java:1282)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.completedContainerInternal(CapacityScheduler.java:1469)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.completedContainer(AbstractYarnScheduler.java:497)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.doneApplicationAttempt(CapacityScheduler.java:860)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1319)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:127)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:704)
> at java.lang.Thread.run(Thread.java:745)
> 2016-05-28 15:45:24,460 [ApplicationMasterLauncher #49] INFO 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning 
> master appattempt_1464449118385_0006_01
> 2016-05-28 15:45:24,460 [ResourceManager Event Processor] INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {noformat}






[jira] [Commented] (YARN-6620) [YARN-6223] NM Java side code changes to support isolate GPU devices by using CGroups

2017-09-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175399#comment-16175399
 ] 

Hadoop QA commented on YARN-6620:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 11s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 80 new + 478 unchanged - 24 fixed = 558 total (was 502) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
28s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 523 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
12s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
53s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
42s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
|  |  Load of known null value in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.ResourcePluginManager.initialize(Context)
  At ResourcePluginManager.java:in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.ResourcePluginManager.initialize(Context

[jira] [Commented] (YARN-5195) RM intermittently crashed with NPE while handling APP_ATTEMPT_REMOVED event when async-scheduling enabled in CapacityScheduler

2017-09-21 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175395#comment-16175395
 ] 

Jason Lowe commented on YARN-5195:
--

The unit test failures are similar to the branch-2.7 case -- known issues with 
the Jenkins test environment on those branches.  The unit tests pass locally 
for me with the patch applied.

Committing.

> RM intermittently crashed with NPE while handling APP_ATTEMPT_REMOVED event 
> when async-scheduling enabled in CapacityScheduler
> --
>
> Key: YARN-5195
> URL: https://issues.apache.org/jira/browse/YARN-5195
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Karam Singh
>Assignee: sandflee
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: YARN-5195.01.patch, YARN-5195.02.patch, 
> YARN-5195.03.patch, YARN-5195-branch-2.7.001.patch, 
> YARN-5195-branch-2.8.001.patch, YARN-5195-branch-2.8.001.patch
>
>
> While running gridmix experiments one time came across incident where RM went 
> down with following exception
> {noformat}
> 2016-05-28 15:45:24,459 [ResourceManager Event Processor] FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type APP_ATTEMPT_REMOVED to the scheduler
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.completedContainer(LeafQueue.java:1282)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.completedContainerInternal(CapacityScheduler.java:1469)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.completedContainer(AbstractYarnScheduler.java:497)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.doneApplicationAttempt(CapacityScheduler.java:860)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1319)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:127)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:704)
> at java.lang.Thread.run(Thread.java:745)
> 2016-05-28 15:45:24,460 [ApplicationMasterLauncher #49] INFO 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning 
> master appattempt_1464449118385_0006_01
> 2016-05-28 15:45:24,460 [ResourceManager Event Processor] INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {noformat}






[jira] [Commented] (YARN-6623) Add support to turn off launching privileged containers in the container-executor

2017-09-21 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175386#comment-16175386
 ] 

Junping Du commented on YARN-6623:
--

bq. We have already documented that the docker feature is alpha, not for 
production use (and documenting more). Given this, I don't think we should add 
more risk to 2.8.2.
That's also my initial thinking, but [~shaneku...@gmail.com] convinced me 
offline that this is important for 2.8.2 even as an alpha feature - it is 
indeed still alpha for 2.9 and 3.0, and it seems to affect the non-docker 
container runtime as well. So I have changed my mind and support this backport. 
[~vvasudev], [~shaneku...@gmail.com] and [~miklos.szeg...@cloudera.com], what 
do you guys think?

> Add support to turn off launching privileged containers in the 
> container-executor
> -
>
> Key: YARN-6623
> URL: https://issues.apache.org/jira/browse/YARN-6623
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Blocker
> Attachments: YARN-6623.001.patch, YARN-6623.002.patch, 
> YARN-6623.003.patch, YARN-6623.004.patch, YARN-6623.005.patch, 
> YARN-6623.006.patch, YARN-6623.007.patch, YARN-6623.008.patch, 
> YARN-6623.009.patch, YARN-6623.010.patch
>
>
> Currently, launching privileged containers is controlled by the NM. We should 
> add a flag to the container-executor.cfg allowing admins to disable launching 
> privileged containers at the container-executor level.






[jira] [Commented] (YARN-6968) Hardcoded absolute pathname in DockerLinuxContainerRuntime

2017-09-21 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175372#comment-16175372
 ] 

Eric Badger commented on YARN-6968:
---

Woot! Glad to get this in. Thanks [~jlowe] and [~miklos.szeg...@cloudera.com]!

> Hardcoded absolute pathname in DockerLinuxContainerRuntime
> --
>
> Key: YARN-6968
> URL: https://issues.apache.org/jira/browse/YARN-6968
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Miklos Szegedi
>Assignee: Eric Badger
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6968.001.patch, YARN-6968.002.patch, 
> YARN-6968.003.patch, YARN-6968.004.patch
>
>
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
>  has a hardcoded absolute pathname that is being flagged by findbugs.
> This could be done after YARN-6757 is checked in.






[jira] [Commented] (YARN-2037) Add restart support for Unmanaged AMs

2017-09-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175365#comment-16175365
 ] 

Hadoop QA commented on YARN-2037:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 157 unchanged - 3 fixed = 157 total (was 160) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 46s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-2037 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888101/YARN-2037.v1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 27431876ea9a 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b9e423f |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17571/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17571/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17571/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add restart support for Unmanaged AMs
> -
>
>   

[jira] [Commented] (YARN-7045) Remove FSLeafQueue#addAppSchedulable

2017-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175363#comment-16175363
 ] 

Hudson commented on YARN-7045:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12939 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12939/])
YARN-7045. Remove FSLeafQueue#addAppSchedulable. (Contributed by Sen (yufei: 
rev a92ef030a2707182e90acee644e47c8ef7e1fd8d)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFSLeafQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSLeafQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java


> Remove FSLeafQueue#addAppSchedulable
> 
>
> Key: YARN-7045
> URL: https://issues.apache.org/jira/browse/YARN-7045
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Sen Zhao
>  Labels: newbie++
> Fix For: 2.9.0, 3.1.0
>
> Attachments: YARN-7045.001.patch, YARN-7045.002.patch
>
>
> It is only used for tests, and it is not necessary now that we have the 
> method {{addApp}}. 






[jira] [Commented] (YARN-6968) Hardcoded absolute pathname in DockerLinuxContainerRuntime

2017-09-21 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175360#comment-16175360
 ] 

Miklos Szegedi commented on YARN-6968:
--

Thank you, [~ebadger] and [~jlowe], for working on this! It is really nice to 
see the findbugs check all green.

> Hardcoded absolute pathname in DockerLinuxContainerRuntime
> --
>
> Key: YARN-6968
> URL: https://issues.apache.org/jira/browse/YARN-6968
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Miklos Szegedi
>Assignee: Eric Badger
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6968.001.patch, YARN-6968.002.patch, 
> YARN-6968.003.patch, YARN-6968.004.patch
>
>
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
>  has a hardcoded absolute pathname that is being flagged by findbugs.
> This could be done after YARN-6757 is checked in.






[jira] [Updated] (YARN-7239) Possible launch/cleanup race condition in ContainersLauncher

2017-09-21 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7239:
-
Description: 
ContainersLauncher.handle() submits the launch job and then adds the job into 
the collection risking that the cleanup will miss it and return. This should be 
in reversed order in all 3 instances:
{code}
containerLauncher.submit(launch);
running.put(containerId, launch);
{code}
The cleanup code that the above code is racing with:
{code}
ContainerLaunch runningContainer = running.get(containerId);
if (runningContainer == null) {
  // Container not launched. So nothing needs to be done.
  LOG.info("Container " + containerId + " not running, nothing to 
signal.");
  return;
}
...
{code}


  was:
ContainersLauncher.handle() submits the launch job and then adds the job into 
the collection risking that the cleanup will miss it and return. This should be 
in reversed order in all 3 instances:
{code}
containerLauncher.submit(launch);
running.put(containerId, launch);
{code}
The cleanup code the above code is racing with:
{code}
ContainerLaunch runningContainer = running.get(containerId);
if (runningContainer == null) {
  // Container not launched. So nothing needs to be done.
  LOG.info("Container " + containerId + " not running, nothing to 
signal.");
  return;
}
...
{code}



> Possible launch/cleanup race condition in ContainersLauncher
> 
>
> Key: YARN-7239
> URL: https://issues.apache.org/jira/browse/YARN-7239
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>  Labels: newbie
>
> ContainersLauncher.handle() submits the launch job and then adds the job into 
> the collection risking that the cleanup will miss it and return. This should 
> be in reversed order in all 3 instances:
> {code}
> containerLauncher.submit(launch);
> running.put(containerId, launch);
> {code}
> The cleanup code that the above code is racing with:
> {code}
> ContainerLaunch runningContainer = running.get(containerId);
> if (runningContainer == null) {
>   // Container not launched. So nothing needs to be done.
>   LOG.info("Container " + containerId + " not running, nothing to 
> signal.");
>   return;
> }
> ...
> {code}
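
A minimal sketch of the suggested reordering, reusing the field names from the 
snippet above: publish the launch into {{running}} first, and roll back if the 
submit is rejected.
{code}
// Make the container visible to the cleanup path before handing the launch
// to the executor (RejectedExecutionException is
// java.util.concurrent.RejectedExecutionException).
running.put(containerId, launch);
try {
  containerLauncher.submit(launch);
} catch (RejectedExecutionException e) {
  running.remove(containerId);
  throw e;
}
{code}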






[jira] [Updated] (YARN-7009) TestNMClient.testNMClientNoCleanupOnStop is flaky by design

2017-09-21 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7009:
-
Attachment: YARN-7009.004.patch

Thank you, [~asuresh], for the review. I updated the patch. In general I think 
lambdas improve the readability of the code, but they do cause additional work 
when the code overlaps with branch-2.

> TestNMClient.testNMClientNoCleanupOnStop is flaky by design
> ---
>
> Key: YARN-7009
> URL: https://issues.apache.org/jira/browse/YARN-7009
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7009.000.patch, YARN-7009.001.patch, 
> YARN-7009.002.patch, YARN-7009.003.patch, YARN-7009.004.patch
>
>
> The sleeps that wait for a transition to reinit and then back to running are 
> not long enough; they can miss the reinit event.
> {code}
> java.lang.AssertionError: Exception is not expected: 
> org.apache.hadoop.yarn.exceptions.YarnException: Cannot perform RE_INIT on 
> [container_1502735389852_0001_01_01]. Current state is [REINITIALIZING, 
> isReInitializing=true].
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.preReInitializeOrLocalizeCheck(ContainerManagerImpl.java:1772)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1697)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1668)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagementProtocolPBServiceImpl.reInitializeContainer(ContainerManagementProtocolPBServiceImpl.java:214)
>   at 
> org.apache.hadoop.yarn.proto.ContainerManagementProtocol$ContainerManagementProtocolService$2.callBlockingMethod(ContainerManagementProtocol.java:237)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testReInitializeContainer(TestNMClient.java:567)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testContainerManagement(TestNMClient.java:405)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testNMClientNoCleanupOnStop(TestNMClient.java:214)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Cannot perform 
> RE_INIT on [container_1502735389852_0001_01_01]. Current state is 
> [REINITIALIZING, isReInitializing=true].
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.preReInitializeOrLocalizeCheck(ContainerManagerImpl.java:1772)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1697)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1668)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagementProtocolPBServiceImpl.reInitializeContainer(ContainerManagementProtocolPBServiceImpl.java:214)
>   at 
> org.apache.hadoop.yarn.proto.ContainerManagementProtocol$ContainerManagementProtocolService$2.callBlockingMethod(ContainerManagementProtocol.java:237)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.h

[jira] [Commented] (YARN-7045) Remove FSLeafQueue#addAppSchedulable

2017-09-21 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175349#comment-16175349
 ] 

Yufei Gu commented on YARN-7045:


+1. 
Thanks for the patch, [~Sen Zhao]. Committed to trunk and branch-2.

> Remove FSLeafQueue#addAppSchedulable
> 
>
> Key: YARN-7045
> URL: https://issues.apache.org/jira/browse/YARN-7045
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Sen Zhao
>  Labels: newbie++
> Fix For: 2.9.0, 3.1.0
>
> Attachments: YARN-7045.001.patch, YARN-7045.002.patch
>
>
> It is only used for tests, and it is not necessary now that we have the 
> method {{addApp}}. 






[jira] [Commented] (YARN-6991) "Kill application" button does not show error if other user tries to kill the application for secure cluster

2017-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175332#comment-16175332
 ] 

Hudson commented on YARN-6991:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12938 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12938/])
YARN-6991. "Kill application" button does not show error if other user (wangda: 
rev 263e2c692a4b0013766cd1f6b6d7ed674b2b1040)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppBlock.java


> "Kill application" button does not show error if other user tries to kill the 
> application for secure cluster
> 
>
> Key: YARN-6991
> URL: https://issues.apache.org/jira/browse/YARN-6991
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6991.001.patch, YARN-6991.002.patch, 
> YARN-6991.003.patch
>
>
> 1. Submit an application as user 1
> 2. Log into the RM UI as user 2
> 3. Kill the application submitted by user 1
> 4. Even though the application does not get killed, no error/info dialog box 
> is shown to tell the user that they do not have permission to kill another 
> user's application






[jira] [Created] (YARN-7239) Possible launch/cleanup race condition in ContainersLauncher

2017-09-21 Thread Miklos Szegedi (JIRA)
Miklos Szegedi created YARN-7239:


 Summary: Possible launch/cleanup race condition in 
ContainersLauncher
 Key: YARN-7239
 URL: https://issues.apache.org/jira/browse/YARN-7239
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Miklos Szegedi


ContainersLauncher.handle() submits the launch job and then adds the job into 
the collection risking that the cleanup will miss it and return. This should be 
in reversed order in all 3 instances:
{code}
containerLauncher.submit(launch);
running.put(containerId, launch);
{code}
The cleanup code the above code is racing with:
{code}
ContainerLaunch runningContainer = running.get(containerId);
if (runningContainer == null) {
  // Container not launched. So nothing needs to be done.
  LOG.info("Container " + containerId + " not running, nothing to 
signal.");
  return;
}
...
{code}







[jira] [Created] (YARN-7238) Documentation for API based scheduler configuration management

2017-09-21 Thread Jonathan Hung (JIRA)
Jonathan Hung created YARN-7238:
---

 Summary: Documentation for API based scheduler configuration 
management
 Key: YARN-7238
 URL: https://issues.apache.org/jira/browse/YARN-7238
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jonathan Hung
Assignee: Jonathan Hung


Documentation for which configurations to set and how to use the scheduler 
configuration mutation API.






[jira] [Commented] (YARN-6623) Add support to turn off launching privileged containers in the container-executor

2017-09-21 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175308#comment-16175308
 ] 

Vinod Kumar Vavilapalli commented on YARN-6623:
---

[~djp], this is too big a change and a patch to put into 2.8.2. We have already 
documented that the docker feature is alpha, not for production use (and 
documenting more). Given this, I don't think we should add more risk to 2.8.2.

> Add support to turn off launching privileged containers in the 
> container-executor
> -
>
> Key: YARN-6623
> URL: https://issues.apache.org/jira/browse/YARN-6623
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Blocker
> Attachments: YARN-6623.001.patch, YARN-6623.002.patch, 
> YARN-6623.003.patch, YARN-6623.004.patch, YARN-6623.005.patch, 
> YARN-6623.006.patch, YARN-6623.007.patch, YARN-6623.008.patch, 
> YARN-6623.009.patch, YARN-6623.010.patch
>
>
> Currently, launching privileged containers is controlled by the NM. We should 
> add a flag to the container-executor.cfg allowing admins to disable launching 
> privileged containers at the container-executor level.






[jira] [Commented] (YARN-6953) Clean up ResourceUtils.setMinimumAllocationForMandatoryResources() and setMaximumAllocationForMandatoryResources()

2017-09-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175294#comment-16175294
 ] 

Hadoop QA commented on YARN-6953:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} YARN-6953 does not apply to YARN-3926. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6953 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12885075/YARN-6953-YARN-3926.006.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17574/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Clean up ResourceUtils.setMinimumAllocationForMandatoryResources() and 
> setMaximumAllocationForMandatoryResources()
> --
>
> Key: YARN-6953
> URL: https://issues.apache.org/jira/browse/YARN-6953
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6953-YARN-3926.001.patch, 
> YARN-6953-YARN-3926.002.patch, YARN-6953-YARN-3926.003.patch, 
> YARN-6953-YARN-3926.004.patch, YARN-6953-YARN-3926.005.patch, 
> YARN-6953-YARN-3926.006.patch, YARN-6953-YARN-3926-WIP.patch
>
>
> The {{setMinimumAllocationForMandatoryResources()}} and 
> {{setMaximumAllocationForMandatoryResources()}} methods are quite convoluted. 
>  They'd be much simpler if they just handled CPU and memory manually instead 
> of trying to be clever about doing it in a loop.  There are also issues, such 
> as the log warning always talking about memory or the last element of the 
> inner array being a copy of the first element.






[jira] [Updated] (YARN-6620) [YARN-6223] NM Java side code changes to support isolate GPU devices by using CGroups

2017-09-21 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6620:
-
Attachment: YARN-6620.010.patch

Attached the 010 patch. I tested it in my local environment with 2 GPU devices; 
it does proper GPU isolation on top of YARN-3926. (Will add a draft of the 
documentation soon.)

> [YARN-6223] NM Java side code changes to support isolate GPU devices by using 
> CGroups
> -
>
> Key: YARN-6620
> URL: https://issues.apache.org/jira/browse/YARN-6620
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6620.001.patch, YARN-6620.002.patch, 
> YARN-6620.003.patch, YARN-6620.004.patch, YARN-6620.005.patch, 
> YARN-6620.006-WIP.patch, YARN-6620.007.patch, YARN-6620.008.patch, 
> YARN-6620.009.patch, YARN-6620.010.patch
>
>
> This JIRA plans to add support for:
> 1) GPU configuration for NodeManagers
> 2) Isolation in CGroups (Java side; see the sketch below)
> 3) NM restart and recovery of allocated GPU devices
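
For context on item 2), a minimal sketch of what CGroups-based GPU isolation 
can look like (illustrative only: it assumes the cgroups v1 devices controller 
and NVIDIA's major device number 195, and none of these names come from the 
attached patches):
{code}
// Deny a container access to a GPU it was not allocated by writing an entry
// to the cgroups devices controller; for NVIDIA, minor number == GPU index.
void denyGpu(String containerCgroupPath, int gpuIndex) throws IOException {
  // e.g. containerCgroupPath = "/sys/fs/cgroup/devices/yarn/container_X"
  try (FileWriter w = new FileWriter(containerCgroupPath + "/devices.deny")) {
    w.write("c 195:" + gpuIndex + " rwm");  // char device, read/write/mknod
  }
}
{code}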



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6962) Add support for updateContainers when allocating using FederationInterceptor

2017-09-21 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175289#comment-16175289
 ] 

Subru Krishnan commented on YARN-6962:
--

Thanks [~botong] for adding the test, +1 on the latest patch.

> Add support for updateContainers when allocating using FederationInterceptor
> 
>
> Key: YARN-6962
> URL: https://issues.apache.org/jira/browse/YARN-6962
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-6962.v1.patch, YARN-6962.v2.patch
>
>
> Container update is introduced in YARN-5221. Federation Interceptor needs to 
> support it when splitting (merging) the allocate request (response).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7237) Cleanup usages of ResourceProfiles

2017-09-21 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175284#comment-16175284
 ] 

Wangda Tan commented on YARN-7237:
--

[~sunilg], could you please help review the patch when you have time?

> Cleanup usages of ResourceProfiles
> --
>
> Key: YARN-7237
> URL: https://issues.apache.org/jira/browse/YARN-7237
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7237.001.patch
>
>
> While doing tests, I found a couple of issues:
> 1) When using {{ProfileCapability#getProfileCapabilityOverride}}, it 
> overwrites whatever is specified in resource-profiles.json when the value 
> is >= 0, which is different from the javadocs of {{ProfileCapability}}:
> bq. For example, if you have a resource profile "small" that maps to <4096M, 
> 2 cores, 1 gpu> and you set the capability override to <8192M, 0 cores, 0 
> gpu>, then the actual resource allocation on the ResourceManager will be 
> <8192M, 2 cores, 1 gpu>
> To me, the correct behavior should be to overwrite only when the value is > 0. 
> The reason is that resource values default to 0. For example, assume we have 
> a profile {{"a" = (mem=3, vcore=5, res_1=7)}} and create a capability 
> override {{capability = new Resource(8)}}. The final result should be 
> (mem=8, vcore=5, res_1=7) instead of (mem=8, vcore=0, res_1=0).
> 2) ResourceProfileManager now loads the minimum/maximum profile from the 
> config file (resource-profiles.json). To me this is not correct, because the 
> minimum/maximum allocation for each resource type is already specified in 
> {{resource-types.xml}}. We should always use 
> {{ResourceUtils#getResourceTypesMinimum/MaximumAllocation}} to get them from 
> resource-types.xml and yarn-site.xml. These values will be added to the 
> profiles so clients can get these configs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7237) Cleanup usages of ResourceProfiles

2017-09-21 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7237:
-
Attachment: YARN-7237.001.patch

> Cleanup usages of ResourceProfiles
> --
>
> Key: YARN-7237
> URL: https://issues.apache.org/jira/browse/YARN-7237
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7237.001.patch
>
>
> While doing tests, I found a couple of issues:
> 1) When using {{ProfileCapability#getProfileCapabilityOverride}}, it 
> overwrites whatever is specified in resource-profiles.json when the value 
> is >= 0, which is different from the javadocs of {{ProfileCapability}}:
> bq. For example, if you have a resource profile "small" that maps to <4096M, 
> 2 cores, 1 gpu> and you set the capability override to <8192M, 0 cores, 0 
> gpu>, then the actual resource allocation on the ResourceManager will be 
> <8192M, 2 cores, 1 gpu>
> To me, the correct behavior should be to overwrite only when the value is > 0. 
> The reason is that resource values default to 0. For example, assume we have 
> a profile {{"a" = (mem=3, vcore=5, res_1=7)}} and create a capability 
> override {{capability = new Resource(8)}}. The final result should be 
> (mem=8, vcore=5, res_1=7) instead of (mem=8, vcore=0, res_1=0).
> 2) ResourceProfileManager now loads the minimum/maximum profile from the 
> config file (resource-profiles.json). To me this is not correct, because the 
> minimum/maximum allocation for each resource type is already specified in 
> {{resource-types.xml}}. We should always use 
> {{ResourceUtils#getResourceTypesMinimum/MaximumAllocation}} to get them from 
> resource-types.xml and yarn-site.xml. These values will be added to the 
> profiles so clients can get these configs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7237) Cleanup usages of ResourceProfiles

2017-09-21 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-7237:


 Summary: Cleanup usages of ResourceProfiles
 Key: YARN-7237
 URL: https://issues.apache.org/jira/browse/YARN-7237
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wangda Tan
Assignee: Wangda Tan
Priority: Critical


While doing tests, I found a couple of issues:
1) When using {{ProfileCapability#getProfileCapabilityOverride}}, it overwrites 
whatever is specified in resource-profiles.json when the value is >= 0, which 
is different from the javadocs of {{ProfileCapability}}:

bq. For example, if you have a resource profile "small" that maps to <4096M, 2 
cores, 1 gpu> and you set the capability override to <8192M, 0 cores, 0 gpu>, 
then the actual resource allocation on the ResourceManager will be <8192M, 2 
cores, 1 gpu>

To me, the correct behavior should be to overwrite only when the value is > 0. 
The reason is that resource values default to 0. For example, assume we have a 
profile {{"a" = (mem=3, vcore=5, res_1=7)}} and create a capability override 
{{capability = new Resource(8)}}. The final result should be (mem=8, vcore=5, 
res_1=7) instead of (mem=8, vcore=0, res_1=0).

2) ResourceProfileManager now loads the minimum/maximum profile from the 
config file (resource-profiles.json). To me this is not correct, because the 
minimum/maximum allocation for each resource type is already specified in 
{{resource-types.xml}}. We should always use 
{{ResourceUtils#getResourceTypesMinimum/MaximumAllocation}} to get them from 
resource-types.xml and yarn-site.xml. These values will be added to the 
profiles so clients can get these configs.
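
For illustration, a minimal sketch of the proposed "overwrite only when the 
value is > 0" semantics (the helper is hypothetical, not from the attached 
patch, and only memory and vcores are shown; the same rule would apply to the 
other resource types):
{code}
// Hypothetical helper: treat 0 as "not set" so profile values survive.
Resource mergeProfileOverride(Resource profile, Resource override) {
  Resource result = Resource.newInstance(
      profile.getMemorySize(), profile.getVirtualCores());
  if (override.getMemorySize() > 0) {
    result.setMemorySize(override.getMemorySize());
  }
  if (override.getVirtualCores() > 0) {
    result.setVirtualCores(override.getVirtualCores());
  }
  return result;
}
{code}
With profile "a" above and an override of {{Resource.newInstance(8, 0)}}, this 
yields (mem=8, vcore=5) as the description argues.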



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6623) Add support to turn off launching privileged containers in the container-executor

2017-09-21 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175249#comment-16175249
 ] 

Miklos Szegedi commented on YARN-6623:
--

Indeed, I verified that we still need the strlen in the first case, and I do 
not see an overflow possibility for tmp_buffer_2, so those should be okay.

> Add support to turn off launching privileged containers in the 
> container-executor
> -
>
> Key: YARN-6623
> URL: https://issues.apache.org/jira/browse/YARN-6623
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Blocker
> Attachments: YARN-6623.001.patch, YARN-6623.002.patch, 
> YARN-6623.003.patch, YARN-6623.004.patch, YARN-6623.005.patch, 
> YARN-6623.006.patch, YARN-6623.007.patch, YARN-6623.008.patch, 
> YARN-6623.009.patch, YARN-6623.010.patch
>
>
> Currently, launching privileged containers is controlled by the NM. We should 
> add a flag to the container-executor.cfg allowing admins to disable launching 
> privileged containers at the container-executor level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6546) SLS is slow while loading 10k queues

2017-09-21 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175225#comment-16175225
 ] 

Miklos Szegedi commented on YARN-6546:
--

Thank you for the patch [~yufeigu]. If I understand correctly, we now trace 
queues only when they get an allocation in {{updateQueueMetrics}}. Could we 
also call {{traceQueueIfNotTraced(queueName);}} when an application is 
scheduled into the queue?

> SLS is slow while loading 10k queues
> 
>
> Key: YARN-6546
> URL: https://issues.apache.org/jira/browse/YARN-6546
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: Desktop.png, YARN-6546.001.patch
>
>
> It takes a long time (more than 10 minutes) to load 10k queues in SLS. Based 
> on the profiler results, the problem appears to be in 
> {{com.codahale.metrics.CsvReporter}}. SLS creates 14 .csv files for each 
> leaf queue and updates them constantly during execution. It is not necessary 
> to log information for inactive queues.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6623) Add support to turn off launching privileged containers in the container-executor

2017-09-21 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175201#comment-16175201
 ] 

Varun Vasudev commented on YARN-6623:
-

{noformat}   
 How would you detect the condition where the buffer doesn't have enough 
size?

You copy at most bufflen-strlen(buff) characters including \0. As I said only 
one strlen is enough in this case.
{noformat}
I think I understand what you're saying. The current implementation checks if 
the buffer has enough space to hold the final string and will return an error 
if there isn't enough space. Without the additional strlen, how would I check 
that the buffer can fit the additional string?

{noformat}
381 quote_and_append_arg(&tmp_buffer, &tmp_buffer_size, " ", image_name); 

That space might need to be added to the quote_and_append_arg function for 
safety reasons.

I didn't get this. Can you please explain?

When we add a new arg then the space should be added by default in 
quote_and_append_arg every time.
{noformat}

Ah I see what you're saying. Thanks for the explanation. That line of code got 
removed in the latest patch to address the feedback from 
[~shaneku...@gmail.com] in his review.

{noformat}
Is there any benefit to strcpy + strcat?

There is no need to do an strlen. Was the intention maybe to bound by the size 
of tmp_buffer_2?
{noformat}

Yep. The thinking was to limit it to the size of tmp_buffer_2.

> Add support to turn off launching privileged containers in the 
> container-executor
> -
>
> Key: YARN-6623
> URL: https://issues.apache.org/jira/browse/YARN-6623
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Blocker
> Attachments: YARN-6623.001.patch, YARN-6623.002.patch, 
> YARN-6623.003.patch, YARN-6623.004.patch, YARN-6623.005.patch, 
> YARN-6623.006.patch, YARN-6623.007.patch, YARN-6623.008.patch, 
> YARN-6623.009.patch, YARN-6623.010.patch
>
>
> Currently, launching privileged containers is controlled by the NM. We should 
> add a flag to the container-executor.cfg allowing admins to disable launching 
> privileged containers at the container-executor level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6623) Add support to turn off launching privileged containers in the container-executor

2017-09-21 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175181#comment-16175181
 ] 

Varun Vasudev commented on YARN-6623:
-

[~djp] - makes sense to push it into 2.8.2; [~andrew.wang] - thanks for 
updating the target versions.

> Add support to turn off launching privileged containers in the 
> container-executor
> -
>
> Key: YARN-6623
> URL: https://issues.apache.org/jira/browse/YARN-6623
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Blocker
> Attachments: YARN-6623.001.patch, YARN-6623.002.patch, 
> YARN-6623.003.patch, YARN-6623.004.patch, YARN-6623.005.patch, 
> YARN-6623.006.patch, YARN-6623.007.patch, YARN-6623.008.patch, 
> YARN-6623.009.patch, YARN-6623.010.patch
>
>
> Currently, launching privileged containers is controlled by the NM. We should 
> add a flag to the container-executor.cfg allowing admins to disable launching 
> privileged containers at the container-executor level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7135) Clean up lock-try order in common scheduler code

2017-09-21 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175164#comment-16175164
 ] 

Wangda Tan commented on YARN-7135:
--

[~jojochuang], 

From the Java doc: 
http://docs.oracle.com/javase/6/docs/api/java/util/concurrent/locks/Lock.html#lock()

{code} 
Implementation Considerations

A Lock implementation may be able to detect erroneous use of the lock, such as 
an invocation that would cause deadlock, and may throw an (unchecked) exception 
in such circumstances. The circumstances and the exception type must be 
documented by that Lock implementation.
{code}

That means that, by design, {{ReentrantLock#lock}} won't throw any exception, 
including unchecked ones. I'm +0 on this fix if it won't break anything.
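
For illustration, the ordering still matters even if {{lock()}} never throws 
in practice (plain JDK usage, not code from the patch): if {{lock()}} sat 
inside the {{try}} and failed, the {{finally}} block would call {{unlock()}} 
on a lock the thread never acquired, which throws 
{{IllegalMonitorStateException}} and masks the original failure.
{code}
// java.util.concurrent.locks.ReentrantLock
ReentrantLock lock = new ReentrantLock();

// Acquire outside the try, so unlock() in finally only runs if the lock
// was actually acquired.
lock.lock();
try {
  // ... critical section ...
} finally {
  lock.unlock();
}
{code}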

> Clean up lock-try order in common scheduler code
> 
>
> Key: YARN-7135
> URL: https://issues.apache.org/jira/browse/YARN-7135
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: weiyuan
>  Labels: newbie
> Attachments: YARN-7135.001.patch, YARN-7135.002.patch, 
> YARN-7135.003.patch
>
>
> There are many places that follow the pattern:{code}try {
>   lock.lock();
>   ...
> } finally {
>   lock.unlock();
> }{code}
> There are a couple of reasons that's a bad idea.  The correct pattern 
> is:{code}lock.lock();
> try {
>   ...
> } finally {
>   lock.unlock();
> }{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5195) RM intermittently crashed with NPE while handling APP_ATTEMPT_REMOVED event when async-scheduling enabled in CapacityScheduler

2017-09-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175147#comment-16175147
 ] 

Hadoop QA commented on YARN-5195:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
24s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} branch-2.8 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 39s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:c2d96dd |
| JIRA Issue | YARN-5195 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888323/YARN-5195-branch-2.8.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 488fc670ea27 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2.8 / c0bb242 |
| Default Java | 1.7.0_151 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17570/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17570/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17570/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RM intermittently crashed with NPE while handling APP_ATTEMPT_REMOVED event 
> when async-scheduling enabled in CapacityScheduler
> 

[jira] [Comment Edited] (YARN-7102) NM heartbeat stuck when responseId overflows MAX_INT

2017-09-21 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175133#comment-16175133
 ] 

Botong Huang edited comment on YARN-7102 at 9/21/17 5:19 PM:
-

Thanks [~jlowe] for the review and good point. We don't want RM to resync all 
nodes if RM becomes slow. 

How about we take one step back, allowing {{request.responseId > 
lastResponseId}} as it is now? We simply fix the overflow problem without 
changing anything else. Specifically, add one check: if {{request.responseId == 
lastResponseId}} then skip other checks. This would be my initial proposal in 
YARN-6640 v1 patch: 

{code}
if (request.getResponseId() != lastResponse.getResponseId()) {
  if ((request.getResponseId() + 1) == lastResponse.getResponseId()) {
    /* heartbeat one step old, simply return lastResponse */
    return lastResponse;
  } else if (request.getResponseId() + 1 < lastResponse.getResponseId()) {
    (resync NM...)
  }
}
(process the heartbeat...)
{code}

There's still the potential for a too-slow RM to cause an NM resync, but that 
is only possible for NMs whose responseId just wrapped around. This should be 
fine, I guess.


was (Author: botong):
Thanks [~jlowe] for the review and good point. We don't want RM to resync all 
nodes if RM becomes slow. 

How about we take one step back, allowing {{request.responseId > 
lastResponseId}} as it is now? We simply fix the overflow problem without 
changing anything else. Specifically, add one check: if {{request.responseId == 
lastResponseId}} then skip other checks. This would be my initial proposal in 
YARN-6640 v1 patch: 

{code}
if (request.getResponseId() != lastResponse.getResponseId()) {
  if ((request.getResponseId() + 1) == lastResponse.getResponseId()) {
    /* heartbeat one step old, simply return lastResponse */
    return lastResponse;
  } else if (request.getResponseId() + 1 < lastResponse.getResponseId()) {
    (resync NM)
  }
}
(process the heartbeat...)
{code}

There's still the potential for a too-slow RM to cause an NM resync, but that 
is only possible for NMs whose responseId just wrapped around. This should be 
fine, I guess.

> NM heartbeat stuck when responseId overflows MAX_INT
> 
>
> Key: YARN-7102
> URL: https://issues.apache.org/jira/browse/YARN-7102
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Critical
> Attachments: YARN-7102.v1.patch, YARN-7102.v2.patch, 
> YARN-7102.v3.patch, YARN-7102.v4.patch, YARN-7102.v5.patch, YARN-7102.v6.patch
>
>
> ResponseId overflow problem in NM-RM heartbeat. This is same as AM-RM 
> heartbeat in YARN-6640, please refer to YARN-6640 for details. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7102) NM heartbeat stuck when responseId overflows MAX_INT

2017-09-21 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175133#comment-16175133
 ] 

Botong Huang edited comment on YARN-7102 at 9/21/17 5:19 PM:
-

Thanks [~jlowe] for the review and good point. We don't want RM to resync all 
nodes if RM becomes slow. 

How about we take one step back, allowing {{request.responseId > 
lastResponseId}} as it is now? We simply fix the overflow problem without 
changing anything else. Specifically, add one check: if {{request.responseId == 
lastResponseId}} then skip other checks. This would be my initial proposal in 
YARN-6640 v1 patch: 

{code}
if (request.getResponseId() != lastResponse.getResponseId()) {
  if ((request.getResponseId() + 1) == lastResponse.getResponseId()) {
    /* heartbeat one step old, simply return lastResponse */
    return lastResponse;
  } else if (request.getResponseId() + 1 < lastResponse.getResponseId()) {
    (resync NM)
  }
}
(process the heartbeat...)
{code}

There's still the potential for a too-slow RM to cause an NM resync, but that 
is only possible for NMs whose responseId just wrapped around. This should be 
fine, I guess.


was (Author: botong):
Thanks [~jlowe] for the review and good point. We don't want RM to resync all 
nodes if RM becomes slow. 

How about we take one step back, allowing {{request.responseId > 
lastResponseId}} as it is now? We simply fix the overflow problem without 
changing anything else. Specifically, add one check: if {{request.responseId == 
lastResponseId}} then skip other checks. This would be my initial proposal in 
YARN-6640 v1 patch: 

{code}
if (request.getResponseId() != lastResponse.getResponseId()) {
  if ((request.getResponseId() + 1) == lastResponse.getResponseId()) {
    /* heartbeat one step old, simply return lastResponse */
    return lastResponse;
  } else if (request.getResponseId() + 1 < lastResponse.getResponseId()) {
    throw new InvalidApplicationMasterRequestException(message);
  }
}
(process the heartbeat...)
{code}

There's still the potential for a too-slow RM to cause an NM resync, but that 
is only possible for NMs whose responseId just wrapped around. This should be 
fine, I guess.

> NM heartbeat stuck when responseId overflows MAX_INT
> 
>
> Key: YARN-7102
> URL: https://issues.apache.org/jira/browse/YARN-7102
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Critical
> Attachments: YARN-7102.v1.patch, YARN-7102.v2.patch, 
> YARN-7102.v3.patch, YARN-7102.v4.patch, YARN-7102.v5.patch, YARN-7102.v6.patch
>
>
> ResponseId overflow problem in NM-RM heartbeat. This is same as AM-RM 
> heartbeat in YARN-6640, please refer to YARN-6640 for details. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7210) Some NPE fixes in Registry DNS

2017-09-21 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-7210:
-
Summary: Some NPE fixes in Registry DNS  (was: Some fixes related to 
Registry DNS)

> Some NPE fixes in Registry DNS
> --
>
> Key: YARN-7210
> URL: https://issues.apache.org/jira/browse/YARN-7210
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7210.yarn-native-services.01.patch, 
> YARN-7210.yarn-native-services.02.patch, 
> YARN-7210.yarn-native-services.03.patch, 
> YARN-7210.yarn-native-services.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7102) NM heartbeat stuck when responseId overflows MAX_INT

2017-09-21 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175133#comment-16175133
 ] 

Botong Huang commented on YARN-7102:


Thanks [~jlowe] for the review and good point. We don't want RM to resync all 
nodes if RM becomes slow. 

How about we take one step back, allowing {{request.responseId > 
lastResponseId}} as it is now? We simply fix the overflow problem without 
changing anything else. Specifically, add one check: if {{request.responseId == 
lastResponseId}} then skip other checks. This would be my initial proposal in 
YARN-6640 v1 patch: 

{code}
if (request.getResponseId() != lastResponse.getResponseId()) {
  if ((request.getResponseId() + 1) == lastResponse.getResponseId()) {
    /* heartbeat one step old, simply return lastResponse */
    return lastResponse;
  } else if (request.getResponseId() + 1 < lastResponse.getResponseId()) {
    throw new InvalidApplicationMasterRequestException(message);
  }
}
(process the heartbeat...)
{code}

There's still the potential for a too-slow RM to cause an NM resync, but that 
is only possible for NMs whose responseId just wrapped around. This should be 
fine, I guess.
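
As a standalone note on the overflow itself (plain Java semantics, not code 
from any patch): {{Integer.MAX_VALUE + 1}} wraps to {{Integer.MIN_VALUE}}, so 
the equality-based "one step behind" check keeps working across the boundary, 
while magnitude comparisons can be misled.
{code}
int lastId = Integer.MIN_VALUE;     // the RM's counter just wrapped around
int requestId = Integer.MAX_VALUE;  // the NM is exactly one step behind

System.out.println(requestId + 1 == lastId);  // true: int overflow wraps
System.out.println(requestId < lastId);       // false: magnitude misleads
{code}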

> NM heartbeat stuck when responseId overflows MAX_INT
> 
>
> Key: YARN-7102
> URL: https://issues.apache.org/jira/browse/YARN-7102
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Critical
> Attachments: YARN-7102.v1.patch, YARN-7102.v2.patch, 
> YARN-7102.v3.patch, YARN-7102.v4.patch, YARN-7102.v5.patch, YARN-7102.v6.patch
>
>
> ResponseId overflow problem in NM-RM heartbeat. This is same as AM-RM 
> heartbeat in YARN-6640, please refer to YARN-6640 for details. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7009) TestNMClient.testNMClientNoCleanupOnStop is flaky by design

2017-09-21 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175131#comment-16175131
 ] 

Arun Suresh commented on YARN-7009:
---

Thanks for the update [~miklos.szeg...@cloudera.com],

Can we move the {{DebugSumContainerStateListener}} out of the {{NodeManager}} 
class and somewhere into the test package? I like the use of the lambda 
expression, but we must make sure to replace it for the 2.x patch.

> TestNMClient.testNMClientNoCleanupOnStop is flaky by design
> ---
>
> Key: YARN-7009
> URL: https://issues.apache.org/jira/browse/YARN-7009
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7009.000.patch, YARN-7009.001.patch, 
> YARN-7009.002.patch, YARN-7009.003.patch
>
>
> The sleeps that wait for a transition to reinit and then back to running are 
> not long enough; they can miss the reinit event.
> {code}
> java.lang.AssertionError: Exception is not expected: 
> org.apache.hadoop.yarn.exceptions.YarnException: Cannot perform RE_INIT on 
> [container_1502735389852_0001_01_01]. Current state is [REINITIALIZING, 
> isReInitializing=true].
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.preReInitializeOrLocalizeCheck(ContainerManagerImpl.java:1772)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1697)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1668)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagementProtocolPBServiceImpl.reInitializeContainer(ContainerManagementProtocolPBServiceImpl.java:214)
>   at 
> org.apache.hadoop.yarn.proto.ContainerManagementProtocol$ContainerManagementProtocolService$2.callBlockingMethod(ContainerManagementProtocol.java:237)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testReInitializeContainer(TestNMClient.java:567)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testContainerManagement(TestNMClient.java:405)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testNMClientNoCleanupOnStop(TestNMClient.java:214)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Cannot perform 
> RE_INIT on [container_1502735389852_0001_01_01]. Current state is 
> [REINITIALIZING, isReInitializing=true].
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.preReInitializeOrLocalizeCheck(ContainerManagerImpl.java:1772)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1697)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1668)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagementProtocolPBServiceImpl.reInitializeContainer(ContainerManagementProtocolPBServiceImpl.java:214)
>   at 
> org.apache.hadoop.yarn.proto.ContainerManagementProtocol$ContainerManagementProtocolService$2.callBlockingMethod(ContainerManagementProtocol.java:237)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBuf

[jira] [Commented] (YARN-7135) Clean up lock-try order in common scheduler code

2017-09-21 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175060#comment-16175060
 ] 

Wei-Chiu Chuang commented on YARN-7135:
---

[~v123582]
the patch does not apply against trunk. Since the scope of the patch is wide, 
this is not unexpected. Please rebase the code and resubmit it.

[~leftnoteasy]
please correct me if I'm wrong. The fact that ReentrantLock.lock doesn't throw 
a checked exception does not imply it does not throw exceptions at all. Having 
the lock() outside the try{} block looks like a good practice to me.

> Clean up lock-try order in common scheduler code
> 
>
> Key: YARN-7135
> URL: https://issues.apache.org/jira/browse/YARN-7135
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: weiyuan
>  Labels: newbie
> Attachments: YARN-7135.001.patch, YARN-7135.002.patch, 
> YARN-7135.003.patch
>
>
> There are many places that follow the pattern:{code}try {
>   lock.lock();
>   ...
> } finally {
>   lock.unlock();
> }{code}
> There are a couple of reasons that's a bad idea.  The correct pattern 
> is:{code}lock.lock();
> try {
>   ...
> } finally {
>   lock.unlock();
> }{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5195) RM intermittently crashed with NPE while handling APP_ATTEMPT_REMOVED event when async-scheduling enabled in CapacityScheduler

2017-09-21 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-5195:
-
Attachment: YARN-5195-branch-2.8.001.patch

Attaching the branch-2.8 patch again so the QA bot can comment on that as well.

The unit test failures for the branch-2.7 run are known issues with the Jenkins 
setup on that branch.

+1 for both patches.  I'll commit these later today if the Jenkins results for 
2.8 are OK as well.


> RM intermittently crashed with NPE while handling APP_ATTEMPT_REMOVED event 
> when async-scheduling enabled in CapacityScheduler
> --
>
> Key: YARN-5195
> URL: https://issues.apache.org/jira/browse/YARN-5195
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Karam Singh
>Assignee: sandflee
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: YARN-5195.01.patch, YARN-5195.02.patch, 
> YARN-5195.03.patch, YARN-5195-branch-2.7.001.patch, 
> YARN-5195-branch-2.8.001.patch, YARN-5195-branch-2.8.001.patch
>
>
> While running gridmix experiments, we once came across an incident where the 
> RM went down with the following exception:
> {noformat}
> 2016-05-28 15:45:24,459 [ResourceManager Event Processor] FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type APP_ATTEMPT_REMOVED to the scheduler
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.completedContainer(LeafQueue.java:1282)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.completedContainerInternal(CapacityScheduler.java:1469)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.completedContainer(AbstractYarnScheduler.java:497)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.doneApplicationAttempt(CapacityScheduler.java:860)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1319)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:127)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:704)
> at java.lang.Thread.run(Thread.java:745)
> 2016-05-28 15:45:24,460 [ApplicationMasterLauncher #49] INFO 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning 
> master appattempt_1464449118385_0006_01
> 2016-05-28 15:45:24,460 [ResourceManager Event Processor] INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7226) Whitelisted variables do not support delayed variable expansion

2017-09-21 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16174846#comment-16174846
 ] 

Jason Lowe commented on YARN-7226:
--

Pinging [~sidharta-s] and [~vvasudev], since the {{var=var:-value}} logic was 
added in YARN-3853, which broke this; maybe I'm missing the use case where 
this added logic is necessary.

> Whitelisted variables do not support delayed variable expansion
> ---
>
> Key: YARN-7226
> URL: https://issues.apache.org/jira/browse/YARN-7226
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: YARN-7226.001.patch, YARN-7226.002.patch
>
>
> The nodemanager supports a configurable list of environment variables, via 
> yarn.nodemanager.env-whitelist, that will be propagated to the container's 
> environment unless those variables were specified in the container launch 
> context.  Unfortunately the handling of these whitelisted variables prevents 
> using delayed variable expansion.  For example, if a user shipped their own 
> version of hadoop with their job via the distributed cache and specified:
> {noformat}
> HADOOP_COMMON_HOME={{PWD}}/my-private-hadoop/
> {noformat}
>  as part of their job, the variable will be set as the *literal* string:
> {noformat}
> $PWD/my-private-hadoop/
> {noformat}
> rather than having $PWD expand to the container's current directory as it 
> does for any other, non-whitelisted variable being set to the same value.
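
For reference, a simplified sketch of the delayed-expansion convention the 
description relies on (Unix-only and illustrative; the real NM also handles 
Windows syntax): {{VAR}} markers are rewritten to shell syntax so the 
*container's* shell expands them at launch time, which is exactly what the 
whitelist handling currently defeats.
{code}
// Simplified Unix-only rewrite of YARN's {{VAR}} placeholders: turn them
// into $VAR so the shell expands them when the container starts.
String raw = "{{PWD}}/my-private-hadoop/";
String unixForm = raw.replace("{{", "$").replace("}}", "");
// unixForm == "$PWD/my-private-hadoop/", expanded inside the container
{code}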



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7210) Some fixes related to Registry DNS

2017-09-21 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16174838#comment-16174838
 ] 

Billie Rinaldi commented on YARN-7210:
--

+1 for patch 04. I was able to run the failing unit tests locally, and in any 
case, this patch should not be affecting those unit tests.

> Some fixes related to Registry DNS
> --
>
> Key: YARN-7210
> URL: https://issues.apache.org/jira/browse/YARN-7210
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7210.yarn-native-services.01.patch, 
> YARN-7210.yarn-native-services.02.patch, 
> YARN-7210.yarn-native-services.03.patch, 
> YARN-7210.yarn-native-services.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6968) Hardcoded absolute pathname in DockerLinuxContainerRuntime

2017-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16174789#comment-16174789
 ] 

Hudson commented on YARN-6968:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12934 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12934/])
YARN-6968. Hardcoded absolute pathname in DockerLinuxContainerRuntime. (jlowe: 
rev 10d7493587643b52cee5fde87eca9ef99c422a70)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/TestDockerContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandlerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandler.java


> Hardcoded absolute pathname in DockerLinuxContainerRuntime
> --
>
> Key: YARN-6968
> URL: https://issues.apache.org/jira/browse/YARN-6968
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Miklos Szegedi
>Assignee: Eric Badger
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6968.001.patch, YARN-6968.002.patch, 
> YARN-6968.003.patch, YARN-6968.004.patch
>
>
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
>  has a hardcoded absolute pathname that is being flagged by findbugs.
> This could be done after YARN-6757 is checked in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


