[jira] [Commented] (YARN-6647) RM can crash during transitionToStandby due to InterruptedException

2017-11-20 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260382#comment-16260382
 ] 

Bibin A Chundatt commented on YARN-6647:


{quote}
Shouldn't the patch handle storeNewToken and updateStoredToken in a similar 
way? Is that intentional, or was it missed as part of the patch?
{quote}
Yes, that was intentional. {{storeNewToken}} and {{updateStoredToken}} are not 
related to the {{ExpiredTokenRemover}} thread.
Could you point out a flow in which an {{interrupt}} call would happen for 
{{storeNewToken}} or {{updateStoredToken}}?

{quote}
The AsyncDispatcher changes don't look meaningful to me.
{quote}
An {{InterruptedException}} can also happen if the {{put}} is done from a thread 
whose {{Thread.isInterrupted()}} state is already {{true}}.
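For illustration, a minimal standalone sketch (class name invented; not the RM 
code) of the behaviour being described: {{BlockingQueue#put}} throws 
{{InterruptedException}} as soon as the calling thread's interrupt flag is set, 
even when the queue has plenty of free space.
{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical demo class, not part of YARN.
public class InterruptedPutDemo {
  public static void main(String[] args) {
    BlockingQueue<String> eventQueue = new LinkedBlockingQueue<>();
    // Simulate a caller whose interrupt flag is already set,
    // e.g. a thread interrupted during transitionToStandby.
    Thread.currentThread().interrupt();
    try {
      eventQueue.put("event");            // fails immediately, queue is empty
    } catch (InterruptedException e) {
      System.out.println("put() threw even though the queue had space: " + e);
      Thread.currentThread().interrupt(); // restore the flag for callers
    }
  }
}
{code}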





> RM can crash during transitionToStandby due to InterruptedException
> ---
>
> Key: YARN-6647
> URL: https://issues.apache.org/jira/browse/YARN-6647
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha4
>Reporter: Jason Lowe
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-6647.001.patch, YARN-6647.002.patch
>
>
> Noticed some tests were failing due to the JVM shutting down early.  I was 
> able to reproduce this occasionally with TestKillApplicationWithRMHA.  
> Stacktrace to follow.






[jira] [Commented] (YARN-6647) RM can crash during transitionToStandby due to InterruptedException

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260341#comment-16260341
 ] 

Hadoop QA commented on YARN-6647:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
7s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
12s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m  0s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6647 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898609/YARN-6647.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a5f21a45e98f 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Bui

[jira] [Commented] (YARN-7545) RM REST API cluster apps missing documentation for "resourceRequests"

2017-11-20 Thread Grant Sohn (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260337#comment-16260337
 ] 

Grant Sohn commented on YARN-7545:
--

An example from the sample XML output is:

{noformat}
<resourceRequests>
  <capability>
    <memory>4096</memory>
    <vCores>1</vCores>
  </capability>
  <nodeLabelExpression/>
  <numContainers>0</numContainers>
  <priority>
    <priority>0</priority>
  </priority>
  <relaxLocality>true</relaxLocality>
  <resourceName>*</resourceName>
</resourceRequests>
<resourceRequests>
  <capability>
    <memory>4096</memory>
    <vCores>1</vCores>
  </capability>
  <nodeLabelExpression/>
  <numContainers>0</numContainers>
  <priority>
    <priority>20</priority>
  </priority>
  <relaxLocality>true</relaxLocality>
  <resourceName>host1.domain.com</resourceName>
</resourceRequests>
<resourceRequests>
  <capability>
    <memory>4096</memory>
    <vCores>1</vCores>
  </capability>
  <nodeLabelExpression/>
  <numContainers>0</numContainers>
  <priority>
    <priority>20</priority>
  </priority>
  <relaxLocality>true</relaxLocality>
  <resourceName>host2.domain.com</resourceName>
</resourceRequests>
{noformat}

> RM REST API cluster apps missing documentation for "resourceRequests"
> -
>
> Key: YARN-7545
> URL: https://issues.apache.org/jira/browse/YARN-7545
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, resourcemanager, restapi
>Affects Versions: 3.0.0
>Reporter: Grant Sohn
>Priority: Minor
>
> I see "resourceRequests" when running the RM Cluster Apps API however it is 
> not mentioned in the current docs though it's visible in the example output.
> http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Applications_API






[jira] [Created] (YARN-7545) RM REST API cluster apps missing documentation for "resourceRequests"

2017-11-20 Thread Grant Sohn (JIRA)
Grant Sohn created YARN-7545:


 Summary: RM REST API cluster apps missing documentation for 
"resourceRequests"
 Key: YARN-7545
 URL: https://issues.apache.org/jira/browse/YARN-7545
 Project: Hadoop YARN
  Issue Type: Bug
  Components: documentation, resourcemanager, restapi
Affects Versions: 3.0.0
Reporter: Grant Sohn
Priority: Minor


I see "resourceRequests" when running the RM Cluster Apps API however it is not 
mentioned in the current docs though it's visible in the example output.

http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Applications_API






[jira] [Commented] (YARN-6647) RM can crash during transitionToStandby due to InterruptedException

2017-11-20 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260310#comment-16260310
 ] 

Rohith Sharma K S commented on YARN-6647:
-

Thanks Bibin for working on this JIRA.
# Shouldn't the patch handle storeNewToken and updateStoredToken in a similar 
way? Is that intentional, or was it missed as part of the patch?
# The AsyncDispatcher changes don't look meaningful to me. I think it is better 
to keep them as-is.

> RM can crash during transitionToStandby due to InterruptedException
> ---
>
> Key: YARN-6647
> URL: https://issues.apache.org/jira/browse/YARN-6647
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha4
>Reporter: Jason Lowe
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-6647.001.patch, YARN-6647.002.patch
>
>
> Noticed some tests were failing due to the JVM shutting down early.  I was 
> able to reproduce this occasionally with TestKillApplicationWithRMHA.  
> Stacktrace to follow.






[jira] [Commented] (YARN-7480) Render tooltips on columns where text is clipped

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260299#comment-16260299
 ] 

Hadoop QA commented on YARN-7480:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
23m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 50s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
18s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7480 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897518/YARN-7480.002.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux 9112e435e2f5 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 659e85e |
| maven | version: Apache Maven 3.3.9 |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/18602/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 410 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18602/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Render tooltips on columns where text is clipped
> 
>
> Key: YARN-7480
> URL: https://issues.apache.org/jira/browse/YARN-7480
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Vasudevan Skm
>Assignee: Vasudevan Skm
> Attachments: YARN-7480.001.patch, YARN-7480.002.patch
>
>
> In em-table, when text gets clipped, the information is lost. We need to render 
> a tooltip to show the full text in these cases.






[jira] [Resolved] (YARN-7539) NullPointerException in timeline service v2

2017-11-20 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S resolved YARN-7539.
-
Resolution: Duplicate

This is a dupe of YARN-6695. Closing it as a duplicate.

> NullPointerException in timeline service v2
> ---
>
> Key: YARN-7539
> URL: https://issues.apache.org/jira/browse/YARN-7539
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0
> Environment: Hadoop 3.1 (trunk)
> MIT Kerberos
> CentOS Linux release 7.0.1406
>Reporter: Eric Yang
>
> Application submission through the Docker YARN REST API generates a null 
> pointer exception after the job has finished.  In the ResourceManager log 
> file, we see this error:
> {code}
> 2017-11-20 18:41:46,395 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
>  Error when publishing entity TimelineEntity[type='YARN_CONTAINER', 
> id='container_1511201172997_0003_01_13']
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.putEntity(TimelineServiceV2Publisher.java:452)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.access$100(TimelineServiceV2Publisher.java:72)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher$TimelineV2EventHandler.handle(TimelineServiceV2Publisher.java:484)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher$TimelineV2EventHandler.handle(TimelineServiceV2Publisher.java:473)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
>   at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Commented] (YARN-7537) [Atsv2] load hbase configuration from filesystem rather than URL

2017-11-20 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260272#comment-16260272
 ] 

Rohith Sharma K S commented on YARN-7537:
-

No, it doesn't, since URL does not define hdfs as a scheme because it is not 
registered with the Internet Assigned Numbers Authority. We get the following 
error when we use hdfs, or any other unknown scheme, to create a URL object; a 
small sketch of the contrast follows after the log.
{noformat}
HW12723:hadoop rsharmaks$ hbase 
org.apache.hadoop.yarn.server.timelineservice.storage.TimelineSchemaCreator 
-create -s
2017-11-21 10:50:43,192 INFO  [main] storage.TimelineSchemaCreator: Starting 
the schema creation
2017-11-21 10:50:43,362 INFO  [main] common.HBaseTimelineStorageUtils: Using 
hbase configuration at hdfs://127.0.0.1:9000/atsv2/hbase-site.xml
Exception in thread "main" java.net.MalformedURLException: unknown protocol: 
hdfs
at java.net.URL.<init>(URL.java:600)
at java.net.URL.<init>(URL.java:490)
at java.net.URL.<init>(URL.java:439)
at 
org.apache.hadoop.yarn.server.timelineservice.storage.common.HBaseTimelineStorageUtils.getTimelineServiceHBaseConf(HBaseTimelineStorageUtils.java:291)
at 
org.apache.hadoop.yarn.server.timelineservice.storage.TimelineSchemaCreator.main(TimelineSchemaCreator.java:80)
{noformat}
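A minimal sketch of the distinction (not the patch itself; the HDFS path is 
copied from the log above and the class name is invented): constructing a 
{{java.net.URL}} with an hdfs scheme fails, whereas resolving the same string 
through the Hadoop {{FileSystem}} API works.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch, not HBaseTimelineStorageUtils itself.
public class HBaseConfLoadSketch {
  public static void main(String[] args) throws Exception {
    String file = "hdfs://127.0.0.1:9000/atsv2/hbase-site.xml";

    // new java.net.URL(file) would throw
    // java.net.MalformedURLException: unknown protocol: hdfs

    // Resolving the same location through the filesystem API instead:
    Configuration conf = new Configuration();
    Path hbaseSite = new Path(file);
    FileSystem fs = hbaseSite.getFileSystem(conf);   // understands hdfs://
    Configuration hbaseConf = new Configuration(false);
    hbaseConf.addResource(fs.open(hbaseSite));       // read hbase-site.xml
    System.out.println("Loaded " + hbaseConf.size() + " properties");
  }
}
{code}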

> [Atsv2] load hbase configuration from filesystem rather than URL
> 
>
> Key: YARN-7537
> URL: https://issues.apache.org/jira/browse/YARN-7537
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-7537.01.patch
>
>
> Currently HBaseTimelineStorageUtils#getTimelineServiceHBaseConf loads the hbase 
> configuration using a URL if *yarn.timeline-service.hbase.configuration.file* 
> is configured, so it is restricted to URLs only. This needs to be changed to 
> load from the file system. In a deployment, the hbase configuration can be kept 
> on a filesystem so that it can be used by all the NodeManagers and the 
> ResourceManager.
> cc :/ [~vrushalic] [~varun_saxena]






[jira] [Updated] (YARN-7544) Configuration cleanup associated to support absolute resources in a queue

2017-11-20 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7544:
--
Summary: Configuration cleanup associated to support absolute resources in 
a queue  (was: Configuration cleanup associated to support absolute resources 
in a queuec)

> Configuration cleanup associated to support absolute resources in a queue
> -
>
> Key: YARN-7544
> URL: https://issues.apache.org/jira/browse/YARN-7544
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
>
> It's better to reuse the existing *capacity* and *max-capacity* instead of 
> introducing new config items.






[jira] [Created] (YARN-7544) Configuration cleanup associated to support absolute resources in a queuec

2017-11-20 Thread Sunil G (JIRA)
Sunil G created YARN-7544:
-

 Summary: Configuration cleanup associated to support absolute 
resources in a queuec
 Key: YARN-7544
 URL: https://issues.apache.org/jira/browse/YARN-7544
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: capacity scheduler
Reporter: Sunil G
Assignee: Sunil G


It's better to reuse the existing *capacity* and *max-capacity* instead of 
introducing new config items.






[jira] [Comment Edited] (YARN-6647) RM can crash during transitionToStandby due to InterruptedException

2017-11-20 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260244#comment-16260244
 ] 

Bibin A Chundatt edited comment on YARN-6647 at 11/21/17 4:34 AM:
--

Thank you [~jlowe] for looking into the analysis.

{quote}
IIUC it's not the interrupted exception bubbling up, caused by the ZK operation 
interrupt, which is causing the issue.
{quote}
Yes. I was trying to convey the same as you mentioned: the interrupted 
exception is causing it indirectly, not directly.

About Curator, what I understood after a quick look is that the earlier version 
was not setting {{Thread.currentThread().interrupt()}} in {{RetryLoop}}, so the 
bombing was not happening during the put operation.

The following changes are done in the attached patch.
# {{ExpiredTokenRemover}}-related store call interrupts are handled (a rough 
sketch of the idea follows below)
# Fixed a checkstyle issue.
# AsyncDispatcher message change
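Purely as an illustration of item 1 (all names are generic and invented, not 
the actual patch): the idea is that a store call made from the expired-token 
remover thread treats an interrupt as a shutdown/transition signal instead of 
letting it escalate into a fatal state-store error.
{code}
// Hypothetical sketch; not RMDelegationTokenSecretManager or the real patch.
public class TokenRemoverSketch {

  void removeExpiredTokenViaStore() {
    try {
      blockingStoreRemove();               // stand-in for a ZK-backed store call
    } catch (InterruptedException e) {
      // transitionToStandby interrupts this thread: exit quietly instead of
      // reporting a fatal state-store failure that would crash the RM.
      Thread.currentThread().interrupt();  // preserve the interrupt status
      System.out.println("Store call interrupted; assuming RM is transitioning");
    }
  }

  // Simulates a store operation that blocks and honours interrupts.
  void blockingStoreRemove() throws InterruptedException {
    Thread.sleep(10);
  }

  public static void main(String[] args) {
    TokenRemoverSketch s = new TokenRemoverSketch();
    Thread.currentThread().interrupt();    // pretend we were just interrupted
    s.removeExpiredTokenViaStore();
  }
}
{code}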


was (Author: bibinchundatt):
Thank you [~jlowe] for looking into the analysis.

{quote}
IIUC it's not the interrupted exception bubbling up, caused by the ZK operation 
interrupt, which is causing the issue.
{quote}
Yes. I was trying to convey the same as you mentioned: the interrupted 
exception is causing it indirectly, not directly.

About Curator, what I understood after a quick look is that the earlier version 
was not setting {{Thread.currentThread().interrupt()}} in {{RetryLoop}}, so the 
bombing was not happening during the put operation.

The following changes are done in the attached patch.
# {{ExpiredTokenRemover}}-related store call interrupts are handled
# Fixed a checkstyle issue.
# AsyncDispatcher message change

> RM can crash during transitionToStandby due to InterruptedException
> ---
>
> Key: YARN-6647
> URL: https://issues.apache.org/jira/browse/YARN-6647
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha4
>Reporter: Jason Lowe
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-6647.001.patch, YARN-6647.002.patch
>
>
> Noticed some tests were failing due to the JVM shutting down early.  I was 
> able to reproduce this occasionally with TestKillApplicationWithRMHA.  
> Stacktrace to follow.






[jira] [Updated] (YARN-6647) RM can crash during transitionToStandby due to InterruptedException

2017-11-20 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-6647:
---
Attachment: YARN-6647.002.patch

Thank you [~jlowe] for looking into the analysis.

{quote}
IIUC it's not the interrupted exception bubbling up, caused by the ZK operation 
interrupt, which is causing the issue.
{quote}
Yes. I was trying to convey the same as you mentioned: the interrupted 
exception is causing it indirectly, not directly.

About Curator, what I understood after a quick look is that the earlier version 
was not setting {{Thread.currentThread().interrupt()}} in {{RetryLoop}}, so the 
bombing was not happening during the put operation.

The following changes are done in the attached patch.
# {{ExpiredTokenRemover}}-related store call interrupts are handled
# Fixed a checkstyle issue.
# AsyncDispatcher message change

> RM can crash during transitionToStandby due to InterruptedException
> ---
>
> Key: YARN-6647
> URL: https://issues.apache.org/jira/browse/YARN-6647
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha4
>Reporter: Jason Lowe
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-6647.001.patch, YARN-6647.002.patch
>
>
> Noticed some tests were failing due to the JVM shutting down early.  I was 
> able to reproduce this occasionally with TestKillApplicationWithRMHA.  
> Stacktrace to follow.






[jira] [Commented] (YARN-7363) ContainerLocalizer don't have a valid log4j config in case of Linux container executor

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260228#comment-16260228
 ] 

Hadoop QA commented on YARN-7363:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 34s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.TestLinuxContainerExecutorWithMocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7363 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898601/YARN-7363.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b402a0c05df7 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 659e85e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18599/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18599/testReport/ |
| Max. process+thread count | 407 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/

[jira] [Commented] (YARN-6750) Add a configuration to cap how much a NM can be overallocated

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260218#comment-16260218
 ] 

Hadoop QA commented on YARN-6750:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-1011 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
37s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
30s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
8s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in YARN-1011 has 2 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} YARN-1011 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 59s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 466 unchanged - 0 fixed = 467 total (was 466) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
50s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 57m 
30s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}128m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6750 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898588/YARN-6750-YARN-1011.02.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
uni

[jira] [Created] (YARN-7543) FileNotFoundException when creating a yarn service due to broken link under hadoop lib directory

2017-11-20 Thread Gour Saha (JIRA)
Gour Saha created YARN-7543:
---

 Summary: FileNotFoundException when creating a yarn service due to 
broken link under hadoop lib directory
 Key: YARN-7543
 URL: https://issues.apache.org/jira/browse/YARN-7543
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Gour Saha


The hadoop lib dir had a broken link to an ojdbc jar which was not really 
required for YARN service creation. The app submission failed with the FNFE 
below. Ideally this should be handled so that the app is submitted 
successfully, and the app can fail later if it really needs the jar behind the 
broken link -
{code}
[root@ctr-e134-1499953498516-324910-01-02 ~]# yarn app -launch gour-sleeper 
sleeper
WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
YARN_LOG_DIR.
WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
YARN_LOGFILE.
WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
YARN_PID_DIR.
WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
17/11/21 03:21:58 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
17/11/21 03:21:59 INFO client.RMProxy: Connecting to ResourceManager at 
ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
17/11/21 03:22:00 WARN shortcircuit.DomainSocketFactory: The short-circuit 
local reads feature cannot be used because libhadoop cannot be loaded.
17/11/21 03:22:00 INFO client.RMProxy: Connecting to ResourceManager at 
ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
17/11/21 03:22:00 INFO client.ServiceClient: Loading service definition from 
local FS: 
/usr/hdp/3.0.0.0-493/hadoop-yarn/yarn-service-examples/sleeper/sleeper.json
17/11/21 03:22:01 INFO client.ServiceClient: Persisted service gour-sleeper at 
hdfs://ctr-e134-1499953498516-324910-01-03.example.com:8020/user/hdfs/.yarn/services/gour-sleeper/gour-sleeper.json
17/11/21 03:22:01 INFO conf.Configuration: resource-types.xml not found
17/11/21 03:22:01 WARN client.ServiceClient: AM log4j property file doesn't 
exist: /usr/hdp/3.0.0.0-493/hadoop/conf/yarnservice-log4j.properties
17/11/21 03:22:01 INFO client.ServiceClient: Uploading all dependency jars to 
HDFS. For faster submission of apps, pre-upload dependency jars to HDFS using 
command: yarn app -enableFastLaunch
Exception in thread "main" java.io.FileNotFoundException: File 
/usr/hdp/3.0.0.0-493/hadoop/lib/ojdbc6.jar does not exist
at 
org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
at 
org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867)
at 
org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
at 
org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:365)
at 
org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2399)
at 
org.apache.hadoop.yarn.service.utils.CoreFileSystem.submitFile(CoreFileSystem.java:434)
at 
org.apache.hadoop.yarn.service.utils.ServiceUtils.putAllJars(ServiceUtils.java:409)
at 
org.apache.hadoop.yarn.service.provider.ProviderUtils.addAllDependencyJars(ProviderUtils.java:138)
at 
org.apache.hadoop.yarn.service.client.ServiceClient.addJarResource(ServiceClient.java:695)
at 
org.apache.hadoop.yarn.service.client.ServiceClient.submitApp(ServiceClient.java:553)
at 
org.apache.hadoop.yarn.service.client.ServiceClient.actionCreate(ServiceClient.java:212)
at 
org.apache.hadoop.yarn.service.client.ServiceClient.actionLaunch(ServiceClient.java:197)
at 
org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:447)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at 
org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:111)
{code}
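A hedged sketch of the suggested handling (not the {{ServiceClient}} code; the 
directory path and class name are invented): skip lib-dir entries that are 
broken symlinks before handing them to the upload step, instead of letting the 
later copy fail with a FileNotFoundException.
{code}
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical sketch, not the actual YARN service client logic.
public class SkipBrokenLinks {
  public static void main(String[] args) {
    File libDir = new File("/usr/hdp/current/hadoop/lib");  // made-up path
    File[] entries = libDir.listFiles();
    if (entries == null) {
      return;                                // directory missing or unreadable
    }
    for (File jar : entries) {
      Path p = Paths.get(jar.getPath());
      // Files.exists() follows the link, so a broken symlink reports "missing".
      if (Files.isSymbolicLink(p) && !Files.exists(p)) {
        System.err.println("Skipping broken link: " + jar);
        continue;
      }
      // ...otherwise pass the jar on to the dependency-upload step.
    }
  }
}
{code}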






[jira] [Commented] (YARN-6704) Add Federation Interceptor restart when work preserving NM is enabled

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260210#comment-16260210
 ] 

Hadoop QA commented on YARN-6704:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
57s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 1 new + 
17 unchanged - 0 fixed = 18 total (was 17) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
7s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
38s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6704 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898593/YARN-6704.v5.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c5f81fd8685a 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 659e85e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://buil

[jira] [Commented] (YARN-6168) Restarted RM may not inform AM about all existing containers

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260204#comment-16260204
 ] 

Hadoop QA commented on YARN-6168:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
7s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
11s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 208 unchanged - 0 fixed = 209 total (was 208) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 35s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
14s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 54s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Cli

[jira] [Commented] (YARN-7337) Expose per-node over-allocation info in Node Report

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260199#comment-16260199
 ] 

Hadoop QA commented on YARN-7337:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-1011 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
52s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
43s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
51s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
39s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in YARN-1011 has 2 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
37s{color} | {color:green} YARN-1011 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} root: The patch generated 0 new + 824 unchanged - 42 
fixed = 824 total (was 866) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 18s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
37s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 42s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
59s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
26s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}126m 24s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:red}-1{color

[jira] [Updated] (YARN-7363) ContainerLocalizer don't have a valid log4j config in case of Linux container executor

2017-11-20 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7363:
---
Attachment: YARN-7363.002.patch

> ContainerLocalizer don't have a valid log4j config in case of Linux container 
> executor
> --
>
> Key: YARN-7363
> URL: https://issues.apache.org/jira/browse/YARN-7363
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7363.001.patch, YARN-7363.002.patch
>
>
> In the case of the Linux container executor, ContainerLocalizer runs as a 
> separate process. It doesn't have access to a valid log4j.properties when the 
> application user is not in the "hadoop" group. The node manager's 
> log4j.properties is on its classpath, but it isn't readable by users outside 
> the hadoop group due to security concerns. In that case, ContainerLocalizer 
> doesn't have a valid log4j configuration and normally produces no log output.






[jira] [Commented] (YARN-6921) Allow resource request to opt out of oversubscription in Fair Scheduler

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260195#comment-16260195
 ] 

Hadoop QA commented on YARN-6921:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-1011 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
43s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
5s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in YARN-1011 has 2 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} YARN-1011 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 208 unchanged - 1 fixed = 208 total (was 209) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 40s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6921 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898579/YARN-6921-YARN-1011.01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e753d074a5ea 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-1011 / f2b85d6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/18596/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-y

[jira] [Commented] (YARN-7520) Queue Ordering policy changes for ordering auto created leaf queues within Managed parent Queues

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260190#comment-16260190
 ] 

Hadoop QA commented on YARN-7520:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
11s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 39 new + 76 unchanged - 27 fixed = 115 total (was 103) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
16s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 2 new + 1 unchanged - 0 fixed = 3 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 57s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.policy.PriorityUtilizationQueueOrderingPolicy$PriorityQueueComparator.compare(CSQueue,
 CSQueue) incorrectly handles float value  At 
PriorityUtilizationQueueOrderingPolicy.java:value  At 
PriorityUtilizationQueueOrderingPolicy.java:[line 133] |
|  |  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.policy.PriorityUtilizationQueueOrderingPolicy$PriorityQueueComparator.compare(CSQueue,
 CSQueue) incorrectly handles float value  At 
PriorityUtilizationQueueOrderingPolicy.java:value  At 
PriorityUtilizationQueueOrderingPolicy.java:[line 135] |
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docke
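
On the two Findbugs items above about 
{{PriorityQueueComparator.compare(CSQueue, CSQueue)}} incorrectly handling a 
float value: without reproducing the patch code here, the pattern Findbugs 
flags is ordering a float-valued field via equality or truncated subtraction 
instead of {{Float.compare}}. A generic sketch with a hypothetical queue type:

{code:java}
import java.util.Comparator;

public class FloatCompareExample {
  // Hypothetical stand-in for a queue with a float-valued usage ratio.
  static class Queue {
    final String name;
    final float usedCapacity;
    Queue(String name, float usedCapacity) {
      this.name = name;
      this.usedCapacity = usedCapacity;
    }
  }

  // Broken pattern Findbugs flags: truncating the difference loses ordering
  // information for values that differ by less than 1.0f.
  static final Comparator<Queue> BROKEN =
      (a, b) -> (int) (a.usedCapacity - b.usedCapacity);

  // Correct pattern: Float.compare handles sign, magnitude and NaN.
  static final Comparator<Queue> CORRECT =
      (a, b) -> Float.compare(a.usedCapacity, b.usedCapacity);

  public static void main(String[] args) {
    Queue q1 = new Queue("a", 0.2f);
    Queue q2 = new Queue("b", 0.7f);
    System.out.println("broken:  " + BROKEN.compare(q1, q2));   // 0 -> treated as equal
    System.out.println("correct: " + CORRECT.compare(q1, q2));  // negative -> q1 first
  }
}
{code}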

[jira] [Commented] (YARN-7473) Implement Framework and policy for capacity management of auto created queues

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260177#comment-16260177
 ] 

Hadoop QA commented on YARN-7473:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
58s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 89 new + 212 unchanged - 4 fixed = 301 total (was 216) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 16s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
9s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 4 new + 1 unchanged - 0 fixed = 5 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 35s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Unchecked/unconfirmed cast from 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue to 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ManagedParentQueue
 in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.GuaranteedOrZeroCapacityOverTimePolicy.init(CapacitySchedulerContext,
 ParentQueue)  At 
GuaranteedOrZeroCapacityOverTimePolicy.java:org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ManagedParentQueue
 in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.GuaranteedOrZeroCapacityOverTimePolicy.init(CapacitySchedulerContext,
 ParentQueue)  At GuaranteedOrZeroCapacityOverTimePolicy.java:[line 236] |
|  |  Null pointer dereference of childQueue in 
org.apache.hadoop.yarn.server.resourcema

[jira] [Commented] (YARN-6704) Add Federation Interceptor restart when work preserving NM is enabled

2017-11-20 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260158#comment-16260158
 ] 

Botong Huang commented on YARN-6704:


v5 patch uploaded. A big portion of the UAM reattach code from the v4 patch has 
been refactored into YARN-6128. The set of running containers is now recovered 
fully from both the home and secondary RMs.
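
Not the patch itself, but to illustrate the idea of rebuilding the 
running-container set from several RMs, the sketch below merges the containers 
reported at re-registration by the home RM and by each secondary RM; the method 
and variable names are illustrative only:

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse;
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerId;

public class RecoveredContainerMerge {
  /**
   * Merge the previously running containers reported by the home RM and by
   * each secondary RM into a single map keyed by ContainerId.
   */
  public static Map<ContainerId, Container> merge(
      RegisterApplicationMasterResponse homeResponse,
      List<RegisterApplicationMasterResponse> secondaryResponses) {
    Map<ContainerId, Container> recovered = new HashMap<>();
    for (Container c : homeResponse.getContainersFromPreviousAttempts()) {
      recovered.put(c.getId(), c);
    }
    for (RegisterApplicationMasterResponse resp : secondaryResponses) {
      for (Container c : resp.getContainersFromPreviousAttempts()) {
        // A container should only be reported by one RM; last writer wins here.
        recovered.put(c.getId(), c);
      }
    }
    return recovered;
  }
}
{code}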

> Add Federation Interceptor restart when work preserving NM is enabled
> -
>
> Key: YARN-6704
> URL: https://issues.apache.org/jira/browse/YARN-6704
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
> Attachments: YARN-6704-YARN-2915.v1.patch, 
> YARN-6704-YARN-2915.v2.patch, YARN-6704.v3.patch, YARN-6704.v4.patch, 
> YARN-6704.v5.patch
>
>
> YARN-1336 added the ability to restart the NM without losing any running 
> containers. {{AMRMProxy}} restart was added in YARN-6127. In a federated YARN 
> environment, there is additional state in the {{FederationInterceptor}} to 
> allow spanning across multiple sub-clusters, so we need to enhance 
> {{FederationInterceptor}} to support work-preserving restart.






[jira] [Updated] (YARN-6704) Add Federation Interceptor restart when work preserving NM is enabled

2017-11-20 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-6704:
---
Attachment: YARN-6704.v5.patch

> Add Federation Interceptor restart when work preserving NM is enabled
> -
>
> Key: YARN-6704
> URL: https://issues.apache.org/jira/browse/YARN-6704
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
> Attachments: YARN-6704-YARN-2915.v1.patch, 
> YARN-6704-YARN-2915.v2.patch, YARN-6704.v3.patch, YARN-6704.v4.patch, 
> YARN-6704.v5.patch
>
>
> YARN-1336 added the ability to restart the NM without losing any running 
> containers. {{AMRMProxy}} restart was added in YARN-6127. In a federated YARN 
> environment, there is additional state in the {{FederationInterceptor}} to 
> allow spanning across multiple sub-clusters, so we need to enhance 
> {{FederationInterceptor}} to support work-preserving restart.






[jira] [Commented] (YARN-6669) Support security for YARN service framework

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260155#comment-16260155
 ] 

Hadoop QA commented on YARN-6669:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 57s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 29 new + 247 unchanged - 47 fixed = 276 total (was 294) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 34s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
40s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6669 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898571/YARN-6669.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux eff2498c6fb5 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 

[jira] [Updated] (YARN-6750) Add a configuration to cap how much a NM can be overallocated

2017-11-20 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6750:
-
Attachment: YARN-6750-YARN-1011.02.patch

> Add a configuration to cap how much a NM can be overallocated
> -
>
> Key: YARN-6750
> URL: https://issues.apache.org/jira/browse/YARN-6750
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6750-YARN-1011.00.patch, 
> YARN-6750-YARN-1011.01.patch, YARN-6750-YARN-1011.02.patch
>
>







[jira] [Commented] (YARN-6750) Add a configuration to cap how much a NM can be overallocated

2017-11-20 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260130#comment-16260130
 ] 

Haibo Chen commented on YARN-6750:
--

The findbugs warning is unrelated. I will address the checkstyle issue in a new 
patch.

> Add a configuration to cap how much a NM can be overallocated
> -
>
> Key: YARN-6750
> URL: https://issues.apache.org/jira/browse/YARN-6750
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6750-YARN-1011.00.patch, 
> YARN-6750-YARN-1011.01.patch
>
>







[jira] [Commented] (YARN-6750) Add a configuration to cap how much a NM can be overallocated

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260129#comment-16260129
 ] 

Hadoop QA commented on YARN-6750:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-1011 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
22s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
19s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
2s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
1s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in YARN-1011 has 2 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} YARN-1011 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  8s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 6 new + 466 unchanged - 0 fixed = 472 total (was 466) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 52s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
48s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 56m 
40s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6750 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898562/YARN-6750-YARN-1011.01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
uni

[jira] [Updated] (YARN-7541) Node updates don't update the maximum cluster capability for resources other than CPU and memory

2017-11-20 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-7541:
---
Attachment: YARN-7541.001.patch

> Node updates don't update the maximum cluster capability for resources other 
> than CPU and memory
> 
>
> Key: YARN-7541
> URL: https://issues.apache.org/jira/browse/YARN-7541
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.0.0-beta1, 3.1.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-7541.001.patch
>
>
> When I submit an MR job that asks for too much memory or CPU for the map or 
> reduce, the AM will fail because it recognizes that the request is too large. 
> With any other resource, however, the requests will instead be made and 
> remain pending forever. Looks like we forgot to update the code that tracks 
> the maximum container allocation in {{ClusterNodeTracker}}.
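
The actual fix belongs in {{ClusterNodeTracker}}, whose internals are not 
reproduced here; the gist is that the maximum allocation has to be folded in 
for every resource type a node reports, not just memory and vcores. A generic 
sketch using a plain map of resource names to maxima (not the real Resource or 
ClusterNodeTracker types):

{code:java}
import java.util.HashMap;
import java.util.Map;

public class MaxAllocationTracker {
  // Running maximum, per resource name (e.g. "memory-mb", "vcores", "gpu").
  private final Map<String, Long> maxAllocation = new HashMap<>();

  /**
   * Called on every node update; folds the node's capability for each
   * resource type into the cluster-wide maximum, instead of only looking
   * at memory and vcores.
   */
  public synchronized void updateMaxResources(Map<String, Long> nodeCapability) {
    for (Map.Entry<String, Long> e : nodeCapability.entrySet()) {
      maxAllocation.merge(e.getKey(), e.getValue(), Math::max);
    }
  }

  public synchronized Map<String, Long> getMaxAllocation() {
    return new HashMap<>(maxAllocation);
  }

  public static void main(String[] args) {
    MaxAllocationTracker tracker = new MaxAllocationTracker();

    Map<String, Long> node1 = new HashMap<>();
    node1.put("memory-mb", 8192L);
    node1.put("vcores", 8L);
    tracker.updateMaxResources(node1);

    Map<String, Long> node2 = new HashMap<>();
    node2.put("memory-mb", 4096L);
    node2.put("vcores", 4L);
    node2.put("gpu", 2L);
    tracker.updateMaxResources(node2);

    // All three resource types are reflected in the maximum.
    System.out.println(tracker.getMaxAllocation());
  }
}
{code}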






[jira] [Commented] (YARN-7531) ResourceRequest.equal does not check ExecutionTypeRequest.enforceExecutionType()

2017-11-20 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260120#comment-16260120
 ] 

Haibo Chen commented on YARN-7531:
--

The findbugs warning is unrelated.

> ResourceRequest.equal does not check 
> ExecutionTypeRequest.enforceExecutionType()
> 
>
> Key: YARN-7531
> URL: https://issues.apache.org/jira/browse/YARN-7531
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-7531.01.patch, YARN-7531.prelim.patch
>
>







[jira] [Commented] (YARN-7531) ResourceRequest.equal does not check ExecutionTypeRequest.enforceExecutionType()

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260114#comment-16260114
 ] 

Hadoop QA commented on YARN-7531:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
13s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
12s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7531 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898566/YARN-7531.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fdb0595a2ca7 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 60fc2a1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/18591/artifact/out/branch-find

[jira] [Updated] (YARN-7390) All reservation related test cases failed when TestYarnClient runs against Fair Scheduler.

2017-11-20 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7390:
---
Attachment: YARN-7390.branch-2.001.patch

Uploaded patch for branch-2.

> All reservation related test cases failed when TestYarnClient runs against 
> Fair Scheduler.
> --
>
> Key: YARN-7390
> URL: https://issues.apache.org/jira/browse/YARN-7390
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, reservation system
>Affects Versions: 2.9.0, 3.0.0, 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 3.0.0, 3.1.0
>
> Attachments: YARN-7390.001.patch, YARN-7390.002.patch, 
> YARN-7390.003.patch, YARN-7390.004.patch, YARN-7390.005.patch, 
> YARN-7390.branch-2.001.patch
>
>
> All reservation-related test cases failed when {{TestYarnClient}} runs 
> against Fair Scheduler. To reproduce it, you need to set the scheduler class 
> to Fair Scheduler in yarn-default.xml.
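
As a sketch of that reproduction step, the same scheduler switch can be made 
programmatically on a {{YarnConfiguration}} (editing yarn-default.xml, as 
described above, is the equivalent file-based route):

{code:java}
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler;

public class FairSchedulerConfigExample {
  public static YarnConfiguration fairSchedulerConf() {
    YarnConfiguration conf = new YarnConfiguration();
    // Point yarn.resourcemanager.scheduler.class at the Fair Scheduler,
    // mirroring the yarn-default.xml change described above.
    conf.setClass(YarnConfiguration.RM_SCHEDULER,
        FairScheduler.class, ResourceScheduler.class);
    return conf;
  }
}
{code}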






[jira] [Updated] (YARN-6921) Allow resource request to opt out of oversubscription in Fair Scheduler

2017-11-20 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6921:
-
Attachment: YARN-6921-YARN-1011.01.patch

> Allow resource request to opt out of oversubscription in Fair Scheduler
> ---
>
> Key: YARN-6921
> URL: https://issues.apache.org/jira/browse/YARN-6921
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6921-YARN-1011.00.patch, 
> YARN-6921-YARN-1011.01.patch
>
>
> Guaranteed container requests, whether their enforce tag is true or not, are 
> by default eligible for oversubscription and can thus receive OPPORTUNISTIC 
> container allocations. We should allow them to opt out when their enforce tag 
> is set to true.
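
For reference, the request-side knob already exists in the API; what the patch 
appears to add is the scheduler-side handling, which is not shown here. A 
sketch of a GUARANTEED request that sets the enforce flag and would therefore 
opt out of oversubscription:

{code:java}
import org.apache.hadoop.yarn.api.records.ExecutionType;
import org.apache.hadoop.yarn.api.records.ExecutionTypeRequest;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class OptOutRequestExample {
  public static ResourceRequest guaranteedOnlyRequest() {
    ResourceRequest req = ResourceRequest.newInstance(
        Priority.newInstance(0), ResourceRequest.ANY,
        Resource.newInstance(1024, 1), 1);
    // enforceExecutionType=true: ask the scheduler not to downgrade this
    // GUARANTEED request to an OPPORTUNISTIC allocation.
    req.setExecutionTypeRequest(
        ExecutionTypeRequest.newInstance(ExecutionType.GUARANTEED, true));
    return req;
  }
}
{code}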






[jira] [Commented] (YARN-6921) Allow resource request to opt out of oversubscription in Fair Scheduler

2017-11-20 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260108#comment-16260108
 ] 

Haibo Chen commented on YARN-6921:
--

YARN-5652 tracks 
TestRMAdminService.testRefreshNodesResourceWithResourceReturnInRegistration. 
TestCapacityOverTimePolicy.testAllocation, which I cannot reproduce locally, is 
unrelated.
I have addressed the checkstyle issue in the new patch.

> Allow resource request to opt out of oversubscription in Fair Scheduler
> ---
>
> Key: YARN-6921
> URL: https://issues.apache.org/jira/browse/YARN-6921
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6921-YARN-1011.00.patch
>
>
> Guaranteed container requests, whether their enforce tag is true or not, are 
> by default eligible for oversubscription and can thus receive OPPORTUNISTIC 
> container allocations. We should allow them to opt out when their enforce tag 
> is set to true.






[jira] [Updated] (YARN-7542) NM recovers some Running Opportunistic Containers as SUSPEND

2017-11-20 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7542:
--
Target Version/s: 2.9.1

> NM recovers some Running Opportunistic Containers as SUSPEND
> 
>
> Key: YARN-7542
> URL: https://issues.apache.org/jira/browse/YARN-7542
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Sampada Dehankar
>
> Steps to reproduce:
> * Start a YARN cluster: enable opportunistic containers and set the NM queue 
> length to something > 10. Also enable work-preserving restart.
> * Start an MR job (without opportunistic containers).
> * Kill the NM and restart it.
> * The logs then show that some of the containers are in SUSPENDED state even 
> though they are still running.
> [~sampada15] / [~kartheek], can you take a look at this ?






[jira] [Updated] (YARN-7541) Node updates don't update the maximum cluster capability for resources other than CPU and memory

2017-11-20 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-7541:
---
Description: When I submit an MR job that asks for too much memory or CPU for 
the map or reduce, the AM will fail because it recognizes that the request is 
too large. With any other resource, however, the requests will instead be made 
and remain pending forever. Looks like we forgot to update the code that tracks 
the maximum container allocation in {{ClusterNodeTracker}}.

> Node updates don't update the maximum cluster capability for resources other 
> than CPU and memory
> 
>
> Key: YARN-7541
> URL: https://issues.apache.org/jira/browse/YARN-7541
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.0.0-beta1, 3.1.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
>
> When I submit an MR job that asks for too much memory or CPU for the map or 
> reduce, the AM will fail because it recognizes that the request is too large. 
> With any other resource, however, the requests will instead be made and 
> remain pending forever. Looks like we forgot to update the code that tracks 
> the maximum container allocation in {{ClusterNodeTracker}}.






[jira] [Updated] (YARN-7520) Queue Ordering policy changes for ordering auto created leaf queues within Managed parent Queues

2017-11-20 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7520:
---
Attachment: YARN-7520.2.patch

Updated patch with fixes for findbugs and checkstyle.

> Queue Ordering policy changes for ordering auto created leaf queues within 
> Managed parent Queues
> 
>
> Key: YARN-7520
> URL: https://issues.apache.org/jira/browse/YARN-7520
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7520.1.patch, YARN-7520.2.patch
>
>
> Queue Ordering policy currently uses priority, utilization and absolute 
> capacity for pre-configured parent queues to order leaf queues while 
> assigning containers. It needs modifications for auto created leaf queues 
> since they can have zero capacity.






[jira] [Assigned] (YARN-7541) Node updates don't update the maximum cluster capability for resources other than CPU and memory

2017-11-20 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton reassigned YARN-7541:
--

Assignee: Daniel Templeton

> Node updates don't update the maximum cluster capability for resources other 
> than CPU and memory
> 
>
> Key: YARN-7541
> URL: https://issues.apache.org/jira/browse/YARN-7541
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.0.0-beta1, 3.1.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
>







[jira] [Created] (YARN-7542) NM recovers some Running Opportunistic Containers as SUSPEND

2017-11-20 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-7542:
-

 Summary: NM recovers some Running Opportunistic Containers as 
SUSPEND
 Key: YARN-7542
 URL: https://issues.apache.org/jira/browse/YARN-7542
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Arun Suresh
Assignee: Sampada Dehankar


Steps to reproduce:
* Start a YARN cluster: enable opportunistic containers and set the NM queue 
length to something > 10. Also enable work-preserving restart (a configuration 
sketch follows below).
* Start an MR job (without opportunistic containers).
* Kill the NM and restart it.
* The logs then show that some of the containers are in SUSPENDED state even 
though they are still running.

[~sampada15] / [~kartheek], can you take a look at this ?
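
A configuration sketch for the first step above; the queue length value and the 
use of programmatic {{YarnConfiguration}} settings are assumptions for the 
repro, and the property constants should be checked against the release in use:

{code:java}
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class OpportunisticReproConf {
  public static YarnConfiguration reproConf() {
    YarnConfiguration conf = new YarnConfiguration();
    // Enable opportunistic container allocation on the RM.
    conf.setBoolean(
        YarnConfiguration.OPPORTUNISTIC_CONTAINER_ALLOCATION_ENABLED, true);
    // NM queue length for queued opportunistic containers (> 10 per the steps).
    conf.setInt(
        YarnConfiguration.NM_OPPORTUNISTIC_CONTAINERS_MAX_QUEUE_LENGTH, 20);
    // Work-preserving NM restart.
    conf.setBoolean(YarnConfiguration.NM_RECOVERY_ENABLED, true);
    return conf;
  }
}
{code}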







[jira] [Updated] (YARN-7473) Implement Framework and policy for capacity management of auto created queues

2017-11-20 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7473:
---
Attachment: YARN-7473.5.patch

Reverted the ManagedParentQueue.initializeLeafQueue changes so that node labels 
are not loaded from the leaf queue template, and fixed a findbugs issue.

> Implement Framework and policy for capacity management of auto created queues 
> --
>
> Key: YARN-7473
> URL: https://issues.apache.org/jira/browse/YARN-7473
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7473.1.patch, YARN-7473.2.patch, YARN-7473.3.patch, 
> YARN-7473.4.patch, YARN-7473.5.patch
>
>
> This jira mainly addresses the following:
>  
> 1. Support adding pluggable policies on a parent queue for dynamically 
> managing capacity/state of its leaf queues.
> 2. Implement a default policy that manages capacity based on pending 
> applications and grants either guaranteed or zero capacity to queues based on 
> the parent's available guaranteed capacity.
> 3. Integrate with the SchedulingEditPolicy framework to trigger this 
> periodically and signal the scheduler to take the necessary capacity/queue 
> management actions.
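
To make the shape of item 1 concrete: the interface below is purely 
illustrative (the framework in the attached patches defines its own types, and 
the "template capacity" naming is an assumption), but it shows a pluggable 
policy that grants either a fixed capacity or zero, along the lines of item 2:

{code:java}
public class AutoQueuePolicySketch {

  /** Hypothetical view of an auto created leaf queue. */
  interface LeafQueueView {
    String getQueueName();
    boolean hasPendingApplications();
  }

  /** Hypothetical pluggable policy, invoked periodically for a managed parent. */
  interface AutoCreatedQueueCapacityPolicy {
    /** Capacity (as a fraction of the parent) to assign to this leaf queue. */
    float computeCapacity(LeafQueueView leafQueue, float parentAvailableCapacity);
  }

  /** Grants the template capacity to queues with pending applications while
   *  the parent still has guaranteed capacity available, otherwise zero. */
  static class GuaranteedOrZeroPolicy implements AutoCreatedQueueCapacityPolicy {
    private final float leafQueueTemplateCapacity;

    GuaranteedOrZeroPolicy(float leafQueueTemplateCapacity) {
      this.leafQueueTemplateCapacity = leafQueueTemplateCapacity;
    }

    @Override
    public float computeCapacity(LeafQueueView leafQueue,
        float parentAvailableCapacity) {
      if (leafQueue.hasPendingApplications()
          && parentAvailableCapacity >= leafQueueTemplateCapacity) {
        return leafQueueTemplateCapacity;
      }
      return 0f;
    }
  }
}
{code}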






[jira] [Created] (YARN-7541) Node updates don't update the maximum cluster capability for resources other than CPU and memory

2017-11-20 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-7541:
--

 Summary: Node updates don't update the maximum cluster capability 
for resources other than CPU and memory
 Key: YARN-7541
 URL: https://issues.apache.org/jira/browse/YARN-7541
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 3.0.0-beta1, 3.1.0
Reporter: Daniel Templeton
Priority: Critical









[jira] [Updated] (YARN-6168) Restarted RM may not inform AM about all existing containers

2017-11-20 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-6168:

Attachment: YARN-6168.002.patch

Addressed [~jianhe] comments in patch 2.

> Restarted RM may not inform AM about all existing containers
> 
>
> Key: YARN-6168
> URL: https://issues.apache.org/jira/browse/YARN-6168
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Chandni Singh
> Attachments: YARN-6168.001.patch, YARN-6168.002.patch
>
>
> There appears to be a race condition when an RM is restarted. I had a 
> situation where the RMs and AM were down, but NMs and app containers were 
> still running. When I restarted the RM, the AM restarted, registered with the 
> RM, and received its list of existing containers before the NMs had reported 
> all of their containers to the RM. The AM was only told about some of the 
> app's existing containers.
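
For context, the AM-side API involved is the container list returned at 
registration; the race is that this list only reflects what NMs have 
re-reported to the restarted RM so far. A small sketch (how the remaining 
containers eventually get reported to the AM is what this jira is about and is 
not shown):

{code:java}
import java.util.List;

import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse;
import org.apache.hadoop.yarn.api.records.Container;

public class PreviousContainersExample {
  /**
   * At AM registration after an RM restart, this list only contains the
   * containers that NMs have already re-reported to the new RM, so it can
   * be a subset of what is actually still running.
   */
  public static List<Container> recoveredAtRegistration(
      RegisterApplicationMasterResponse response) {
    return response.getContainersFromPreviousAttempts();
  }
}
{code}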






[jira] [Commented] (YARN-5534) Allow user provided Docker volume mount list

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260078#comment-16260078
 ] 

Hadoop QA commented on YARN-5534:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 30 unchanged - 0 fixed = 31 total (was 30) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-yarn-site in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 54s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
40s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-5534 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898561/YARN-5534.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  c

[jira] [Commented] (YARN-7520) Queue Ordering policy changes for ordering auto created leaf queues within Managed parent Queues

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260077#comment-16260077
 ] 

Hadoop QA commented on YARN-7520:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m  
0s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
59s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 41 new + 80 unchanged - 24 fixed = 121 total (was 104) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 16s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
4s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 2 new + 1 unchanged - 0 fixed = 3 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m  5s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.policy.PriorityUtilizationQueueOrderingPolicy$PriorityQueueComparator.compare(CSQueue,
 CSQueue) incorrectly handles float value  At 
PriorityUtilizationQueueOrderingPolicy.java:value  At 
PriorityUtilizationQueueOrderingPolicy.java:[line 130] |
|  |  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.policy.PriorityUtilizationQueueOrderingPolicy$PriorityQueueComparator.compare(CSQueue,
 CSQueue) incorrectly handles float value  At 
PriorityUtilizationQueueOrderingPolicy.java:value  At 
PriorityUtilizationQueueOrderingPolicy.java:[line 132] |
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacit

[jira] [Updated] (YARN-6669) Support security for YARN service framework

2017-11-20 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6669:
--
Attachment: YARN-6669.03.patch

Removed the RMRegistryOperationsService class and its associated UT, as this 
class is no longer needed 

> Support security for YARN service framework
> ---
>
> Key: YARN-6669
> URL: https://issues.apache.org/jira/browse/YARN-6669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6669.01.patch, YARN-6669.02.patch, 
> YARN-6669.03.patch, YARN-6669.yarn-native-services.01.patch, 
> YARN-6669.yarn-native-services.03.patch, 
> YARN-6669.yarn-native-services.04.patch, 
> YARN-6669.yarn-native-services.05.patch
>
>
> Changes include:
> - Make the registry client programmatically generate the JAAS conf for secure 
> access to the ZK quorum
> - Create a KerberosPrincipal resource object in the REST API for the user to 
> supply a Kerberos keytab and principal
> - The user has two ways to configure this:
> -- If the keytab starts with "hdfs://", the keytab will be localized by YARN
> -- If the keytab starts with "file://", it is assumed that the keytab is 
> available on the localhost
> - The AM will use the keytab to log in (see the sketch after this list)
> - ServiceClient is changed to request an HDFS delegation token when submitting 
> the service
> - The AM code will use the tokens when launching containers
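> For the AM login step above, a minimal sketch of a standard keytab login via 
> UserGroupInformation; the principal and keytab path are placeholders, not 
> values taken from this patch:
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.security.UserGroupInformation;
> 
> // Sketch only: logs the AM process in from a localized keytab. The principal
> // and path below are illustrative placeholders.
> public class AmKeytabLoginSketch {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     conf.set("hadoop.security.authentication", "kerberos");
>     UserGroupInformation.setConfiguration(conf);
>     UserGroupInformation.loginUserFromKeytab(
>         "service-user/host.example.com@EXAMPLE.COM",
>         "/path/to/localized.keytab");
>     System.out.println("Logged in as "
>         + UserGroupInformation.getLoginUser().getUserName());
>   }
> }
> {code}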



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7533) Documentation for absolute resource support in CS

2017-11-20 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260064#comment-16260064
 ] 

Eric Payne commented on YARN-7533:
--

Hi [~sunilg]. Thanks for the patch. I have just a couple of grammatical 
suggestions:
 
- For {{Resource Allocation}}, I would suggest changing it to the following:
{code}
+  * Resource Allocation using Absolute Resources configuration
+ `CapacityScheduler` supports configuration of absolute resources instead of 
providing Queue *capacity* as a percentage. The following configurations can be 
used to configure absolute resources.
{code}

- For {{yarn.scheduler.capacity.<queue-path>.min-resource}}, something like:
{code}
+ | `yarn.scheduler.capacity.<queue-path>.min-resource` | Absolute resource 
queue capacity minimum configuration. Default value is empty. 
[memory=10240,vcores=12] is a valid configuration which indicates 10GB Memory 
and 12 VCores.|
+ | `yarn.scheduler.capacity.<queue-path>.max-resource` | Absolute resource 
queue capacity maximum configuration. Default value is empty. 
[memory=10240,vcores=12] is a valid configuration which indicates 10GB Memory 
and 12 VCores.|
{code}


> Documentation for absolute resource support in CS
> -
>
> Key: YARN-7533
> URL: https://issues.apache.org/jira/browse/YARN-7533
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-7533.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7473) Implement Framework and policy for capacity management of auto created queues

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260060#comment-16260060
 ] 

Hadoop QA commented on YARN-7473:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
0s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 89 new + 212 unchanged - 4 fixed = 301 total (was 216) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 49s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
9s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 5 new + 1 unchanged - 0 fixed = 6 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 24s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Unchecked/unconfirmed cast from 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue to 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ManagedParentQueue
 in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.GuaranteedOrZeroCapacityOverTimePolicy.init(CapacitySchedulerContext,
 ParentQueue)  At 
GuaranteedOrZeroCapacityOverTimePolicy.java:org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ManagedParentQueue
 in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.GuaranteedOrZeroCapacityOverTimePolicy.init(CapacitySchedulerContext,
 ParentQueue)  At GuaranteedOrZeroCapacityOverTimePolicy.java:[line 236] |
|  |  Null pointer dereference of childQueue in 
org.apache.hadoop.yarn.server.resourcema

[jira] [Commented] (YARN-7531) ResourceRequest.equal does not check ExecutionTypeRequest.enforceExecutionType()

2017-11-20 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260053#comment-16260053
 ] 

Yufei Gu commented on YARN-7531:


+1, pending Jenkins.

> ResourceRequest.equal does not check 
> ExecutionTypeRequest.enforceExecutionType()
> 
>
> Key: YARN-7531
> URL: https://issues.apache.org/jira/browse/YARN-7531
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-7531.01.patch, YARN-7531.prelim.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7473) Implement Framework and policy for capacity management of auto created queues

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260048#comment-16260048
 ] 

Hadoop QA commented on YARN-7473:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
5s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 89 new + 211 unchanged - 4 fixed = 300 total (was 215) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
15s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 5 new + 1 unchanged - 0 fixed = 6 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 45s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Unchecked/unconfirmed cast from 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue to 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ManagedParentQueue
 in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.GuaranteedOrZeroCapacityOverTimePolicy.init(CapacitySchedulerContext,
 ParentQueue)  At 
GuaranteedOrZeroCapacityOverTimePolicy.java:org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ManagedParentQueue
 in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.GuaranteedOrZeroCapacityOverTimePolicy.init(CapacitySchedulerContext,
 ParentQueue)  At GuaranteedOrZeroCapacityOverTimePolicy.java:[line 236] |
|  |  Null pointer dereference of childQueue in 
org.apache.hadoop.yarn.server.resourcema

[jira] [Commented] (YARN-6921) Allow resource request to opt out of oversubscription in Fair Scheduler

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260042#comment-16260042
 ] 

Hadoop QA commented on YARN-6921:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-1011 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
55s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
6s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in YARN-1011 has 2 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} YARN-1011 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 208 unchanged - 1 fixed = 209 total (was 209) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 56s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy |
|   | hadoop.yarn.server.resourcemanager.TestRMAdminService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6921 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898256/YARN-6921-YARN-1011.00.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4820b2623928 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-1011 / f2b85d6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/18585/artifact/out/branch-findbugs-hado

[jira] [Updated] (YARN-7531) ResourceRequest.equal does not check ExecutionTypeRequest.enforceExecutionType()

2017-11-20 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7531:
-
Attachment: YARN-7531.01.patch

Thanks [~yufeigu] for the review. I added a unit test that covers the bug.
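
For illustration, a minimal, self-contained sketch of the pitfall such a test 
guards against; the classes below are simplified stand-ins, not the actual YARN 
ResourceRequest or ExecutionTypeRequest implementations:
{code:java}
import java.util.Objects;

// Simplified stand-ins, not the real YARN types: an execution-type request
// whose equals() compares the enforcement flag as well as the type.
final class ExecTypeReq {
  final String executionType;        // e.g. "GUARANTEED" or "OPPORTUNISTIC"
  final boolean enforceExecutionType;

  ExecTypeReq(String executionType, boolean enforceExecutionType) {
    this.executionType = executionType;
    this.enforceExecutionType = enforceExecutionType;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof ExecTypeReq)) {
      return false;
    }
    ExecTypeReq other = (ExecTypeReq) o;
    // The fix being tested: compare the enforcement flag, not just the type.
    return executionType.equals(other.executionType)
        && enforceExecutionType == other.enforceExecutionType;
  }

  @Override
  public int hashCode() {
    return Objects.hash(executionType, enforceExecutionType);
  }

  public static void main(String[] args) {
    ExecTypeReq enforced = new ExecTypeReq("OPPORTUNISTIC", true);
    ExecTypeReq relaxed = new ExecTypeReq("OPPORTUNISTIC", false);
    // Requests differing only in enforceExecutionType must not be equal.
    System.out.println(enforced.equals(relaxed)); // prints: false
  }
}
{code}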

> ResourceRequest.equal does not check 
> ExecutionTypeRequest.enforceExecutionType()
> 
>
> Key: YARN-7531
> URL: https://issues.apache.org/jira/browse/YARN-7531
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-7531.01.patch, YARN-7531.prelim.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6669) Support security for YARN service framework

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259997#comment-16259997
 ] 

Hadoop QA commented on YARN-6669:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 26 new + 238 unchanged - 23 fixed = 264 total (was 261) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 19s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 59s{color} 
| {color:red} hadoop-yarn-registry in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
39s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.registry.secure.TestSecureRMRegistryOperations |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6669 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898551/YARN-6669.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5c2019c

[jira] [Updated] (YARN-6750) Add a configuration to cap how much a NM can be overallocated

2017-11-20 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6750:
-
Attachment: YARN-6750-YARN-1011.01.patch

Thanks [~miklos.szeg...@cloudera.com] for the review. I addressed your comments 
in the new patch.

> Add a configuration to cap how much a NM can be overallocated
> -
>
> Key: YARN-6750
> URL: https://issues.apache.org/jira/browse/YARN-6750
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6750-YARN-1011.00.patch, 
> YARN-6750-YARN-1011.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5534) Allow user provided Docker volume mount list

2017-11-20 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-5534:
--
Attachment: YARN-5534.004.patch

> Allow user provided Docker volume mount list
> 
>
> Key: YARN-5534
> URL: https://issues.apache.org/jira/browse/YARN-5534
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: luhuichun
>Assignee: Shane Kumpf
> Attachments: YARN-5534.001.patch, YARN-5534.002.patch, 
> YARN-5534.003.patch, YARN-5534.004.patch
>
>
> YARN-6623 added support in container-executor for admin supplied Docker 
> volume whitelists. This allows controlling which host directories can be 
> mounted into Docker containers launched by YARN. A read-only and read-write 
> whitelist was added. We now need the ability for users to supply the mounts 
> they require for their application, which will be validated against the admin 
> whitelist in container-executor.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7390) All reservation related test cases failed when TestYarnClient runs against Fair Scheduler.

2017-11-20 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7390:
---
Fix Version/s: (was: 3.0.1)
   3.1.0
   3.0.0

> All reservation related test cases failed when TestYarnClient runs against 
> Fair Scheduler.
> --
>
> Key: YARN-7390
> URL: https://issues.apache.org/jira/browse/YARN-7390
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, reservation system
>Affects Versions: 2.9.0, 3.0.0, 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 3.0.0, 3.1.0
>
> Attachments: YARN-7390.001.patch, YARN-7390.002.patch, 
> YARN-7390.003.patch, YARN-7390.004.patch, YARN-7390.005.patch
>
>
> All reservation related test cases failed when {{TestYarnClient}} runs 
> against Fair Scheduler. To reproduce it, you need to set scheduler class to 
> Fair Scheduler in yarn-default.xml.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7539) NullPointerException in timeline service v2

2017-11-20 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259992#comment-16259992
 ] 

Eric Yang commented on YARN-7539:
-

Hi [~vrushalic], yes, the job failed because the container could not be 
allocated. There are logs written locally to the node manager's userlogs. No 
logs were generated in HDFS, and the error only shows up once per failed job.

> NullPointerException in timeline service v2
> ---
>
> Key: YARN-7539
> URL: https://issues.apache.org/jira/browse/YARN-7539
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0
> Environment: Hadoop 3.1 (trunk)
> MIT Kerberos
> CentOS Linux release 7.0.1406
>Reporter: Eric Yang
>
> Application submission through the Docker YARN REST API generates a null 
> pointer exception after the job has finished.  In the ResourceManager log 
> file, we see this error:
> {code}
> 2017-11-20 18:41:46,395 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
>  Error when publishing entity TimelineEntity[type='YARN_CONTAINER', 
> id='container_1511201172997_0003_01_13']
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.putEntity(TimelineServiceV2Publisher.java:452)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.access$100(TimelineServiceV2Publisher.java:72)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher$TimelineV2EventHandler.handle(TimelineServiceV2Publisher.java:484)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher$TimelineV2EventHandler.handle(TimelineServiceV2Publisher.java:473)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
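> As a generic illustration only (hypothetical names, not the actual 
> TimelineServiceV2Publisher code), the kind of defensive check that would avoid 
> dereferencing a missing per-application collector once the job has finished:
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
> 
> // Hypothetical sketch: drop the entity instead of throwing an NPE when the
> // application's timeline collector has already been removed.
> public class TimelinePublishSketch {
>   static void putEntitySafely(Map<String, Object> collectors,
>       String appId, String entity) {
>     Object collector = collectors.get(appId); // may be null after app finish
>     if (collector == null) {
>       System.err.println("No timeline collector for " + appId
>           + "; dropping entity " + entity);
>       return;
>     }
>     // ... forward the entity to the collector here ...
>   }
> 
>   public static void main(String[] args) {
>     putEntitySafely(new HashMap<>(), "application_1511201172997_0003",
>         "YARN_CONTAINER entity");
>   }
> }
> {code}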



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7390) All reservation related test cases failed when TestYarnClient runs against Fair Scheduler.

2017-11-20 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259991#comment-16259991
 ] 

Subru Krishnan commented on YARN-7390:
--

[~yufeigu]/[~haibo.chen], thanks for fixing this. Shouldn't it be included in 
branch-2/2.9, since you have 2.9.0 in the affected versions? It would be great 
if you can run the test against branch-2/2.9 before pushing. Thanks!

> All reservation related test cases failed when TestYarnClient runs against 
> Fair Scheduler.
> --
>
> Key: YARN-7390
> URL: https://issues.apache.org/jira/browse/YARN-7390
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, reservation system
>Affects Versions: 2.9.0, 3.0.0, 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 3.0.1
>
> Attachments: YARN-7390.001.patch, YARN-7390.002.patch, 
> YARN-7390.003.patch, YARN-7390.004.patch, YARN-7390.005.patch
>
>
> All reservation related test cases failed when {{TestYarnClient}} runs 
> against Fair Scheduler. To reproduce it, you need to set scheduler class to 
> Fair Scheduler in yarn-default.xml.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5534) Allow user provided Docker volume mount list

2017-11-20 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259990#comment-16259990
 ] 

Shane Kumpf commented on YARN-5534:
---

I've updated the title and description to better reflect the current scope of 
the issue. I'm attaching a new patch that adds support for the user-provided 
mount list.

One item I did want to discuss: we still have 
{{YARN_CONTAINER_RUNTIME_DOCKER_LOCAL_RESOURCE_MOUNTS}}, which I believe we 
could do away with in favor of {{YARN_CONTAINER_RUNTIME_DOCKER_MOUNTS}}. The 
challenge is the additional validation currently being done for that option, 
and removing it would be a change in behavior. I don't think the additional 
validation is necessary with the changes in YARN-6623, and any validation 
should be handled by c-e anyway. I don't expect 
{{YARN_CONTAINER_RUNTIME_DOCKER_LOCAL_RESOURCE_MOUNTS}} has been widely used 
outside of YARN native services, but I don't know for certain. /cc 
[~billie.rinaldi]
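
For context, a small sketch of how a client might pass such a mount list 
through the container environment; the value syntax shown (comma-separated 
source:destination pairs) is an assumption based on this discussion, so the 
final patch is authoritative:
{code:java}
import java.util.HashMap;
import java.util.Map;

// Sketch only: the environment a client would attach to its
// ContainerLaunchContext. The mount-list value format is illustrative.
public class DockerMountEnvSketch {
  public static void main(String[] args) {
    Map<String, String> env = new HashMap<>();
    env.put("YARN_CONTAINER_RUNTIME_TYPE", "docker");
    env.put("YARN_CONTAINER_RUNTIME_DOCKER_IMAGE", "centos:7");
    env.put("YARN_CONTAINER_RUNTIME_DOCKER_MOUNTS",
        "/etc/passwd:/etc/passwd,/var/lib/app-data:/data");
    // The admin whitelist in container-executor still decides whether these
    // source directories may actually be mounted.
    env.forEach((k, v) -> System.out.println(k + "=" + v));
  }
}
{code}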

> Allow user provided Docker volume mount list
> 
>
> Key: YARN-5534
> URL: https://issues.apache.org/jira/browse/YARN-5534
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: luhuichun
>Assignee: Shane Kumpf
> Attachments: YARN-5534.001.patch, YARN-5534.002.patch, 
> YARN-5534.003.patch
>
>
> YARN-6623 added support in container-executor for admin supplied Docker 
> volume whitelists. This allows controlling which host directories can be 
> mounted into Docker containers launched by YARN. A read-only and read-write 
> whitelist was added. We now need the ability for users to supply the mounts 
> they require for their application, which will be validated against the admin 
> whitelist in container-executor.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7274) Ability to disable elasticity at leaf queue level

2017-11-20 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259978#comment-16259978
 ] 

Zian Chen commented on YARN-7274:
-

[~jlowe], thank you for your comments. As Wangda mentioned, we are on the same 
page about the solution. I'll change the code locally and do some quick tests; 
meanwhile, let's keep an eye on other comments from the community as well and 
see if we missed any corner cases in our preliminary solution. Thanks!

> Ability to disable elasticity at leaf queue level
> -
>
> Key: YARN-7274
> URL: https://issues.apache.org/jira/browse/YARN-7274
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Scott Brokaw
>Assignee: Zian Chen
>
> The 
> [documentation|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html]
>  defines yarn.scheduler.capacity.<queue-path>.maximum-capacity as "Maximum 
> queue capacity in percentage (%) as a float. This limits the elasticity for 
> applications in the queue. Defaults to -1 which disables it."
> However, setting this value to -1 sets maximum capacity to 100%, but I thought 
> (perhaps incorrectly) that the intention of the -1 setting was to disable 
> elasticity. This is confirmed by looking at the code:
> {code:java}
> public static final float MAXIMUM_CAPACITY_VALUE = 100;
> public static final float DEFAULT_MAXIMUM_CAPACITY_VALUE = -1.0f;
> ..
> maxCapacity = (maxCapacity == DEFAULT_MAXIMUM_CAPACITY_VALUE) ? 
> MAXIMUM_CAPACITY_VALUE : maxCapacity;
> {code}
> The sum of yarn.scheduler.capacity.<queue-path>.capacity for all queues, at 
> each level, must be equal to 100, but 
> yarn.scheduler.capacity.<queue-path>.maximum-capacity is actually 
> a percentage of the entire cluster, not just the parent queue. Yet it cannot 
> be set lower than the leaf queue's capacity setting. This seems to make it 
> impossible to disable elasticity at the leaf queue level.
> This improvement proposes that YARN allow elasticity to be 
> disabled at the leaf queue level even if a parent queue permits elasticity by 
> having a yarn.scheduler.capacity.<queue-path>.maximum-capacity greater than 
> its yarn.scheduler.capacity.<queue-path>.capacity.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6124) Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin -refreshQueues

2017-11-20 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259975#comment-16259975
 ] 

Zian Chen commented on YARN-6124:
-

Thank you [~eepayne] for your comments. Regarding the first comment on 
AdminService#refreshQueues, I totally agree with your suggestion since it calls 
all of the schedulers, not just the capacity scheduler. I'll update the 
comments according to your suggestions.

As for why conf.size() is called in AdminService#refreshQueues, the reason 
comes exactly from the exception you observed in your newest comment. Let me 
explain in detail.

I agree that SchedulingMonitorManager#updateSchedulingMonitors will call 
Configuration#getBoolean, which will call Configuration#getProps. However, if 
we look at the logic inside CapacityScheduler#reinitialize after applying 
YARN-6124.3.patch, we need to make sure we change the queue configuration 
without throwing any exception before we can enable the preemption-related 
configuration by calling super.reinitialize (which calls 
SchedulingMonitorManager#updateSchedulingMonitors in the end). When we change 
the queue configuration, we call csConfProvider#loadConfiguration, and at that 
point, if nothing has called Configuration#getProps yet, the configuration has 
not really been loaded into memory. Meanwhile, the input stream we specified 
for reading the Configuration inside AdminService#refreshQueues is already 
closed, which leads to the "refreshQueues: com.ctc.wstx.exc.WstxIOException: 
Stream closed" exception.

Because of this, we need to invoke Configuration#getProps right after we call 
Configuration#getConfiguration in order to load all the properties we need 
before the input stream closes. Wangda and I also think there should be a 
better way to achieve this, so we went through all the methods that invoke 
Configuration#getProps, but we only found conf.size(), which can be used for 
our purpose. That's why conf.size() is called here.

I hope this explains why we call conf.size() here; we would appreciate your 
help if you can find a better way to avoid the WstxIOException and still load 
the configuration. Thank you so much for your help!
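
To make the workaround concrete, here is a minimal sketch (assuming a 
Configuration built from an InputStream, as in refreshQueues) of why an 
otherwise pointless conf.size() call matters: it forces getProps() to parse the 
stream before the stream is closed.
{code:java}
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;

// Sketch of the workaround described above. Configuration parses InputStream
// resources lazily, so the properties must be materialized (size() goes
// through getProps()) while the stream is still open.
public class EagerLoadSketch {
  public static void main(String[] args) throws Exception {
    String xml = "<configuration><property>"
        + "<name>yarn.resourcemanager.scheduler.monitor.enable</name>"
        + "<value>true</value></property></configuration>";
    InputStream in = new ByteArrayInputStream(xml.getBytes("UTF-8"));
    Configuration conf = new Configuration(false);
    conf.addResource(in);
    conf.size();   // forces getProps() to read 'in' now
    in.close();    // from here on, the stream can no longer be parsed
    // Without the size() call above, this lookup would be the first thing to
    // trigger parsing of the already-closed stream and would fail, which is
    // the "Stream closed" WstxIOException behavior described above.
    System.out.println(conf.getBoolean(
        "yarn.resourcemanager.scheduler.monitor.enable", false));
  }
}
{code}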


> Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin 
> -refreshQueues
> -
>
> Key: YARN-6124
> URL: https://issues.apache.org/jira/browse/YARN-6124
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Wangda Tan
>Assignee: Zian Chen
> Attachments: YARN-6124.wip.1.patch, YARN-6124.wip.2.patch, 
> YARN-6124.wip.3.patch
>
>
> Now enabled / disable / update SchedulingEditPolicy config requires restart 
> RM. This is inconvenient when admin wants to make changes to 
> SchedulingEditPolicies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5534) Allow user provided Docker volume mount list

2017-11-20 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-5534:
--
Description: YARN-6623 added support in container-executor for admin 
supplied Docker volume whitelists. This allows controlling which host 
directories can be mounted into Docker containers launched by YARN. A read-only 
and read-write whitelist was added. We now need the ability for users to supply 
the mounts they require for their application, which will be validated against 
the admin whitelist in container-executor.  (was: Introduction 

Mounting files or directories from the host is one way of passing configuration 
and other information into a docker container. 
We could allow the user to set a list of mounts in the environment of 
ContainerLaunchContext (e.g. /dir1:/targetdir1,/dir2:/targetdir2). 
These would be mounted read-only to the specified target locations. This has 
been resolved in YARN-4595

2. Problem Definition

But mounting arbitrary volumes into a Docker container can be a security risk.

3. Possible solutions

One approach to provide safe mounts is to allow the cluster administrator to 
configure a set of parent directories as whitelisted mounting directories.
 Add a property named yarn.nodemanager.volume-mounts.white-list; when the 
container executor does mount checking, only the allowed directories or 
sub-directories can be mounted. )

> Allow user provided Docker volume mount list
> 
>
> Key: YARN-5534
> URL: https://issues.apache.org/jira/browse/YARN-5534
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: luhuichun
>Assignee: Shane Kumpf
> Attachments: YARN-5534.001.patch, YARN-5534.002.patch, 
> YARN-5534.003.patch
>
>
> YARN-6623 added support in container-executor for admin supplied Docker 
> volume whitelists. This allows controlling which host directories can be 
> mounted into Docker containers launched by YARN. A read-only and read-write 
> whitelist was added. We now need the ability for users to supply the mounts 
> they require for their application, which will be validated against the admin 
> whitelist in container-executor.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6128) Add support for AMRMProxy HA

2017-11-20 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259972#comment-16259972
 ] 

Botong Huang commented on YARN-6128:


Cool, thx [~subru]! 

> Add support for AMRMProxy HA
> 
>
> Key: YARN-6128
> URL: https://issues.apache.org/jira/browse/YARN-6128
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: amrmproxy, nodemanager
>Reporter: Subru Krishnan
>Assignee: Botong Huang
> Fix For: 3.1.0, 2.9.1
>
> Attachments: YARN-6128-branch-2.v10.patch, YARN-6128.v0.patch, 
> YARN-6128.v1.patch, YARN-6128.v1.patch, YARN-6128.v10.patch, 
> YARN-6128.v10.patch, YARN-6128.v2.patch, YARN-6128.v3.patch, 
> YARN-6128.v3.patch, YARN-6128.v4.patch, YARN-6128.v5.patch, 
> YARN-6128.v6.patch, YARN-6128.v7.patch, YARN-6128.v8.patch, YARN-6128.v9.patch
>
>
> YARN-556 added the ability for RM failover without loosing any running 
> applications. In a Federated YARN environment, there's additional state in 
> the {{AMRMProxy}} to allow for spanning across multiple sub-clusters, so we 
> need to enhance {{AMRMProxy}} to support HA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6645) Bug fix in ContainerImpl when calling the symLink of LinuxContainerExecutor

2017-11-20 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6645:
-
Fix Version/s: (was: 2.9.0)
   2.9.1

> Bug fix in ContainerImpl when calling the symLink of LinuxContainerExecutor
> ---
>
> Key: YARN-6645
> URL: https://issues.apache.org/jira/browse/YARN-6645
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Bingxue Qiu
> Fix For: 2.9.1
>
> Attachments: error when creating symlink.png
>
>
> When creating a symlink after the resource is localized in our clusters, an 
> IOException is thrown because the nmPrivateDir doesn't exist. We add a patch 
> to fix it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7278) LinuxContainer in docker mode will be failed when nodemanager restart, because timeout for docker is too slow.

2017-11-20 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-7278:
-
Fix Version/s: (was: 2.9.0)
   2.9.1

> LinuxContainer in docker mode will be failed when nodemanager restart, 
> because timeout for docker is too slow.
> --
>
> Key: YARN-7278
> URL: https://issues.apache.org/jira/browse/YARN-7278
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.0
> Environment: CentOS
>Reporter: zhengchenyu
> Fix For: 2.9.1
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> In our cluster, nodemanager recovery is turned on, and we use LinuxContainer 
> in docker mode.
> Containers may fail when the nodemanager restarts; the exception is below:
> {code}
> [2017-09-29T15:47:14.433+08:00] [INFO] 
> containermanager.monitor.ContainersMonitorImpl.run(ContainersMonitorImpl.java 
> 472) [Container Monitor] : Memory usage of ProcessTree 120523 for 
> container-id container_1506600355508_0023_01_04: -1B of 10 GB physical 
> memory used; -1B of 31 GB virtual memory used
> [2017-09-29T15:47:15.219+08:00] [ERROR] 
> containermanager.launcher.RecoveredContainerLaunch.call(RecoveredContainerLaunch.java
>  93) [ContainersLauncher #1] : Unable to recover container 
> container_1506600355508_0023_01_04
> java.io.IOException: Timeout while waiting for exit code from 
> container_1506600355508_0023_01_04
> [2017-09-29T15:47:15.220+08:00] [INFO] 
> containermanager.container.ContainerImpl.handle(ContainerImpl.java 1142) 
> [AsyncDispatcher event handler] : Container 
> container_1506600355508_0023_01_04 transitioned from RUNNING to 
> EXITED_WITH_FAILURE
> [2017-09-29T15:47:15.221+08:00] [INFO] 
> containermanager.launcher.ContainerLaunch.cleanupContainer(ContainerLaunch.java
>  440) [AsyncDispatcher event handler] : Cleaning up container 
> container_1506600355508_0023_01_04
> {code}
> I guess the process is done, but 2 seconds later (the variable is msecLeft), 
> the *.pid.exitcode still wasn't created. Then I changed the variable to 2ms, 
> and the container succeeded when the nodemanager restarted.
> So I think the timeout is too short for the docker container to complete the 
> work.
> In docker mode of LinuxContainer, the NM monitors the real task, which is 
> launched by the "docker run" command. Then the "docker wait" command waits 
> for the exit code, then "docker rm" deletes the docker container. Lastly, 
> container-executor writes the exit code. So if some docker command is slow 
> enough, the NM can no longer monitor the container. In fact, docker rm is 
> always slow. 
> I think the exit code of docker rm doesn't matter for the real task, so I 
> think we could move the write of "*.pid.exitcode" before the docker rm 
> command, or monitor the docker wait process instead of the real task.
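> As a rough illustration of the timing problem (hypothetical names, not the 
> actual container-executor or RecoveredContainerLaunch code): recovery only 
> polls for the exit-code file for a bounded time, so anything that delays 
> writing it, such as a slow docker rm, makes recovery report a timeout even 
> though the task finished:
> {code:java}
> import java.io.File;
> 
> // Hypothetical sketch of a bounded wait for "<pid>.exitcode". If the file is
> // written only after a slow `docker rm`, the deadline passes first and the
> // recovery path sees "Timeout while waiting for exit code".
> public class ExitCodeWaitSketch {
>   static Integer waitForExitCode(File exitCodeFile, long timeoutMs)
>       throws InterruptedException {
>     long deadline = System.currentTimeMillis() + timeoutMs;
>     while (System.currentTimeMillis() < deadline) {
>       if (exitCodeFile.exists()) {
>         return 0; // real code would parse the file contents here
>       }
>       Thread.sleep(100);
>     }
>     return null; // timed out
>   }
> 
>   public static void main(String[] args) throws InterruptedException {
>     Integer code = waitForExitCode(
>         new File("/tmp/example-container.pid.exitcode"), 2000);
>     System.out.println(code == null ? "recovery timed out" : "exit " + code);
>   }
> }
> {code}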



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5534) Allow user provided Docker volume mount list

2017-11-20 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-5534:
--
Summary: Allow user provided Docker volume mount list  (was: Allow 
whitelisted volume mounts )

> Allow user provided Docker volume mount list
> 
>
> Key: YARN-5534
> URL: https://issues.apache.org/jira/browse/YARN-5534
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: luhuichun
>Assignee: Shane Kumpf
> Attachments: YARN-5534.001.patch, YARN-5534.002.patch, 
> YARN-5534.003.patch
>
>
> Introduction 
> Mounting files or directories from the host is one way of passing 
> configuration and other information into a docker container. 
> We could allow the user to set a list of mounts in the environment of 
> ContainerLaunchContext (e.g. /dir1:/targetdir1,/dir2:/targetdir2). 
> These would be mounted read-only to the specified target locations. This has 
> been resolved in YARN-4595
> 2. Problem Definition
> But mounting arbitrary volumes into a Docker container can be a security risk.
> 3. Possible solutions
> One approach to provide safe mounts is to allow the cluster administrator to 
> configure a set of parent directories as whitelisted mounting directories.
>  Add a property named yarn.nodemanager.volume-mounts.white-list; when the 
> container executor does mount checking, only the allowed directories or 
> sub-directories can be mounted. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6606) The implementation of LocalizationStatus in ContainerStatusProto

2017-11-20 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6606:
-
Fix Version/s: (was: 2.9.0)
   2.9.1

> The implementation of LocalizationStatus in ContainerStatusProto
> 
>
> Key: YARN-6606
> URL: https://issues.apache.org/jira/browse/YARN-6606
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: nodemanager
>Affects Versions: 2.9.0
>Reporter: Bingxue Qiu
> Fix For: 2.9.1
>
> Attachments: YARN-6606.1.patch, YARN-6606.2.patch
>
>
> We have a use case where the full implementation of the localization status 
> in ContainerStatusProto 
> [Continuous-resource-localization|https://issues.apache.org/jira/secure/attachment/12825041/Continuous-resource-localization.pdf]
> needs to be done, so we implemented it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6661) Too much CLEANUP event hang ApplicationMasterLauncher thread pool

2017-11-20 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6661:
-
Fix Version/s: (was: 2.9.0)
   2.9.1

> Too much CLEANUP event hang ApplicationMasterLauncher thread pool
> -
>
> Key: YARN-6661
> URL: https://issues.apache.org/jira/browse/YARN-6661
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.2
> Environment: hadoop 2.7.2 
>Reporter: JackZhou
> Fix For: 2.9.1
>
>
> Someone else has already come up with a similar problem and fixed it; see 
> YARN-3809 (https://issues.apache.org/jira/browse/YARN-3809) for details.
> But I think that fix does not solve the problem completely. Below is the 
> problem I encountered:
> There are about 1000 nodes in my Hadoop cluster, and I submitted about 1800 apps.
> I failed over my active RM, and the RM then recovered all 1800 apps.
> When an application is recovered, the RM waits for its AM container to register 
> itself. But there is a bug in my AM (introduced intentionally), so it never 
> registers.
> The RM therefore waits about 10 minutes for the AM to expire and then sends a 
> CLEANUP event to the ApplicationMasterLauncher thread pool. Because there are 
> about 1800 apps, this ties up the ApplicationMasterLauncher thread pool for a 
> long time. I have already applied the 
> patch(https://issues.apache.org/jira/secure/attachment/12740804/YARN-3809.03.patch),
> so a single CLEANUP event can hang a thread for 10 * 20 = 200s. With 1800 apps 
> spread over 50 threads, each thread stays busy for 1800 / 50 * 200s = 7200s, 
> i.e. about 2 hours.
> Because the AM has not registered itself within 10 minutes, the RM retries and 
> creates a new application attempt. 
> The new attempt is allocated a container by the RM and sends a LAUNCH event to 
> the ApplicationMasterLauncher thread pool.
> But since the 1800 CLEANUP events keep all 50 threads busy for hours, the new 
> attempt cannot start its AM container within 10 minutes. 
> It expires as well and sends yet another CLEANUP event to the 
> ApplicationMasterLauncher thread pool.
> As you can see, none of my applications can actually run. 
> Each of them has 5 application attempts as follows, and each keeps 
> retrying.
> appattempt_1495786030132_4000_05
> appattempt_1495786030132_4000_04
> appattempt_1495786030132_4000_03
> appattempt_1495786030132_4000_02  
> appattempt_1495786030132_4000_01
> So all of my apps hung for several hours, and none of them could actually run. 
> I think this is a bug! We could treat CLEANUP and LAUNCH as different events 
> and use a separate thread pool (or some other mechanism) to deal with LAUNCH 
> events.
> Sorry, my English is not good; I hope I have described this clearly.
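A minimal sketch of the "treat CLEANUP and LAUNCH as different events" idea from the description: give each event type its own bounded pool so a backlog of cleanups cannot starve launches. The class name, enum and pool sizes are purely illustrative and are not YARN's actual ApplicationMasterLauncher code.
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SplitLauncherPools {
  enum AMLauncherEventType { LAUNCH, CLEANUP }

  // Hypothetical sizes; the point is only that the two event types
  // no longer share a single pool of 50 threads.
  private final ExecutorService launchPool = Executors.newFixedThreadPool(50);
  private final ExecutorService cleanupPool = Executors.newFixedThreadPool(50);

  void handle(AMLauncherEventType type, Runnable work) {
    // A slow CLEANUP backlog now queues on its own pool instead of
    // blocking the threads that start new AM containers.
    if (type == AMLauncherEventType.LAUNCH) {
      launchPool.submit(work);
    } else {
      cleanupPool.submit(work);
    }
  }
}
{code}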



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7190) Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user classpath

2017-11-20 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-7190:
-
Fix Version/s: (was: 2.9.0)
   2.9.1

> Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user 
> classpath
> 
>
> Key: YARN-7190
> URL: https://issues.apache.org/jira/browse/YARN-7190
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Varun Saxena
> Fix For: YARN-5355_branch2, 2.9.1
>
> Attachments: YARN-7190-YARN-5355_branch2.01.patch, 
> YARN-7190-YARN-5355_branch2.02.patch, YARN-7190-YARN-5355_branch2.03.patch, 
> YARN-7190.01.patch
>
>
> [~jlowe] had a good observation about the user classpath getting extra jars 
> in hadoop 2.x brought in with TSv2.  If users start picking up Hadoop 2.x's 
> version of the HBase jars instead of the ones they shipped with their job, it 
> could be a problem.
> So when TSv2 is to be used in 2.x, the hbase-related jars should go only onto 
> the NM classpath, not the user classpath.
> Here is a list of some jars
> {code}
> commons-csv-1.0.jar
> commons-el-1.0.jar
> commons-httpclient-3.1.jar
> disruptor-3.3.0.jar
> findbugs-annotations-1.3.9-1.jar
> hbase-annotations-1.2.6.jar
> hbase-client-1.2.6.jar
> hbase-common-1.2.6.jar
> hbase-hadoop2-compat-1.2.6.jar
> hbase-hadoop-compat-1.2.6.jar
> hbase-prefix-tree-1.2.6.jar
> hbase-procedure-1.2.6.jar
> hbase-protocol-1.2.6.jar
> hbase-server-1.2.6.jar
> htrace-core-3.1.0-incubating.jar
> jamon-runtime-2.4.1.jar
> jasper-compiler-5.5.23.jar
> jasper-runtime-5.5.23.jar
> jcodings-1.0.8.jar
> joni-2.1.2.jar
> jsp-2.1-6.1.14.jar
> jsp-api-2.1-6.1.14.jar
> jsr311-api-1.1.1.jar
> metrics-core-2.2.0.jar
> servlet-api-2.5-6.1.14.jar
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7520) Queue Ordering policy changes for ordering auto created leaf queues within Managed parent Queues

2017-11-20 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7520:
---
Attachment: YARN-7520.1.patch

> Queue Ordering policy changes for ordering auto created leaf queues within 
> Managed parent Queues
> 
>
> Key: YARN-7520
> URL: https://issues.apache.org/jira/browse/YARN-7520
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7520.1.patch
>
>
> Queue Ordering policy currently uses priority, utilization and absolute 
> capacity for pre-configured parent queues to order leaf queues while 
> assigning containers. It needs modifications for auto created leaf queues 
> since they can have zero capacity



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7520) Queue Ordering policy changes for ordering auto created leaf queues within Managed parent Queues

2017-11-20 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7520:
---
Attachment: (was: YARN-7520.1.patch)

> Queue Ordering policy changes for ordering auto created leaf queues within 
> Managed parent Queues
> 
>
> Key: YARN-7520
> URL: https://issues.apache.org/jira/browse/YARN-7520
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7520.1.patch
>
>
> Queue Ordering policy currently uses priority, utilization and absolute 
> capacity for pre-configured parent queues to order leaf queues while 
> assigning containers. It needs modifications for auto created leaf queues 
> since they can have zero capacity



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7520) Queue Ordering policy changes for ordering auto created leaf queues within Managed parent Queues

2017-11-20 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7520:
---
Attachment: YARN-7520.1.patch

Attached a patch which ensures that leaf queues with guaranteed absolute 
capacity are ordered ahead of zero-capacity queues under ManagedParentQueue(s). 
The behaviour should be the same as the previous logic for pre-configured leaf 
queues under a ParentQueue.
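Not the actual patch, but a rough illustration of the ordering described above: leaf queues with a non-zero guaranteed absolute capacity sort ahead of zero-capacity (auto-created but not yet activated) queues, and the usual utilization ordering applies within each group. All names and fields below are invented for the sketch.
{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

class QueueInfo {
  String name;
  float absoluteCapacity;  // guaranteed absolute capacity, 0 for a fresh auto-created queue
  float usedCapacity;      // utilization, used as the secondary ordering key

  QueueInfo(String name, float absoluteCapacity, float usedCapacity) {
    this.name = name;
    this.absoluteCapacity = absoluteCapacity;
    this.usedCapacity = usedCapacity;
  }
}

class GuaranteedFirstOrdering {
  // Queues with guaranteed capacity come first; within each group, less used queues first.
  static final Comparator<QueueInfo> ORDER =
      Comparator.<QueueInfo>comparingInt(q -> q.absoluteCapacity > 0 ? 0 : 1)
                .thenComparingDouble(q -> q.usedCapacity);

  public static void main(String[] args) {
    List<QueueInfo> queues = new ArrayList<>(Arrays.asList(
        new QueueInfo("auto-new", 0f, 0f),
        new QueueInfo("guaranteed", 0.2f, 0.5f)));
    queues.sort(ORDER);
    queues.forEach(q -> System.out.println(q.name)); // guaranteed, then auto-new
  }
}
{code}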

> Queue Ordering policy changes for ordering auto created leaf queues within 
> Managed parent Queues
> 
>
> Key: YARN-7520
> URL: https://issues.apache.org/jira/browse/YARN-7520
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7520.1.patch
>
>
> Queue Ordering policy currently uses priority, utilization and absolute 
> capacity for pre-configured parent queues to order leaf queues while 
> assigning containers. It needs modifications for auto created leaf queues 
> since they can have zero capacity



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6168) Restarted RM may not inform AM about all existing containers

2017-11-20 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259960#comment-16259960
 ] 

Jian He commented on YARN-6168:
---

Could you also add more detailed comments in 
AllocateResponse#get/setContainersFromPreviousAttempts to explain the scenario 
in which the containers might not have been received in the previous register call?

> Restarted RM may not inform AM about all existing containers
> 
>
> Key: YARN-6168
> URL: https://issues.apache.org/jira/browse/YARN-6168
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Chandni Singh
> Attachments: YARN-6168.001.patch
>
>
> There appears to be a race condition when an RM is restarted. I had a 
> situation where the RMs and AM were down, but NMs and app containers were 
> still running. When I restarted the RM, the AM restarted, registered with the 
> RM, and received its list of existing containers before the NMs had reported 
> all of their containers to the RM. The AM was only told about some of the 
> app's existing containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6168) Restarted RM may not inform AM about all existing containers

2017-11-20 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259957#comment-16259957
 ] 

Jian He commented on YARN-6168:
---

- AllocateResponsePBImpl#mergeLocalToBuilder needs some changes too?
- recoveredPreviousAttemptContainers: the type can be Container, so that 
pullPreviousAttemptContainers doesn't need to transform RMContainer into 
Container.
- I think getLiveContainers and clearPreviousContainers need to be in the same 
synchronization block. Otherwise it is possible to lose previous containers, 
for example: 
1. The AM acquires the live containers on register
2. A container is added to both the live and previous-attempt lists
3. The previous-attempt list is cleared 
{code}
Collection<RMContainer> liveContainers =
app.getCurrentAppAttempt().getLiveContainers();
app.getCurrentAppAttempt().resetPreviousAttemptContainers();
{code}
- Could you add comments in the header of 
testContainersFromPreviousAttemptsWithRMRestart to explain what the test does, 
so that others don't need to dig into the code to understand it?
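On the synchronization point above, a generic sketch (not the RM code) of the pattern being asked for: the read of the live containers and the reset of the previous-attempt list happen under the same lock, so a container recovered between the two operations cannot be dropped.
{code}
import java.util.ArrayList;
import java.util.Collection;

class ContainerLists<T> {
  private final Collection<T> liveContainers = new ArrayList<>();
  private final Collection<T> previousAttemptContainers = new ArrayList<>();

  /** Called when a container from a previous attempt is recovered. */
  synchronized void recover(T container) {
    liveContainers.add(container);
    previousAttemptContainers.add(container);
  }

  /** Snapshot the live containers and clear the previous-attempt list atomically,
   *  mirroring the two calls in the snippet above but under one lock. */
  synchronized Collection<T> snapshotForRegister() {
    Collection<T> copy = new ArrayList<>(liveContainers);
    previousAttemptContainers.clear();
    return copy;
  }
}
{code}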

> Restarted RM may not inform AM about all existing containers
> 
>
> Key: YARN-6168
> URL: https://issues.apache.org/jira/browse/YARN-6168
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Chandni Singh
> Attachments: YARN-6168.001.patch
>
>
> There appears to be a race condition when an RM is restarted. I had a 
> situation where the RMs and AM were down, but NMs and app containers were 
> still running. When I restarted the RM, the AM restarted, registered with the 
> RM, and received its list of existing containers before the NMs had reported 
> all of their containers to the RM. The AM was only told about some of the 
> app's existing containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7473) Implement Framework and policy for capacity management of auto created queues

2017-11-20 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7473:
---
Attachment: YARN-7473.4.patch

Removed Clock from AutoCreatedQueueManagementPolicy.reinitialize

> Implement Framework and policy for capacity management of auto created queues 
> --
>
> Key: YARN-7473
> URL: https://issues.apache.org/jira/browse/YARN-7473
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7473.1.patch, YARN-7473.2.patch, YARN-7473.3.patch, 
> YARN-7473.4.patch
>
>
> This jira mainly addresses the following
>  
> 1. Support adding pluggable policies on the parent queue for dynamically managing 
> capacity/state for leaf queues.
> 2. Implement a default policy that manages capacity based on pending 
> applications and grants either guaranteed or zero capacity to queues based on 
> the parent's available guaranteed capacity.
> 3. Integrate with SchedulingEditPolicy framework to trigger this periodically 
> and signal scheduler to take necessary actions for capacity/queue management.
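As a toy illustration of the default policy in point 2 (invented names and signature, not the patch): queues with pending applications split the parent's available guaranteed capacity, while idle queues get zero.
{code}
import java.util.LinkedHashMap;
import java.util.Map;

class ToyAutoQueueCapacityPolicy {
  static Map<String, Float> computeCapacities(
      float parentAvailableCapacity, Map<String, Boolean> queueHasPendingApps) {
    long active = queueHasPendingApps.values().stream().filter(p -> p).count();
    float share = active == 0 ? 0f : parentAvailableCapacity / active;
    Map<String, Float> capacities = new LinkedHashMap<>();
    queueHasPendingApps.forEach(
        (queue, pending) -> capacities.put(queue, pending ? share : 0f));
    return capacities;
  }

  public static void main(String[] args) {
    Map<String, Boolean> queues = new LinkedHashMap<>();
    queues.put("user1", true);   // pending applications
    queues.put("user2", false);  // idle
    queues.put("user3", true);
    // {user1=0.3, user2=0.0, user3=0.3}
    System.out.println(computeCapacities(0.6f, queues));
  }
}
{code}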



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7473) Implement Framework and policy for capacity management of auto created queues

2017-11-20 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7473:
---
Attachment: YARN-7473.3.patch

Thanks for the review, [~wangda]. Attached a patch with the comments addressed. 

> Implement Framework and policy for capacity management of auto created queues 
> --
>
> Key: YARN-7473
> URL: https://issues.apache.org/jira/browse/YARN-7473
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7473.1.patch, YARN-7473.2.patch, YARN-7473.3.patch
>
>
> This jira mainly addresses the following
>  
> 1. Support adding pluggable policies on the parent queue for dynamically managing 
> capacity/state for leaf queues.
> 2. Implement a default policy that manages capacity based on pending 
> applications and grants either guaranteed or zero capacity to queues based on 
> the parent's available guaranteed capacity.
> 3. Integrate with SchedulingEditPolicy framework to trigger this periodically 
> and signal scheduler to take necessary actions for capacity/queue management.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6750) Add a configuration to cap how much a NM can be overallocated

2017-11-20 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259942#comment-16259942
 ] 

Miklos Szegedi commented on YARN-6750:
--

Thank you for the patch [~haibochen].
{code}
99  
100   public SchedulerNode(RMNode node, boolean usePortForNodeName,
{code}
There is an extra space here.
All references to 4.0 in the patch should be replaced with 
YarnConfiguration.DEFAULT_PER_NODE_MAX_OVERALLOCATION_RATIO.

> Add a configuration to cap how much a NM can be overallocated
> -
>
> Key: YARN-6750
> URL: https://issues.apache.org/jira/browse/YARN-6750
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6750-YARN-1011.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6921) Allow resource request to opt out of oversubscription in Fair Scheduler

2017-11-20 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259934#comment-16259934
 ] 

Miklos Szegedi commented on YARN-6921:
--

+1 pending jenkins. Thank you for the patch [~haibochen]

> Allow resource request to opt out of oversubscription in Fair Scheduler
> ---
>
> Key: YARN-6921
> URL: https://issues.apache.org/jira/browse/YARN-6921
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6921-YARN-1011.00.patch
>
>
> Guaranteed container requests, whether their enforce tag is true or not, are by 
> default eligible for oversubscription and can thus receive OPPORTUNISTIC 
> container allocations. We should allow them to opt out when their enforce tag 
> is set to true.
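From the application side, the opt-out described above is the "enforce execution type" flag on the request. A hedged sketch of asking for a container that must stay GUARANTEED; the builder/record method names are as I recall them from the public ResourceRequest/ExecutionTypeRequest API and may differ on the YARN-1011 branch.
{code}
import org.apache.hadoop.yarn.api.records.ExecutionType;
import org.apache.hadoop.yarn.api.records.ExecutionTypeRequest;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class OptOutOfOversubscription {
  public static void main(String[] args) {
    // enforceExecutionType = true: the request must stay GUARANTEED, so it
    // should not be satisfied from oversubscribed (OPPORTUNISTIC) capacity.
    ResourceRequest request = ResourceRequest.newBuilder()
        .priority(Priority.newInstance(1))
        .resourceName(ResourceRequest.ANY)
        .capability(Resource.newInstance(1024, 1))
        .numContainers(1)
        .executionTypeRequest(
            ExecutionTypeRequest.newInstance(ExecutionType.GUARANTEED, true))
        .build();
    System.out.println(request);
  }
}
{code}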



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6669) Support security for YARN service framework

2017-11-20 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6669:
--
Attachment: YARN-6669.02.patch

Fixed the UT failures 

> Support security for YARN service framework
> ---
>
> Key: YARN-6669
> URL: https://issues.apache.org/jira/browse/YARN-6669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6669.01.patch, YARN-6669.02.patch, 
> YARN-6669.yarn-native-services.01.patch, 
> YARN-6669.yarn-native-services.03.patch, 
> YARN-6669.yarn-native-services.04.patch, 
> YARN-6669.yarn-native-services.05.patch
>
>
> Changes include:
> - Make the registry client programmatically generate the jaas conf for secure 
> access to the ZK quorum
> - Create a KerberosPrincipal resource object in the REST API for the user to 
> supply the Kerberos keytab and principal 
> - The user has two ways to configure it:
> -- If the keytab starts with "hdfs://", the keytab will be localized by YARN
> -- If the keytab starts with "file://", it is assumed that the keytab is 
> available on the localhost.
> - The AM will use the keytab to log in
> - ServiceClient is changed to ask for an HDFS delegation token when submitting 
> the service
> - The AM code will use the tokens when launching containers 
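To make the keytab options above concrete, a hedged client-side sketch of building a secure service spec. The setter names are inferred from the REST fields and the api.records classes referenced in this thread, so treat them as assumptions rather than the exact API.
{code}
import org.apache.hadoop.yarn.service.api.records.KerberosPrincipal;
import org.apache.hadoop.yarn.service.api.records.Service;

public class SecureServiceSpec {
  public static void main(String[] args) {
    KerberosPrincipal kp = new KerberosPrincipal();
    kp.setPrincipalName("hbase/_HOST@EXAMPLE.COM");
    // "hdfs://" keytabs are localized by YARN; "file://" keytabs must already
    // be present on the hosts, per the description above.
    kp.setKeytab("hdfs:///user/hbase/hbase.keytab");

    Service service = new Service();
    service.setName("hbase-app");
    service.setKerberosPrincipal(kp);
    System.out.println(service);
  }
}
{code}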



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7337) Expose per-node over-allocation info in Node Report

2017-11-20 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259899#comment-16259899
 ] 

Miklos Szegedi edited comment on YARN-7337 at 11/20/17 9:50 PM:


Thank you for the patch, [~haibochen].
{code}
153   @Public
154   @Unstable
155   public abstract Resource getGuaranteedResourceUsed();
156 
...
160 
161   /**
162* Get opportunistic Resource used on the node.
163* @return opportunistic Resource used on the 
node
164*/
165   @Public
166   @Stable
167   public abstract Resource getOpportunisticResourceUsed();
{code}
The stability annotation usage is inconsistent in the case of these two functions.
{code}
45/**
46 * @return the amount of resources currently used by the node.
47 */
{code}
This should mention the amount of guaranteed resources.


was (Author: miklos.szeg...@cloudera.com):
Thank you for the patch, [~haibochen].
{code}
153   @Public
154   @Unstable
155   public abstract Resource getGuaranteedResourceUsed();
156 
...
160 
161   /**
162* Get opportunistic Resource used on the node.
163* @return opportunistic Resource used on the 
node
164*/
165   @Public
166   @Stable
167   public abstract Resource getOpportunisticResourceUsed();
{code}
The stability annotation usage is inconsistent in the case of these two functions.

> Expose per-node over-allocation info in Node Report
> ---
>
> Key: YARN-7337
> URL: https://issues.apache.org/jira/browse/YARN-7337
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-7337-YARN-1011.00.patch, 
> YARN-7337-YARN-1011.01.patch, YARN-7337-YARN-1011.02.patch, 
> YARN-7337-YARN-1011.03.patch, YARN-7337-YARN-1011.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7337) Expose per-node over-allocation info in Node Report

2017-11-20 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259899#comment-16259899
 ] 

Miklos Szegedi commented on YARN-7337:
--

Thank you for the patch, [~haibochen].
{code}
153   @Public
154   @Unstable
155   public abstract Resource getGuaranteedResourceUsed();
156 
...
160 
161   /**
162* Get opportunistic Resource used on the node.
163* @return opportunistic Resource used on the 
node
164*/
165   @Public
166   @Stable
167   public abstract Resource getOpportunisticResourceUsed();
{code}
The stability annotation usage is inconsistent in the case of these two functions.

> Expose per-node over-allocation info in Node Report
> ---
>
> Key: YARN-7337
> URL: https://issues.apache.org/jira/browse/YARN-7337
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-7337-YARN-1011.00.patch, 
> YARN-7337-YARN-1011.01.patch, YARN-7337-YARN-1011.02.patch, 
> YARN-7337-YARN-1011.03.patch, YARN-7337-YARN-1011.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6647) RM can crash during transitionToStandby due to InterruptedException

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259888#comment-16259888
 ] 

Hadoop QA commented on YARN-6647:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
1s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m  2s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6647 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898514/YARN-6647.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d96faea4e75e 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c326fc8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/18580/artif

[jira] [Commented] (YARN-7513) FindBugs in FSAppAttempt.getWeight()

2017-11-20 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259881#comment-16259881
 ] 

Daniel Templeton commented on YARN-7513:


Fine by me.

> FindBugs in FSAppAttempt.getWeight()
> 
>
> Key: YARN-7513
> URL: https://issues.apache.org/jira/browse/YARN-7513
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Minor
> Attachments: YARN-7513.001.patch
>
>
> With the change from YARN-7414 a new FindBugs warning was introduced.
> The code that was moved from the FairScheduler to the FSAppAttempt can also 
> be simplified by removing the unneeded locking.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7513) FindBugs in FSAppAttempt.getWeight()

2017-11-20 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259875#comment-16259875
 ] 

Yufei Gu commented on YARN-7513:


I'll commit it tomorrow if there are no objections by then.

> FindBugs in FSAppAttempt.getWeight()
> 
>
> Key: YARN-7513
> URL: https://issues.apache.org/jira/browse/YARN-7513
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Minor
> Attachments: YARN-7513.001.patch
>
>
> With the change from YARN-7414 a new FindBugs warning was introduced.
> The code that was moved from the FairScheduler to the FSAppAttempt can also 
> be simplified by removing the unneeded locking.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6669) Support security for YARN service framework

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259868#comment-16259868
 ] 

Hadoop QA commented on YARN-6669:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  0s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 46 new + 237 unchanged - 23 fixed = 283 total (was 260) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
52s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 58s{color} 
| {color:red} hadoop-yarn-registry in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 41s{color} 
| {color:red} hadoop-yarn-services-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 |
|  |  Class org.apache.hadoop.yarn.service.api.records.Service defines 
non-transient non-serializable instance field kerberosPrincipal  In 
Service.java:instance field kerberosPrincipal  In Service.java |
|

[jira] [Updated] (YARN-7527) Over-allocate node resource in async-scheduling mode of CapacityScheduler

2017-11-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7527:
-
Fix Version/s: 3.0.1
   3.1.0

> Over-allocate node resource in async-scheduling mode of CapacityScheduler
> -
>
> Key: YARN-7527
> URL: https://issues.apache.org/jira/browse/YARN-7527
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.0.0-alpha4, 2.9.1
>Reporter: Tao Yang
>Assignee: Tao Yang
> Fix For: 3.1.0, 3.0.1
>
> Attachments: YARN-7527.001.patch
>
>
> Currently in async-scheduling mode of CapacityScheduler, node resources may be 
> over-allocated since the node resource check is ignored.
> {{FiCaSchedulerApp#commonCheckContainerAllocation}} checks whether this 
> node has enough available resources for the proposal and returns the check result 
> (true/false), but this result is ignored in {{CapacityScheduler#accept}} as 
> below.
> {noformat}
> commonCheckContainerAllocation(allocation, schedulerContainer);
> {noformat}
> If {{FiCaSchedulerApp#commonCheckContainerAllocation}} returns false, 
> {{CapacityScheduler#accept}} should also return false as below:
> {noformat}
> if (!commonCheckContainerAllocation(allocation, schedulerContainer)) {
>   return false;
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7527) Over-allocate node resource in async-scheduling mode of CapacityScheduler

2017-11-20 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259864#comment-16259864
 ] 

Wangda Tan commented on YARN-7527:
--

Thanks [~Tao Yang], pushed to branch-3.0/trunk. There are some minor conflicts 
with branch-2; could you upload a patch against branch-2?

> Over-allocate node resource in async-scheduling mode of CapacityScheduler
> -
>
> Key: YARN-7527
> URL: https://issues.apache.org/jira/browse/YARN-7527
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.0.0-alpha4, 2.9.1
>Reporter: Tao Yang
>Assignee: Tao Yang
> Fix For: 3.1.0, 3.0.1
>
> Attachments: YARN-7527.001.patch
>
>
> Currently in async-scheduling mode of CapacityScheduler, node resources may be 
> over-allocated since the node resource check is ignored.
> {{FiCaSchedulerApp#commonCheckContainerAllocation}} checks whether this 
> node has enough available resources for the proposal and returns the check result 
> (true/false), but this result is ignored in {{CapacityScheduler#accept}} as 
> below.
> {noformat}
> commonCheckContainerAllocation(allocation, schedulerContainer);
> {noformat}
> If {{FiCaSchedulerApp#commonCheckContainerAllocation}} returns false, 
> {{CapacityScheduler#accept}} should also return false as below:
> {noformat}
> if (!commonCheckContainerAllocation(allocation, schedulerContainer)) {
>   return false;
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6124) Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin -refreshQueues

2017-11-20 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259865#comment-16259865
 ] 

Eric Payne commented on YARN-6124:
--

bq. AdminService#refreshQueues, conf.size(): I'm not sure why this is needed
I see that if this call is not there, the following exception is thrown.
{noformat}
refreshQueues: com.ctc.wstx.exc.WstxIOException: Stream closed
{noformat}
Still, calling {{conf.size()}} seems awkward. It seems like there should be a 
better way to do this.

> Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin 
> -refreshQueues
> -
>
> Key: YARN-6124
> URL: https://issues.apache.org/jira/browse/YARN-6124
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Wangda Tan
>Assignee: Zian Chen
> Attachments: YARN-6124.wip.1.patch, YARN-6124.wip.2.patch, 
> YARN-6124.wip.3.patch
>
>
> Currently, enabling / disabling / updating the SchedulingEditPolicy config 
> requires restarting the RM. This is inconvenient when an admin wants to make 
> changes to SchedulingEditPolicies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7527) Over-allocate node resource in async-scheduling mode of CapacityScheduler

2017-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259790#comment-16259790
 ] 

Hudson commented on YARN-7527:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13260 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13260/])
YARN-7527. Over-allocate node resource in async-scheduling mode of (wangda: rev 
0d781dd03b979d65de94978071b2faa55005b34a)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerAsyncScheduling.java


> Over-allocate node resource in async-scheduling mode of CapacityScheduler
> -
>
> Key: YARN-7527
> URL: https://issues.apache.org/jira/browse/YARN-7527
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.0.0-alpha4, 2.9.1
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-7527.001.patch
>
>
> Currently in async-scheduling mode of CapacityScheduler, node resources may be 
> over-allocated since the node resource check is ignored.
> {{FiCaSchedulerApp#commonCheckContainerAllocation}} checks whether this 
> node has enough available resources for the proposal and returns the check result 
> (true/false), but this result is ignored in {{CapacityScheduler#accept}} as 
> below.
> {noformat}
> commonCheckContainerAllocation(allocation, schedulerContainer);
> {noformat}
> If {{FiCaSchedulerApp#commonCheckContainerAllocation}} returns false, 
> {{CapacityScheduler#accept}} should also return false as below:
> {noformat}
> if (!commonCheckContainerAllocation(allocation, schedulerContainer)) {
>   return false;
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7537) [Atsv2] load hbase configuration from filesystem rather than URL

2017-11-20 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259779#comment-16259779
 ] 

Vrushali C commented on YARN-7537:
--

Hmm, so just to confirm, giving a URL of hdfs:///path-to-conf 
does not work? 

> [Atsv2] load hbase configuration from filesystem rather than URL
> 
>
> Key: YARN-7537
> URL: https://issues.apache.org/jira/browse/YARN-7537
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-7537.01.patch
>
>
> Currently HBaseTimelineStorageUtils#getTimelineServiceHBaseConf loads the hbase 
> configuration via a URL if *yarn.timeline-service.hbase.configuration.file* 
> is configured, but it is restricted to URLs only. This needs to be changed to 
> load from a file system. In a deployment, the hbase configuration can then be 
> kept on a filesystem so that it can be used by all the NodeManagers and the 
> ResourceManager.
> cc: [~vrushalic] [~varun_saxena]
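A minimal sketch of the direction proposed here: load the file through the Hadoop FileSystem API so any supported scheme (hdfs://, file://, ...) works. The helper below is illustrative, not the actual patch; note that {{Configuration}} parses stream resources lazily, so the parse is forced while the stream is still open.
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HBaseConfFromFileSystem {
  static Configuration loadHBaseConf(Configuration yarnConf, String confFile)
      throws IOException {
    Path confPath = new Path(confFile);
    FileSystem fs = confPath.getFileSystem(yarnConf);
    Configuration hbaseConf = new Configuration(false);
    try (FSDataInputStream in = fs.open(confPath)) {
      hbaseConf.addResource(in);
      hbaseConf.size();  // force the XML to be parsed before the stream closes
    }
    return hbaseConf;
  }
}
{code}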



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7529) TestYarnNativeServices#testRecoverComponentsAfterRMRestart() fails intermittently

2017-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259766#comment-16259766
 ] 

Hudson commented on YARN-7529:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13259 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13259/])
YARN-7529. TestYarnNativeServices#testRecoverComponentsAfterRMRestart() 
(billie: rev 6f9d7a146d5940a9e8a7913c19b43b265d6bfa32)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestYarnNativeServices.java


> TestYarnNativeServices#testRecoverComponentsAfterRMRestart() fails 
> intermittently
> -
>
> Key: YARN-7529
> URL: https://issues.apache.org/jira/browse/YARN-7529
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: 3.1.0
>
> Attachments: YARN-7529.001.patch, YARN-7529.002.patch
>
>
> java.lang.AssertionError: component container affected by restart 
> expected:<{}> but was:<{compb=[container_1510781886708_0001_01_06, 
> container_1510781886708_0001_01_05], 
> compa=[container_1510781886708_0001_01_03, 
> container_1510781886708_0001_01_02]}>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.hadoop.yarn.service.TestYarnNativeServices.testRecoverComponentsAfterRMRestart(TestYarnNativeServices.java:213)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7539) NullPointerException in timeline service v2

2017-11-20 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259762#comment-16259762
 ] 

Vrushali C commented on YARN-7539:
--


Hmm. Interesting..  And you saw this was thrown after the job has finished? Do 
you know what happens after this... Is the history file generated? Does the 
application show failed status or something else? 
Also, is the NPE printed once or many times? Basically wondering if the 
timeline collector is retrying or something and keeps throwing this error.

> NullPointerException in timeline service v2
> ---
>
> Key: YARN-7539
> URL: https://issues.apache.org/jira/browse/YARN-7539
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0
> Environment: Hadoop 3.1 (trunk)
> MIT Kerberos
> CentOS Linux release 7.0.1406
>Reporter: Eric Yang
>
> Application submission through the Docker YARN REST API generates a null pointer 
> exception after the job has finished.  In the Resource Manager log file, we see 
> this error:
> {code}
> 2017-11-20 18:41:46,395 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher:
>  Error when publishing entity TimelineEntity[type='YARN_CONTAINER', 
> id='container_1511201172997_0003_01_13']
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.putEntity(TimelineServiceV2Publisher.java:452)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.access$100(TimelineServiceV2Publisher.java:72)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher$TimelineV2EventHandler.handle(TimelineServiceV2Publisher.java:484)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher$TimelineV2EventHandler.handle(TimelineServiceV2Publisher.java:473)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned by the Resource Manager as a response to the Application Master heartbeat

2017-11-20 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259749#comment-16259749
 ] 

Arun Suresh commented on YARN-6483:
---

bq. by "update type" you mean adding an additional field with type 
`RMAppNodeUpdateType` to `NodeReport`?
Yup - but not necessarily the 'RMAppNodeUpdateType' object itself. We should 
add another public-facing enum in the proto file that maps to the values of the 
RMAppNodeUpdateType enum and add that as a field in the NodeReport; roughly the 
shape sketched below.
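Roughly something of this shape (names are illustrative, not the final API): a public enum mirroring the internal RMAppNodeUpdateType values, carried as an optional extra field on the node report returned to the AM.
{code}
public class NodeReportSketch {
  /** Public-facing counterpart of the internal RMAppNodeUpdateType values. */
  public enum NodeUpdateType { NODE_USABLE, NODE_UNUSABLE, NODE_DECOMMISSIONING }

  // Unset (null) when the node is not part of the updated-nodes list.
  private NodeUpdateType nodeUpdateType;

  public NodeUpdateType getNodeUpdateType() { return nodeUpdateType; }

  public void setNodeUpdateType(NodeUpdateType type) { this.nodeUpdateType = type; }
}
{code}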

> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned by the Resource Manager as a response to the Application Master 
> heartbeat
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.1.0
>Reporter: Juan Rodríguez Hortalá
>Assignee: Juan Rodríguez Hortalá
> Attachments: YARN-6483-v1.patch, YARN-6483.002.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched on those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7506) Overhaul the design of the Linux container-executor regarding Docker and future runtimes

2017-11-20 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259740#comment-16259740
 ] 

Miklos Szegedi commented on YARN-7506:
--

The startup time might prevent us from using a root Java process. The question is 
the CLI: what are the reasons it is better than a long-running root Java 
process listening on a Unix socket accessible by yarn only? It does parameter 
checking, but doesn't the Docker daemon do that anyway? The CLI is slower to start 
up, and it has all the risks that come with the environment, shell, etc.

> Overhaul the design of the Linux container-executor regarding Docker and 
> future runtimes
> 
>
> Key: YARN-7506
> URL: https://issues.apache.org/jira/browse/YARN-7506
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Miklos Szegedi
>  Labels: Docker, container-executor
> Attachments: YARN-Docker control options.pdf
>
>
> I raise this topic to discuss a potential improvement of the container 
> executor tool in node manager.
> container-executor has two main purposes. It executes Linux *system calls not 
> available from Java*, and it executes tasks *available to root that are not 
> available to the yarn user*. Historically container-executor did both by 
> doing impersonation. The yarn user is separated from root because it runs 
> network services, so *the yarn user should be restricted* by design. Because 
> of this it has its own config file, container-executor.cfg, writable by root 
> only, that specifies what actions are allowed for the yarn user. However, the 
> requirements have changed with Docker and that raises the following questions:
> 1. The Docker feature of YARN requires root permissions to *access the Docker 
> socket* but it does not run any system calls, so could the Docker-related 
> code in container-executor be *refactored into a separate Java process run as 
> root*? Java would make the development much faster and more secure. 
> 2. The Docker feature only needs the Docker unix socket. It is not a good 
> idea to let the yarn user directly access the socket, since that would 
> elevate its privileges to root. However, the Java tool running as root 
> mentioned in the previous question could act as a *proxy on the Docker 
> socket* operating directly on the Docker REST API *eliminating the need to 
> use the Docker CLI*. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7538) Performance improvement related to absolute min-max resources

2017-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259739#comment-16259739
 ] 

Hadoop QA commented on YARN-7538:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-7538 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7538 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898501/YARN-7538.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18581/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Performance improvement related to absolute min-max resources
> -
>
> Key: YARN-7538
> URL: https://issues.apache.org/jira/browse/YARN-7538
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-7538.001.patch
>
>
> performance improvements.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6669) Support security for YARN service framework

2017-11-20 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259736#comment-16259736
 ] 

Jian He commented on YARN-6669:
---

I also tested the above changes in a secure cluster.

> Support security for YARN service framework
> ---
>
> Key: YARN-6669
> URL: https://issues.apache.org/jira/browse/YARN-6669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6669.01.patch, 
> YARN-6669.yarn-native-services.01.patch, 
> YARN-6669.yarn-native-services.03.patch, 
> YARN-6669.yarn-native-services.04.patch, 
> YARN-6669.yarn-native-services.05.patch
>
>
> Changes include:
> - Make the registry client programmatically generate the jaas conf for secure 
> access to the ZK quorum
> - Create a KerberosPrincipal resource object in the REST API for the user to 
> supply the Kerberos keytab and principal 
> - The user has two ways to configure it:
> -- If the keytab starts with "hdfs://", the keytab will be localized by YARN
> -- If the keytab starts with "file://", it is assumed that the keytab is 
> available on the localhost.
> - The AM will use the keytab to log in
> - ServiceClient is changed to ask for an HDFS delegation token when submitting 
> the service
> - The AM code will use the tokens when launching containers 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6669) Support security for YARN service framework

2017-11-20 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6669:
--
Description: 
Changes include:
- Make the registry client programmatically generate the jaas conf for secure 
access to the ZK quorum
- Create a KerberosPrincipal resource object in the REST API for the user to 
supply the Kerberos keytab and principal 
- The user has two ways to configure it:
-- If the keytab starts with "hdfs://", the keytab will be localized by YARN
-- If the keytab starts with "file://", it is assumed that the keytab is 
available on the localhost.
- The AM will use the keytab to log in
- ServiceClient is changed to ask for an HDFS delegation token when submitting 
the service
- The AM code will use the tokens when launching containers 


  was:
Changes include:
- Make the registry client programmatically generate the jaas conf for secure 
access to the ZK quorum
- Create a KerberosPrincipal resource object in the REST API for the user to 
supply the Kerberos keytab and principal 
- The user has two ways to configure it:
-- If the keytab starts with "hdfs://", the keytab will be localized by YARN
-- If the keytab starts with "file://", it is assumed that the keytab is 
available on the localhost.
- ServiceClient is changed to ask for an HDFS delegation token when submitting 
the service
- The AM code will use the tokens when launching containers 



> Support security for YARN service framework
> ---
>
> Key: YARN-6669
> URL: https://issues.apache.org/jira/browse/YARN-6669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6669.01.patch, 
> YARN-6669.yarn-native-services.01.patch, 
> YARN-6669.yarn-native-services.03.patch, 
> YARN-6669.yarn-native-services.04.patch, 
> YARN-6669.yarn-native-services.05.patch
>
>
> Changes include:
> - Make the registry client programmatically generate the JAAS conf for secure 
> access to the ZK quorum
> - Create a KerberosPrincipal resource object in the REST API for the user to 
> supply the Kerberos keytab and principal
> - The user has two ways to configure the keytab:
> -- If the keytab starts with "hdfs://", the keytab will be localized by YARN
> -- If the keytab starts with "file://", it is assumed that the keytab is 
> available on the localhost
> - The AM will use the keytab to log in
> - ServiceClient is changed to ask for an HDFS delegation token when 
> submitting the service
> - The AM code will use the tokens when launching containers



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6669) Support security for YARN service framework

2017-11-20 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6669:
--
Summary: Support security for YARN service framework  (was: Kerberos 
support for native service AM with the service REST API)

> Support security for YARN service framework
> ---
>
> Key: YARN-6669
> URL: https://issues.apache.org/jira/browse/YARN-6669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6669.01.patch, 
> YARN-6669.yarn-native-services.01.patch, 
> YARN-6669.yarn-native-services.03.patch, 
> YARN-6669.yarn-native-services.04.patch, 
> YARN-6669.yarn-native-services.05.patch
>
>
> Changes include:
> - Make the registry client programmatically generate the JAAS conf for secure 
> access to the ZK quorum
> - Create a KerberosPrincipal resource object in the REST API for the user to 
> supply the Kerberos keytab and principal
> - The user has two ways to configure the keytab:
> -- If the keytab starts with "hdfs://", the keytab will be localized by YARN
> -- If the keytab starts with "file://", it is assumed that the keytab is 
> available on the localhost
> - ServiceClient is changed to ask for an HDFS delegation token when 
> submitting the service
> - The AM code will use the tokens when launching containers



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6669) Kerberos support for native service AM with the service REST API

2017-11-20 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6669:
--
Description: 
Changes include:
- Make the registry client programmatically generate the JAAS conf for secure 
access to the ZK quorum
- Create a KerberosPrincipal resource object in the REST API for the user to 
supply the Kerberos keytab and principal
- The user has two ways to configure the keytab:
-- If the keytab starts with "hdfs://", the keytab will be localized by YARN
-- If the keytab starts with "file://", it is assumed that the keytab is 
available on the localhost
- ServiceClient is changed to ask for an HDFS delegation token when submitting 
the service (roughly as sketched below)
- The AM code will use the tokens when launching containers
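
The ServiceClient change is, in outline, the standard way a secure YARN client obtains HDFS delegation tokens before submission. A minimal sketch; the method name and the renewer parameter are illustrative, not the exact ServiceClient code:

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;

public class SubmitTokenSketch {
  /**
   * Collect HDFS delegation tokens into a Credentials object before the
   * service is submitted; the RM principal is named as the renewer so the RM
   * can keep the tokens alive for the AM.
   */
  public static Credentials fetchHdfsTokens(Configuration conf, String rmPrincipal)
      throws IOException {
    Credentials credentials = new Credentials();
    if (UserGroupInformation.isSecurityEnabled()) {
      FileSystem fs = FileSystem.get(conf);
      fs.addDelegationTokens(rmPrincipal, credentials);   // tokens are added to credentials
    }
    return credentials;
  }
}
{code}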


> Kerberos support for native service AM with the service REST API
> 
>
> Key: YARN-6669
> URL: https://issues.apache.org/jira/browse/YARN-6669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6669.01.patch, 
> YARN-6669.yarn-native-services.01.patch, 
> YARN-6669.yarn-native-services.03.patch, 
> YARN-6669.yarn-native-services.04.patch, 
> YARN-6669.yarn-native-services.05.patch
>
>
> Changes include:
> - Make the registry client programmatically generate the JAAS conf for secure 
> access to the ZK quorum
> - Create a KerberosPrincipal resource object in the REST API for the user to 
> supply the Kerberos keytab and principal
> - The user has two ways to configure the keytab:
> -- If the keytab starts with "hdfs://", the keytab will be localized by YARN
> -- If the keytab starts with "file://", it is assumed that the keytab is 
> available on the localhost
> - ServiceClient is changed to ask for an HDFS delegation token when 
> submitting the service
> - The AM code will use the tokens when launching containers



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6669) Kerberos support for native service AM with the service REST API

2017-11-20 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6669:
--
Attachment: YARN-6669.01.patch

> Kerberos support for native service AM with the service REST API
> 
>
> Key: YARN-6669
> URL: https://issues.apache.org/jira/browse/YARN-6669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6669.01.patch, 
> YARN-6669.yarn-native-services.01.patch, 
> YARN-6669.yarn-native-services.03.patch, 
> YARN-6669.yarn-native-services.04.patch, 
> YARN-6669.yarn-native-services.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6669) Kerberos support for native service AM with the service REST API

2017-11-20 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259732#comment-16259732
 ] 

Jian He commented on YARN-6669:
---

Changes include:
- YARN-7517. Make the registry client programmatically generate the JAAS conf 
for secure access (see the sketch after this list)
- Create a KerberosPrincipal resource object in the REST API for the user to 
supply the Kerberos keytab and principal
- The user has two ways to configure the keytab:
-- If the keytab starts with "hdfs://", the keytab will be localized by YARN
-- If the keytab starts with "file://", it is assumed that the keytab is 
available on the localhost
- ServiceClient is changed to ask for an HDFS delegation token when submitting 
the service
- The AM code will use the tokens when launching containers
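
For the first item, "programmatically generate the JAAS conf" boils down to installing an in-memory JAAS login Configuration for the ZooKeeper client instead of pointing it at a jaas.conf file. A rough sketch using the standard ZooKeeper/Krb5 entry name and options, not necessarily the exact YARN-7517 code:

{code:java}
import java.util.HashMap;
import java.util.Map;

import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

/**
 * In-memory JAAS configuration for the ZooKeeper client so the registry can
 * authenticate over SASL/Kerberos without an external jaas.conf file.
 */
public class ZkJaasSketch extends Configuration {
  private final String principal;
  private final String keytab;

  public ZkJaasSketch(String principal, String keytab) {
    this.principal = principal;
    this.keytab = keytab;
  }

  @Override
  public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
    // Same keytab-based Krb5 entry regardless of the requested context name.
    Map<String, String> options = new HashMap<>();
    options.put("useKeyTab", "true");
    options.put("keyTab", keytab);
    options.put("principal", principal);
    options.put("storeKey", "true");
    options.put("useTicketCache", "false");
    return new AppConfigurationEntry[] {
        new AppConfigurationEntry("com.sun.security.auth.module.Krb5LoginModule",
            AppConfigurationEntry.LoginModuleControlFlag.REQUIRED, options) };
  }

  public static void install(String principal, String keytab) {
    // "Client" is the default login context the ZooKeeper client looks up.
    System.setProperty("zookeeper.sasl.clientconfig", "Client");
    Configuration.setConfiguration(new ZkJaasSketch(principal, keytab));
  }
}
{code}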


> Kerberos support for native service AM with the service REST API
> 
>
> Key: YARN-6669
> URL: https://issues.apache.org/jira/browse/YARN-6669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6669.yarn-native-services.01.patch, 
> YARN-6669.yarn-native-services.03.patch, 
> YARN-6669.yarn-native-services.04.patch, 
> YARN-6669.yarn-native-services.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


