[jira] [Commented] (YARN-8663) Opportunistic Container property "mapreduce.job.num-opportunistic-maps-percent" is throwing wrong exception at wrong sequence

2018-10-18 Thread Bilwa S T (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16654729#comment-16654729
 ] 

Bilwa S T commented on YARN-8663:
-

[~abmodi], are you still working on this JIRA?

> Opportunistic Container property 
> "mapreduce.job.num-opportunistic-maps-percent" is throwing wrong exception at 
> wrong sequence
> -
>
> Key: YARN-8663
> URL: https://issues.apache.org/jira/browse/YARN-8663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.1.1
> Environment: Secure Installation with Kerberos ON.
>Reporter: Akshay Agarwal
>Assignee: Abhishek Modi
>Priority: Major
>
> Pre-requisites:
> {code:java}
> 1. Install an HA cluster.
> 2. Set yarn.nodemanager.opportunistic-containers-max-queue-length=<positive 
> integer value> [NodeManager -> yarn-site.xml]
> 3. Set yarn.resourcemanager.opportunistic-container-allocation.enabled=true 
> [ResourceManager -> yarn-site.xml]
> {code}
>  
> Steps to reproduce:
> {code:java}
> 1. Keep all NodeManagers up.
> 2. Submit a job with -Dmapreduce.job.num-opportunistic-maps-percent="abh" or 
> "2.5".
> {code}
> Expected Result: 
> {code:java}
> Should throw a NumberFormatException before writing the input for the 
> mappers.
> {code}
> Log Details:
> {code:java}
> 2018-08-14 18:15:54,049 INFO mapreduce.Job:  map 0% reduce 0%
> 2018-08-14 18:15:54,069 INFO mapreduce.Job: Job job_1534236847054_0005 failed 
> with state FAILED due to: Application application_1534236847054_0005 failed 2 
> times due to AM Container for appattempt_1534236847054_0005_02 exited 
> with  exitCode: 1
> Failing this attempt.Diagnostics: [2018-08-14 18:15:53.110]Exception from 
> container-launch.
> Container id: container_e31_1534236847054_0005_02_01
> Exit code: 1
> [2018-08-14 18:15:53.113]Container exited with a non-zero exit code 1. Error 
> file: prelaunch.err.
> Last 4096 bytes of prelaunch.err :
> Last 4096 bytes of stderr :
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option UseSplitVerifier; 
> support was removed in 8.0
> Aug 14, 2018 6:15:51 PM 
> com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
> INFO: Registering 
> org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver as a provider 
> class
> Aug 14, 2018 6:15:51 PM 
> com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
> INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a 
> provider class
> Aug 14, 2018 6:15:51 PM 
> com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
> INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices as 
> a root resource class
> Aug 14, 2018 6:15:51 PM 
> com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
> INFO: Initiating Jersey application, version 'Jersey: 1.19 02/11/2015 03:25 
> AM'
> Aug 14, 2018 6:15:51 PM 
> com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory 
> getComponentProvider
> INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver 
> to GuiceManagedComponentProvider with the scope "Singleton"
> Aug 14, 2018 6:15:52 PM 
> com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory 
> getComponentProvider
> INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to 
> GuiceManagedComponentProvider with the scope "Singleton"
> Aug 14, 2018 6:15:52 PM 
> com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory 
> getComponentProvider
> INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices to 
> GuiceManagedComponentProvider with the scope "PerRequest"
> log4j:WARN No appenders could be found for logger 
> (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8873) Add CSI java-based client library

2018-10-18 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8873:
--
Attachment: YARN-8873.005.patch

> Add CSI java-based client library
> -
>
> Key: YARN-8873
> URL: https://issues.apache.org/jira/browse/YARN-8873
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8873.001.patch, YARN-8873.002.patch, 
> YARN-8873.003.patch, YARN-8873.004.patch, YARN-8873.005.patch
>
>
> Build a java-based client to talk to CSI drivers, through CSI gRPC services.
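
For context, such a client is essentially a wrapper around the gRPC stubs
generated from the CSI proto files. A minimal sketch of calling a CSI identity
service with grpc-java follows; the csi.v0 class names are assumptions based
on the CSI spec's proto package, and the actual wrapper API in the patch may
differ:

{code:java}
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
// Generated from the CSI .proto files; package and class names are
// assumptions, not necessarily what the patch produces.
import csi.v0.Csi.GetPluginInfoRequest;
import csi.v0.Csi.GetPluginInfoResponse;
import csi.v0.IdentityGrpc;

public class CsiIdentityClientSketch {
  public static void main(String[] args) {
    // CSI drivers usually listen on a unix domain socket; a TCP endpoint is
    // used here only to keep the sketch self-contained.
    ManagedChannel channel = ManagedChannelBuilder
        .forAddress("localhost", 10000)
        .usePlaintext()
        .build();
    IdentityGrpc.IdentityBlockingStub identity =
        IdentityGrpc.newBlockingStub(channel);
    GetPluginInfoResponse resp =
        identity.getPluginInfo(GetPluginInfoRequest.newBuilder().build());
    System.out.println("CSI driver: " + resp.getName()
        + " " + resp.getVendorVersion());
    channel.shutdown();
  }
}
{code}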



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8663) Opportunistic Container property "mapreduce.job.num-opportunistic-maps-percent" is throwing wrong exception at wrong sequence

2018-10-18 Thread Abhishek Modi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16654737#comment-16654737
 ] 

Abhishek Modi commented on YARN-8663:
-

I looked into this and found that there are many properties whose values are 
fetched during execution, so this issue applies to them too. I am not sure 
whether we should fix it. If you want to take it, feel free.
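
For reference, surfacing the bad value earlier would mean parsing the property
eagerly at submission time instead of when it is first read during execution.
A minimal sketch, with a hypothetical helper name and call site (the real fix
would live wherever the MapReduce client validates its configuration):

{code:java}
import org.apache.hadoop.conf.Configuration;

public final class OpportunisticMapsValidatorSketch {
  static final String PROP = "mapreduce.job.num-opportunistic-maps-percent";

  // Called at job submission, this throws NumberFormatException for values
  // like "abh" or "2.5" before any mapper input is written.
  public static int validate(Configuration conf) {
    String raw = conf.get(PROP, "0");
    int percent = Integer.parseInt(raw);  // rejects non-integer values
    if (percent < 0 || percent > 100) {
      throw new IllegalArgumentException(
          PROP + " must be in [0, 100], got: " + raw);
    }
    return percent;
  }
}
{code}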

> Opportunistic Container property 
> "mapreduce.job.num-opportunistic-maps-percent" is throwing wrong exception at 
> wrong sequence
> -
>
> Key: YARN-8663
> URL: https://issues.apache.org/jira/browse/YARN-8663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.1.1
> Environment: Secure Installation with Kerberos ON.
>Reporter: Akshay Agarwal
>Assignee: Abhishek Modi
>Priority: Major
>
> Pre-requisites:
> {code:java}
> 1. Install an HA cluster.
> 2. Set yarn.nodemanager.opportunistic-containers-max-queue-length=<positive 
> integer value> [NodeManager -> yarn-site.xml]
> 3. Set yarn.resourcemanager.opportunistic-container-allocation.enabled=true 
> [ResourceManager -> yarn-site.xml]
> {code}
>  
> Steps to reproduce:
> {code:java}
> 1. Keep all NodeManagers up.
> 2. Submit a job with -Dmapreduce.job.num-opportunistic-maps-percent="abh" or 
> "2.5".
> {code}
> Expected Result: 
> {code:java}
> Should throw a NumberFormatException before writing the input for the 
> mappers.
> {code}
> Log Details:
> {code:java}
> 2018-08-14 18:15:54,049 INFO mapreduce.Job:  map 0% reduce 0%
> 2018-08-14 18:15:54,069 INFO mapreduce.Job: Job job_1534236847054_0005 failed 
> with state FAILED due to: Application application_1534236847054_0005 failed 2 
> times due to AM Container for appattempt_1534236847054_0005_02 exited 
> with  exitCode: 1
> Failing this attempt.Diagnostics: [2018-08-14 18:15:53.110]Exception from 
> container-launch.
> Container id: container_e31_1534236847054_0005_02_01
> Exit code: 1
> [2018-08-14 18:15:53.113]Container exited with a non-zero exit code 1. Error 
> file: prelaunch.err.
> Last 4096 bytes of prelaunch.err :
> Last 4096 bytes of stderr :
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option UseSplitVerifier; 
> support was removed in 8.0
> Aug 14, 2018 6:15:51 PM 
> com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
> INFO: Registering 
> org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver as a provider 
> class
> Aug 14, 2018 6:15:51 PM 
> com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
> INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a 
> provider class
> Aug 14, 2018 6:15:51 PM 
> com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
> INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices as 
> a root resource class
> Aug 14, 2018 6:15:51 PM 
> com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
> INFO: Initiating Jersey application, version 'Jersey: 1.19 02/11/2015 03:25 
> AM'
> Aug 14, 2018 6:15:51 PM 
> com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory 
> getComponentProvider
> INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver 
> to GuiceManagedComponentProvider with the scope "Singleton"
> Aug 14, 2018 6:15:52 PM 
> com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory 
> getComponentProvider
> INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to 
> GuiceManagedComponentProvider with the scope "Singleton"
> Aug 14, 2018 6:15:52 PM 
> com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory 
> getComponentProvider
> INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices to 
> GuiceManagedComponentProvider with the scope "PerRequest"
> log4j:WARN No appenders could be found for logger 
> (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8873) Add CSI java-based client library

2018-10-18 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16654738#comment-16654738
 ] 

Weiwei Yang commented on YARN-8873:
---

Fixed the license warnings; license checking is now skipped for generated sources.

> Add CSI java-based client library
> -
>
> Key: YARN-8873
> URL: https://issues.apache.org/jira/browse/YARN-8873
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8873.001.patch, YARN-8873.002.patch, 
> YARN-8873.003.patch, YARN-8873.004.patch, YARN-8873.005.patch
>
>
> Build a java-based client to talk to CSI drivers, through CSI gRPC services.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8865) RMStateStore contains large number of expired RMDelegationToken

2018-10-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16654751#comment-16654751
 ] 

Hadoop QA commented on YARN-8865:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 1s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
32s{color} | {color:green} root: The patch generated 0 new + 59 unchanged - 1 
fixed = 59 total (was 60) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
26s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 38s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
41s{color} | {color:green} hadoop-mapreduce-client-hs in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}208m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestRMDelegationTokens |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8865 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944469/YARN-8865.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a2ac5544e2d8 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personal

[jira] [Updated] (YARN-8881) Add basic pluggable device plugin framework

2018-10-18 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8881:
---
Attachment: YARN-8881-trunk.001.patch

> Add basic pluggable device plugin framework
> ---
>
> Key: YARN-8881
> URL: https://issues.apache.org/jira/browse/YARN-8881
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8881-trunk.001.patch
>
>
> It includes adding support in "ResourcePluginManager" to load plugin classes 
> based on configuration, an interface for vendors to implement, and an adapter 
> to decouple plugins from YARN internals. Vendor device resource discovery 
> will be possible once this support is in place.
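
For illustration, configuration-driven plugin loading typically boils down to
reflection over a configured class list. A sketch under the assumption of a
DevicePlugin vendor interface and a plugin-classes property (both names are
hypothetical here, not necessarily what the patch defines):

{code:java}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;

public class DevicePluginLoaderSketch {
  // Hypothetical vendor-facing interface and configuration key.
  public interface DevicePlugin { /* discovery hooks, etc. */ }
  static final String PLUGIN_CLASSES_KEY =
      "yarn.nodemanager.pluggable-device-framework.device-classes";

  public static List<DevicePlugin> load(Configuration conf) throws Exception {
    List<DevicePlugin> plugins = new ArrayList<>();
    for (String cls : conf.getTrimmedStrings(PLUGIN_CLASSES_KEY)) {
      // Load and instantiate each configured vendor plugin class.
      plugins.add(Class.forName(cls)
          .asSubclass(DevicePlugin.class)
          .getDeclaredConstructor()
          .newInstance());
    }
    return plugins;
  }
}
{code}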



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8587) Delays are noticed to launch docker container

2018-10-18 Thread Charo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16654760#comment-16654760
 ] 

Charo Zhang commented on YARN-8587:
---

add patch

> Delays are noticed to launch docker container
> -
>
> Key: YARN-8587
> URL: https://issues.apache.org/jira/browse/YARN-8587
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Yesha Vora
>Priority: Major
>  Labels: Docker
>
> Launch a dshell application. Wait for the application to go into the RUNNING state.
> {code:java}
> yarn  jar /xx/hadoop-yarn-applications-distributedshell-*.jar  -shell_command 
> "sleep 300" -num_containers 1 -shell_env YARN_CONTAINER_RUNTIME_TYPE=docker 
> -shell_env YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=httpd:0.1 -shell_env 
> YARN_CONTAINER_RUNTIME_DOCKER_DELAYED_REMOVAL=true -jar 
> /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell-xx.jar
> {code}
> Find the container allocation. Run the docker inspect command for the docker 
> containers launched by the app.
> Sometimes the container is allocated to the NM but the docker PID is not up.
> {code:java}
> Command ssh -q -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null 
> xxx "sudo su - -c \"docker ps  -a | grep 
> container_e02_1531189225093_0003_01_02\" root" failed after 0 retries 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-8587) Delays are noticed to launch docker container

2018-10-18 Thread Charo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charo Zhang updated YARN-8587:
--
Comment: was deleted

(was: add patch)

> Delays are noticed to launch docker container
> -
>
> Key: YARN-8587
> URL: https://issues.apache.org/jira/browse/YARN-8587
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Yesha Vora
>Priority: Major
>  Labels: Docker
>
> Launch a dshell application. Wait for the application to go into the RUNNING state.
> {code:java}
> yarn  jar /xx/hadoop-yarn-applications-distributedshell-*.jar  -shell_command 
> "sleep 300" -num_containers 1 -shell_env YARN_CONTAINER_RUNTIME_TYPE=docker 
> -shell_env YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=httpd:0.1 -shell_env 
> YARN_CONTAINER_RUNTIME_DOCKER_DELAYED_REMOVAL=true -jar 
> /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell-xx.jar
> {code}
> Find the container allocation. Run the docker inspect command for the docker 
> containers launched by the app.
> Sometimes the container is allocated to the NM but the docker PID is not up.
> {code:java}
> Command ssh -q -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null 
> xxx "sudo su - -c \"docker ps  -a | grep 
> container_e02_1531189225093_0003_01_02\" root" failed after 0 retries 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8904) TestRMDelegationTokens can fail in testRMDTMasterKeyStateOnRollingMasterKey

2018-10-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16654766#comment-16654766
 ] 

Hadoop QA commented on YARN-8904:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 15s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}146m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8904 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944477/YARN-8904.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b0735dc17486 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3ed7163 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/22234/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22234/testReport/ |
| Max. process+thread count | 972 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommi

[jira] [Commented] (YARN-8873) Add CSI java-based client library

2018-10-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16654856#comment-16654856
 ] 

Hadoop QA commented on YARN-8873:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-yarn-csi in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
27s{color} | {color:red} The patch generated 21 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8873 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944487/YARN-8873.005.patch |
| Optional Tests |  dupname  asflicense  findbugs  xml  compile  javac  javadoc 
 mvninstall  mvnsite  unit  shadedclient  checkstyle  cc  |
| uname | Linux 8a63edba9a8b 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3ed7163 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22235/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/22235

[jira] [Created] (YARN-8905) [Router] Add JvmMetricsInfo and pause monitor

2018-10-18 Thread Bibin A Chundatt (JIRA)
Bibin A Chundatt created YARN-8905:
--

 Summary: [Router] Add JvmMetricsInfo and pause monitor
 Key: YARN-8905
 URL: https://issues.apache.org/jira/browse/YARN-8905
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bibin A Chundatt


Similar to the ResourceManager and NodeManager services, we can add 
JvmMetricsInfo and a pause monitor to the Router service too.
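
For reference, the RM and NM wire this up with JvmMetrics plus a
JvmPauseMonitor during service init; a sketch of the equivalent Router wiring
(exactly where it hooks into the Router service lifecycle is an assumption):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.metrics2.source.JvmMetrics;
import org.apache.hadoop.util.JvmPauseMonitor;

public class RouterJvmMetricsSketch {
  public static JvmPauseMonitor start(Configuration conf) {
    // Register JVM metrics under the Router's process name.
    JvmMetrics jm = JvmMetrics.initSingleton("Router", null);
    // Log long GC/JVM pauses, as the RM and NM already do.
    JvmPauseMonitor pauseMonitor = new JvmPauseMonitor();
    pauseMonitor.init(conf);
    pauseMonitor.start();
    jm.setPauseMonitor(pauseMonitor);
    return pauseMonitor;
  }
}
{code}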



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8905) [Router] Add JvmMetricsInfo and pause monitor

2018-10-18 Thread Bilwa S T (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T reassigned YARN-8905:
---

Assignee: Bilwa S T

> [Router] Add JvmMetricsInfo and pause monitor
> -
>
> Key: YARN-8905
> URL: https://issues.apache.org/jira/browse/YARN-8905
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Major
>
> Similar to resourcemanager and nodemanager serivce we can add JvmMetricsInfo 
> to router service too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8873) Add CSI java-based client library

2018-10-18 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8873:
--
Attachment: YARN-8873.006.patch

> Add CSI java-based client library
> -
>
> Key: YARN-8873
> URL: https://issues.apache.org/jira/browse/YARN-8873
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8873.001.patch, YARN-8873.002.patch, 
> YARN-8873.003.patch, YARN-8873.004.patch, YARN-8873.005.patch, 
> YARN-8873.006.patch
>
>
> Build a java-based client to talk to CSI drivers, through CSI gRPC services.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8868) Set HTTPOnly attribute to Cookie

2018-10-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655006#comment-16655006
 ] 

Hudson commented on YARN-8868:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15252 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15252/])
YARN-8868. Set HTTPOnly attribute to Cookie. Contributed by Chandni (sunilg: 
rev 2202e00ba8a44ad70f0a90e6c519257e3ae56a36)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/Dispatcher.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxyServlet.java


> Set HTTPOnly attribute to Cookie
> 
>
> Key: YARN-8868
> URL: https://issues.apache.org/jira/browse/YARN-8868
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8868.001.patch, YARN-8868.002.patch, new_rm_ui.png, 
> old_rm_ui.png
>
>
> 1. The program creates a cookie in Dispatcher.java at lines 182, 185, and 
> 199, but fails to set the HttpOnly flag to true.
> 2. The program creates a cookie in WebAppProxyServlet.java at lines 141 and 
> 388, but fails to set the HttpOnly flag to true.
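
The fix itself is a one-line flag on each cookie; with the Servlet 3.0 API it
looks like the sketch below (helper and cookie names are placeholders, not
the actual patch code):

{code:java}
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

// Placeholder helper; the real cookies are created in Dispatcher.java and
// WebAppProxyServlet.java at the lines cited above.
public class HttpOnlyCookieSketch {
  static void addCookie(HttpServletResponse res, String name, String value) {
    Cookie cookie = new Cookie(name, value);
    cookie.setHttpOnly(true);  // hide the cookie from client-side scripts
    res.addCookie(cookie);
  }
}
{code}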



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8906) NM hostnames not displayed correctly in Node Heatmap Chart

2018-10-18 Thread Charan Hebri (JIRA)
Charan Hebri created YARN-8906:
--

 Summary: NM hostnames not displayed correctly in Node Heatmap Chart
 Key: YARN-8906
 URL: https://issues.apache.org/jira/browse/YARN-8906
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Charan Hebri
 Attachments: Node_Heatmap_Chart.png

Hostnames displayed on the Node Heatmap Chart look garbled and are not clearly 
visible. Attached screenshot.
cc [~akhilpb]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8900) [Router] Federation: routing getContainers REST invocations transparently to multiple RMs

2018-10-18 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655035#comment-16655035
 ] 

Bibin A Chundatt commented on YARN-8900:


Thank you [~giovanni.fumarola] for the patch.

A few comments:

# Can we have a generic method, similar to 
FederationClientInterceptor#invokeConcurrent, that getClusterMetricsInfo, 
getContainers, and the other calls can reuse? (A sketch follows below.)
# IMHO it is better to throw an exception than to return an empty response 
when clusters are not available. Currently an application with zero 
containers and a request with zero available subclusters both return an 
empty response.
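
A minimal sketch of such a generic fan-out helper, in the spirit of
FederationClientInterceptor#invokeConcurrent (the subcluster keying and
interceptor plumbing are simplified stand-ins for the real types):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InvokeConcurrentSketch {
  private final ExecutorService pool = Executors.newCachedThreadPool();

  public <R> List<R> invokeConcurrent(
      Map<String, Callable<R>> callPerSubCluster) throws Exception {
    List<Future<R>> futures = new ArrayList<>();
    for (Callable<R> call : callPerSubCluster.values()) {
      futures.add(pool.submit(call));  // one RM call per subcluster
    }
    List<R> results = new ArrayList<>();
    for (Future<R> f : futures) {
      // Propagate failures to the caller instead of returning an empty
      // response, per comment 2 above.
      results.add(f.get());
    }
    return results;
  }
}
{code}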


> [Router] Federation: routing getContainers REST invocations transparently to 
> multiple RMs
> -
>
> Key: YARN-8900
> URL: https://issues.apache.org/jira/browse/YARN-8900
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation, router
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8900.v1.patch, YARN-8900.v2.patch
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> RMWebServicesProtocol requests to the appropriate RM(s) in a federated YARN 
> cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8906) NM hostnames not displayed correctly in Node Heatmap Chart

2018-10-18 Thread Akhil PB (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB reassigned YARN-8906:
--

Assignee: Akhil PB

> NM hostnames not displayed correctly in Node Heatmap Chart
> --
>
> Key: YARN-8906
> URL: https://issues.apache.org/jira/browse/YARN-8906
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Charan Hebri
>Assignee: Akhil PB
>Priority: Major
> Attachments: Node_Heatmap_Chart.png
>
>
> Hostnames displayed on the Node Heatmap Chart look garbled and are not 
> clearly visible. Attached screenshot.
> cc [~akhilpb]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8880) Add configurations for pluggable plugin framework

2018-10-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655057#comment-16655057
 ] 

Hadoop QA commented on YARN-8880:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 25s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 9 new + 224 unchanged - 0 fixed = 233 total (was 224) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 18s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 33s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestTimelineClientV2Impl |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8880 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944283/YARN-8880-trunk.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  

[jira] [Commented] (YARN-7502) Nodemanager restart docs should describe nodemanager supervised property

2018-10-18 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655066#comment-16655066
 ] 

Sunil Govindan commented on YARN-7502:
--

+1. Committing later today. Thanks [~suma.shivaprasad]

> Nodemanager restart docs should describe nodemanager supervised property
> 
>
> Key: YARN-7502
> URL: https://issues.apache.org/jira/browse/YARN-7502
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.9.0, 2.7.4, 2.8.2, 3.0.0
>Reporter: Jason Lowe
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7502.1.patch, YARN-7502.2.patch
>
>
> The yarn.nodemanager.recovery.supervised property is not mentioned in the 
> nodemanager restart documentation. The docs should describe what this 
> property does and when it is useful to set it to a value different from the 
> work-preserving restart property.
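
For the documentation, the corresponding yarn-site.xml entries would look
roughly like this (a sketch with illustrative values; supervised mode only
makes sense when the NM runs under a supervision system that restarts it):

{code:xml}
<!-- Enable NM work-preserving restart. -->
<property>
  <name>yarn.nodemanager.recovery.enabled</name>
  <value>true</value>
</property>
<!-- Tell the NM it is supervised, so running containers are not cleaned
     up on shutdown and survive until the NM comes back. -->
<property>
  <name>yarn.nodemanager.recovery.supervised</name>
  <value>true</value>
</property>
{code}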



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8864) NM incorrectly logs container user as the user who sent a start/stop container request in its audit log

2018-10-18 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655080#comment-16655080
 ] 

Wilfred Spiegelenburg commented on YARN-8864:
-

The test failure is in the native tests. I have run all of those tests locally 
and have not seen any failures.

> NM incorrectly logs container user as the user who sent a start/stop 
> container request in its audit log
> ---
>
> Key: YARN-8864
> URL: https://issues.apache.org/jira/browse/YARN-8864
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Haibo Chen
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-8864.001.patch, YARN-8864.002.patch
>
>
> As in  ContainerManagerImpl.java
> {code:java}
> protected void stopContainerInternal(ContainerId containerID)
>   throws YarnException, IOException { 
>     ...   
> NMAuditLogger.logSuccess(container.getUser(), 
> AuditConstants.STOP_CONTAINER,
>"ContainerManageImpl", 
> containerID.getApplicationAttemptId().getApplicationId(), containerID);
> }{code}
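
The fix direction is to log the remote caller rather than the user the
container runs as; a sketch follows. Inside the NM's RPC handler the caller's
UGI is available via getCurrentUser(), though the actual patch may instead
take the submitter from the NMTokenIdentifier:

{code:java}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

// Sketch only: audit the user who issued the start/stop request, not
// container.getUser() (the run-as user) from the snippet above.
public class AuditUserSketch {
  static String auditUser() throws IOException {
    return UserGroupInformation.getCurrentUser().getShortUserName();
  }
}
{code}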



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8865) RMStateStore contains large number of expired RMDelegationToken

2018-10-18 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655123#comment-16655123
 ] 

Wilfred Spiegelenburg commented on YARN-8865:
-

The test failure is covered by YARN-8904.
[~daryn] or [~jlowe], could you please have a look at the latest patch?

> RMStateStore contains large number of expired RMDelegationToken
> ---
>
> Key: YARN-8865
> URL: https://issues.apache.org/jira/browse/YARN-8865
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.1.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-8865.001.patch, YARN-8865.002.patch, 
> YARN-8865.003.patch, YARN-8865.004.patch
>
>
> When the RM state store is restored, expired delegation tokens are restored 
> and added back to the system. These expired tokens do not get cleaned up or 
> removed. The exact reason why the tokens are still in the store is not clear. 
> We have seen as many as 250,000 tokens in the store, some of which were 2 
> years old.
> This has two side effects:
> * for the zookeeper store this leads to a jute buffer exhaustion issue and 
> prevents the RM from becoming active.
> * restore takes longer than needed and heap usage is higher than it should be
> We should not restore already expired tokens since they cannot be renewed or 
> used.
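
The natural shape of the fix is a filter during recovery; a minimal sketch
(method and type plumbing are simplified from the real
RMDelegationTokenSecretManager code):

{code:java}
import java.util.Map;
import org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier;

// Sketch: skip tokens whose renew date has already passed instead of
// re-adding them during state-store recovery.
public class ExpiredTokenRecoverySketch {
  void recoverTokens(Map<RMDelegationTokenIdentifier, Long> state) {
    long now = System.currentTimeMillis();
    for (Map.Entry<RMDelegationTokenIdentifier, Long> e : state.entrySet()) {
      long renewDate = e.getValue();
      if (renewDate < now) {
        continue;  // expired: cannot be renewed or used, so do not restore
      }
      restoreToken(e.getKey(), renewDate);  // the normal restore path
    }
  }

  // Placeholder for addPersistedDelegationToken(...) in the real code.
  void restoreToken(RMDelegationTokenIdentifier id, long renewDate) { }
}
{code}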



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8854) [Hadoop YARN Common] Update jquery datatable version references

2018-10-18 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655228#comment-16655228
 ] 

Sunil Govindan commented on YARN-8854:
--

[~akhilpb], please help check the Jenkins errors: ASF license, whitespace, 
checkstyle, etc.

> [Hadoop YARN Common] Update jquery datatable version references
> ---
>
> Key: YARN-8854
> URL: https://issues.apache.org/jira/browse/YARN-8854
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Critical
> Attachments: YARN-8854.001.patch, YARN-8854.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8904) TestRMDelegationTokens can fail in testRMDTMasterKeyStateOnRollingMasterKey

2018-10-18 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655245#comment-16655245
 ] 

Wilfred Spiegelenburg commented on YARN-8904:
-

I cannot find any test failure locally: [WARNING] Tests run: 2428, Failures: 0, 
Errors: 0, Skipped: 7

Many of the RM precommit runs show this as the failure:
{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.21.0:test (default-test) on 
project hadoop-yarn-server-resourcemanager: There was a timeout or other error 
in the fork -> [Help 1]
{code}
But that does not happen locally.

> TestRMDelegationTokens can fail in testRMDTMasterKeyStateOnRollingMasterKey
> ---
>
> Key: YARN-8904
> URL: https://issues.apache.org/jira/browse/YARN-8904
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.1.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Minor
> Attachments: YARN-8904.001.patch
>
>
> In build 
> [link|https://builds.apache.org/job/PreCommit-YARN-Build/22215/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt],
>  TestRMDelegationTokens fails for a test case:
> * TestRMDelegationTokens.testRMDTMasterKeyStateOnRollingMasterKey
> The test fails with an extra key in the list. It can be easily reproduced by 
> introducing a short sleep in the thread.
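
A common way to deflake this kind of timing-dependent assertion is to poll
instead of asserting a fixed state immediately; a sketch with Hadoop's
GenericTestUtils follows (the key-count query is an illustrative stand-in for
inspecting the test's RMDTSecretManager):

{code:java}
import java.util.concurrent.TimeoutException;
import org.apache.hadoop.test.GenericTestUtils;

public class MasterKeyWaitSketch {
  static void waitForMasterKeys(final int expectedKeyCount)
      throws TimeoutException, InterruptedException {
    GenericTestUtils.waitFor(
        () -> currentMasterKeyCount() == expectedKeyCount,
        100,     // poll every 100 ms
        10000);  // give up after 10 s
  }

  static int currentMasterKeyCount() {
    return 0;  // placeholder for querying the secret manager under test
  }
}
{code}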



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8854) [Hadoop YARN Common] Update jquery datatable version references

2018-10-18 Thread Akhil PB (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-8854:
---
Attachment: YARN-8854.003.patch

> [Hadoop YARN Common] Update jquery datatable version references
> ---
>
> Key: YARN-8854
> URL: https://issues.apache.org/jira/browse/YARN-8854
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Critical
> Attachments: YARN-8854.001.patch, YARN-8854.002.patch, 
> YARN-8854.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8873) Add CSI java-based client library

2018-10-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655331#comment-16655331
 ] 

Hadoop QA commented on YARN-8873:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}143m 16s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-yarn-csi in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}256m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 |
|   | hadoop.yarn.server.resourcemanager.security.TestRMDelegationTokens |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8873 |
| JIRA Patch URL | 
https://issue

[jira] [Commented] (YARN-8854) [Hadoop YARN Common] Update jquery datatable version references

2018-10-18 Thread Akhil PB (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655336#comment-16655336
 ] 

Akhil PB commented on YARN-8854:


[~sunilg] Updated patch v3 with fixes for whitespace, the ASF license 
warnings, etc. Could you please check?

> [Hadoop YARN Common] Update jquery datatable version references
> ---
>
> Key: YARN-8854
> URL: https://issues.apache.org/jira/browse/YARN-8854
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Critical
> Attachments: YARN-8854.001.patch, YARN-8854.002.patch, 
> YARN-8854.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8873) Add CSI java-based client library

2018-10-18 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655340#comment-16655340
 ] 

Weiwei Yang commented on YARN-8873:
---

Hi [~sunilg]

The license warnings are now resolved. Do you have any more comments on the 
patch? The UT failure is unrelated.

> Add CSI java-based client library
> -
>
> Key: YARN-8873
> URL: https://issues.apache.org/jira/browse/YARN-8873
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8873.001.patch, YARN-8873.002.patch, 
> YARN-8873.003.patch, YARN-8873.004.patch, YARN-8873.005.patch, 
> YARN-8873.006.patch
>
>
> Build a java-based client to talk to CSI drivers, through CSI gRPC services.
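
For context, a minimal sketch of what such a client can look like. This is 
illustrative only: the proto-generated stub names ({{Csi}}, {{IdentityGrpc}}) 
and the TCP endpoint are assumptions, not the actual patch (real CSI drivers 
usually listen on a unix domain socket).

{code:java}
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
// Csi and IdentityGrpc below are assumed to be generated from the CSI proto.

public class CsiClientSketch {
  public static void main(String[] args) throws Exception {
    // Endpoint is an assumption for illustration purposes.
    ManagedChannel channel = ManagedChannelBuilder
        .forAddress("localhost", 9000)
        .usePlaintext()
        .build();
    IdentityGrpc.IdentityBlockingStub identity =
        IdentityGrpc.newBlockingStub(channel);
    // GetPluginInfo is the simplest CSI Identity RPC: it returns the
    // driver's name and version.
    Csi.GetPluginInfoResponse info = identity.getPluginInfo(
        Csi.GetPluginInfoRequest.newBuilder().build());
    System.out.println("CSI driver: " + info.getName());
    channel.shutdown();
  }
}
{code}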



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8059) Resource type is ignored when FS decides to preempt

2018-10-18 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-8059:
-
Attachment: YARN-8059.004.patch

> Resource type is ignored when FS decides to preempt
> --
>
> Key: YARN-8059
> URL: https://issues.apache.org/jira/browse/YARN-8059
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0
>Reporter: Yufei Gu
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-8059.001.patch, YARN-8059.002.patch, 
> YARN-8059.003.patch, YARN-8059.004.patch
>
>
> Method Fairscheduler#shouldAttemptPreemption doesn't consider resources other 
> than vcore and memory. We may need to rethink it in the resource type 
> scenario. cc [~miklos.szeg...@cloudera.com], [~wilfreds] and [~snemeth].
> {code}
> if (context.isPreemptionEnabled()) {
>   return (context.getPreemptionUtilizationThreshold() < Math.max(
>   (float) rootMetrics.getAllocatedMB() /
>   getClusterResource().getMemorySize(),
>   (float) rootMetrics.getAllocatedVirtualCores() /
>   getClusterResource().getVirtualCores()));
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8059) Resource type is ignored when FS decides to preempt

2018-10-18 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655353#comment-16655353
 ] 

Szilard Nemeth commented on YARN-8059:
--

Patch 004 fixes the remaining test failure as well.

> Resource type is ignored when FS decides to preempt
> --
>
> Key: YARN-8059
> URL: https://issues.apache.org/jira/browse/YARN-8059
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0
>Reporter: Yufei Gu
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-8059.001.patch, YARN-8059.002.patch, 
> YARN-8059.003.patch, YARN-8059.004.patch
>
>
> Method Fairscheduler#shouldAttemptPreemption doesn't consider resources other 
> than vcore and memory. We may need to rethink it in the resource type 
> scenario. cc [~miklos.szeg...@cloudera.com], [~wilfreds] and [~snemeth].
> {code}
> if (context.isPreemptionEnabled()) {
>   return (context.getPreemptionUtilizationThreshold() < Math.max(
>   (float) rootMetrics.getAllocatedMB() /
>   getClusterResource().getMemorySize(),
>   (float) rootMetrics.getAllocatedVirtualCores() /
>   getClusterResource().getVirtualCores()));
> }
> {code}
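
A rough sketch of how that check could be generalized over every registered 
resource type (illustrative fragment only; {{getAllocatedResources()}} as the 
metrics accessor is an assumption):

{code:java}
// Illustrative fragment: take the max utilization across all resource
// types, not just memory and vcores.
float maxUtilization = 0f;
Resource allocated = rootMetrics.getAllocatedResources(); // assumed accessor
Resource total = getClusterResource();
for (ResourceInformation info : total.getResources()) {
  long totalValue = info.getValue();
  if (totalValue <= 0) {
    continue; // skip resource types the cluster does not have
  }
  long allocatedValue =
      allocated.getResourceInformation(info.getName()).getValue();
  maxUtilization =
      Math.max(maxUtilization, (float) allocatedValue / totalValue);
}
return context.getPreemptionUtilizationThreshold() < maxUtilization;
{code}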



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8862) [GPG] Add Yarn Registry cleanup in ApplicationCleaner

2018-10-18 Thread Bibin A Chundatt (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-8862:
---
Summary: [GPG] Add Yarn Registry cleanup in ApplicationCleaner  (was: [GPG] 
add Yarn Registry cleanup in ApplicationCleaner)

> [GPG] Add Yarn Registry cleanup in ApplicationCleaner
> -
>
> Key: YARN-8862
> URL: https://issues.apache.org/jira/browse/YARN-8862
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-8862-YARN-7402.v1.patch, 
> YARN-8862-YARN-7402.v2.patch, YARN-8862-YARN-7402.v3.patch, 
> YARN-8862-YARN-7402.v4.patch, YARN-8862-YARN-7402.v5.patch, 
> YARN-8862-YARN-7402.v6.patch
>
>
> In Yarn Federation, we use the Yarn Registry to store the AMTokens for UAMs 
> in secondary sub-clusters. Because there may be more app attempts later, 
> AMRMProxy cannot kill the UAM and delete the tokens when a local attempt 
> finishes. So, similar to the StateStore application table, we need the 
> ApplicationCleaner in GPG to clean up the finished app entries in the Yarn 
> Registry. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8897) LoadBasedRouterPolicy throws "NPE" in case of sub cluster unavailability

2018-10-18 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655427#comment-16655427
 ] 

Bibin A Chundatt commented on YARN-8897:


Thank you [~BilwaST] for the patch.

There is inconsistent handling between REST and the RPC API: in the REST case, 
a failure in choosing the home cluster results in {{SERVICE_UNAVAILABLE}}, but 
in the RouterClientRMService case it will retry and then fail.

[~subru] I think it is better to throw a YarnException from 
RouterClientRMService too.
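
A minimal sketch of the kind of guard being discussed ({{getActiveSubclusters()}} 
is assumed to be the existing lookup):

{code:java}
// Illustrative fragment: fail with a YarnException instead of letting an
// NPE escape when there is no active sub-cluster to route to.
Map<SubClusterId, SubClusterInfo> activeSubClusters = getActiveSubclusters();
if (activeSubClusters == null || activeSubClusters.isEmpty()) {
  throw new YarnException(
      "No active sub-clusters available to choose a home cluster from");
}
// ... continue with the load-based selection ...
{code}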

> LoadBasedRouterPolicy throws "NPE" in case of sub cluster unavailability 
> -
>
> Key: YARN-8897
> URL: https://issues.apache.org/jira/browse/YARN-8897
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: federation, router
>Reporter: Akshay Agarwal
>Assignee: Bilwa S T
>Priority: Minor
> Attachments: YARN-8897-001.patch
>
>
> If no sub-clusters are available for the "*Load Based Router Policy*" with 
> *cluster weight* set to *1* in a Router-based Federation setup, a 
> "*NullPointerException*" is thrown.
>  
> *Exception Details:*
> {code:java}
> java.lang.NullPointerException: java.lang.NullPointerException
>  at 
> org.apache.hadoop.yarn.server.federation.policies.router.LoadBasedRouterPolicy.getHomeSubcluster(LoadBasedRouterPolicy.java:99)
>  at 
> org.apache.hadoop.yarn.server.federation.policies.RouterPolicyFacade.getHomeSubcluster(RouterPolicyFacade.java:204)
>  at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.submitApplication(FederationClientInterceptor.java:362)
>  at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.submitApplication(RouterClientRMService.java:218)
>  at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:282)
>  at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:579)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
>  at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateRuntimeException(RPCUtil.java:85)
>  at 
> org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:122)
>  at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.submitApplication(ApplicationClientProtocolPBClientImpl.java:297)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>  at com.sun.proxy.$Proxy15.submitApplication(Unknown Source)
>  at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:288)
>  at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:300)
>  at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:331)
>  at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:254)
>  at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
>  at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
>  at java.security.AccessController.doPriv

[jira] [Commented] (YARN-8449) RM HA for AM HTTPS Support

2018-10-18 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655436#comment-16655436
 ] 

Haibo Chen commented on YARN-8449:
--

[~rkanter] can you address the outstanding checkstyle issues?

> RM HA for AM HTTPS Support
> --
>
> Key: YARN-8449
> URL: https://issues.apache.org/jira/browse/YARN-8449
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Attachments: YARN-8449.001.patch, YARN-8449.002.patch, 
> YARN-8449.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8864) NM incorrectly logs container user as the user who sent a start/stop container request in its audit log

2018-10-18 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655446#comment-16655446
 ] 

Haibo Chen commented on YARN-8864:
--

+1. The native test failures were also seen in YARN-8448. Checking this patch in shortly.

> NM incorrectly logs container user as the user who sent a start/stop 
> container request in its audit log
> ---
>
> Key: YARN-8864
> URL: https://issues.apache.org/jira/browse/YARN-8864
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Haibo Chen
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-8864.001.patch, YARN-8864.002.patch
>
>
> As in  ContainerManagerImpl.java
> {code:java}
> protected void stopContainerInternal(ContainerId containerID)
>   throws YarnException, IOException { 
>     ...   
> NMAuditLogger.logSuccess(container.getUser(), 
> AuditConstants.STOP_CONTAINER,
>"ContainerManageImpl", 
> containerID.getApplicationAttemptId().getApplicationId(), containerID);
> }{code}
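
For reference, a sketch of the kind of change implied, i.e. audit-logging the 
RPC caller rather than the user the container runs as (the remote-user lookup 
shown here is an assumption, not the committed fix):

{code:java}
// Illustrative fragment: log the caller of the stop-container request.
String remoteUser = UserGroupInformation.getCurrentUser().getShortUserName();
NMAuditLogger.logSuccess(remoteUser, AuditConstants.STOP_CONTAINER,
    "ContainerManagerImpl",
    containerID.getApplicationAttemptId().getApplicationId(), containerID);
{code}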



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8896) Limit the maximum number of container assignments per heartbeat

2018-10-18 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8896:
---
Attachment: YARN-8896-trunk.001.patch

> Limit the maximum number of container assignments per heartbeat
> ---
>
> Key: YARN-8896
> URL: https://issues.apache.org/jira/browse/YARN-8896
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Weiwei Yang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8896-trunk.001.patch
>
>
> YARN-4161 adds a configuration, 
> {{yarn.scheduler.capacity.per-node-heartbeat.maximum-container-assignments}}, 
> to control the max number of container assignments per heartbeat; however, 
> the default value is -1. This could potentially cause the CS to get stuck in 
> the while loop, causing issues like YARN-8513. We should change this to a 
> finite number, e.g. 100.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8489) Need to support "dominant" component concept inside YARN service

2018-10-18 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655476#comment-16655476
 ] 

Eric Yang commented on YARN-8489:
-

[~billie.rinaldi] Configuration properties make sense.  How about 
"yarn.service.container-state-report-as-service-state"?

> Need to support "dominant" component concept inside YARN service
> 
>
> Key: YARN-8489
> URL: https://issues.apache.org/jira/browse/YARN-8489
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: yarn-native-services
>Reporter: Wangda Tan
>Priority: Major
>
> The existing YARN service supports termination policies tied to the 
> different restart policies. For example, ALWAYS means the service will not 
> be terminated, and NEVER means the service will be terminated once all 
> components have terminated.
> The name "dominant" might not be the most appropriate; we can figure out a 
> better name. In short, it means a dominant component whose final state 
> determines the job's final state regardless of the other components.
> Use cases: 
> 1) A Tensorflow job has master/worker/services/tensorboard. Once the master 
> reaches a final state, no matter whether it succeeded or failed, we should 
> terminate ps/tensorboard/workers and mark the job succeeded/failed 
> accordingly. 
> 2) Not sure if it is a real-world use case: a service with multiple 
> components, some of which are not restartable. For such services, if such a 
> component fails, we should mark the whole service as failed. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8896) Limit the maximum number of container assignments per heartbeat

2018-10-18 Thread Zhankun Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655478#comment-16655478
 ] 

Zhankun Tang commented on YARN-8896:


[~leftnoteasy], this changes the default max-assign value from -1 to 100. 
Please review.

> Limit the maximum number of container assignments per heartbeat
> ---
>
> Key: YARN-8896
> URL: https://issues.apache.org/jira/browse/YARN-8896
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Weiwei Yang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8896-trunk.001.patch
>
>
> YARN-4161 adds a configuration, 
> {{yarn.scheduler.capacity.per-node-heartbeat.maximum-container-assignments}}, 
> to control the max number of container assignments per heartbeat; however, 
> the default value is -1. This could potentially cause the CS to get stuck in 
> the while loop, causing issues like YARN-8513. We should change this to a 
> finite number, e.g. 100.
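
For reference, the corresponding explicit setting (assuming 
capacity-scheduler.xml as the destination; the value mirrors the proposed 
default):

{code:xml}
<!-- Cap container assignments per node heartbeat so the CapacityScheduler
     allocation loop cannot spin indefinitely (see YARN-8513). -->
<property>
  <name>yarn.scheduler.capacity.per-node-heartbeat.maximum-container-assignments</name>
  <value>100</value>
</property>
{code}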



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8864) NM incorrectly logs container user as the user who sent a start/stop container request in its audit log

2018-10-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655482#comment-16655482
 ] 

Hudson commented on YARN-8864:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15253 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15253/])
YARN-8864. NM incorrectly logs container user as the user who sent a 
(haibochen: rev 32fe351bb654e684f127f47ab808c497e0d3f258)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/DummyContainerManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManagerRecovery.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerResync.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java


> NM incorrectly logs container user as the user who sent a start/stop 
> container request in its audit log
> ---
>
> Key: YARN-8864
> URL: https://issues.apache.org/jira/browse/YARN-8864
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Haibo Chen
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8864.001.patch, YARN-8864.002.patch
>
>
> As in  ContainerManagerImpl.java
> {code:java}
> protected void stopContainerInternal(ContainerId containerID)
>   throws YarnException, IOException { 
>     ...   
> NMAuditLogger.logSuccess(container.getUser(), 
> AuditConstants.STOP_CONTAINER,
>"ContainerManageImpl", 
> containerID.getApplicationAttemptId().getApplicationId(), containerID);
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8687) YARN service example is out-dated

2018-10-18 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655488#comment-16655488
 ] 

Eric Yang commented on YARN-8687:
-

+1

> YARN service example is out-dated
> -
>
> Key: YARN-8687
> URL: https://issues.apache.org/jira/browse/YARN-8687
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Eric Yang
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-8687.1.patch
>
>
> Example for YARN service is using file type "ENV".
> {code}
> {
>   "name": "httpd-service",
>   "version": "1.0",
>   "lifetime": "3600",
>   "components": [
> {
>   "name": "httpd",
>   "number_of_containers": 2,
>   "artifact": {
> "id": "centos/httpd-24-centos7:latest",
> "type": "DOCKER"
>   },
>   "launch_command": "/usr/bin/run-httpd",
>   "resource": {
> "cpus": 1,
> "memory": "1024"
>   },
>   "configuration": {
> "files": [
>   {
> "type": "TEMPLATE",
> "dest_file": "/var/www/html/index.html",
> "properties": {
>   "content": 
> "TitleHello from 
> ${COMPONENT_INSTANCE_NAME}!"
> }
>   }
> ]
>   }
> },
> {
>   "name": "httpd-proxy",
>   "number_of_containers": 1,
>   "artifact": {
> "id": "centos/httpd-24-centos7:latest",
> "type": "DOCKER"
>   },
>   "launch_command": "/usr/bin/run-httpd",
>   "resource": {
> "cpus": 1,
> "memory": "1024"
>   },
>   "configuration": {
> "files": [
>   {
> "type": "TEMPLATE",
> "dest_file": "/etc/httpd/conf.d/httpd-proxy.conf",
> "src_file": "httpd-proxy.conf"
>   }
> ]
>   }
> }
>   ],
>   "quicklinks": {
> "Apache HTTP Server": 
> "http://httpd-proxy-0.${SERVICE_NAME}.${USER}.${DOMAIN}:8080";
>   }
> }
> {code}
> The type has changed to "TEMPLATE" in the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8907) Modify an error message in TestCapacityScheduler

2018-10-18 Thread Zhankun Tang (JIRA)
Zhankun Tang created YARN-8907:
--

 Summary: Modify an error message in TestCapacityScheduler
 Key: YARN-8907
 URL: https://issues.apache.org/jira/browse/YARN-8907
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Zhankun Tang
Assignee: Zhankun Tang


Change the log message "START" to "END" in some test cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8687) YARN service example is out-dated

2018-10-18 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8687:

Fix Version/s: 3.1.2
   3.2.0

> YARN service example is out-dated
> -
>
> Key: YARN-8687
> URL: https://issues.apache.org/jira/browse/YARN-8687
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Eric Yang
>Assignee: Suma Shivaprasad
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
> Attachments: YARN-8687.1.patch
>
>
> Example for YARN service is using file type "ENV".
> {code}
> {
>   "name": "httpd-service",
>   "version": "1.0",
>   "lifetime": "3600",
>   "components": [
> {
>   "name": "httpd",
>   "number_of_containers": 2,
>   "artifact": {
> "id": "centos/httpd-24-centos7:latest",
> "type": "DOCKER"
>   },
>   "launch_command": "/usr/bin/run-httpd",
>   "resource": {
> "cpus": 1,
> "memory": "1024"
>   },
>   "configuration": {
> "files": [
>   {
> "type": "TEMPLATE",
> "dest_file": "/var/www/html/index.html",
> "properties": {
>   "content": 
> "TitleHello from 
> ${COMPONENT_INSTANCE_NAME}!"
> }
>   }
> ]
>   }
> },
> {
>   "name": "httpd-proxy",
>   "number_of_containers": 1,
>   "artifact": {
> "id": "centos/httpd-24-centos7:latest",
> "type": "DOCKER"
>   },
>   "launch_command": "/usr/bin/run-httpd",
>   "resource": {
> "cpus": 1,
> "memory": "1024"
>   },
>   "configuration": {
> "files": [
>   {
> "type": "TEMPLATE",
> "dest_file": "/etc/httpd/conf.d/httpd-proxy.conf",
> "src_file": "httpd-proxy.conf"
>   }
> ]
>   }
> }
>   ],
>   "quicklinks": {
> "Apache HTTP Server": 
> "http://httpd-proxy-0.${SERVICE_NAME}.${USER}.${DOMAIN}:8080";
>   }
> }
> {code}
> The type has changed to "TEMPLATE" in the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8907) Modify an error message in TestCapacityScheduler

2018-10-18 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8907:
---
Attachment: YARN-8907-trunk.001.patch

> Modify an error message in TestCapacityScheduler
> ---
>
> Key: YARN-8907
> URL: https://issues.apache.org/jira/browse/YARN-8907
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Minor
> Attachments: YARN-8907-trunk.001.patch
>
>
> Change the log message "START" to "END" in some test cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8907) Modify an error message in TestCapacityScheduler

2018-10-18 Thread Zhankun Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655500#comment-16655500
 ] 

Zhankun Tang commented on YARN-8907:


[~cheersyang], please review. Thanks.

> Modify an error message in TestCapacityScheduler
> ---
>
> Key: YARN-8907
> URL: https://issues.apache.org/jira/browse/YARN-8907
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Minor
> Attachments: YARN-8907-trunk.001.patch
>
>
> Change the log message "START" to "END" in some test cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8587) Delays are noticed to launch docker container

2018-10-18 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655508#comment-16655508
 ] 

Eric Yang commented on YARN-8587:
-

[~Charo Zhang] Would you like to contribute a patch for this issue?

> Delays are noticed to launch docker container
> -
>
> Key: YARN-8587
> URL: https://issues.apache.org/jira/browse/YARN-8587
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Yesha Vora
>Priority: Major
>  Labels: Docker
>
> Launch a dshell application and wait for it to go into the RUNNING state.
> {code:java}
> yarn  jar /xx/hadoop-yarn-applications-distributedshell-*.jar  -shell_command 
> "sleep 300" -num_containers 1 -shell_env YARN_CONTAINER_RUNTIME_TYPE=docker 
> -shell_env YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=httpd:0.1 -shell_env 
> YARN_CONTAINER_RUNTIME_DOCKER_DELAYED_REMOVAL=true -jar 
> /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell-xx.jar
> {code}
> Find the container allocation, then run the docker inspect command for the 
> docker containers launched by the app.
> Sometimes the container is allocated to the NM but the docker PID is not up.
> {code:java}
> Command ssh -q -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null 
> xxx "sudo su - -c \"docker ps  -a | grep 
> container_e02_1531189225093_0003_01_02\" root" failed after 0 retries 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8687) YARN service example is out-dated

2018-10-18 Thread Suma Shivaprasad (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655510#comment-16655510
 ] 

Suma Shivaprasad commented on YARN-8687:


Thanks [~eyang]

> YARN service example is out-dated
> -
>
> Key: YARN-8687
> URL: https://issues.apache.org/jira/browse/YARN-8687
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Eric Yang
>Assignee: Suma Shivaprasad
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
> Attachments: YARN-8687.1.patch
>
>
> Example for YARN service is using file type "ENV".
> {code}
> {
>   "name": "httpd-service",
>   "version": "1.0",
>   "lifetime": "3600",
>   "components": [
> {
>   "name": "httpd",
>   "number_of_containers": 2,
>   "artifact": {
> "id": "centos/httpd-24-centos7:latest",
> "type": "DOCKER"
>   },
>   "launch_command": "/usr/bin/run-httpd",
>   "resource": {
> "cpus": 1,
> "memory": "1024"
>   },
>   "configuration": {
> "files": [
>   {
> "type": "TEMPLATE",
> "dest_file": "/var/www/html/index.html",
> "properties": {
>   "content": 
> "TitleHello from 
> ${COMPONENT_INSTANCE_NAME}!"
> }
>   }
> ]
>   }
> },
> {
>   "name": "httpd-proxy",
>   "number_of_containers": 1,
>   "artifact": {
> "id": "centos/httpd-24-centos7:latest",
> "type": "DOCKER"
>   },
>   "launch_command": "/usr/bin/run-httpd",
>   "resource": {
> "cpus": 1,
> "memory": "1024"
>   },
>   "configuration": {
> "files": [
>   {
> "type": "TEMPLATE",
> "dest_file": "/etc/httpd/conf.d/httpd-proxy.conf",
> "src_file": "httpd-proxy.conf"
>   }
> ]
>   }
> }
>   ],
>   "quicklinks": {
> "Apache HTTP Server": 
> "http://httpd-proxy-0.${SERVICE_NAME}.${USER}.${DOMAIN}:8080";
>   }
> }
> {code}
> The type has changed to "TEMPLATE" in the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8908) Fix errors in document of GPU/FPGA

2018-10-18 Thread Zhankun Tang (JIRA)
Zhankun Tang created YARN-8908:
--

 Summary: Fix errors in document of GPU/FPGA
 Key: YARN-8908
 URL: https://issues.apache.org/jira/browse/YARN-8908
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Zhankun Tang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8908) Fix errors in yarn-default.xml related to GPU/FPGA

2018-10-18 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8908:
---
Summary: Fix errors in yarn-default.xml related to GPU/FPGA  (was: Fix 
errors in document of GPU/FPGA)

> Fix errors in yarn-default.xml related to GPU/FPGA
> --
>
> Key: YARN-8908
> URL: https://issues.apache.org/jira/browse/YARN-8908
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8456) Fix a configuration handling bug when user leave FPGA discover executable path configuration default but set OpenCL SDK path environment variable

2018-10-18 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8456:
---
Description: 
*Issue:*
 When the user doesn't configure 
"yarn.nodemanager.resource-plugins.fpga.path-to-discovery-executables" in 
yarn-site.xml but has the "ALTERAOCLSDKROOT" environment variable set, the 
FPGA discoverer cannot find the correct executable path (with 
IntelFPGAOpenclPlugin).

*Reason:*

In IntelFPGAOpenclPlugin, the current code builds a wrong path string after 
reading the environment variable value. It should append "/bin/<binary name>", 
otherwise the FPGA resource discovery fails.

 

*Solution:*

Fix the path construction code in IntelFPGAOpenclPlugin.

 

  was:
*Issue:*
 When the user doesn't configure 
"yarn.nodemanager.resource-plugins.fpga.path-to-discovery-executables" in 
yarn-site.xml but has the "ALTERAOCLSDKROOT" environment variable set, the 
FPGA discoverer cannot find the correct executable path (with 
IntelFPGAOpenclPlugin).

*Reason:*

In IntelFPGAOpenclPlugin, the current code builds a wrong path string after 
reading the environment variable value. It should append "/bin/<binary name>", 
otherwise the FPGA resource discovery fails.

 

*Solution:*

Fix the path construction code in IntelFPGAOpenclPlugin.

 

*MISC:*

The patch also corrects some minor issues:
 # Change _"yarn-io/_gpu_" and _"yarn-io/_fpga_" to _"yarn.io/gpu"_, 
_"yarn.io/fpga"_ in documents
 # Use _"auto"_ as the default value for 
"yarn.nodemanager.resource-plugins.fpga.allowed-fpga-devices". The original 
"0,1" won't cause any problem but use "auto" is better

 


> Fix a configuration handling bug when user leave FPGA discover executable 
> path configuration default but set OpenCL SDK path environment variable
> -
>
> Key: YARN-8456
> URL: https://issues.apache.org/jira/browse/YARN-8456
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8456-trunk.001.patch
>
>
> *Issue:*
>  When the user doesn't configure 
> "yarn.nodemanager.resource-plugins.fpga.path-to-discovery-executables" in 
> yarn-site.xml but has the "ALTERAOCLSDKROOT" environment variable set, the 
> FPGA discoverer cannot find the correct executable path (with 
> IntelFPGAOpenclPlugin).
> *Reason:*
> In IntelFPGAOpenclPlugin, the current code builds a wrong path string after 
> getting the environment variable value. It should append "/bin/<binary 
> name>", otherwise the FPGA resource discovery fails.
>  
> *Solution:*
> Fix the path construction code in IntelFPGAOpenclPlugin.
>  
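
A minimal sketch of the corrected path construction (variable names are 
assumptions):

{code:java}
// Illustrative fragment: append "/bin/<binary name>" to the SDK root taken
// from ALTERAOCLSDKROOT instead of pointing discovery at the root itself.
String sdkRoot = System.getenv("ALTERAOCLSDKROOT");
if (sdkRoot != null && !sdkRoot.isEmpty()) {
  pathToExecutable = sdkRoot + "/bin/" + binaryName; // names are assumptions
}
{code}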



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8456) Fix a configuration handling bug when user leave FPGA discover executable path configuration default but set OpenCL SDK path environment variable

2018-10-18 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8456:
---
Attachment: YARN-8456-trunk.002.patch

> Fix a configuration handling bug when user leave FPGA discover executable 
> path configuration default but set OpenCL SDK path environment variable
> -
>
> Key: YARN-8456
> URL: https://issues.apache.org/jira/browse/YARN-8456
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8456-trunk.001.patch, YARN-8456-trunk.002.patch
>
>
> *Issue:*
>  When the user doesn't configure 
> "yarn.nodemanager.resource-plugins.fpga.path-to-discovery-executables" in 
> yarn-site.xml but has the "ALTERAOCLSDKROOT" environment variable set, the 
> FPGA discoverer cannot find the correct executable path (with 
> IntelFPGAOpenclPlugin).
> *Reason:*
> In IntelFPGAOpenclPlugin, the current code builds a wrong path string after 
> getting the environment variable value. It should append "/bin/<binary 
> name>", otherwise the FPGA resource discovery fails.
>  
> *Solution:*
> Fix the path construction code in IntelFPGAOpenclPlugin.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8909) Fix a document error in UsingFPGA.md

2018-10-18 Thread Zhankun Tang (JIRA)
Zhankun Tang created YARN-8909:
--

 Summary: Fix a document error in UsingFPGA.md
 Key: YARN-8909
 URL: https://issues.apache.org/jira/browse/YARN-8909
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Zhankun Tang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8909) Fix a document error in UsingFPGA.md

2018-10-18 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang reassigned YARN-8909:
--

Assignee: Zhankun Tang

> Fix a document error in UsingFPGA.md
> 
>
> Key: YARN-8909
> URL: https://issues.apache.org/jira/browse/YARN-8909
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8687) YARN service example is out-dated

2018-10-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655536#comment-16655536
 ] 

Hudson commented on YARN-8687:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15254 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15254/])
YARN-8687. Update YARN service file type in documentation.(eyang: 
rev ba7e81667ce12d5cf9d87ee18a8627323759cee0)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/YarnServiceAPI.md
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/Examples.md


> YARN service example is out-dated
> -
>
> Key: YARN-8687
> URL: https://issues.apache.org/jira/browse/YARN-8687
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Eric Yang
>Assignee: Suma Shivaprasad
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
> Attachments: YARN-8687.1.patch
>
>
> Example for YARN service is using file type "ENV".
> {code}
> {
>   "name": "httpd-service",
>   "version": "1.0",
>   "lifetime": "3600",
>   "components": [
> {
>   "name": "httpd",
>   "number_of_containers": 2,
>   "artifact": {
> "id": "centos/httpd-24-centos7:latest",
> "type": "DOCKER"
>   },
>   "launch_command": "/usr/bin/run-httpd",
>   "resource": {
> "cpus": 1,
> "memory": "1024"
>   },
>   "configuration": {
> "files": [
>   {
> "type": "TEMPLATE",
> "dest_file": "/var/www/html/index.html",
> "properties": {
>   "content": 
> "TitleHello from 
> ${COMPONENT_INSTANCE_NAME}!"
> }
>   }
> ]
>   }
> },
> {
>   "name": "httpd-proxy",
>   "number_of_containers": 1,
>   "artifact": {
> "id": "centos/httpd-24-centos7:latest",
> "type": "DOCKER"
>   },
>   "launch_command": "/usr/bin/run-httpd",
>   "resource": {
> "cpus": 1,
> "memory": "1024"
>   },
>   "configuration": {
> "files": [
>   {
> "type": "TEMPLATE",
> "dest_file": "/etc/httpd/conf.d/httpd-proxy.conf",
> "src_file": "httpd-proxy.conf"
>   }
> ]
>   }
> }
>   ],
>   "quicklinks": {
> "Apache HTTP Server": 
> "http://httpd-proxy-0.${SERVICE_NAME}.${USER}.${DOMAIN}:8080";
>   }
> }
> {code}
> The type has changed to "TEMPLATE" in the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8909) Fix a document error in UsingFPGA.md

2018-10-18 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8909:
---
Attachment: YARN-8909-trunk.001.patch

> Fix a document error in UsingFPGA.md
> 
>
> Key: YARN-8909
> URL: https://issues.apache.org/jira/browse/YARN-8909
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Minor
> Attachments: YARN-8909-trunk.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8908) Fix errors in yarn-default.xml related to GPU/FPGA

2018-10-18 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8908:
---
Attachment: YARN-8908-trunk.001.patch

> Fix errors in yarn-default.xml related to GPU/FPGA
> --
>
> Key: YARN-8908
> URL: https://issues.apache.org/jira/browse/YARN-8908
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Priority: Major
> Attachments: YARN-8908-trunk.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8908) Fix errors in yarn-default.xml related to GPU/FPGA

2018-10-18 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang reassigned YARN-8908:
--

Assignee: Zhankun Tang

[~leftnoteasy], this fixes the errors in the document. Please review.

> Fix errors in yarn-default.xml related to GPU/FPGA
> --
>
> Key: YARN-8908
> URL: https://issues.apache.org/jira/browse/YARN-8908
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8908-trunk.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8449) RM HA for AM HTTPS Support

2018-10-18 Thread Robert Kanter (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655571#comment-16655571
 ] 

Robert Kanter commented on YARN-8449:
-

I think those are okay to ignore.  For instance,
{quote}./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/ProxyCA.java:114:
  public void init(X509Certificate caCert, PrivateKey caPrivateKey):36: 
'caCert' hides a field. [HiddenField]{quote}
It doesn't like this very commonly used pattern:
{code:java}
public void init(X509Certificate caCert, PrivateKey caPrivateKey) {
   // the parameter intentionally shadows the field; hence HiddenField
   this.caCert = caCert;
   ...
}
{code}

> RM HA for AM HTTPS Support
> --
>
> Key: YARN-8449
> URL: https://issues.apache.org/jira/browse/YARN-8449
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Attachments: YARN-8449.001.patch, YARN-8449.002.patch, 
> YARN-8449.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8908) Fix errors in yarn-default.xml related to GPU/FPGA

2018-10-18 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655575#comment-16655575
 ] 

Wangda Tan commented on YARN-8908:
--

+1, patch LGTM. 

Thanks [~tangzhankun].

> Fix errors in yarn-default.xml related to GPU/FPGA
> --
>
> Key: YARN-8908
> URL: https://issues.apache.org/jira/browse/YARN-8908
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8908-trunk.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8896) Limit the maximum number of container assignments per heartbeat

2018-10-18 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655577#comment-16655577
 ] 

Wangda Tan commented on YARN-8896:
--

+1, patch LGTM, thanks [~tangzhankun]. 

> Limit the maximum number of container assignments per heartbeat
> ---
>
> Key: YARN-8896
> URL: https://issues.apache.org/jira/browse/YARN-8896
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Weiwei Yang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8896-trunk.001.patch
>
>
> YARN-4161 adds a configuration, 
> {{yarn.scheduler.capacity.per-node-heartbeat.maximum-container-assignments}}, 
> to control the max number of container assignments per heartbeat; however, 
> the default value is -1. This could potentially cause the CS to get stuck in 
> the while loop, causing issues like YARN-8513. We should change this to a 
> finite number, e.g. 100.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6098) Add documentation for Delete Queue

2018-10-18 Thread Suma Shivaprasad (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-6098:
---
Attachment: YARN-6098.1.patch

> Add documentation for Delete Queue
> --
>
> Key: YARN-6098
> URL: https://issues.apache.org/jira/browse/YARN-6098
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, documentation
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6098.1.patch
>
>
> As per the discussion in YARN-5556, we need to document steps for  deleting a 
> queue.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6098) Add documentation for Delete Queue

2018-10-18 Thread Suma Shivaprasad (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad reassigned YARN-6098:
--

Assignee: Suma Shivaprasad  (was: Naganarasimha G R)

Attaching a patch with steps to delete a queue

> Add documentation for Delete Queue
> --
>
> Key: YARN-6098
> URL: https://issues.apache.org/jira/browse/YARN-6098
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, documentation
>Reporter: Naganarasimha G R
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-6098.1.patch
>
>
> As per the discussion in YARN-5556, we need to document steps for  deleting a 
> queue.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8868) Set HTTPOnly attribute to Cookie

2018-10-18 Thread Chandni Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655626#comment-16655626
 ] 

Chandni Singh commented on YARN-8868:
-

Thanks [~sunilg] for reviewing and merging

> Set HTTPOnly attribute to Cookie
> 
>
> Key: YARN-8868
> URL: https://issues.apache.org/jira/browse/YARN-8868
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8868.001.patch, YARN-8868.002.patch, new_rm_ui.png, 
> old_rm_ui.png
>
>
> 1. The program creates a cookie in Dispatcher.java at lines 182, 185, and 
> 199, but fails to set the HttpOnly flag to true.
> 2. The program creates a cookie in WebAppProxyServlet.java at lines 141 and 
> 388, but fails to set the HttpOnly flag to true.
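
For reference, the Servlet 3.0+ way to set the flag (the cookie name and 
response variable here are illustrative, not the actual patch):

{code:java}
// Mark the cookie HttpOnly so client-side scripts cannot read it.
Cookie cookie = new Cookie("proxy-cookie", value); // name is illustrative
cookie.setHttpOnly(true);
resp.addCookie(cookie);
{code}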



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8909) Fix a document error in UsingFPGA.md

2018-10-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655633#comment-16655633
 ] 

Hadoop QA commented on YARN-8909:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
32m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8909 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944573/YARN-8909-trunk.001.patch
 |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux 1c2f28342049 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ba7e816 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 336 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22242/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Fix a document error in UsingFPGA.md
> 
>
> Key: YARN-8909
> URL: https://issues.apache.org/jira/browse/YARN-8909
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Minor
> Attachments: YARN-8909-trunk.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8908) Fix errors in yarn-default.xml related to GPU/FPGA

2018-10-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655650#comment-16655650
 ] 

Hadoop QA commented on YARN-8908:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
33m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
11s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8908 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944575/YARN-8908-trunk.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux e57be14a2074 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ba7e816 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22243/testReport/ |
| Max. process+thread count | 339 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22243/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Fix errors in yarn-default.xml related to GPU/FPGA
> --
>
> Key: YARN-8908
> URL: https://issues.apache.org/jira/browse/YARN-8908
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-89

[jira] [Commented] (YARN-8456) Fix a configuration handling bug when user leave FPGA discover executable path configuration default but set OpenCL SDK path environment variable

2018-10-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655660#comment-16655660
 ] 

Hadoop QA commented on YARN-8456:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 6 new + 40 unchanged - 2 fixed = 46 total (was 42) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 43s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8456 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944570/YARN-8456-trunk.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 86143c7b54c6 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ba7e816 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22241/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/22241/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Bui

[jira] [Updated] (YARN-8870) [Submarine] Add submarine installation scripts

2018-10-18 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-8870:
-
Fix Version/s: 3.2.0

> [Submarine] Add submarine installation scripts
> --
>
> Key: YARN-8870
> URL: https://issues.apache.org/jira/browse/YARN-8870
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xun Liu
>Assignee: Xun Liu
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: YARN-8870.001.patch, YARN-8870.004.patch, 
> YARN-8870.005.patch, YARN-8870.006.patch, YARN-8870.007.patch
>
>
> In order to reduce the difficulty of deploying the Hadoop
> {Submarine} runtime environment (DNS, Docker, GPU, network, graphics card,
> operating system kernel modifications, and other components), I developed
> this installation script. It provides one-click installation and can also
> be used to install, uninstall, start, and stop individual components step by
> step.
>  
> design document: 
> [https://docs.google.com/document/d/1muCTGFuUXUvM4JaDYjKqX5liQEg-AsNgkxfLMIFxYHU/edit?usp=sharing]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8456) Fix a configuration handling bug when user leave FPGA discover executable path configuration default but set OpenCL SDK path environment variable

2018-10-18 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655664#comment-16655664
 ] 

Wangda Tan commented on YARN-8456:
--

+1, thanks [~tangzhankun]. 

> Fix a configuration handling bug when user leave FPGA discover executable 
> path configuration default but set OpenCL SDK path environment variable
> -
>
> Key: YARN-8456
> URL: https://issues.apache.org/jira/browse/YARN-8456
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8456-trunk.001.patch, YARN-8456-trunk.002.patch
>
>
> *Issue:*
>  When the user doesn't configure 
> "yarn.nodemanager.resource-plugins.fpga.path-to-discovery-executables" in 
> yarn-site.xml but has the "ALTERAOCLSDKROOT" environment variable set, the 
> FPGA discoverer cannot find the correct executable path (with 
> IntelFPGAOpenclPlugin).
> *Reason:*
> In IntelFPGAOpenclPlugin, the current code builds a wrong path string after 
> getting the environment variable value. It should append "/bin/<binary 
> name>"; otherwise FPGA resource discovery fails.
>  
> *Solution:*
> Fix the path construction code in IntelFPGAOpenclPlugin.
>  
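
For illustration, a minimal sketch of the corrected lookup order, assuming the 
discovery binary is named "aocl" and greatly simplifying the real 
IntelFpgaOpenclPlugin logic:

{code:java}
public class FpgaDiscoveryPathSketch {
  // An explicit yarn-site.xml setting wins; otherwise derive the path
  // from the ALTERAOCLSDKROOT environment variable.
  static String resolveDiscoveryBinary(String configuredPath) {
    if (configuredPath != null && !configuredPath.isEmpty()) {
      return configuredPath;
    }
    String sdkRoot = System.getenv("ALTERAOCLSDKROOT");
    if (sdkRoot != null && !sdkRoot.isEmpty()) {
      // The fix: append "/bin/<binary name>" to the SDK root instead of
      // treating the root directory itself as the executable path.
      return sdkRoot + "/bin/aocl";
    }
    return "aocl"; // fall back to a plain PATH lookup
  }

  public static void main(String[] args) {
    System.out.println(resolveDiscoveryBinary(null));
  }
}
{code}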



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8489) Need to support "dominant" component concept inside YARN service

2018-10-18 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655670#comment-16655670
 ] 

Wangda Tan commented on YARN-8489:
--

[~billie.rinaldi]/[~eyang], 

Suggestions make sense to me. I will +1 to 
{{yarn.service.container-state-report-as-service-state}}.

> Need to support "dominant" component concept inside YARN service
> 
>
> Key: YARN-8489
> URL: https://issues.apache.org/jira/browse/YARN-8489
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: yarn-native-services
>Reporter: Wangda Tan
>Priority: Major
>
> Existing YARN services support termination policies tied to their restart 
> policies. For example, ALWAYS means the service will not be terminated, and 
> NEVER means the service will be terminated once all components have 
> terminated.
> The name "dominant" might not be the most appropriate; we can figure out 
> better names. In short, a dominant component is one whose final state 
> determines the job's final state regardless of other components.
> Use cases: 
> 1) A TensorFlow job has master/worker/services/tensorboard. Once the master 
> reaches a final state, whether succeeded or failed, we should terminate 
> ps/tensorboard/workers and mark the job succeeded/failed accordingly. 
> 2) Not sure if it is a real-world use case: a service has multiple 
> components, some of which are not restartable. For such services, if such a 
> component fails, we should mark the whole service failed. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8814) Yarn Service Upgrade: Update the swagger definition

2018-10-18 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8814:

Summary: Yarn Service Upgrade: Update the swagger definition  (was: Yarn 
Service Upgrade: Update the swagger definition and docs)

> Yarn Service Upgrade: Update the swagger definition
> ---
>
> Key: YARN-8814
> URL: https://issues.apache.org/jira/browse/YARN-8814
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>
> Yarn swagger definition is missing states added recently with upgrade. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8910) Misleading log statement in NM when max retries is -1

2018-10-18 Thread Chandni Singh (JIRA)
Chandni Singh created YARN-8910:
---

 Summary: Misleading log statement in NM when max retries is -1
 Key: YARN-8910
 URL: https://issues.apache.org/jira/browse/YARN-8910
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Chandni Singh
Assignee: Chandni Singh


If max retry attempts is -1 (i.e. retry forever), then the log statement below is misleading:
Relaunching Container container_e05_1533635581781_0001_01_02. Remaining 
retry attempts(after relaunch) : -4816.
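
For illustration, a minimal sketch of where the negative number comes from, 
using hypothetical names (the real logic lives in the NM's container retry 
handling), plus a guarded message:

{code:java}
public class RetryLogSketch {
  public static void main(String[] args) {
    int maxRetries = -1;      // -1 means retry forever
    int restartCount = 4815;  // relaunches so far
    // Subtracting blindly from -1 produces the misleading "-4816".
    int remaining = maxRetries - restartCount;
    String msg = (maxRetries == -1)
        ? "Relaunching container (unlimited retries remaining)"
        : "Relaunching container. Remaining retry attempts(after relaunch) : "
            + remaining;
    System.out.println(msg);
  }
}
{code}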




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8862) [GPG] Add Yarn Registry cleanup in ApplicationCleaner

2018-10-18 Thread Botong Huang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655696#comment-16655696
 ] 

Botong Huang commented on YARN-8862:


Committed to YARN-7402. Thanks [~bibinchundatt] and [~giovanni.fumarola] for 
reviewing!

> [GPG] Add Yarn Registry cleanup in ApplicationCleaner
> -
>
> Key: YARN-8862
> URL: https://issues.apache.org/jira/browse/YARN-8862
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-8862-YARN-7402.v1.patch, 
> YARN-8862-YARN-7402.v2.patch, YARN-8862-YARN-7402.v3.patch, 
> YARN-8862-YARN-7402.v4.patch, YARN-8862-YARN-7402.v5.patch, 
> YARN-8862-YARN-7402.v6.patch
>
>
> In Yarn Federation, we use the Yarn Registry to store the AMTokens for UAMs 
> in secondary sub-clusters. Because more app attempts may come later, 
> AMRMProxy cannot kill the UAM and delete the tokens when one local attempt 
> finishes. So, similar to the StateStore application table, we need an 
> ApplicationCleaner in GPG to clean up the finished app entries in the Yarn 
> Registry. 
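
For illustration, a minimal sketch of the cleanup loop described above, with 
hypothetical interfaces standing in for the registry and state store (not the 
actual GPG ApplicationCleaner API):

{code:java}
import java.util.List;

interface RegistryClient {
  List<String> listAppEntries();
  void deleteAppEntry(String appId);
}

interface StateStoreClient {
  boolean isApplicationKnown(String appId);
}

public class ApplicationCleanerSketch {
  static void cleanRegistry(RegistryClient registry, StateStoreClient store) {
    for (String appId : registry.listAppEntries()) {
      // An app no longer tracked by the state store has finished in all
      // sub-clusters, so its UAM token entries can be removed.
      if (!store.isApplicationKnown(appId)) {
        registry.deleteAppEntry(appId);
      }
    }
  }
}
{code}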



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6098) Add documentation for Delete Queue

2018-10-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655698#comment-16655698
 ] 

Hadoop QA commented on YARN-6098:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
31m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-6098 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944584/YARN-6098.1.patch |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux 2363cd35535b 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1e78dfc |
| maven | version: Apache Maven 3.3.9 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/22244/artifact/out/whitespace-eol.txt
 |
| Max. process+thread count | 335 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22244/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Add documentation for Delete Queue
> --
>
> Key: YARN-6098
> URL: https://issues.apache.org/jira/browse/YARN-6098
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, documentation
>Reporter: Naganarasimha G R
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-6098.1.patch
>
>
> As per the discussion in YARN-5556, we need to document steps for  deleting a 
> queue.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8873) Add CSI java-based client library

2018-10-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655703#comment-16655703
 ] 

Hadoop QA commented on YARN-8873:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  8m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}141m 19s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-yarn-csi in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}221m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8873 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944521/YARN-8873.006.patch |
| Option

[jira] [Commented] (YARN-8899) TestCleanupAfterKIll is failing due to unsatisfied dependencies

2018-10-18 Thread Robert Kanter (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655704#comment-16655704
 ] 

Robert Kanter commented on YARN-8899:
-

I've been trying to play with the minicluster pom 
({{hadoop-client-minicluster}}), but it's very complicated and I can't seem to 
get it to work right.  [~ste...@apache.org], any ideas on how to fix this 
without adding bouncycastle at the test scope for a bunch of different poms?  
Or is that actually the right way to fix this?

> TestCleanupAfterKIll is failing due to unsatisfied dependencies
> ---
>
> Key: YARN-8899
> URL: https://issues.apache.org/jira/browse/YARN-8899
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-native-services
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Priority: Major
>
> The BouncyCastle upgrade causes unit tests to fail due to unsatisfied 
> transitive dependencies.
> It looks like the new version of bcprov-jdk15on brings in a new dependency, 
> bcpkix-jdk15on. Minicluster does not have the bcpkix-jdk15on dependency, 
> which causes unit tests that depend on minicluster to fail.
> {code}
> [ERROR] 
> testRegistryCleanedOnLifetimeExceeded(org.apache.hadoop.yarn.service.TestCleanupAfterKill)
>   Time elapsed: 2.709 s  <<< ERROR!
> java.lang.NoClassDefFoundError: 
> org/bouncycastle/operator/OperatorCreationException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:836)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1256)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:324)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.initResourceManager(MiniYARNCluster.java:348)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$200(MiniYARNCluster.java:128)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceInit(MiniYARNCluster.java:497)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:316)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.service.ServiceTestUtils.setupInternal(ServiceTestUtils.java:251)
>   at 
> org.apache.hadoop.yarn.service.TestCleanupAfterKill.testRegistryCleanedOnLifetimeExceeded(TestCleanupAfterKill.java:66)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.lang.ClassNotFoundException: 
> org.bouncycastle.operator.OperatorCreationException
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 23 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7225) Add queue and partition info to RM audit log

2018-10-18 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655716#comment-16655716
 ] 

Jonathan Hung commented on YARN-7225:
-

Hi [~eepayne] thanks for updating the patch.

Sorry I missed this in the previous review, but I think the logged partition 
should be changed from the partition of the AM container request to the 
partition the container actually ran on:

For the logSuccess in FiCaSchedulerApp#containerCompleted:
{noformat}
// In order to save space in the audit log, only include the partition
// if it is not the default partition.
String containerPartition = null;
if (appAMNodePartitionName != null &&
  !appAMNodePartitionName.isEmpty()) {
  containerPartition = appAMNodePartitionName;
}
Resource containerResource = rmContainer.getContainer().getResource();
RMAuditLogger.logSuccess(getUser(), AuditConstants.RELEASE_CONTAINER,
"SchedulerApp", getApplicationId(), containerId, containerResource,
getQueueName(), containerPartition);
{noformat}
We should just be able to log "partition" instead of containerPartition, since 
the partition passed to containerCompleted is the node's partition.

For the logSuccess in FiCaSchedulerApp#apply:
{noformat}
String partition = null;
if (appAMNodePartitionName != null &&
  !appAMNodePartitionName.isEmpty()) {
  partition = appAMNodePartitionName;
}
RMAuditLogger.logSuccess(getUser(), AuditConstants.ALLOC_CONTAINER,
"SchedulerApp", getApplicationId(), containerId,
allocation.getAllocatedOrReservedResource(), getQueueName(),
partition);{noformat}
We should be able to log schedulerContainer.getSchedulerNode().getPartition(), as sketched below.
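
For example, the second logSuccess could end up looking roughly like this (a 
sketch reusing the names from the snippet above, not a reviewed diff):
{noformat}
// Log the partition of the node the container was actually placed on.
String nodePartition =
    schedulerContainer.getSchedulerNode().getPartition();
// Keep the space-saving convention: omit the default (empty) partition.
if (nodePartition != null && nodePartition.isEmpty()) {
  nodePartition = null;
}
RMAuditLogger.logSuccess(getUser(), AuditConstants.ALLOC_CONTAINER,
    "SchedulerApp", getApplicationId(), containerId,
    allocation.getAllocatedOrReservedResource(), getQueueName(),
    nodePartition);
{noformat}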

> Add queue and partition info to RM audit log
> 
>
> Key: YARN-7225
> URL: https://issues.apache.org/jira/browse/YARN-7225
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.9.1, 2.8.4, 3.0.2, 3.1.1
>Reporter: Jonathan Hung
>Assignee: Eric Payne
>Priority: Major
> Attachments: YARN-7225.001.patch, YARN-7225.002.patch, 
> YARN-7225.003.patch, YARN-7225.004.patch, YARN-7225.branch-2.8.001.patch
>
>
> Right now RM audit log has fields such as user, ip, resource, etc. Having 
> queue and partition  is useful for resource tracking.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8899) TestCleanupAfterKIll is failing due to unsatisfied dependencies

2018-10-18 Thread Robert Kanter (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655719#comment-16655719
 ] 

Robert Kanter commented on YARN-8899:
-

I think this is a more general solution - we can add the 
{{hadoop-yarn-server-web-proxy}} dependency to the {{hadoop-minicluster}} pom 
(roughly as sketched below).  See the 001 patch.
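
For illustration, the shape of that pom change would be roughly the following 
(coordinates assumed from the module name; the 001 patch has the actual edit):

{code:xml}
<!-- In the hadoop-minicluster pom: pull in the web-proxy module so its
     BouncyCastle dependencies reach minicluster-based tests. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-yarn-server-web-proxy</artifactId>
</dependency>
{code}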

> TestCleanupAfterKIll is failing due to unsatisfied dependencies
> ---
>
> Key: YARN-8899
> URL: https://issues.apache.org/jira/browse/YARN-8899
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-native-services
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Priority: Major
> Attachments: YARN-8899.001.patch
>
>
> The BouncyCastle upgrade causes unit tests to fail due to unsatisfied 
> transitive dependencies.
> It looks like the new version of bcprov-jdk15on brings in a new dependency, 
> bcpkix-jdk15on. Minicluster does not have the bcpkix-jdk15on dependency, 
> which causes unit tests that depend on minicluster to fail.
> {code}
> [ERROR] 
> testRegistryCleanedOnLifetimeExceeded(org.apache.hadoop.yarn.service.TestCleanupAfterKill)
>   Time elapsed: 2.709 s  <<< ERROR!
> java.lang.NoClassDefFoundError: 
> org/bouncycastle/operator/OperatorCreationException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:836)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1256)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:324)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.initResourceManager(MiniYARNCluster.java:348)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$200(MiniYARNCluster.java:128)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceInit(MiniYARNCluster.java:497)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:316)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.service.ServiceTestUtils.setupInternal(ServiceTestUtils.java:251)
>   at 
> org.apache.hadoop.yarn.service.TestCleanupAfterKill.testRegistryCleanedOnLifetimeExceeded(TestCleanupAfterKill.java:66)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.lang.ClassNotFoundException: 
> org.bouncycastle.operator.OperatorCreationException
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 23 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8899) TestCleanupAfterKIll is failing due to unsatisfied dependencies

2018-10-18 Thread Robert Kanter (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter reassigned YARN-8899:
---

Assignee: Robert Kanter

> TestCleanupAfterKIll is failing due to unsatisfied dependencies
> ---
>
> Key: YARN-8899
> URL: https://issues.apache.org/jira/browse/YARN-8899
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-native-services
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Robert Kanter
>Priority: Major
> Attachments: YARN-8899.001.patch
>
>
> The BouncyCastle upgrade causes unit tests to fail due to unsatisfied 
> transitive dependencies.
> It looks like the new version of bcprov-jdk15on brings in a new dependency, 
> bcpkix-jdk15on. Minicluster does not have the bcpkix-jdk15on dependency, 
> which causes unit tests that depend on minicluster to fail.
> {code}
> [ERROR] 
> testRegistryCleanedOnLifetimeExceeded(org.apache.hadoop.yarn.service.TestCleanupAfterKill)
>   Time elapsed: 2.709 s  <<< ERROR!
> java.lang.NoClassDefFoundError: 
> org/bouncycastle/operator/OperatorCreationException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:836)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1256)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:324)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.initResourceManager(MiniYARNCluster.java:348)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$200(MiniYARNCluster.java:128)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceInit(MiniYARNCluster.java:497)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:316)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.service.ServiceTestUtils.setupInternal(ServiceTestUtils.java:251)
>   at 
> org.apache.hadoop.yarn.service.TestCleanupAfterKill.testRegistryCleanedOnLifetimeExceeded(TestCleanupAfterKill.java:66)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.lang.ClassNotFoundException: 
> org.bouncycastle.operator.OperatorCreationException
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 23 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8899) TestCleanupAfterKIll is failing due to unsatisfied dependencies

2018-10-18 Thread Robert Kanter (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-8899:

Attachment: YARN-8899.001.patch

> TestCleanupAfterKIll is failing due to unsatisfied dependencies
> ---
>
> Key: YARN-8899
> URL: https://issues.apache.org/jira/browse/YARN-8899
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-native-services
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Priority: Major
> Attachments: YARN-8899.001.patch
>
>
> The BouncyCastle upgrade causes unit tests to fail due to unsatisfied 
> transitive dependencies.
> It looks like the new version of bcprov-jdk15on brings in a new dependency, 
> bcpkix-jdk15on. Minicluster does not have the bcpkix-jdk15on dependency, 
> which causes unit tests that depend on minicluster to fail.
> {code}
> [ERROR] 
> testRegistryCleanedOnLifetimeExceeded(org.apache.hadoop.yarn.service.TestCleanupAfterKill)
>   Time elapsed: 2.709 s  <<< ERROR!
> java.lang.NoClassDefFoundError: 
> org/bouncycastle/operator/OperatorCreationException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:836)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1256)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:324)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.initResourceManager(MiniYARNCluster.java:348)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$200(MiniYARNCluster.java:128)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceInit(MiniYARNCluster.java:497)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:316)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.service.ServiceTestUtils.setupInternal(ServiceTestUtils.java:251)
>   at 
> org.apache.hadoop.yarn.service.TestCleanupAfterKill.testRegistryCleanedOnLifetimeExceeded(TestCleanupAfterKill.java:66)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.lang.ClassNotFoundException: 
> org.bouncycastle.operator.OperatorCreationException
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 23 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8897) LoadBasedRouterPolicy throws "NPE" in case of sub cluster unavailability

2018-10-18 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655733#comment-16655733
 ] 

Giovanni Matteo Fumarola commented on YARN-8897:


Thanks [~bibinchundatt] for the comment.

RMWebServiceProtocol implements a bunch of methods. Some of them return an 
Object (GET calls) and others return a Response (POST/PUT calls).

In case of an error in a POST/PUT call, the code returns an error response. We 
can discuss whether the code returns the proper error, i.e. 
SERVICE_UNAVAILABLE vs INTERNAL_SERVER_ERROR; some of our internal clients 
retry on SERVICE_UNAVAILABLE. 
Now, in case of an error in a GET call, the code returns an empty object. The 
same practice is used in the RM. If we want to return the failure as a 
YarnException, we need to change the protocol.
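
For illustration, a minimal sketch of the two styles, with hypothetical method 
bodies (not the actual Router code):

{code:java}
import java.util.Collections;
import java.util.List;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;

public class RouterErrorStyleSketch {
  // POST/PUT style: surface the failure as an HTTP error Response.
  // SERVICE_UNAVAILABLE (retriable) vs INTERNAL_SERVER_ERROR is the
  // debatable choice mentioned above.
  Response submitApplication() {
    try {
      routeToSubCluster();
      return Response.ok().build();
    } catch (RuntimeException e) {
      return Response.status(Status.SERVICE_UNAVAILABLE)
          .entity(e.getMessage()).build();
    }
  }

  // GET style: swallow the failure and return an empty object, as the RM
  // does; throwing a YarnException instead would change the protocol.
  List<String> getApps() {
    try {
      return fetchFromSubClusters();
    } catch (RuntimeException e) {
      return Collections.emptyList();
    }
  }

  void routeToSubCluster() { /* placeholder */ }

  List<String> fetchFromSubClusters() { return Collections.emptyList(); }
}
{code}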

> LoadBasedRouterPolicy throws "NPE" in case of sub cluster unavailability 
> -
>
> Key: YARN-8897
> URL: https://issues.apache.org/jira/browse/YARN-8897
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: federation, router
>Reporter: Akshay Agarwal
>Assignee: Bilwa S T
>Priority: Minor
> Attachments: YARN-8897-001.patch
>
>
> If no sub-clusters are available for the "*Load Based Router Policy*" with 
> *cluster weight* set to *1* in a Router Based Federation setup, a 
> "*NullPointerException*" is thrown.
>  
> *Exception Details:*
> {code:java}
> java.lang.NullPointerException: java.lang.NullPointerException
>  at 
> org.apache.hadoop.yarn.server.federation.policies.router.LoadBasedRouterPolicy.getHomeSubcluster(LoadBasedRouterPolicy.java:99)
>  at 
> org.apache.hadoop.yarn.server.federation.policies.RouterPolicyFacade.getHomeSubcluster(RouterPolicyFacade.java:204)
>  at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.submitApplication(FederationClientInterceptor.java:362)
>  at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.submitApplication(RouterClientRMService.java:218)
>  at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:282)
>  at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:579)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
>  at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateRuntimeException(RPCUtil.java:85)
>  at 
> org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:122)
>  at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.submitApplication(ApplicationClientProtocolPBClientImpl.java:297)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>  at com.sun.proxy.$Proxy15.submitApplication(Unknown Source)
>  at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:288)
>  at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:300)
>  at org.apache.hadoop.mapred.YARNRunner.

[jira] [Commented] (YARN-8900) [Router] Federation: routing getContainers REST invocations transparently to multiple RMs

2018-10-18 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655736#comment-16655736
 ] 

Giovanni Matteo Fumarola commented on YARN-8900:


Thanks [~bibinchundatt] for the comment.
I replied to 2) in the 
[comment|https://issues.apache.org/jira/browse/YARN-8897?focusedCommentId=16655733&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16655733
 ] 

> [Router] Federation: routing getContainers REST invocations transparently to 
> multiple RMs
> -
>
> Key: YARN-8900
> URL: https://issues.apache.org/jira/browse/YARN-8900
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation, router
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8900.v1.patch, YARN-8900.v2.patch
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> RMWebServicesProtocol requests to the appropriate RM(s) in a federated YARN 
> cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8907) Modify a error message in TestCapacityScheduler

2018-10-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655745#comment-16655745
 ] 

Hadoop QA commented on YARN-8907:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 18s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}142m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8907 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944566/YARN-8907-trunk.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cc37175104c2 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ba7e816 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/22240/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22240/testReport/ |
| Max. process+thread count | 956 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/P

[jira] [Commented] (YARN-6098) Add documentation for Delete Queue

2018-10-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655785#comment-16655785
 ] 

Hudson commented on YARN-6098:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15257 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15257/])
YARN-6098. Add documentation for Delete Queue. (Suma Shivaprasad via (wangda: 
rev bfb88b10f46a265aa38ab3e1d87b6a0a99d94be8)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md


> Add documentation for Delete Queue
> --
>
> Key: YARN-6098
> URL: https://issues.apache.org/jira/browse/YARN-6098
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, documentation
>Reporter: Naganarasimha G R
>Assignee: Suma Shivaprasad
>Priority: Major
> Fix For: 3.2.1
>
> Attachments: YARN-6098.1.patch
>
>
> As per the discussion in YARN-5556, we need to document steps for  deleting a 
> queue.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8456) Fix a configuration handling bug when user leave FPGA discover executable path configuration default but set OpenCL SDK path environment variable

2018-10-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655786#comment-16655786
 ] 

Hudson commented on YARN-8456:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15257 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15257/])
YARN-8456. Fix a configuration handling bug when user leave FPGA (wangda: rev 
a457a8951a1b35f06811c40443ca44bb9c698c30)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/IntelFpgaOpenclPlugin.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/TestFpgaDiscoverer.java


> Fix a configuration handling bug when user leave FPGA discover executable 
> path configuration default but set OpenCL SDK path environment variable
> -
>
> Key: YARN-8456
> URL: https://issues.apache.org/jira/browse/YARN-8456
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Fix For: 3.2.1
>
> Attachments: YARN-8456-trunk.001.patch, YARN-8456-trunk.002.patch
>
>
> *Issue:*
>  When the user doesn't configure 
> "yarn.nodemanager.resource-plugins.fpga.path-to-discovery-executables" in 
> yarn-site.xml but has the "ALTERAOCLSDKROOT" environment variable set, the 
> FPGA discoverer cannot find the correct executable path (with 
> IntelFPGAOpenclPlugin).
> *Reason:*
> In IntelFPGAOpenclPlugin, the current code builds a wrong path string after 
> getting the environment variable value. It should append "/bin/<binary 
> name>"; otherwise FPGA resource discovery fails.
>  
> *Solution:*
> Fix the path construction code in IntelFPGAOpenclPlugin.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8896) Limit the maximum number of container assignments per heartbeat

2018-10-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655787#comment-16655787
 ] 

Hudson commented on YARN-8896:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15257 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15257/])
YARN-8896. Limit the maximum number of container assignments per (wangda: rev 
780be14f07df2a3ed6273b96ae857c278fd72718)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java


> Limit the maximum number of container assignments per heartbeat
> ---
>
> Key: YARN-8896
> URL: https://issues.apache.org/jira/browse/YARN-8896
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Weiwei Yang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8896-trunk.001.patch
>
>
> YARN-4161 adds a configuration 
> \{{yarn.scheduler.capacity.per-node-heartbeat.maximum-container-assignments}} 
> to control the maximum number of container assignments per heartbeat; 
> however, the default value is -1. This could potentially cause the CS to get 
> stuck in the while loop, causing issues like YARN-8513. We should change this 
> to a finite number, e.g. 100.
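To make the failure mode concrete, here is a schematic, self-contained version 
of the heartbeat assignment loop (illustrative names, not the actual 
CapacityScheduler code):

{code:java}
public class HeartbeatAssignmentSketch {
  private int schedulableContainers = 5; // pretend 5 containers fit the node

  private boolean tryAssignOneContainer() {
    if (schedulableContainers == 0) {
      return false; // nothing schedulable this heartbeat
    }
    schedulableContainers--;
    return true;
  }

  // With maxAssignments == -1 the bound check never trips and the loop is
  // only limited by tryAssignOneContainer(), which is how the scheduler can
  // spin as in YARN-8513; a finite cap guarantees the handler returns.
  int assignOnHeartbeat(int maxAssignments) {
    int assigned = 0;
    while (maxAssignments < 0 || assigned < maxAssignments) {
      if (!tryAssignOneContainer()) {
        break;
      }
      assigned++;
    }
    return assigned;
  }

  public static void main(String[] args) {
    System.out.println(new HeartbeatAssignmentSketch().assignOnHeartbeat(100));
  }
}
{code}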



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-8896) Limit the maximum number of container assignments per heartbeat

2018-10-18 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan resolved YARN-8896.
--
Resolution: Fixed

> Limit the maximum number of container assignments per heartbeat
> ---
>
> Key: YARN-8896
> URL: https://issues.apache.org/jira/browse/YARN-8896
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Weiwei Yang
>Assignee: Zhankun Tang
>Priority: Major
> Fix For: 3.1.2, 3.2.1
>
> Attachments: YARN-8896-trunk.001.patch
>
>
> YARN-4161 adds a configuration 
> \{{yarn.scheduler.capacity.per-node-heartbeat.maximum-container-assignments}} 
> to control the maximum number of container assignments per heartbeat; 
> however, the default value is -1. This could potentially cause the CS to get 
> stuck in the while loop, causing issues like YARN-8513. We should change this 
> to a finite number, e.g. 100.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8899) TestCleanupAfterKIll is failing due to unsatisfied dependencies

2018-10-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655792#comment-16655792
 ] 

Hadoop QA commented on YARN-8899:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-minicluster in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8899 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944601/YARN-8899.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux 55ae2da798a1 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0e56c88 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22245/testReport/ |
| Max. process+thread count | 415 (vs. ulimit of 1) |
| modules | C: hadoop-minicluster U: hadoop-minicluster |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22245/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> TestCleanupAfterKIll is failing due to unsatisfied dependencies
> ---
>
> Key: YARN-8899
> URL: https://issues.apache.org/jira/browse/YARN-8899
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-native-services
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Robert Kanter
>Priority: Major
> 

[jira] [Commented] (YARN-8896) Limit the maximum number of container assignments per heartbeat

2018-10-18 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655790#comment-16655790
 ] 

Wangda Tan commented on YARN-8896:
--

Committed to trunk/branch-3.1/branch-3.2.

> Limit the maximum number of container assignments per heartbeat
> ---
>
> Key: YARN-8896
> URL: https://issues.apache.org/jira/browse/YARN-8896
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Weiwei Yang
>Assignee: Zhankun Tang
>Priority: Major
> Fix For: 3.1.2, 3.2.1
>
> Attachments: YARN-8896-trunk.001.patch
>
>
> YARN-4161 adds a configuration 
> \{{yarn.scheduler.capacity.per-node-heartbeat.maximum-container-assignments}} 
> to control the maximum number of container assignments per heartbeat; 
> however, the default value is -1. This could potentially cause the CS to get 
> stuck in the while loop, causing issues like YARN-8513. We should change this 
> to a finite number, e.g. 100.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8896) Limit the maximum number of container assignments per heartbeat

2018-10-18 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-8896:
-
Fix Version/s: 3.2.1
   3.1.2

> Limit the maximum number of container assignments per heartbeat
> ---
>
> Key: YARN-8896
> URL: https://issues.apache.org/jira/browse/YARN-8896
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Weiwei Yang
>Assignee: Zhankun Tang
>Priority: Major
> Fix For: 3.1.2, 3.2.1
>
> Attachments: YARN-8896-trunk.001.patch
>
>
> YARN-4161 adds a configuration 
> \{{yarn.scheduler.capacity.per-node-heartbeat.maximum-container-assignments}} 
> to control the maximum number of container assignments per heartbeat; 
> however, the default value is -1. This could potentially cause the CS to get 
> stuck in the while loop, causing issues like YARN-8513. We should change this 
> to a finite number, e.g. 100.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6098) Add documentation for Delete Queue

2018-10-18 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655793#comment-16655793
 ] 

Wangda Tan commented on YARN-6098:
--

Backported to branch-3.1 as well.

> Add documentation for Delete Queue
> --
>
> Key: YARN-6098
> URL: https://issues.apache.org/jira/browse/YARN-6098
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, documentation
>Reporter: Naganarasimha G R
>Assignee: Suma Shivaprasad
>Priority: Major
> Fix For: 3.1.2, 3.2.1
>
> Attachments: YARN-6098.1.patch
>
>
> As per the discussion in YARN-5556, we need to document steps for  deleting a 
> queue.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6098) Add documentation for Delete Queue

2018-10-18 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6098:
-
Fix Version/s: 3.1.2

> Add documentation for Delete Queue
> --
>
> Key: YARN-6098
> URL: https://issues.apache.org/jira/browse/YARN-6098
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, documentation
>Reporter: Naganarasimha G R
>Assignee: Suma Shivaprasad
>Priority: Major
> Fix For: 3.1.2, 3.2.1
>
> Attachments: YARN-6098.1.patch
>
>
> As per the discussion in YARN-5556, we need to document steps for  deleting a 
> queue.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8682) YARN Service throws NPE when explicit null instead of empty object {} is used

2018-10-18 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh reassigned YARN-8682:
---

Assignee: Chandni Singh

> YARN Service throws NPE when explicit null instead of empty object {} is used
> -
>
> Key: YARN-8682
> URL: https://issues.apache.org/jira/browse/YARN-8682
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.0.1
>Reporter: Gour Saha
>Assignee: Chandni Singh
>Priority: Major
>
> YARN Service should not throw NPE for a config like this -
> {code}
> .
> .
> "configuration": {
> "env": {
> "HADOOP_CONF_DIR": "/hadoop-conf",
> "USER": "testuser",
> "YARN_CONTAINER_RUNTIME_DOCKER_MOUNTS": 
> "/sys/fs/cgroup:/sys/fs/cgroup:ro",
> "YARN_CONTAINER_RUNTIME_DOCKER_RUN_OVERRIDE_DISABLE": "true"
> },
> "files": null
> }
> .
> .
> {code}
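A minimal sketch of the defensive handling (illustrative only; the real fix 
belongs in the service AM's spec resolution, and the class below is invented):

{code:java}
import java.util.Collections;
import java.util.List;

// Treat an explicit JSON null like an absent/empty list so downstream
// iteration never dereferences null. Names are illustrative.
public class ConfigFilesSketch {
  private List<String> files; // stays null when the spec says "files": null

  List<String> getFiles() {
    return files == null ? Collections.emptyList() : files;
  }

  public static void main(String[] args) {
    // Safe even for "files": null; the loop simply does no work.
    for (String f : new ConfigFilesSketch().getFiles()) {
      System.out.println(f);
    }
  }
}
{code}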



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8899) TestCleanupAfterKIll is failing due to unsatisfied dependencies

2018-10-18 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655804#comment-16655804
 ] 

Eric Yang commented on YARN-8899:
-

Thank you for the patch [~rkanter].  +1 for patch 001.

> TestCleanupAfterKIll is failing due to unsatisfied dependencies
> ---
>
> Key: YARN-8899
> URL: https://issues.apache.org/jira/browse/YARN-8899
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-native-services
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Robert Kanter
>Priority: Major
> Attachments: YARN-8899.001.patch
>
>
> The BouncyCastle upgrade causes unit tests to fail due to unsatisfied 
> transitive dependencies.
> It looks like the new version of bcprov-jdk15on brings in a new dependency, 
> bcpkix-jdk15on. The minicluster does not have the bcpkix-jdk15on dependency, 
> which causes unit tests that depend on the minicluster to fail.
> {code}
> [ERROR] 
> testRegistryCleanedOnLifetimeExceeded(org.apache.hadoop.yarn.service.TestCleanupAfterKill)
>   Time elapsed: 2.709 s  <<< ERROR!
> java.lang.NoClassDefFoundError: 
> org/bouncycastle/operator/OperatorCreationException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:836)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1256)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:324)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.initResourceManager(MiniYARNCluster.java:348)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$200(MiniYARNCluster.java:128)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceInit(MiniYARNCluster.java:497)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:316)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.service.ServiceTestUtils.setupInternal(ServiceTestUtils.java:251)
>   at 
> org.apache.hadoop.yarn.service.TestCleanupAfterKill.testRegistryCleanedOnLifetimeExceeded(TestCleanupAfterKill.java:66)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.lang.ClassNotFoundException: 
> org.bouncycastle.operator.OperatorCreationException
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 23 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8899) TestCleanupAfterKIll is failing due to unsatisfied dependencies

2018-10-18 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8899:

 Priority: Blocker  (was: Major)
Fix Version/s: 3.3.0

> TestCleanupAfterKIll is failing due to unsatisfied dependencies
> ---
>
> Key: YARN-8899
> URL: https://issues.apache.org/jira/browse/YARN-8899
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-native-services
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Robert Kanter
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: YARN-8899.001.patch
>
>
> The BouncyCastle upgrade causes unit tests to fail due to unsatisfied 
> transitive dependencies.
> It looks like the new version of bcprov-jdk15on brings in a new dependency, 
> bcpkix-jdk15on. The minicluster does not have the bcpkix-jdk15on dependency, 
> which causes unit tests that depend on the minicluster to fail.
> {code}
> [ERROR] 
> testRegistryCleanedOnLifetimeExceeded(org.apache.hadoop.yarn.service.TestCleanupAfterKill)
>   Time elapsed: 2.709 s  <<< ERROR!
> java.lang.NoClassDefFoundError: 
> org/bouncycastle/operator/OperatorCreationException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:836)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1256)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:324)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.initResourceManager(MiniYARNCluster.java:348)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$200(MiniYARNCluster.java:128)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceInit(MiniYARNCluster.java:497)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:316)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.service.ServiceTestUtils.setupInternal(ServiceTestUtils.java:251)
>   at 
> org.apache.hadoop.yarn.service.TestCleanupAfterKill.testRegistryCleanedOnLifetimeExceeded(TestCleanupAfterKill.java:66)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.lang.ClassNotFoundException: 
> org.bouncycastle.operator.OperatorCreationException
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 23 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8489) Need to support "dominant" component concept inside YARN service

2018-10-18 Thread Suma Shivaprasad (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655808#comment-16655808
 ] 

Suma Shivaprasad commented on YARN-8489:


Thanks [~billie.rinaldi] and [~eyang] for the feedback, and [~leftnoteasy] for 
raising this issue.

As discussed with [~leftnoteasy], I will be picking up this jira and taking it 
forward.

> Need to support "dominant" component concept inside YARN service
> 
>
> Key: YARN-8489
> URL: https://issues.apache.org/jira/browse/YARN-8489
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: yarn-native-services
>Reporter: Wangda Tan
>Priority: Major
>
> The existing YARN service supports termination policies for the different 
> restart policies. For example, ALWAYS means the service will not be 
> terminated, while NEVER means that once all components have terminated, the 
> service is terminated.
> The name "dominant" might not be the most appropriate; we can figure out 
> better names. In short, it means a dominant component whose final state 
> determines the job's final state, regardless of other components.
> Use cases: 
> 1) A Tensorflow job has master/worker/services/tensorboard components. Once 
> the master reaches a final state, whether it succeeded or failed, we should 
> terminate ps/tensorboard/workers and mark the job succeeded/failed. 
> 2) Not sure if it is a real-world use case: a service has multiple 
> components, and some component is not restartable. For such services, if 
> that component fails, we should mark the whole service as failed. 
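A hypothetical sketch of the evaluation rule being proposed; the "dominant" 
flag does not exist yet, so every name here is invented for illustration:

{code:java}
// Hypothetical rule: if a component marked dominant reaches a final state,
// the service adopts that state regardless of the other components.
enum SketchState { RUNNING, SUCCEEDED, FAILED }

class ComponentSketch {
  final boolean dominant;
  final SketchState state;

  ComponentSketch(boolean dominant, SketchState state) {
    this.dominant = dominant;
    this.state = state;
  }
}

class DominantRuleSketch {
  static SketchState serviceState(ComponentSketch... components) {
    for (ComponentSketch c : components) {
      if (c.dominant && c.state != SketchState.RUNNING) {
        return c.state; // dominant component decides; stop the rest
      }
    }
    return SketchState.RUNNING; // no dominant component has finished
  }
}
{code}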



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8489) Need to support "dominant" component concept inside YARN service

2018-10-18 Thread Suma Shivaprasad (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad reassigned YARN-8489:
--

Assignee: Suma Shivaprasad

> Need to support "dominant" component concept inside YARN service
> 
>
> Key: YARN-8489
> URL: https://issues.apache.org/jira/browse/YARN-8489
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: yarn-native-services
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
>Priority: Major
>
> The existing YARN service supports termination policies for the different 
> restart policies. For example, ALWAYS means the service will not be 
> terminated, while NEVER means that once all components have terminated, the 
> service is terminated.
> The name "dominant" might not be the most appropriate; we can figure out 
> better names. In short, it means a dominant component whose final state 
> determines the job's final state, regardless of other components.
> Use cases: 
> 1) A Tensorflow job has master/worker/services/tensorboard components. Once 
> the master reaches a final state, whether it succeeded or failed, we should 
> terminate ps/tensorboard/workers and mark the job succeeded/failed. 
> 2) Not sure if it is a real-world use case: a service has multiple 
> components, and some component is not restartable. For such services, if 
> that component fails, we should mark the whole service as failed. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8910) Misleading log statement in NM when max retries is -1

2018-10-18 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8910:

Attachment: YARN-8910.001.patch

> Misleading log statement in NM when max retries is -1
> -
>
> Key: YARN-8910
> URL: https://issues.apache.org/jira/browse/YARN-8910
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Minor
> Attachments: YARN-8910.001.patch
>
>
> If the maximum number of retry attempts is -1, then the log statement below 
> is misleading:
> Relaunching Container container_e05_1533635581781_0001_01_02. Remaining 
> retry attempts(after relaunch) : -4816.
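A sketch of the kind of guard the patch presumably adds (illustrative; the 
actual change lives in the NM's container relaunch path):

{code:java}
public class RelaunchLogSketch {
  // With maxRetries == -1 (retry forever) the "remaining attempts"
  // arithmetic is meaningless, so report unlimited retries instead of a
  // negative count.
  static String relaunchMessage(String id, int maxRetries, int used) {
    if (maxRetries == -1) {
      return "Relaunching Container " + id
          + ". Retry attempts are unlimited.";
    }
    return "Relaunching Container " + id
        + ". Remaining retry attempts(after relaunch) : "
        + (maxRetries - used) + ".";
  }

  public static void main(String[] args) {
    // Placeholder container id; a real NM supplies the actual id.
    System.out.println(relaunchMessage("container_sketch_01", -1, 4817));
  }
}
{code}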



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8910) Misleading log statement in NM when max retries is -1

2018-10-18 Thread Chandni Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655814#comment-16655814
 ] 

Chandni Singh commented on YARN-8910:
-

[~eyang] [~rohithsharma] could you please review?

> Misleading log statement in NM when max retries is -1
> -
>
> Key: YARN-8910
> URL: https://issues.apache.org/jira/browse/YARN-8910
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Minor
> Attachments: YARN-8910.001.patch
>
>
> If the maximum number of retry attempts is -1, then the log statement below 
> is misleading:
> Relaunching Container container_e05_1533635581781_0001_01_02. Remaining 
> retry attempts(after relaunch) : -4816.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8854) [Hadoop YARN Common] Update jquery datatable version references

2018-10-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655816#comment-16655816
 ] 

Hadoop QA commented on YARN-8854:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 17 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}172m 58s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}322m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.hdfs.protocol.TestLayoutVersion |
|   | 
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 |

[jira] [Assigned] (YARN-8733) Readiness check for remote component

2018-10-18 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reassigned YARN-8733:
---

Assignee: Billie Rinaldi

> Readiness check for remote component
> 
>
> Key: YARN-8733
> URL: https://issues.apache.org/jira/browse/YARN-8733
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-native-services
>Reporter: Eric Yang
>Assignee: Billie Rinaldi
>Priority: Major
>
> When a service is deploying, there can be remote component dependencies 
> between services. For example, Hive Server 2 can depend on the Hive 
> metastore, which depends on a remote MySQL database. It would be great to 
> have the ability to check the remote server and port to make sure MySQL is 
> available before deploying the Hive LLAP service.
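A self-contained sketch of the kind of probe being requested, as a plain TCP 
connect check (host, port, and timeout below are placeholders):

{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Remote readiness probe: succeed only once a TCP connection to the
// dependency (e.g. a MySQL host) can be established.
public class RemotePortProbe {
  static boolean isReachable(String host, int port, int timeoutMs) {
    try (Socket socket = new Socket()) {
      socket.connect(new InetSocketAddress(host, port), timeoutMs);
      return true;
    } catch (IOException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    // Placeholder endpoint; a real service spec would supply these values.
    System.out.println(isReachable("mysql.example.com", 3306, 2000));
  }
}
{code}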



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8899) TestCleanupAfterKIll is failing due to unsatisfied dependencies

2018-10-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655831#comment-16655831
 ] 

Hudson commented on YARN-8899:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15259 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15259/])
YARN-8899.  Fixed minicluster dependency on yarn-server-web-proxy.   
(eyang: rev beb850d8f7f1fefa7a6d9502df2b4a4eea372523)
* (edit) hadoop-minicluster/pom.xml
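Judging from the commit message and the touched file, the fix presumably 
declares the web-proxy module so its BouncyCastle dependencies reach 
minicluster users; a sketch of such a pom.xml entry (coordinates and scope 
handling are assumptions):

{code:xml}
<!-- Sketch only: exact version/scope handling is an assumption. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-yarn-server-web-proxy</artifactId>
</dependency>
{code}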


> TestCleanupAfterKIll is failing due to unsatisfied dependencies
> ---
>
> Key: YARN-8899
> URL: https://issues.apache.org/jira/browse/YARN-8899
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-native-services
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Robert Kanter
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: YARN-8899.001.patch
>
>
> The BouncyCastle upgrade causes unit tests to fail due to unsatisfied 
> transitive dependencies.
> It looks like the new version of bcprov-jdk15on brings in a new dependency, 
> bcpkix-jdk15on. The minicluster does not have the bcpkix-jdk15on dependency, 
> which causes unit tests that depend on the minicluster to fail.
> {code}
> [ERROR] 
> testRegistryCleanedOnLifetimeExceeded(org.apache.hadoop.yarn.service.TestCleanupAfterKill)
>   Time elapsed: 2.709 s  <<< ERROR!
> java.lang.NoClassDefFoundError: 
> org/bouncycastle/operator/OperatorCreationException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:836)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1256)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:324)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.initResourceManager(MiniYARNCluster.java:348)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$200(MiniYARNCluster.java:128)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceInit(MiniYARNCluster.java:497)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:316)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.service.ServiceTestUtils.setupInternal(ServiceTestUtils.java:251)
>   at 
> org.apache.hadoop.yarn.service.TestCleanupAfterKill.testRegistryCleanedOnLifetimeExceeded(TestCleanupAfterKill.java:66)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.lang.ClassNotFoundException: 
> org.bouncycastle.operator.OperatorCreationException
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 23 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8911) NM incorrectly account for container cpu utilization by their number of vcores

2018-10-18 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-8911:


 Summary: NM incorrectly account for container cpu utilization by 
their number of vcores
 Key: YARN-8911
 URL: https://issues.apache.org/jira/browse/YARN-8911
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Reporter: Haibo Chen
Assignee: Haibo Chen


ResourceUtilization represents the cpu utilization with a float number in [0, 
1.0], i.e. the percentage of cpu usage across the node.  However, when 
Containers Monitor tracks the total aggregate resource utilization of all 
containers, it adds up the total number of vcores used by all running 
containers.

See [the 
code|https://github.com/apache/hadoop/blob/beb850d8f7f1fefa7a6d9502df2b4a4eea372523/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java#L672]
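A sketch of the unit mismatch with illustrative numbers (not the 
ContainersMonitorImpl code; the node size is assumed):

{code:java}
public class CpuUtilizationSketch {
  public static void main(String[] args) {
    // Summing vcores used by containers yields e.g. 6.0 on a node with
    // 8 vcores, while ResourceUtilization expects CPU in [0, 1.0].
    float vcoresUsedByContainers = 6.0f; // aggregate across containers
    int nodeVcores = 8;                  // assumed node capacity

    float trackedToday = vcoresUsedByContainers;            // out of range
    float normalized = vcoresUsedByContainers / nodeVcores; // in [0, 1.0]

    System.out.println("tracked=" + trackedToday
        + " normalized=" + normalized);
  }
}
{code}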



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8911) NM incorrectly account for container cpu utilization by their number of vcores

2018-10-18 Thread Haibo Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-8911:
-
Description: 
ResourceUtilization represents the cpu utilization with a float number in [0, 
1.0], i.e. the percentage of cpu usage across the node.  However, when 
Containers Monitor tracks the total aggregate resource utilization of all 
containers, it adds up the total number of vcores used by all running 
containers.

 

(If you have 6 containers running, each consuming 1 vcore, you'd see the 
aggregated cpu container utilization being 6.0, but it's supposed to be always 
between 0 and 1.0)   See [the 
code|https://github.com/apache/hadoop/blob/beb850d8f7f1fefa7a6d9502df2b4a4eea372523/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java#L672]

  was:
ResourceUtilization represents the cpu utilization with a float number in [0, 
1.0], i.e. the percentage of cpu usage across the node.  However, when 
Containers Monitor tracks the total aggregate resource utilization of all 
containers, it adds up the total number of vcores used by all running 
containers.

See [the 
code|https://github.com/apache/hadoop/blob/beb850d8f7f1fefa7a6d9502df2b4a4eea372523/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java#L672]


> NM incorrectly account for container cpu utilization by their number of vcores
> --
>
> Key: YARN-8911
> URL: https://issues.apache.org/jira/browse/YARN-8911
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
>
> ResourceUtilization represents the cpu utilization with a float number in [0, 
> 1.0], i.e. the percentage of cpu usage across the node.  However, when 
> Containers Monitor tracks the total aggregate resource utilization of all 
> containers, it adds up the total number of vcores used by all running 
> containers.
>  
> (If you have 6 containers running, each consuming 1 vcore, you'd see the 
> aggregated cpu container utilization being 6.0, but it's supposed to be 
> always between 0 and 1.0)   See [the 
> code|https://github.com/apache/hadoop/blob/beb850d8f7f1fefa7a6d9502df2b4a4eea372523/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java#L672]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


