[jira] [Commented] (YARN-7277) Container Launch expand environment needs to consider bracket matching

2018-11-12 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684793#comment-16684793
 ] 

Akira Ajisaka commented on YARN-7277:
-

If we run this test in the hadoop-yarn-server-nodemanager directory, the existing 
hadoop-yarn-api jar is used instead of the newly built one. As a result, the change in 
hadoop-yarn-api is not reflected and the regression test fails. IMO, we cannot 
avoid this kind of error in the precommit job.

> Container Launch expand environment needs to consider bracket matching
> --
>
> Key: YARN-7277
> URL: https://issues.apache.org/jira/browse/YARN-7277
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: balloons
>Assignee: Zhankun Tang
>Priority: Critical
> Attachments: YARN-7277-trunk.001.patch, YARN-7277-trunk.002.patch, 
> YARN-7277-trunk.003.patch, YARN-7277-trunk.004.patch, 
> YARN-7277-trunk.005.patch
>
>
> The Spark application I submitted always failed, and I finally found that the 
> commands I specified to launch the AM container were changed by the NM.
> *The following is an excerpt from the command I submitted to the RM:*
> {code:java}
> *'{\"handler\":\"FILLER\",\"inputTable\":\"engine_arch.adult_train\",\"outputTable\":[\"ether_features_filler_\$experimentId_\$taskId_out0\"],\"params\":{\"age\":{\"param\":[\"0\"]}}}'*
> {code}
> *The following is the corresponding excerpt from the command the NM actually used 
> when launching the container:*
> {code:java}
> *'{\"handler\":\"FILLER\",\"inputTable\":\"engine_arch.adult_train\",\"outputTable\":[\"ether_features_filler_\$experimentId_\$taskId_out0\"],\"params\":{\"age\":{\"param\":[\"0\"]}*
> {code}
> Finally, I found that the NM applies the following transformation when launching the 
> container, which leads to this situation:
> {code:java}
> @VisibleForTesting
>   public static String expandEnvironment(String var,
>   Path containerLogDir) {
> var = var.replace(ApplicationConstants.LOG_DIR_EXPANSION_VAR,
>   containerLogDir.toString());
> var =  var.replace(ApplicationConstants.CLASS_PATH_SEPARATOR,
>   File.pathSeparator);
> // replace parameter expansion marker. e.g. {{VAR}} on Windows is replaced
> // as %VAR% and on Linux replaced as "$VAR"
> if (Shell.WINDOWS) {
>   var = var.replaceAll("(\\{\\{)|(\\}\\})", "%");
> } else {
>   var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_LEFT, "$");
>   *var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_RIGHT, "");*
> }
> return var;
>   }
> {code}
> I think this is a bug: the substitution does not consider the pairing of 
> "*PARAMETER_EXPANSION_LEFT*" and "*PARAMETER_EXPANSION_RIGHT*" at all; it simply 
> replaces each marker blindly, so unmatched braces in the command are stripped.
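> To make the effect concrete, here is a minimal standalone sketch (not NM code; it 
> assumes PARAMETER_EXPANSION_LEFT and PARAMETER_EXPANSION_RIGHT are the literal "{{" 
> and "}}" markers, as in ApplicationConstants) showing how the unpaired replacement 
> truncates a command that contains literal closing braces:
> {code:java}
> public class ExpansionSketch {
>   public static void main(String[] args) {
>     String cmd = "'{\"params\":{\"age\":{\"param\":[\"0\"]}}}'";
>     // The same substitutions expandEnvironment applies on Linux:
>     String expanded = cmd.replace("{{", "$").replace("}}", "");
>     // The trailing three braces lose two characters, so the output ends with
>     // ["0"]}' instead of ["0"]}}}' -- the truncation observed in this issue.
>     System.out.println(expanded);
>   }
> }
> {code}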






[jira] [Comment Edited] (YARN-8714) [Submarine] Support files/tarballs to be localized for a training job.

2018-11-12 Thread Xun Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684794#comment-16684794
 ] 

Xun Liu edited comment on YARN-8714 at 11/13/18 7:01 AM:
-

# Support both files and folders.
 # Support HDFS and local file types at the same time, using the hdfs:// prefix for 
HDFS paths.
 # Keep uploads intact: having submarine automatically decompress an uploaded package 
into the container is not suitable, because if I simply want to ship a file that 
happens to be in a compressed format, it would be destroyed. It also introduces 
ambiguity.
 # Parameter format: {color:#FF}--localizations hdfs:///user/yarn->.{color} # indicates 
the current execution path of the container.
 # Parameter format: {color:#FF}--localizations hdfs:///user/yarn->./abc{color} # 
indicates the abc folder under the current execution path of the container (submarine 
treats the file under hdfs:///user/yarn as an abc .tar.gz package, extracts it into the 
abc folder when bringing up the container, then mounts it there); see the sketch below.
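A rough sketch of how such a "remote->destination" spec could be split (the class and 
variable names below are invented for illustration; this is not Submarine's actual 
parser):

{code:java}
public class LocalizationSpecSketch {
  public static void main(String[] args) {
    // Hypothetical illustration only: split the spec at the "->" separator.
    String spec = "hdfs:///user/yarn->./abc";
    int sep = spec.lastIndexOf("->");
    String remoteUri = spec.substring(0, sep);    // hdfs:///user/yarn
    String localDest = spec.substring(sep + 2);   // ./abc, relative to the container work dir
    System.out.println(remoteUri + " => " + localDest);
  }
}
{code}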


was (Author: liuxun323):
# Support files and folders
 # HDFS type and local file type supporting HDFS:// prefix at the same time
 # Keep it intact: the uploaded package is compressed, and the submarine is 
automatically decompressed into the container, which is not suitable, because 
if I rush to save the file that needs to upload the compressed package format, 
it will be destroyed. And it also introduces ambiguity.
 # Parameter format: {color:#FF}--localizations hdfs:///user/yarn->.{color} 
# indicates the current execution path of the container
 # Parameter format: {color:#FF}--localizations 
hdfs:///user/yarn->./abc{color} # Indicates the abc folder under the current 
execution path of the container (submarine marks the file under 
hdfs:///user/yarn as an abc .tar.gz compression package, extract the abc folder 
when pulling up the container, then mount it in it)

> [Submarine] Support files/tarballs to be localized for a training job.
> --
>
> Key: YARN-8714
> URL: https://issues.apache.org/jira/browse/YARN-8714
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8714-WIP1-trunk-001.patch
>
>
> See 
> https://docs.google.com/document/d/199J4pB3blqgV9SCNvBbTqkEoQdjoyGMjESV4MktCo0k/edit#heading=h.vkxp9edl11m7,
>  {{job run --localizations ...}}






[jira] [Commented] (YARN-8714) [Submarine] Support files/tarballs to be localized for a training job.

2018-11-12 Thread Xun Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684794#comment-16684794
 ] 

Xun Liu commented on YARN-8714:
---

# Support both files and folders.
 # Support HDFS and local file types at the same time, using the hdfs:// prefix for 
HDFS paths.
 # Keep uploads intact: having submarine automatically decompress an uploaded package 
into the container is not suitable, because if I simply want to ship a file that 
happens to be in a compressed format, it would be destroyed. It also introduces 
ambiguity.
 # Parameter format: {color:#FF}--localizations hdfs:///user/yarn->.{color} # indicates 
the current execution path of the container.
 # Parameter format: {color:#FF}--localizations hdfs:///user/yarn->./abc{color} # 
indicates the abc folder under the current execution path of the container (submarine 
treats the file under hdfs:///user/yarn as an abc .tar.gz package, extracts it into the 
abc folder when bringing up the container, then mounts it there).

> [Submarine] Support files/tarballs to be localized for a training job.
> --
>
> Key: YARN-8714
> URL: https://issues.apache.org/jira/browse/YARN-8714
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8714-WIP1-trunk-001.patch
>
>
> See 
> https://docs.google.com/document/d/199J4pB3blqgV9SCNvBbTqkEoQdjoyGMjESV4MktCo0k/edit#heading=h.vkxp9edl11m7,
>  {{job run --localizations ...}}






[jira] [Comment Edited] (YARN-8960) [Submarine] Can't get submarine service status using the command of "yarn app -status" under security environment

2018-11-12 Thread Zac Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684788#comment-16684788
 ] 

Zac Zhou edited comment on YARN-8960 at 11/13/18 6:53 AM:
--

As discussed offline, we can use the same Kerberos parameters for both the service 
and the user.

Two parameters, --keytab and --principal, are added to the submarine job.

We can submit a submarine job like this:

./yarn jar 
/home/hadoop/hadoop-current/share/hadoop/yarn/hadoop-yarn-submarine-3.2.0-SNAPSHOT.jar
 job run \
--env DOCKER_JAVA_HOME=/opt/java \
--env DOCKER_HADOOP_HDFS_HOME=/hadoop-3.1.0 --name distributed-tf-gpu \
--env YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_NETWORK=calico-network \
--worker_docker_image 0.0.0.0:5000/gpu-cuda9.0-tf1.8.0-with-models \
--input_path hdfs://mldev/tmp/cifar-10-data \
--checkpoint_path hdfs://mldev/user/hadoop/tf-distributed-checkpoint \
--num_ps 1 \
--ps_resources memory=4G,vcores=2,gpu=0 \
--ps_launch_cmd "python /test/cifar10_estimator/cifar10_main.py 
--data-dir=hdfs://mldev/tmp/cifar-10-data 
--job-dir=hdfs://mldev/tmp/cifar-10-jobdir --num-gpus=0" \
--ps_docker_image 0.0.0.0:5000/dockerfile-cpu-tf1.8.0-with-models \
--worker_resources memory=4G,vcores=2,gpu=1 --verbose \
--num_workers 2 \
--worker_launch_cmd "python /test/cifar10_estimator/cifar10_main.py 
--data-dir=hdfs://mldev/tmp/cifar-10-data 
--job-dir=hdfs://mldev/tmp/cifar-10-jobdir --train-steps=500 
--eval-batch-size=16 --train-batch-size=16 --sync --num-gpus=1" \
 *--keytab* /tmp/keytabs/hadoop.keytab \
 *--principal* hadoop/ad...@corp.com
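For reference, a minimal sketch of how a client process can log in with the supplied 
keytab and principal via Hadoop's UserGroupInformation (generic Hadoop security usage, 
not the actual submarine patch):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabLoginSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);
    // Same keytab and principal passed on the command line above
    // (the principal is shown elided in this thread).
    UserGroupInformation.loginUserFromKeytab(
        "hadoop/ad...@corp.com", "/tmp/keytabs/hadoop.keytab");
    System.out.println("Logged in as "
        + UserGroupInformation.getLoginUser().getUserName());
  }
}
{code}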

 


was (Author: yuan_zac):
As discussion offline, we can use the same kerberos keytab parameter for both 
service and user.

Two parameters --keytab, --principal are added to the submarine job.

We can submit a submarine job like this:

./yarn jar 
/home/hadoop/hadoop-current/share/hadoop/yarn/hadoop-yarn-submarine-3.2.0-SNAPSHOT.jar
 job run \
--env DOCKER_JAVA_HOME=/opt/java \
--env DOCKER_HADOOP_HDFS_HOME=/hadoop-3.1.0 --name distributed-tf-gpu \
--env YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_NETWORK=calico-network \
--worker_docker_image 0.0.0.0:5000/gpu-cuda9.0-tf1.8.0-with-models \
--input_path hdfs://mldev/tmp/cifar-10-data \
--checkpoint_path hdfs://mldev/user/hadoop/tf-distributed-checkpoint \
--num_ps 1 \
--ps_resources memory=4G,vcores=2,gpu=0 \
--ps_launch_cmd "python /test/cifar10_estimator/cifar10_main.py 
--data-dir=hdfs://mldev/tmp/cifar-10-data 
--job-dir=hdfs://mldev/tmp/cifar-10-jobdir --num-gpus=0" \
--ps_docker_image 0.0.0.0:5000/dockerfile-cpu-tf1.8.0-with-models \
--worker_resources memory=4G,vcores=2,gpu=1 --verbose \
--num_workers 2 \
--worker_launch_cmd "python /test/cifar10_estimator/cifar10_main.py 
--data-dir=hdfs://mldev/tmp/cifar-10-data 
--job-dir=hdfs://mldev/tmp/cifar-10-jobdir --train-steps=500 
--eval-batch-size=16 --train-batch-size=16 --sync --num-gpus=1" \
*--keytab* /tmp/keytabs/hadoop.keytab \
*--principal* hadoop/ad...@corp.com

 

> [Submarine] Can't get submarine service status using the command of "yarn app 
> -status" under security environment
> -
>
> Key: YARN-8960
> URL: https://issues.apache.org/jira/browse/YARN-8960
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zac Zhou
>Assignee: Zac Zhou
>Priority: Major
> Attachments: YARN-8960.001.patch, YARN-8960.002.patch, 
> YARN-8960.003.patch
>
>
> After submitting a submarine job, we tried to get service status using the 
> following command:
> yarn app -status ${service_name}
> But we got the following error:
> HTTP error code : 500
>  
> The stack trace in the ResourceManager log is:
> ERROR org.apache.hadoop.yarn.service.webapp.ApiServer: Get service failed: {}
> java.lang.reflect.UndeclaredThrowableException
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1748)
>  at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.getServiceFromClient(ApiServer.java:800)
>  at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.getService(ApiServer.java:186)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
>  at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker
> ._dispatch(AbstractResourceMethodDispatchProvider.java:205)
>  at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodD
> ispatcher.java:75)
>  

[jira] [Commented] (YARN-5168) Add port mapping handling when docker container use bridge network

2018-11-12 Thread Xun Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684791#comment-16684791
 ] 

Xun Liu commented on YARN-5168:
---

[~eyang], we really need this port mapping feature. May I ask when this 
JIRA can be completed? Thank you!

> Add port mapping handling when docker container use bridge network
> --
>
> Key: YARN-5168
> URL: https://issues.apache.org/jira/browse/YARN-5168
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jun Gong
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
>
> YARN-4007 addresses different network setups when launching the docker 
> container. We need support port mapping when docker container uses bridge 
> network.
> The following problems are what we faced:
> 1. Add "-P" to map the docker container's exposed ports automatically.
> 2. Add "-p" to let the user specify specific ports to map.
> 3. Add service registry support for the bridge network case, so that apps can find 
> each other. It could be done outside of YARN; however, it might be more convenient 
> to support it natively in YARN.






[jira] [Commented] (YARN-8960) [Submarine] Can't get submarine service status using the command of "yarn app -status" under security environment

2018-11-12 Thread Zac Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684788#comment-16684788
 ] 

Zac Zhou commented on YARN-8960:


As discussed offline, we can use the same Kerberos keytab parameter for both the 
service and the user.

Two parameters, --keytab and --principal, are added to the submarine job.

We can submit a submarine job like this:

./yarn jar 
/home/hadoop/hadoop-current/share/hadoop/yarn/hadoop-yarn-submarine-3.2.0-SNAPSHOT.jar
 job run \
 --env DOCKER_JAVA_HOME=/opt/java \
 --env DOCKER_HADOOP_HDFS_HOME=/hadoop-3.1.0 --name distributed-tf-gpu \
 --env YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_NETWORK=calico-network \
 --worker_docker_image 0.0.0.0:5000/gpu-cuda9.0-tf1.8.0-with-models \
 --input_path hdfs://mldev/tmp/cifar-10-data \
 --checkpoint_path hdfs://mldev/user/hadoop/tf-distributed-checkpoint \
 --num_ps 1 \
 --ps_resources memory=4G,vcores=2,gpu=0 \
 --ps_launch_cmd "python /test/cifar10_estimator/cifar10_main.py 
--data-dir=hdfs://mldev/tmp/cifar-10-data 
--job-dir=hdfs://mldev/tmp/cifar-10-jobdir --num-gpus=0" \
 --ps_docker_image 0.0.0.0:5000/dockerfile-cpu-tf1.8.0-with-models \
 --worker_resources memory=4G,vcores=2,gpu=1 --verbose \
 --num_workers 2 \
 --worker_launch_cmd "python /test/cifar10_estimator/cifar10_main.py 
--data-dir=hdfs://mldev/tmp/cifar-10-data 
--job-dir=hdfs://mldev/tmp/cifar-10-jobdir --train-steps=500 
--eval-batch-size=16 --train-batch-size=16 --sync --num-gpus=1" \
 --keytab /tmp/keytabs/hadoop.keytab \
 --principal hadoop/ad...@corp.com

 

> [Submarine] Can't get submarine service status using the command of "yarn app 
> -status" under security environment
> -
>
> Key: YARN-8960
> URL: https://issues.apache.org/jira/browse/YARN-8960
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zac Zhou
>Assignee: Zac Zhou
>Priority: Major
> Attachments: YARN-8960.001.patch, YARN-8960.002.patch, 
> YARN-8960.003.patch
>
>
> After submitting a submarine job, we tried to get service status using the 
> following command:
> yarn app -status ${service_name}
> But we got the following error:
> HTTP error code : 500
>  
> The stack trace in the ResourceManager log is:
> ERROR org.apache.hadoop.yarn.service.webapp.ApiServer: Get service failed: {}
> java.lang.reflect.UndeclaredThrowableException
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1748)
>  at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.getServiceFromClient(ApiServer.java:800)
>  at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.getService(ApiServer.java:186)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
>  at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker
> ._dispatch(AbstractResourceMethodDispatchProvider.java:205)
>  at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodD
> ispatcher.java:75)
>  at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
>  at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>  at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
>  at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>  at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
>  at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
>  at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
>  at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
>  at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
>  at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
>  at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
>  at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
>  at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>  at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>  at 
> 

[jira] [Comment Edited] (YARN-8960) [Submarine] Can't get submarine service status using the command of "yarn app -status" under security environment

2018-11-12 Thread Zac Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684788#comment-16684788
 ] 

Zac Zhou edited comment on YARN-8960 at 11/13/18 6:50 AM:
--

As discussed offline, we can use the same Kerberos keytab parameter for both the 
service and the user.

Two parameters, --keytab and --principal, are added to the submarine job.

We can submit a submarine job like this:

./yarn jar 
/home/hadoop/hadoop-current/share/hadoop/yarn/hadoop-yarn-submarine-3.2.0-SNAPSHOT.jar
 job run \
--env DOCKER_JAVA_HOME=/opt/java \
--env DOCKER_HADOOP_HDFS_HOME=/hadoop-3.1.0 --name distributed-tf-gpu \
--env YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_NETWORK=calico-network \
--worker_docker_image 0.0.0.0:5000/gpu-cuda9.0-tf1.8.0-with-models \
--input_path hdfs://mldev/tmp/cifar-10-data \
--checkpoint_path hdfs://mldev/user/hadoop/tf-distributed-checkpoint \
--num_ps 1 \
--ps_resources memory=4G,vcores=2,gpu=0 \
--ps_launch_cmd "python /test/cifar10_estimator/cifar10_main.py 
--data-dir=hdfs://mldev/tmp/cifar-10-data 
--job-dir=hdfs://mldev/tmp/cifar-10-jobdir --num-gpus=0" \
--ps_docker_image 0.0.0.0:5000/dockerfile-cpu-tf1.8.0-with-models \
--worker_resources memory=4G,vcores=2,gpu=1 --verbose \
--num_workers 2 \
--worker_launch_cmd "python /test/cifar10_estimator/cifar10_main.py 
--data-dir=hdfs://mldev/tmp/cifar-10-data 
--job-dir=hdfs://mldev/tmp/cifar-10-jobdir --train-steps=500 
--eval-batch-size=16 --train-batch-size=16 --sync --num-gpus=1" \
*--keytab* /tmp/keytabs/hadoop.keytab \
*--principal* hadoop/ad...@corp.com

 


was (Author: yuan_zac):
As discussion offline, we can use the same kerberos keytab parameter for both 
service and user.

Two parameters --keytab, --principal are added to the submarine job.

We can submit a submarine job like this:

./yarn jar 
/home/hadoop/hadoop-current/share/hadoop/yarn/hadoop-yarn-submarine-3.2.0-SNAPSHOT.jar
 job run \
 --env DOCKER_JAVA_HOME=/opt/java \
 --env DOCKER_HADOOP_HDFS_HOME=/hadoop-3.1.0 --name distributed-tf-gpu \
 --env YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_NETWORK=calico-network \
 --worker_docker_image 0.0.0.0:5000/gpu-cuda9.0-tf1.8.0-with-models \
 --input_path hdfs://mldev/tmp/cifar-10-data \
 --checkpoint_path hdfs://mldev/user/hadoop/tf-distributed-checkpoint \
 --num_ps 1 \
 --ps_resources memory=4G,vcores=2,gpu=0 \
 --ps_launch_cmd "python /test/cifar10_estimator/cifar10_main.py 
--data-dir=hdfs://mldev/tmp/cifar-10-data 
--job-dir=hdfs://mldev/tmp/cifar-10-jobdir --num-gpus=0" \
 --ps_docker_image 0.0.0.0:5000/dockerfile-cpu-tf1.8.0-with-models \
 --worker_resources memory=4G,vcores=2,gpu=1 --verbose \
 --num_workers 2 \
 --worker_launch_cmd "python /test/cifar10_estimator/cifar10_main.py 
--data-dir=hdfs://mldev/tmp/cifar-10-data 
--job-dir=hdfs://mldev/tmp/cifar-10-jobdir --train-steps=500 
--eval-batch-size=16 --train-batch-size=16 --sync --num-gpus=1" \
 --keytab /tmp/keytabs/hadoop.keytab \
 --principal hadoop/ad...@corp.com

 

> [Submarine] Can't get submarine service status using the command of "yarn app 
> -status" under security environment
> -
>
> Key: YARN-8960
> URL: https://issues.apache.org/jira/browse/YARN-8960
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zac Zhou
>Assignee: Zac Zhou
>Priority: Major
> Attachments: YARN-8960.001.patch, YARN-8960.002.patch, 
> YARN-8960.003.patch
>
>
> After submitting a submarine job, we tried to get service status using the 
> following command:
> yarn app -status ${service_name}
> But we got the following error:
> HTTP error code : 500
>  
> The stack trace in the ResourceManager log is:
> ERROR org.apache.hadoop.yarn.service.webapp.ApiServer: Get service failed: {}
> java.lang.reflect.UndeclaredThrowableException
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1748)
>  at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.getServiceFromClient(ApiServer.java:800)
>  at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.getService(ApiServer.java:186)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
>  at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker
> ._dispatch(AbstractResourceMethodDispatchProvider.java:205)
>  at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodD
> 

[jira] [Commented] (YARN-9013) [GPG] fix order of steps cleaning Registry entries in ApplicationCleaner

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684780#comment-16684780
 ] 

Hadoop QA commented on YARN-9013:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-7402 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
33s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} YARN-7402 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-yarn-server-globalpolicygenerator in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9013 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947941/YARN-9013-YARN-7402.v2.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4f431d8cba10 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-7402 / e1017a6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22519/testReport/ |
| Max. process+thread count | 338 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22519/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |



[jira] [Updated] (YARN-9013) [GPG] fix order of steps cleaning Registry entries in ApplicationCleaner

2018-11-12 Thread Botong Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-9013:
---
Attachment: YARN-9013-YARN-7402.v2.patch

> [GPG] fix order of steps cleaning Registry entries in ApplicationCleaner
> 
>
> Key: YARN-9013
> URL: https://issues.apache.org/jira/browse/YARN-9013
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-9013-YARN-7402.v1.patch, 
> YARN-9013-YARN-7402.v2.patch
>
>
> ApplicationCleaner today deletes the entries for all finished (non-running) 
> applications in YarnRegistry using this logic:
>  # GPG gets the list of running applications from the Router.
>  # GPG gets the full list of applications in the registry.
>  # GPG deletes from the registry every app in 2 that's not in 1.
> The problem is that a job started between 1 and 2 meets the criteria in 
> 3, and thus gets deleted by mistake. The fix/right order should be 2->1->3, 
> rather than 1->2->3.
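> A hypothetical sketch of the corrected ordering (the method names are invented for 
> illustration): by snapshotting the registry first and fetching the running list 
> afterwards, an app that starts in between is either absent from the registry snapshot 
> or present in the running list, so it is never deleted by mistake.
> {code:java}
> Set<ApplicationId> registryApps = getAppsFromRegistry();      // step 2 first
> Set<ApplicationId> runningApps = getRunningAppsFromRouter();  // then step 1
> for (ApplicationId appId : registryApps) {
>   if (!runningApps.contains(appId)) {
>     removeAppFromRegistry(appId);                             // step 3
>   }
> }
> {code}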






[jira] [Commented] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2018-11-12 Thread Manikandan R (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684714#comment-16684714
 ] 

Manikandan R commented on YARN-6523:


Crash error. Not related to this patch. Re-run should help.

> Newly retrieved security Tokens are sent as part of each heartbeat to each 
> node from RM which is not desirable in large cluster
> ---
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-6523.001.patch, YARN-6523.002.patch, 
> YARN-6523.003.patch, YARN-6523.004.patch, YARN-6523.005.patch, 
> YARN-6523.006.patch
>
>
> Currently, as part of the heartbeat response, the RM sets all applications' tokens 
> even though not all applications might be active on the node. On top of that, 
> NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing 2000 concurrent apps on a 500-node cluster with 
> 8GB RAM configured for the RM.
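> (As a rough illustration of the scale: with about 2000 concurrent apps and 500 
> nodes, that is on the order of 2000 × 500 = 1,000,000 SystemCredentialsForAppsProto 
> objects created per round of heartbeats.)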






[jira] [Updated] (YARN-8885) Support NM APIs to query device resource allocation

2018-11-12 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8885:
---
Attachment: YARN-8885-trunk.001.patch

> Support NM APIs to query device resource allocation
> ---
>
> Key: YARN-8885
> URL: https://issues.apache.org/jira/browse/YARN-8885
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8885-trunk.001.patch
>
>
> Support a REST API in the NM for users to query device allocations:
> *_nodemanager_address:port/ws/v1/node/resources/\{resource_name}_*
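> A minimal sketch of querying such an endpoint once it exists (the host, port, and 
> resource name below are placeholders; 8042 is assumed to be the NM web port):
> {code:java}
> import java.io.BufferedReader;
> import java.io.InputStreamReader;
> import java.net.HttpURLConnection;
> import java.net.URL;
> 
> public class QueryDeviceAllocation {
>   public static void main(String[] args) throws Exception {
>     URL url = new URL("http://nm-host:8042/ws/v1/node/resources/resource_name");
>     HttpURLConnection conn = (HttpURLConnection) url.openConnection();
>     conn.setRequestProperty("Accept", "application/json");
>     try (BufferedReader in = new BufferedReader(
>         new InputStreamReader(conn.getInputStream()))) {
>       String line;
>       while ((line = in.readLine()) != null) {
>         System.out.println(line);
>       }
>     }
>   }
> }
> {code}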






[jira] [Issue Comment Deleted] (YARN-2823) NullPointerException in RM HA enabled 3-node cluster

2018-11-12 Thread Paul Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-2823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Lin updated YARN-2823:
---
Comment: was deleted

(was: [~imstefanlee] Hi, I'm facing the same issue with Flink applications. I 
tried explicitly setting `KeepContainersAcrossApplicationAttempts` to false, 
but it doesn't work. How did you solve the problem in the end? And could you please 
point me to the code where the default value of 
`KeepContainersAcrossApplicationAttempts` is set to true? Thanks a lot!)

> NullPointerException in RM HA enabled 3-node cluster
> 
>
> Key: YARN-2823
> URL: https://issues.apache.org/jira/browse/YARN-2823
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Gour Saha
>Assignee: Jian He
>Priority: Critical
> Fix For: 2.6.0
>
> Attachments: YARN-2823.1.patch, logs_with_NPE_in_RM.zip
>
>
> Branch:
> 2.6.0
> Environment: 
> A 3-node cluster with RM HA enabled. The HA setup went pretty smoothly (used 
> Ambari), and then we installed HBase using Slider. After some time the RMs went 
> down and would not come back up anymore. The following is the NPE we see in both 
> RM logs.
> {noformat}
> 2014-09-16 01:36:28,037 FATAL resourcemanager.ResourceManager 
> (ResourceManager.java:run(612)) - Error in handling event type 
> APP_ATTEMPT_ADDED to the scheduler
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.transferStateFromPreviousAttempt(SchedulerApplicationAttempt.java:530)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.addApplicationAttempt(CapacityScheduler.java:678)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1015)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:98)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:603)
> at java.lang.Thread.run(Thread.java:744)
> 2014-09-16 01:36:28,042 INFO  resourcemanager.ResourceManager 
> (ResourceManager.java:run(616)) - Exiting, bbye..
> {noformat}
> All the logs for this 3-node cluster have been uploaded.






[jira] [Comment Edited] (YARN-8714) [Submarine] Support files/tarballs to be localized for a training job.

2018-11-12 Thread Zac Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684664#comment-16684664
 ] 

Zac Zhou edited comment on YARN-8714 at 11/13/18 3:14 AM:
--

Looks great. It would be convenient for notebook apps, like Zeppelin, to submit 
the job if local files are supported.

I'm not sure if the parameter name, localization, is ok. Would it be easier to 
understand if we used a parameter like "files" or "libjars", as in MapReduce 
jobs?

Thanks,


was (Author: yuan_zac):
Looks great, it would be convenient for notebook app, like Zeppline, to submit 
the job if local files are supported.

I'm not sure if the parameter name, localization, is ok. Is it easier to 
understand if we use some parameter like '''--files' or "--libjars" used in map 
reduce job?

Thanks,

> [Submarine] Support files/tarballs to be localized for a training job.
> --
>
> Key: YARN-8714
> URL: https://issues.apache.org/jira/browse/YARN-8714
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8714-WIP1-trunk-001.patch
>
>
> See 
> https://docs.google.com/document/d/199J4pB3blqgV9SCNvBbTqkEoQdjoyGMjESV4MktCo0k/edit#heading=h.vkxp9edl11m7,
>  {{job run --localizations ...}}






[jira] [Comment Edited] (YARN-8714) [Submarine] Support files/tarballs to be localized for a training job.

2018-11-12 Thread Zac Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684664#comment-16684664
 ] 

Zac Zhou edited comment on YARN-8714 at 11/13/18 3:14 AM:
--

Looks great. It would be convenient for notebook apps, like Zeppelin, to submit 
the job if local files are supported.

I'm not sure if the parameter name, localization, is ok. Would it be easier to 
understand if we used a parameter like "files" or "libjars", as in MapReduce 
jobs?

Thanks,


was (Author: yuan_zac):
Looks great, it would be convenient for notebook app, like Zeppline, to submit 
the job if local files are supported.

I'm not sure if the parameter name, localization, is ok. Is it easier to 
understand if we use some parameter like "files" or "libjars" used in map 
reduce job?

Thanks,

> [Submarine] Support files/tarballs to be localized for a training job.
> --
>
> Key: YARN-8714
> URL: https://issues.apache.org/jira/browse/YARN-8714
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8714-WIP1-trunk-001.patch
>
>
> See 
> https://docs.google.com/document/d/199J4pB3blqgV9SCNvBbTqkEoQdjoyGMjESV4MktCo0k/edit#heading=h.vkxp9edl11m7,
>  {{job run --localizations ...}}






[jira] [Comment Edited] (YARN-8714) [Submarine] Support files/tarballs to be localized for a training job.

2018-11-12 Thread Zac Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684664#comment-16684664
 ] 

Zac Zhou edited comment on YARN-8714 at 11/13/18 3:13 AM:
--

Looks great. It would be convenient for notebook apps, like Zeppelin, to submit 
the job if local files are supported.

I'm not sure if the parameter name, localization, is ok. Would it be easier to 
understand if we used a parameter like "--files" or "--libjars", as in MapReduce 
jobs?

Thanks,


was (Author: yuan_zac):
Looks great, it would be convenient for notebook app, like Zeppline, to submit 
the job if local files are supported.

I'm not sure if the parameter name, localization, is ok. Is it easier to 
understand if we use some parameter like '''--files' or "--libjars" used in map 
reduce job?

> [Submarine] Support files/tarballs to be localized for a training job.
> --
>
> Key: YARN-8714
> URL: https://issues.apache.org/jira/browse/YARN-8714
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8714-WIP1-trunk-001.patch
>
>
> See 
> https://docs.google.com/document/d/199J4pB3blqgV9SCNvBbTqkEoQdjoyGMjESV4MktCo0k/edit#heading=h.vkxp9edl11m7,
>  {{job run --localizations ...}}






[jira] [Commented] (YARN-8714) [Submarine] Support files/tarballs to be localized for a training job.

2018-11-12 Thread Zac Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684664#comment-16684664
 ] 

Zac Zhou commented on YARN-8714:


Looks great. It would be convenient for notebook apps, like Zeppelin, to submit 
the job if local files are supported.

I'm not sure if the parameter name, localization, is ok. Would it be easier to 
understand if we used a parameter like "--files" or "--libjars", as in MapReduce 
jobs?

> [Submarine] Support files/tarballs to be localized for a training job.
> --
>
> Key: YARN-8714
> URL: https://issues.apache.org/jira/browse/YARN-8714
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8714-WIP1-trunk-001.patch
>
>
> See 
> https://docs.google.com/document/d/199J4pB3blqgV9SCNvBbTqkEoQdjoyGMjESV4MktCo0k/edit#heading=h.vkxp9edl11m7,
>  {{job run --localizations ...}}






[jira] [Commented] (YARN-8672) TestContainerManager#testLocalingResourceWhileContainerRunning occasionally times out

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684662#comment-16684662
 ] 

Hadoop QA commented on YARN-8672:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-8672 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-8672 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947934/YARN-8672.006.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22518/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> TestContainerManager#testLocalingResourceWhileContainerRunning occasionally 
> times out
> -
>
> Key: YARN-8672
> URL: https://issues.apache.org/jira/browse/YARN-8672
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Jason Lowe
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8672.001.patch, YARN-8672.002.patch, 
> YARN-8672.003.patch, YARN-8672.004.patch, YARN-8672.005.patch, 
> YARN-8672.006.patch
>
>
> Precommit builds have been failing in 
> TestContainerManager#testLocalingResourceWhileContainerRunning.  I have been 
> able to reproduce the problem without any patch applied if I run the test 
> enough times.  It looks like something is removing container tokens from the 
> nmPrivate area just as a new localizer starts.






[jira] [Commented] (YARN-8672) TestContainerManager#testLocalingResourceWhileContainerRunning occasionally times out

2018-11-12 Thread Chandni Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684652#comment-16684652
 ] 

Chandni Singh commented on YARN-8672:
-

[~eyang] Please take a look at patch 6. I tested this with 
LinuxContainerExecutor and didn't see any issues. I haven't changed the 
container-executor C code.
The container-executor copies the token file to the working directory of the 
{{ContainerLocalizer}} and retains the original name of the token file. So the patch 
just passes an additional argument to {{ContainerLocalizer}} with the token 
file name, which the localizer reads.

> TestContainerManager#testLocalingResourceWhileContainerRunning occasionally 
> times out
> -
>
> Key: YARN-8672
> URL: https://issues.apache.org/jira/browse/YARN-8672
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Jason Lowe
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8672.001.patch, YARN-8672.002.patch, 
> YARN-8672.003.patch, YARN-8672.004.patch, YARN-8672.005.patch, 
> YARN-8672.006.patch
>
>
> Precommit builds have been failing in 
> TestContainerManager#testLocalingResourceWhileContainerRunning.  I have been 
> able to reproduce the problem without any patch applied if I run the test 
> enough times.  It looks like something is removing container tokens from the 
> nmPrivate area just as a new localizer starts.






[jira] [Updated] (YARN-8672) TestContainerManager#testLocalingResourceWhileContainerRunning occasionally times out

2018-11-12 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8672:

Attachment: YARN-8672.006.patch

> TestContainerManager#testLocalingResourceWhileContainerRunning occasionally 
> times out
> -
>
> Key: YARN-8672
> URL: https://issues.apache.org/jira/browse/YARN-8672
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Jason Lowe
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8672.001.patch, YARN-8672.002.patch, 
> YARN-8672.003.patch, YARN-8672.004.patch, YARN-8672.005.patch, 
> YARN-8672.006.patch
>
>
> Precommit builds have been failing in 
> TestContainerManager#testLocalingResourceWhileContainerRunning.  I have been 
> able to reproduce the problem without any patch applied if I run the test 
> enough times.  It looks like something is removing container tokens from the 
> nmPrivate area just as a new localizer starts.






[jira] [Commented] (YARN-9001) [Submarine] Use AppAdminClient instead of ServiceClient to submit jobs

2018-11-12 Thread Zac Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684639#comment-16684639
 ] 

Zac Zhou commented on YARN-9001:


Sure, Wangda

The following test cases have been executed:
 # submarine run job command with and without "wait_job_finish" parameter
 # submarine show job command
 # yarn app -status command
 # yarn app -destroy command

Thanks, 

 

> [Submarine] Use AppAdminClient instead of ServiceClient to submit jobs
> --
>
> Key: YARN-9001
> URL: https://issues.apache.org/jira/browse/YARN-9001
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zac Zhou
>Assignee: Zac Zhou
>Priority: Major
> Attachments: YARN-9001.001.patch, YARN-9001.002.patch, 
> YARN-9001.003.patch, YARN-9001.004.patch
>
>
> For now, submarine submits a service to YARN using ServiceClient. We should 
> change it to AppAdminClient.






[jira] [Commented] (YARN-8898) Fix FederationInterceptor#allocate to set application priority in allocateResponse

2018-11-12 Thread Subru Krishnan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684634#comment-16684634
 ] 

Subru Krishnan commented on YARN-8898:
--

Thanks [~bibinchundatt] and [~botong] for providing context.

I feel the solution has 2 parts:
 # Save the {{ApplicationSubmissionContext}} in the _FederationStateStore_ and 
use it to submit UAMs.
 # Delegate certain APIs to _AMRMProxy_ via the Router, like we do presently 
for *killApplication*.

So for the scope of this Jira I prefer solution 2 as:
 * it doesn't involve changes to the core wire protocol
 * it is future-proof if we require more (or different) fields in the future.

 [~bibinchundatt], does that make sense? I sincerely apologize for the delay, as I 
see you already have a patch with solution 1.

 

Also, it looks to me that only the _ApplicationSubmissionContext_ (in 
non-federated mode) is persisted in the _RMStateStore_, so if there's an update 
of an application's priority followed by an RM failover, the priority will revert to 
the original one at submission?

 

 

 

> Fix FederationInterceptor#allocate to set application priority in 
> allocateResponse
> --
>
> Key: YARN-8898
> URL: https://issues.apache.org/jira/browse/YARN-8898
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-8898.wip.patch
>
>
> In FederationInterceptor#mergeAllocateResponses, the application_priority is 
> skipped in the returned response.






[jira] [Commented] (YARN-8997) [Submarine] Small refactors of modifier, condition check and redundant local variables

2018-11-12 Thread Zhankun Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684610#comment-16684610
 ] 

Zhankun Tang commented on YARN-8997:


[~giovanni.fumarola], thanks for the review!

> [Submarine] Small refactors of modifier, condition check and redundant local 
> variables 
> ---
>
> Key: YARN-8997
> URL: https://issues.apache.org/jira/browse/YARN-8997
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: YARN-8997-trunk-001.patch, YARN-8997-trunk-002.patch
>
>
> In YarnServiceJobSubmitter#needHdfs, the code below can be simplified to just one 
> line:
> {code:java}
> if (content != null && content.contains("hdfs://")) {
>   return true;
> }
> return false;{code}
> {code:java}
> return content != null && content.contains("hdfs://");{code}
> In CliUtils#argsForHelp:
> {code:java}
> if (args[0].equals("-h") || args[0].equals("--help")) {
>   return true;
> }
> {code}
> This can be simplified to:
> {code:java}
>  return args[0].equals("-h") || args[0].equals("--help");{code}
> And several redundant variables can be removed.






[jira] [Commented] (YARN-9013) [GPG] fix order of steps cleaning Registry entries in ApplicationCleaner

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684607#comment-16684607
 ] 

Hadoop QA commented on YARN-9013:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-7402 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
17s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} YARN-7402 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator:
 The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-server-globalpolicygenerator in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9013 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947889/YARN-9013-YARN-7402.v1.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5c40326330f4 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-7402 / e1017a6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22517/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-globalpolicygenerator.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22517/testReport/ |
| Max. process+thread count | 445 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (YARN-8761) Service AM support for decommissioning component instances

2018-11-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684593#comment-16684593
 ] 

Hudson commented on YARN-8761:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15413 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15413/])
YARN-8761. Service AM support for decommissioning component instances.   
(eyang: rev 4c465f5535054dad2ef0b18128fb115129f6939e)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/ServiceApiUtil.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/proto/ClientAMProtocol.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/impl/pb/client/ClientAMProtocolPBClientImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/Component.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/AppAdminClient.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/impl/pb/service/ClientAMProtocolPBServiceImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/YarnServiceAPI.md
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/resources/definition/YARN-Simplified-V1-API-Layer-For-Services.yaml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/ServiceTestUtils.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/component/TestComponentDecommissionInstances.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/Component.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/webapp/ApiServer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/ComponentEventType.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/instance/ComponentInstance.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMProtocol.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/ComponentEvent.java


> Service AM support for decommissioning component instances
> --
>
> Key: YARN-8761
> URL: https://issues.apache.org/jira/browse/YARN-8761
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8761.01.patch, YARN-8761.02.patch, 
> YARN-8761.03.patch, YARN-8761.04.patch, YARN-8761.05.patch
>
>
> The idea behind this feature is to have a flex down where specific component 
> instances are removed. Currently on a flex down, the service AM chooses for 
> removal the component instances with the highest IDs.

[jira] [Commented] (YARN-8982) [Router] Add locality policy

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684591#comment-16684591
 ] 

Hadoop QA commented on YARN-8982:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
24s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-8982 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947921/YARN-8982.v2.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux af362a49945e 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b6d4e19 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22516/testReport/ |
| Max. process+thread count | 307 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22516/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [Router] Add locality policy 
> -
>
>  

[jira] [Commented] (YARN-8761) Service AM support for decommissioning component instances

2018-11-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684581#comment-16684581
 ] 

Eric Yang commented on YARN-8761:
-

This feature will allow decommissioning of containers.  The hostname sequence 
numbers will no longer be consecutive after one of the instances is 
decommissioned.  We may need to build a restore-container feature to keep YARN 
service hostnames consecutive.  Some applications may assume that hostnames are 
always consecutive, or that their count equals the number of containers to 
spawn for the service.  This patch changes that undocumented behavior.  Other 
than that, patch 05 looks good to me.

+1 committing shortly.

> Service AM support for decommissioning component instances
> --
>
> Key: YARN-8761
> URL: https://issues.apache.org/jira/browse/YARN-8761
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-8761.01.patch, YARN-8761.02.patch, 
> YARN-8761.03.patch, YARN-8761.04.patch, YARN-8761.05.patch
>
>
> The idea behind this feature is to have a flex down where specific component 
> instances are removed. Currently on a flex down, the service AM chooses for 
> removal the component instances with the highest IDs.
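
For illustration only, a minimal Java sketch of the "highest IDs first" flex-down selection described above; the {{Instance}} class, its field, and the method name are hypothetical and this is not the service AM's actual code:

{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class FlexDownSketch {
  // Hypothetical stand-in for a component instance; not the service AM's class.
  static class Instance {
    final int id;
    Instance(int id) { this.id = id; }
  }

  // Current flex-down behavior: the 'count' instances with the highest IDs are
  // chosen for removal; YARN-8761 adds a way to name specific instances instead.
  static List<Instance> pickForRemoval(List<Instance> running, int count) {
    List<Instance> sorted = new ArrayList<>(running);
    sorted.sort(Comparator.comparingInt((Instance i) -> i.id).reversed());
    return sorted.subList(0, Math.min(count, sorted.size()));
  }
}
{code}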



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8761) Service AM support for decommissioning component instances

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684564#comment-16684564
 ] 

Hadoop QA commented on YARN-8761:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 28s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 16 new + 392 unchanged - 1 fixed = 408 total (was 393) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
42s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 26m 
29s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
50s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
4s{color} | {color:green} hadoop-yarn-services-api in the patch passed. {color} 
|
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | 

[jira] [Commented] (YARN-7898) [FederationStateStore] Create a proxy chain for FederationStateStore API in the Router

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684535#comment-16684535
 ] 

Hadoop QA commented on YARN-7898:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-7402 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 9s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
4s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
54s{color} | {color:green} YARN-7402 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 15m 
55s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
57s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
27s{color} | {color:green} YARN-7402 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 33s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 234 unchanged - 0 fixed = 236 total (was 234) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m 
55s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 44s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  3s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 47s{color} 
| {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 33s{color} 
| {color:red} hadoop-yarn-server-router in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-7898 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947904/YARN-7898-YARN-7402.v6.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  

[jira] [Updated] (YARN-8982) [Router] Add locality policy

2018-11-12 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-8982:
---
Attachment: YARN-8982.v2.patch

> [Router] Add locality policy 
> -
>
> Key: YARN-8982
> URL: https://issues.apache.org/jira/browse/YARN-8982
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8982.v1.patch, YARN-8982.v2.patch
>
>
> This jira tracks the effort to add a new policy in the Router.
> This policy will allow the Router to pick the SubCluster based on the node 
> that the client requested.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8986) publish all exposed ports to random ports when using bridge network

2018-11-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684529#comment-16684529
 ] 

Eric Yang commented on YARN-8986:
-

[~Charo Zhang] Users can define their own bridge networks, and the name might 
not be the same as "bridge".  It would be good to look up {{docker network ls}} 
and only add -P if the referenced network's driver type is bridge.
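
A minimal sketch of that suggested check, assuming the container's target 
network name is already known; it shells out to the standard 
{{docker network inspect}} command, and the class and method names here are 
hypothetical rather than the NodeManager's actual runtime code:

{code:java}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class BridgeDriverCheck {
  // Returns true only when the named docker network reports the "bridge"
  // driver, so the caller knows whether adding -P to docker run is meaningful.
  static boolean usesBridgeDriver(String networkName)
      throws IOException, InterruptedException {
    Process p = new ProcessBuilder(
        "docker", "network", "inspect", "--format", "{{.Driver}}", networkName)
        .redirectErrorStream(true)
        .start();
    try (BufferedReader r = new BufferedReader(
        new InputStreamReader(p.getInputStream(), StandardCharsets.UTF_8))) {
      String driver = r.readLine();
      return p.waitFor() == 0 && driver != null && "bridge".equals(driver.trim());
    }
  }
}
{code}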

> publish all exposed ports to random ports when using bridge network
> ---
>
> Key: YARN-8986
> URL: https://issues.apache.org/jira/browse/YARN-8986
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.1.1
>Reporter: Charo Zhang
>Assignee: Charo Zhang
>Priority: Minor
>  Labels: Docker
> Fix For: 3.1.2
>
> Attachments: 20181108155450.png
>
>
> It would be better to publish all exposed ports to random ports, or to 
> support explicit port mapping, when a Docker container uses the bridge 
> network.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8986) publish all exposed ports to random ports when using bridge network

2018-11-12 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reassigned YARN-8986:
---

Assignee: Charo Zhang

> publish all exposed ports to random ports when using bridge network
> ---
>
> Key: YARN-8986
> URL: https://issues.apache.org/jira/browse/YARN-8986
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.1.1
>Reporter: Charo Zhang
>Assignee: Charo Zhang
>Priority: Minor
>  Labels: Docker
> Fix For: 3.1.2
>
> Attachments: 20181108155450.png
>
>
> It would be better to publish all exposed ports to random ports, or to 
> support explicit port mapping, when a Docker container uses the bridge 
> network.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5168) Add port mapping handling when docker container use bridge network

2018-11-12 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reassigned YARN-5168:
---

Assignee: Eric Yang

> Add port mapping handling when docker container use bridge network
> --
>
> Key: YARN-5168
> URL: https://issues.apache.org/jira/browse/YARN-5168
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jun Gong
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
>
> YARN-4007 addresses different network setups when launching the docker 
> container. We need to support port mapping when the docker container uses the 
> bridge network.
> The following problems are what we faced (a brief sketch of the two docker 
> options follows this description):
> 1. Add "-P" to map the docker container's exposed ports automatically.
> 2. Add "-p" to let the user specify specific ports to map.
> 3. Add service registry support for the bridge network case so that apps can 
> find each other. It could be done outside of YARN; however, it might be more 
> convenient to support it natively in YARN.
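
A minimal sketch of how the two docker options from problems 1 and 2 differ 
when building a {{docker run}} argument list; the class and helper names are 
hypothetical and only illustrate the flags, not YARN's docker runtime code:

{code:java}
import java.util.Arrays;
import java.util.List;

public class PortMappingArgs {
  // Problem 1: -P publishes every EXPOSEd port to a random host port.
  static List<String> publishAllArgs(String image) {
    return Arrays.asList("docker", "run", "-d", "-P", image);
  }

  // Problem 2: -p maps a specific host port to a specific container port.
  static List<String> explicitMappingArgs(String image, int hostPort, int containerPort) {
    return Arrays.asList("docker", "run", "-d", "-p",
        hostPort + ":" + containerPort, image);
  }
}
{code}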



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9014) OCI/squashfs container runtime

2018-11-12 Thread Jason Lowe (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684493#comment-16684493
 ] 

Jason Lowe commented on YARN-9014:
--

Attached a rough draft of the document.  There are quite a few details still 
needed on how the image-tag-to-manifest and layer-hash-to-URI plugins work, but 
it should convey the general idea of how the container runtime works at a high 
level.

> OCI/squashfs container runtime
> --
>
> Key: YARN-9014
> URL: https://issues.apache.org/jira/browse/YARN-9014
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Major
>  Labels: Docker
> Attachments: OciSquashfsRuntime.v001.pdf
>
>
> This JIRA tracks a YARN container runtime that supports running containers in 
> images built by Docker but the runtime does not use Docker directly, and 
> Docker does not have to be installed on the nodes.  The runtime leverages the 
> [OCI runtime standard|https://github.com/opencontainers/runtime-spec] to 
> launch containers, so an OCI-compliant runtime like {{runc}} is required.  
> {{runc}} has the benefit of not requiring a daemon like {{dockerd}} to be 
> running in order to launch/control containers.
> The layers comprising the Docker image are uploaded to HDFS as 
> [squashfs|http://tldp.org/HOWTO/SquashFS-HOWTO/whatis.html] images, enabling 
> the runtime to efficiently download and execute directly on the compressed 
> layers.  This saves image unpack time and space on the local disk.  The image 
> layers, like other entries in the YARN distributed cache, can be spread 
> across the YARN local disks, increasing the available space for storing 
> container images on each node.
> A design document will be posted shortly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9014) OCI/squashfs container runtime

2018-11-12 Thread Jason Lowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-9014:
-
Attachment: OciSquashfsRuntime.v001.pdf

> OCI/squashfs container runtime
> --
>
> Key: YARN-9014
> URL: https://issues.apache.org/jira/browse/YARN-9014
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Major
>  Labels: Docker
> Attachments: OciSquashfsRuntime.v001.pdf
>
>
> This JIRA tracks a YARN container runtime that supports running containers in 
> images built by Docker but the runtime does not use Docker directly, and 
> Docker does not have to be installed on the nodes.  The runtime leverages the 
> [OCI runtime standard|https://github.com/opencontainers/runtime-spec] to 
> launch containers, so an OCI-compliant runtime like {{runc}} is required.  
> {{runc}} has the benefit of not requiring a daemon like {{dockerd}} to be 
> running in order to launch/control containers.
> The layers comprising the Docker image are uploaded to HDFS as 
> [squashfs|http://tldp.org/HOWTO/SquashFS-HOWTO/whatis.html] images, enabling 
> the runtime to efficiently download and execute directly on the compressed 
> layers.  This saves image unpack time and space on the local disk.  The image 
> layers, like other entries in the YARN distributed cache, can be spread 
> across the YARN local disks, increasing the available space for storing 
> container images on each node.
> A design document will be posted shortly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8997) [Submarine] Small refactors of modifier, condition check and redundant local variables

2018-11-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684476#comment-16684476
 ] 

Hudson commented on YARN-8997:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15411 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15411/])
YARN-8997. [Submarine] Small refactors of modifier, condition check and 
(gifuma: rev e269c3fb5a938e4359232628175569dbbd1a12c1)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/yarnservice/YarnServiceJobMonitor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/yarnservice/YarnServiceUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/CliUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/yarnservice/YarnServiceJobSubmitter.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/common/FSBasedSubmarineStorageImpl.java


> [Submarine] Small refactors of modifier, condition check and redundant local 
> variables 
> ---
>
> Key: YARN-8997
> URL: https://issues.apache.org/jira/browse/YARN-8997
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: YARN-8997-trunk-001.patch, YARN-8997-trunk-002.patch
>
>
> In YarnServiceJobSubmitter#needHdfs, the code below can be simplified to just 
> one line.
> {code:java}
> if (content != null && content.contains("hdfs://")) {
>   return true;
> }
> return false;{code}
> {code:java}
> return content != null && content.contains("hdfs://");{code}
> In CliUtils#argsForHelp
> {code:java}
> if (args[0].equals("-h") || args[0].equals("--help")) {
>   return true;
> }
> {code}
> Can be simplified to:
> {code:java}
>  return args[0].equals("-h") || args[0].equals("--help");{code}
> And several redundant variables can be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9001) [Submarine] Use AppAdminClient instead of ServiceClient to submit jobs

2018-11-12 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684468#comment-16684468
 ] 

Wangda Tan commented on YARN-9001:
--

[~yuan_zac], I checked the patch; in general it looks good. Could you comment 
on what tests you have done?

Thanks,

> [Submarine] Use AppAdminClient instead of ServiceClient to submit jobs
> --
>
> Key: YARN-9001
> URL: https://issues.apache.org/jira/browse/YARN-9001
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zac Zhou
>Assignee: Zac Zhou
>Priority: Major
> Attachments: YARN-9001.001.patch, YARN-9001.002.patch, 
> YARN-9001.003.patch, YARN-9001.004.patch
>
>
> For now, Submarine submits a service to YARN by using ServiceClient. We 
> should change it to AppAdminClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8997) [Submarine] Small refactors of modifier, condition check and redundant local variables

2018-11-12 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-8997:
---
Fix Version/s: 3.3.0

> [Submarine] Small refactors of modifier, condition check and redundant local 
> variables 
> ---
>
> Key: YARN-8997
> URL: https://issues.apache.org/jira/browse/YARN-8997
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: YARN-8997-trunk-001.patch, YARN-8997-trunk-002.patch
>
>
> In YarnServiceJobSubmitter#needHdfs, the code below can be simplified to just 
> one line.
> {code:java}
> if (content != null && content.contains("hdfs://")) {
>   return true;
> }
> return false;{code}
> {code:java}
> return content != null && content.contains("hdfs://");{code}
> In CliUtils#argsForHelp
> {code:java}
> if (args[0].equals("-h") || args[0].equals("--help")) {
>   return true;
> }
> {code}
> Can be simplified to:
> {code:java}
>  return args[0].equals("-h") || args[0].equals("--help");{code}
> And several redundant variables can be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9014) OCI/squashfs container runtime

2018-11-12 Thread Eric Badger (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-9014:
--
Labels: Docker  (was: )

> OCI/squashfs container runtime
> --
>
> Key: YARN-9014
> URL: https://issues.apache.org/jira/browse/YARN-9014
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Major
>  Labels: Docker
>
> This JIRA tracks a YARN container runtime that supports running containers in 
> images built by Docker but the runtime does not use Docker directly, and 
> Docker does not have to be installed on the nodes.  The runtime leverages the 
> [OCI runtime standard|https://github.com/opencontainers/runtime-spec] to 
> launch containers, so an OCI-compliant runtime like {{runc}} is required.  
> {{runc}} has the benefit of not requiring a daemon like {{dockerd}} to be 
> running in order to launch/control containers.
> The layers comprising the Docker image are uploaded to HDFS as 
> [squashfs|http://tldp.org/HOWTO/SquashFS-HOWTO/whatis.html] images, enabling 
> the runtime to efficiently download and execute directly on the compressed 
> layers.  This saves image unpack time and space on the local disk.  The image 
> layers, like other entries in the YARN distributed cache, can be spread 
> across the YARN local disks, increasing the available space for storing 
> container images on each node.
> A design document will be posted shortly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9003) Support multi-homed network for docker container

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684454#comment-16684454
 ] 

Hadoop QA commented on YARN-9003:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m  
8s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9003 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947895/YARN-9003.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  

[jira] [Created] (YARN-9014) OCI/squashfs container runtime

2018-11-12 Thread Jason Lowe (JIRA)
Jason Lowe created YARN-9014:


 Summary: OCI/squashfs container runtime
 Key: YARN-9014
 URL: https://issues.apache.org/jira/browse/YARN-9014
 Project: Hadoop YARN
  Issue Type: New Feature
Reporter: Jason Lowe
Assignee: Jason Lowe


This JIRA tracks a YARN container runtime that supports running containers in 
images built by Docker but the runtime does not use Docker directly, and Docker 
does not have to be installed on the nodes.  The runtime leverages the [OCI 
runtime standard|https://github.com/opencontainers/runtime-spec] to launch 
containers, so an OCI-compliant runtime like {{runc}} is required.  {{runc}} 
has the benefit of not requiring a daemon like {{dockerd}} to be running in 
order to launch/control containers.

The layers comprising the Docker image are uploaded to HDFS as 
[squashfs|http://tldp.org/HOWTO/SquashFS-HOWTO/whatis.html] images, enabling 
the runtime to efficiently download and execute directly on the compressed 
layers.  This saves image unpack time and space on the local disk.  The image 
layers, like other entries in the YARN distributed cache, can be spread across 
the YARN local disks, increasing the available space for storing container 
images on each node.

A design document will be posted shortly.
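
As a rough illustration of the layer-upload idea only (the local and HDFS 
paths, directory layout, and helper names are assumptions; the actual plugin 
design is in the attached document), an unpacked layer could be packed with the 
standard {{mksquashfs}} tool and copied into HDFS via the Hadoop FileSystem API:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SquashLayerUpload {
  // Packs an unpacked image layer directory into a squashfs image and uploads
  // it to HDFS. Assumes mksquashfs is installed locally; the destination
  // layout ("/docker-layers/<hash>.sqsh") is illustrative only.
  static void uploadLayer(Configuration conf, String layerDir, String layerHash)
      throws Exception {
    String localSquash = "/tmp/" + layerHash + ".sqsh";
    Process p = new ProcessBuilder("mksquashfs", layerDir, localSquash)
        .inheritIO()
        .start();
    if (p.waitFor() != 0) {
      throw new IllegalStateException("mksquashfs failed for " + layerDir);
    }
    FileSystem fs = FileSystem.get(conf);
    fs.copyFromLocalFile(new Path(localSquash),
        new Path("/docker-layers/" + layerHash + ".sqsh"));
  }
}
{code}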




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8761) Service AM support for decommissioning component instances

2018-11-12 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684436#comment-16684436
 ] 

Billie Rinaldi commented on YARN-8761:
--

Rebased to trunk.

> Service AM support for decommissioning component instances
> --
>
> Key: YARN-8761
> URL: https://issues.apache.org/jira/browse/YARN-8761
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-8761.01.patch, YARN-8761.02.patch, 
> YARN-8761.03.patch, YARN-8761.04.patch, YARN-8761.05.patch
>
>
> The idea behind this feature is to have a flex down where specific component 
> instances are removed. Currently on a flex down, the service AM chooses for 
> removal the component instances with the highest IDs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8761) Service AM support for decommissioning component instances

2018-11-12 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-8761:
-
Attachment: YARN-8761.05.patch

> Service AM support for decommissioning component instances
> --
>
> Key: YARN-8761
> URL: https://issues.apache.org/jira/browse/YARN-8761
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-8761.01.patch, YARN-8761.02.patch, 
> YARN-8761.03.patch, YARN-8761.04.patch, YARN-8761.05.patch
>
>
> The idea behind this feature is to have a flex down where specific component 
> instances are removed. Currently on a flex down, the service AM chooses for 
> removal the component instances with the highest IDs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7898) [FederationStateStore] Create a proxy chain for FederationStateStore API in the Router

2018-11-12 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-7898:
---
Attachment: YARN-7898-YARN-7402.v6.patch

> [FederationStateStore] Create a proxy chain for FederationStateStore API in 
> the Router
> --
>
> Key: YARN-7898
> URL: https://issues.apache.org/jira/browse/YARN-7898
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: StateStoreProxy StressTest.jpg, 
> YARN-7898-YARN-7402.proto.patch, YARN-7898-YARN-7402.v1.patch, 
> YARN-7898-YARN-7402.v2.patch, YARN-7898-YARN-7402.v3.patch, 
> YARN-7898-YARN-7402.v4.patch, YARN-7898-YARN-7402.v5.patch, 
> YARN-7898-YARN-7402.v6.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate FederationStateStore. 
> This JIRA tracks the creation of a proxy for the FederationStateStore in the 
> Router.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7898) [FederationStateStore] Create a proxy chain for FederationStateStore API in the Router

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684387#comment-16684387
 ] 

Hadoop QA commented on YARN-7898:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m  
6s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-7402 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 7s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
50s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
52s{color} | {color:green} YARN-7402 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 15m 
43s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
43s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
27s{color} | {color:green} YARN-7402 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m 
18s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  3m 18s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 18s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 234 unchanged - 0 fixed = 236 total (was 234) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} 
|
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m 
41s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} 
|
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 34s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 49s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 37s{color} 
| {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 21s{color} 
| {color:red} hadoop-yarn-server-router in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-7898 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947885/YARN-7898-YARN-7402.v5.patch
 |
| Optional Tests |  

[jira] [Commented] (YARN-9013) [GPG] fix order of steps cleaning Registry entries in ApplicationCleaner

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684369#comment-16684369
 ] 

Hadoop QA commented on YARN-9013:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-7402 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
10s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} YARN-7402 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 12m  
7s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} YARN-7402 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator:
 The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 12m  
9s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 23s{color} 
| {color:red} hadoop-yarn-server-globalpolicygenerator in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-9013 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947889/YARN-9013-YARN-7402.v1.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c881031bb3b0 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-7402 / 1ca57be |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22512/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-globalpolicygenerator.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/22512/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-globalpolicygenerator.txt
 |
|  Test Results | 

[jira] [Commented] (YARN-8992) Fair scheduler can delete a dynamic queue while an application attempt is being added to the queue

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684362#comment-16684362
 ] 

Hadoop QA commented on YARN-8992:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 32s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 9 new + 25 unchanged - 0 fixed = 34 total (was 25) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 29s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}158m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-8992 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947872/YARN-8992.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a2c6c60a619a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 18fe65d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22510/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Updated] (YARN-9003) Support multi-homed network for docker container

2018-11-12 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-9003:

Attachment: YARN-9003.001.patch

> Support multi-homed network for docker container
> 
>
> Key: YARN-9003
> URL: https://issues.apache.org/jira/browse/YARN-9003
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: docker
> Attachments: YARN-9003.001.patch
>
>
> The Docker network can be defined through the configuration property docker.network, 
> which sets up a docker container to connect to a specific network in a YARN service.  
> Docker can run with a multi-homed network by specifying --net=bridge 
> --net=private-net.  This is useful for exposing a small number of front-end 
> containers and ports, while the rest of the infrastructure runs on a private 
> network.  This task is to add support for specifying multiple docker networks 
> to the YARN service spec and to YARN's docker support.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684343#comment-16684343
 ] 

Hadoop QA commented on YARN-6523:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
25s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m  
0s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 23s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}204m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-6523 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947859/YARN-6523.006.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux ab2f290be120 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (YARN-9013) [GPG] fix order of steps cleaning Registry entries in ApplicationCleaner

2018-11-12 Thread Botong Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-9013:
---
Attachment: YARN-9013-YARN-7402.v1.patch

> [GPG] fix order of steps cleaning Registry entries in ApplicationCleaner
> 
>
> Key: YARN-9013
> URL: https://issues.apache.org/jira/browse/YARN-9013
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-9013-YARN-7402.v1.patch
>
>
> ApplicationCleaner today deletes the entries for all finished (non-running) 
> applications in YarnRegistry using this logic:
>  # GPG gets the list of running applications from the Router.
>  # GPG gets the full list of applications in the registry.
>  # GPG deletes from the registry every app in 2 that’s not in 1.
> The problem is that jobs that started between 1 and 2 meet the criteria in 
> 3, and thus get deleted by mistake. The fix/right order should be 2->1->3, 
> rather than 1->2->3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7898) [FederationStateStore] Create a proxy chain for FederationStateStore API in the Router

2018-11-12 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-7898:
---
Attachment: YARN-7898-YARN-7402.v5.patch

> [FederationStateStore] Create a proxy chain for FederationStateStore API in 
> the Router
> --
>
> Key: YARN-7898
> URL: https://issues.apache.org/jira/browse/YARN-7898
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: StateStoreProxy StressTest.jpg, 
> YARN-7898-YARN-7402.proto.patch, YARN-7898-YARN-7402.v1.patch, 
> YARN-7898-YARN-7402.v2.patch, YARN-7898-YARN-7402.v3.patch, 
> YARN-7898-YARN-7402.v4.patch, YARN-7898-YARN-7402.v5.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client request to appropriate FederationStateStore. 
> This JIRA tracks the creation of a proxy for FederationStateStore in the 
> Router.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9008) Extend YARN distributed shell with file localization feature

2018-11-12 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684249#comment-16684249
 ] 

Peter Bacsko commented on YARN-9008:


Also inviting [~haibochen] for a review.

> Extend YARN distributed shell with file localization feature
> 
>
> Key: YARN-9008
> URL: https://issues.apache.org/jira/browse/YARN-9008
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9008-001.patch, YARN-9008-002.patch, 
> YARN-9008-003.patch
>
>
> YARN distributed shell is a very handy tool to test various features of YARN.
> However, it lacks support for file localization - that is, the ability to define 
> files on the command line that you wish to be localized remotely. This can be 
> extremely useful in certain scenarios.
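
For illustration, a minimal sketch of how a file that is already on HDFS is typically wired into a container launch context as a LocalResource using the public YARN records API; the path and link name below are placeholders, and this is not the patch itself:

{code:java}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.LocalResource;
import org.apache.hadoop.yarn.api.records.LocalResourceType;
import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
import org.apache.hadoop.yarn.api.records.URL;

public class LocalizeFileSketch {
  /**
   * Register one HDFS file under the given link name. The returned map is
   * what would be passed to ContainerLaunchContext#setLocalResources.
   */
  static Map<String, LocalResource> asLocalResource(Configuration conf,
      String hdfsPath, String linkName) throws Exception {
    FileSystem fs = FileSystem.get(conf);
    Path path = fs.makeQualified(new Path(hdfsPath));
    FileStatus status = fs.getFileStatus(path);

    LocalResource resource = LocalResource.newInstance(
        URL.fromPath(path),                     // where to fetch the file from
        LocalResourceType.FILE,                 // plain file, not an archive
        LocalResourceVisibility.APPLICATION,    // visible to this app only
        status.getLen(),
        status.getModificationTime());

    Map<String, LocalResource> resources = new HashMap<>();
    resources.put(linkName, resource);          // link name seen by the container
    return resources;
  }
}
{code}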



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8776) Container Executor change to create stdin/stdout pipeline

2018-11-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684248#comment-16684248
 ] 

Hudson commented on YARN-8776:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15409 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15409/])
YARN-8776. Implement Container Exec feature in LinuxContainerExecutor. (billie: 
rev 1f9c4f32e842529be5980e395587f135452372bb)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/PrivilegedOperation.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DefaultLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/runtime/ContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/executor/ContainerExecContext.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestLinuxContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DelegatingLinuxContainerRuntime.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerExecCommand.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/TestContainersMonitorResourceChange.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/ContainerShellWebSocket.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/PrivilegedOperationExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/WebServer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/MockLinuxContainerRuntime.java


> Container Executor change to create stdin/stdout pipeline
> -
>
> Key: YARN-8776
> URL: https://issues.apache.org/jira/browse/YARN-8776
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zian Chen
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Fix For: 3.3.0
>
> Attachments: YARN-8776.001.patch, YARN-8776.002.patch, 
> YARN-8776.003.patch, YARN-8776.004.patch, YARN-8776.005.patch, 
> YARN-8776.006.patch, YARN-8776.007.patch
>
>
> The pipeline is built to connect the stdin/stdout channel from the WebSocket 
> servlet through the container-executor to the docker executor. So when the WebSocket 
> servlet is started, we need to invoke the container-executor “dockerExec” method 
> (which will be implemented) to create a new docker executor and use the “docker 
> exec -it $ContainerId” command, which executes an interactive bash shell on 
> the container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org

[jira] [Commented] (YARN-8898) Fix FederationInterceptor#allocate to set application priority in allocateResponse

2018-11-12 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684243#comment-16684243
 ] 

Bibin A Chundatt commented on YARN-8898:


[~botong]

Attached a patch based on solution 1, adding the fields to the registration response.

> Fix FederationInterceptor#allocate to set application priority in 
> allocateResponse
> --
>
> Key: YARN-8898
> URL: https://issues.apache.org/jira/browse/YARN-8898
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-8898.wip.patch
>
>
> FederationInterceptor#mergeAllocateResponses currently skips 
> application_priority in the returned response.
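
For illustration only, a self-contained sketch of the kind of merge step being discussed; the holder class below is a hypothetical stand-in, not the real AllocateResponse or the attached patch:

{code:java}
/** Illustrative only: a hypothetical holder, not the real AllocateResponse. */
public class MergePrioritySketch {

  static class Response {
    Integer applicationPriority; // may be unset on sub-cluster responses
  }

  /**
   * When merging per-sub-cluster allocate responses, carry the application
   * priority from the home sub-cluster response into the merged response so
   * the AM still sees it.
   */
  static void mergePriority(Response merged, Response homeResponse) {
    if (homeResponse.applicationPriority != null) {
      merged.applicationPriority = homeResponse.applicationPriority;
    }
  }
}
{code}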



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8672) TestContainerManager#testLocalingResourceWhileContainerRunning occasionally times out

2018-11-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684242#comment-16684242
 ] 

Eric Yang commented on YARN-8672:
-

[~csingh] The token file is used for two purposes.  One copy of the token file is 
used for resource localization, and another copy is used to run the task.  The copy 
used to run the task doesn't change frequently.  The race condition happens for 
resource localization, where the token file is created/deleted after each 
localization.  This JIRA should focus on changing the filename of the copy that is 
used for resource localization, to avoid the token file being overwritten when 
multiple localization threads run concurrently.  Therefore, we might not need to 
change the logic in the container-executor C code.  Can you confirm that we are 
agreeing on the same problem?
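
A minimal sketch of that idea, i.e. giving each localizer its own token file so concurrent localizations cannot overwrite each other's copy; the naming convention below is made up for illustration and is not the one in the patch:

{code:java}
import java.nio.file.Path;
import java.nio.file.Paths;

public class LocalizerTokenFileSketch {
  /**
   * Build a token file path that is unique per localizer, so one localization
   * thread cannot delete or overwrite the token file another localizer is
   * still reading. The "_localizerId" suffix is a hypothetical convention.
   */
  static Path localizerTokenFile(Path nmPrivateDir, String containerId,
      String localizerId) {
    return nmPrivateDir.resolve(containerId + "_" + localizerId + ".tokens");
  }

  public static void main(String[] args) {
    Path nmPrivate = Paths.get("/tmp/nm-private");
    System.out.println(localizerTokenFile(nmPrivate,
        "container_1541045942679_0193_01_000002", "localizer-1"));
  }
}
{code}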

> TestContainerManager#testLocalingResourceWhileContainerRunning occasionally 
> times out
> -
>
> Key: YARN-8672
> URL: https://issues.apache.org/jira/browse/YARN-8672
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Jason Lowe
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8672.001.patch, YARN-8672.002.patch, 
> YARN-8672.003.patch, YARN-8672.004.patch, YARN-8672.005.patch
>
>
> Precommit builds have been failing in 
> TestContainerManager#testLocalingResourceWhileContainerRunning.  I have been 
> able to reproduce the problem without any patch applied if I run the test 
> enough times.  It looks like something is removing container tokens from the 
> nmPrivate area just as a new localizer starts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8898) Fix FederationInterceptor#allocate to set application priority in allocateResponse

2018-11-12 Thread Bibin A Chundatt (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-8898:
---
Attachment: YARN-8898.wip.patch

> Fix FederationInterceptor#allocate to set application priority in 
> allocateResponse
> --
>
> Key: YARN-8898
> URL: https://issues.apache.org/jira/browse/YARN-8898
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-8898.wip.patch
>
>
> FederationInterceptor#mergeAllocateResponses currently skips 
> application_priority in the returned response.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8776) Container Executor change to create stdin/stdout pipeline

2018-11-12 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi reassigned YARN-8776:


Assignee: Eric Yang  (was: Zian Chen)

> Container Executor change to create stdin/stdout pipeline
> -
>
> Key: YARN-8776
> URL: https://issues.apache.org/jira/browse/YARN-8776
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zian Chen
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8776.001.patch, YARN-8776.002.patch, 
> YARN-8776.003.patch, YARN-8776.004.patch, YARN-8776.005.patch, 
> YARN-8776.006.patch, YARN-8776.007.patch
>
>
> The pipeline is built to connect the stdin/stdout channel from the WebSocket 
> servlet through the container-executor to the docker executor. So when the WebSocket 
> servlet is started, we need to invoke the container-executor “dockerExec” method 
> (which will be implemented) to create a new docker executor and use the “docker 
> exec -it $ContainerId” command, which executes an interactive bash shell on 
> the container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8776) Container Executor change to create stdin/stdout pipeline

2018-11-12 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684228#comment-16684228
 ] 

Billie Rinaldi commented on YARN-8776:
--

+1 for patch 7. Thanks [~eyang] and [~Zian Chen]!

> Container Executor change to create stdin/stdout pipeline
> -
>
> Key: YARN-8776
> URL: https://issues.apache.org/jira/browse/YARN-8776
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8776.001.patch, YARN-8776.002.patch, 
> YARN-8776.003.patch, YARN-8776.004.patch, YARN-8776.005.patch, 
> YARN-8776.006.patch, YARN-8776.007.patch
>
>
> The pipeline is built to connect the stdin/stdout channel from the WebSocket 
> servlet through the container-executor to the docker executor. So when the WebSocket 
> servlet is started, we need to invoke the container-executor “dockerExec” method 
> (which will be implemented) to create a new docker executor and use the “docker 
> exec -it $ContainerId” command, which executes an interactive bash shell on 
> the container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9013) [GPG] fix order of steps cleaning Registry entries in ApplicationCleaner

2018-11-12 Thread Botong Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-9013:
---
Parent Issue: YARN-7402  (was: YARN-5597)

> [GPG] fix order of steps cleaning Registry entries in ApplicationCleaner
> 
>
> Key: YARN-9013
> URL: https://issues.apache.org/jira/browse/YARN-9013
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
>
> ApplicationCleaner today deletes the entries for all finished (non-running) 
> applications in YarnRegistry using this logic:
>  # GPG gets the list of running applications from the Router.
>  # GPG gets the full list of applications in the registry.
>  # GPG deletes from the registry every app in 2 that’s not in 1.
> The problem is that jobs that started between 1 and 2 meet the criteria in 
> 3, and thus get deleted by mistake. The fix/right order should be 2->1->3, 
> rather than 1->2->3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9013) [GPG] fix order of steps cleaning Registry entries in ApplicationCleaner

2018-11-12 Thread Botong Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-9013:
---
Issue Type: Sub-task  (was: Task)
Parent: YARN-5597

> [GPG] fix order of steps cleaning Registry entries in ApplicationCleaner
> 
>
> Key: YARN-9013
> URL: https://issues.apache.org/jira/browse/YARN-9013
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
>
> ApplicationCleaner today deletes the entries for all finished (non-running) 
> applications in YarnRegistry using this logic:
>  # GPG gets the list of running applications from the Router.
>  # GPG gets the full list of applications in the registry.
>  # GPG deletes from the registry every app in 2 that’s not in 1.
> The problem is that jobs that started between 1 and 2 meet the criteria in 
> 3, and thus get deleted by mistake. The fix/right order should be 2->1->3, 
> rather than 1->2->3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9013) [GPG] fix order of steps cleaning Registry entries in ApplicationCleaner

2018-11-12 Thread Botong Huang (JIRA)
Botong Huang created YARN-9013:
--

 Summary: [GPG] fix order of steps cleaning Registry entries in 
ApplicationCleaner
 Key: YARN-9013
 URL: https://issues.apache.org/jira/browse/YARN-9013
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Botong Huang
Assignee: Botong Huang


ApplicationCleaner today deletes the entries for all finished (non-running) 
applications in YarnRegistry using this logic:
 # GPG gets the list of running applications from the Router.
 # GPG gets the full list of applications in the registry.
 # GPG deletes from the registry every app in 2 that’s not in 1.

The problem is that jobs that started between 1 and 2 meet the criteria in 3, 
and thus get deleted by mistake. The fix/right order should be 2->1->3, rather 
than 1->2->3.
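
For illustration, a self-contained sketch of the corrected ordering (registry snapshot first, then the running list from the Router, then the set difference); the client interfaces below are hypothetical stand-ins, not the actual GPG classes:

{code:java}
import java.util.HashSet;
import java.util.Set;

public class ApplicationCleanerSketch {

  /** Hypothetical stand-in for the registry client used by the GPG. */
  interface RegistryClient {
    Set<String> listAppIds();            // all app entries currently in the registry
    void deleteAppEntry(String appId);   // remove one entry
  }

  /** Hypothetical stand-in for the Router client. */
  interface RouterClient {
    Set<String> getRunningAppIds();      // apps the Router still reports as running
  }

  /**
   * Corrected order: snapshot the registry (step 2) BEFORE asking the Router
   * for running apps (step 1). An app submitted after the registry snapshot
   * can never appear in the candidate set, so it cannot be deleted by mistake.
   */
  static void cleanFinishedApps(RegistryClient registry, RouterClient router) {
    Set<String> candidates = new HashSet<>(registry.listAppIds()); // step 2
    Set<String> running = router.getRunningAppIds();               // step 1
    candidates.removeAll(running);                                 // keep only finished apps
    for (String appId : candidates) {
      registry.deleteAppEntry(appId);                              // step 3
    }
  }
}
{code}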



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8991) nodemanager not cleaning blockmgr directories inside appcache

2018-11-12 Thread Hidayat Teonadi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684192#comment-16684192
 ] 

Hidayat Teonadi commented on YARN-8991:
---

thanks, I filed SPARK-26020

 

> nodemanager not cleaning blockmgr directories inside appcache 
> --
>
> Key: YARN-8991
> URL: https://issues.apache.org/jira/browse/YARN-8991
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Hidayat Teonadi
>Priority: Major
> Attachments: yarn-nm-log.txt
>
>
> Hi, I'm running spark on yarn and have enabled the Spark Shuffle Service. I'm 
> noticing that during the lifetime of my spark streaming application, the nm 
> appcache folder is building up with blockmgr directories (filled with 
> shuffle_*.data).
> Looking into the nm logs, it seems like the blockmgr directories are not part 
> of the cleanup process of the application. Eventually the disk will fill up and 
> the app will crash. I have both 
> {{yarn.nodemanager.localizer.cache.cleanup.interval-ms}} and 
> {{yarn.nodemanager.localizer.cache.target-size-mb}} set, so I don't think it's 
> a configuration issue.
> What is stumping me is that the executor ID listed by spark during the external 
> shuffle block registration doesn't match the executor ID listed in yarn's nm 
> log. Maybe this executorID disconnect explains why the cleanup is not done? 
> I'm assuming that blockmgr directories are supposed to be cleaned up?
>  
> {noformat}
> 2018-11-05 15:01:21,349 INFO 
> org.apache.spark.network.shuffle.ExternalShuffleBlockResolver: Registered 
> executor AppExecId{appId=application_1541045942679_0193, execId=1299} with 
> ExecutorShuffleInfo{localDirs=[/mnt1/yarn/nm/usercache/auction_importer/appcache/application_1541045942679_0193/blockmgr-b9703ae3-722c-47d1-a374-abf1cc954f42],
>  subDirsPerLocalDir=64, 
> shuffleManager=org.apache.spark.shuffle.sort.SortShuffleManager}
>  {noformat}
>  
> seems similar to https://issues.apache.org/jira/browse/YARN-7070, although 
> I'm not sure if the behavior I'm seeing is spark use related.
> [https://stackoverflow.com/questions/52923386/spark-streaming-job-doesnt-delete-shuffle-files]
>  has a stop gap solution of cleaning up via cron.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8986) publish all exposed ports to random ports when using bridge network

2018-11-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684144#comment-16684144
 ] 

Eric Yang commented on YARN-8986:
-

[~Charo Zhang] This task looks like a duplicate of YARN-5168.  I think we can 
further divide the task into:

# Container-executor changes to support -P or --publish-all=true. (YARN-8986).
# Aggregate the exposed port numbers and display them on the UI to provide a quick 
link to the exposed port.  (YARN-5168)
# Multi-homed network support for bridge + overlay network (YARN-9003).

Does this work for you?

> publish all exposed ports to random ports when using bridge network
> ---
>
> Key: YARN-8986
> URL: https://issues.apache.org/jira/browse/YARN-8986
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.1.1
>Reporter: Charo Zhang
>Priority: Minor
>  Labels: Docker
> Fix For: 3.1.2
>
> Attachments: 20181108155450.png
>
>
> It would be better to publish all exposed ports to random ports, or to support 
> port mapping, when using a bridge network for docker containers.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8986) publish all exposed ports to random ports when using bridge network

2018-11-12 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8986:

Issue Type: Sub-task  (was: Improvement)
Parent: YARN-8472

> publish all exposed ports to random ports when using bridge network
> ---
>
> Key: YARN-8986
> URL: https://issues.apache.org/jira/browse/YARN-8986
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.1.1
>Reporter: Charo Zhang
>Priority: Minor
>  Labels: Docker
> Fix For: 3.1.2
>
> Attachments: 20181108155450.png
>
>
> It would be better to publish all exposed ports to random ports, or to support 
> port mapping, when using a bridge network for docker containers.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2018-11-12 Thread Manikandan R (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-6523:
---
Attachment: YARN-6523.006.patch

> Newly retrieved security Tokens are sent as part of each heartbeat to each 
> node from RM which is not desirable in large cluster
> ---
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-6523.001.patch, YARN-6523.002.patch, 
> YARN-6523.003.patch, YARN-6523.004.patch, YARN-6523.005.patch, 
> YARN-6523.006.patch
>
>
> Currently, as part of the heartbeat response, the RM sets all applications' tokens 
> even though not all applications might be active on the node. On top of that, 
> NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing 2000 concurrent apps on a 500-node cluster with 
> 8GB RAM configured for the RM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2018-11-12 Thread Manikandan R (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684104#comment-16684104
 ] 

Manikandan R commented on YARN-6523:


{quote}Does the registration request and response really need a token sequence 
number field? {quote}

I had added the token sequence number only in the registration response, thinking it 
would be a cleaner approach to have the sequence number upfront and pass it as part of 
the first node heartbeat itself. Anyway, it is removed now, so the NM's 
StatusUpdaterImpl passes 0 in the first heartbeat request, and from then on it gets 
set based on the value received in the node heartbeat response from the RM.

{quote}Has the RM failover scenario been considered?{quote}

Since RMContext holds the tokenSequenceNo and it is initialised to 1 during startup, 
after any restart it is again initialised to 1; once all NMs have gone through the 
re-registration process, each NM's first node heartbeat response is sure to carry the 
credentials because the sequence values will differ.

Taken care of all the other comments. Attaching the patch for review.
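
As a rough, hypothetical sketch of the comparison described above (the names below are invented for illustration and are not the classes touched by the patch): the RM always reports its current sequence number, but only attaches the credentials when the NM's reported value is behind it.

{code:java}
import java.util.Map;

/** Illustrative only: hypothetical names, not the classes touched by the patch. */
public class TokenSequenceSketch {

  /** Minimal response holder so the sketch is self-contained. */
  static class HeartbeatResponse {
    long tokenSequenceNo;
    Map<String, byte[]> systemCredentials;
  }

  /**
   * RM side: always report the current sequence number, but only ship the
   * (potentially large) credentials when the NM is out of date. A freshly
   * registered NM reports 0, so it receives the credentials on its first
   * heartbeat after (re)registration.
   */
  static void fillHeartbeatResponse(long rmTokenSequenceNo,
      long nmReportedSequenceNo,
      Map<String, byte[]> credentials,
      HeartbeatResponse response) {
    response.tokenSequenceNo = rmTokenSequenceNo;
    if (nmReportedSequenceNo != rmTokenSequenceNo) {
      response.systemCredentials = credentials;
    }
    // Otherwise the credentials are omitted and the heartbeat stays small.
  }
}
{code}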



> Newly retrieved security Tokens are sent as part of each heartbeat to each 
> node from RM which is not desirable in large cluster
> ---
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-6523.001.patch, YARN-6523.002.patch, 
> YARN-6523.003.patch, YARN-6523.004.patch, YARN-6523.005.patch
>
>
> Currently, as part of the heartbeat response, the RM sets all applications' tokens 
> even though not all applications might be active on the node. On top of that, 
> NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing 2000 concurrent apps on a 500-node cluster with 
> 8GB RAM configured for the RM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9008) Extend YARN distributed shell with file localization feature

2018-11-12 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684001#comment-16684001
 ] 

Peter Bacsko commented on YARN-9008:


All tests passed locally.

> Extend YARN distributed shell with file localization feature
> 
>
> Key: YARN-9008
> URL: https://issues.apache.org/jira/browse/YARN-9008
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9008-001.patch, YARN-9008-002.patch, 
> YARN-9008-003.patch
>
>
> YARN distributed shell is a very handy tool to test various features of YARN.
> However, it lacks support for file localization - that is, the ability to define 
> files on the command line that you wish to be localized remotely. This can be 
> extremely useful in certain scenarios.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8877) [CSI] Extend service spec to allow setting resource attributes

2018-11-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683986#comment-16683986
 ] 

Hudson commented on YARN-8877:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15407 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15407/])
YARN-8877. [CSI] Extend service spec to allow setting resource (sunilg: rev 
42f3a7082a90bc71f0e86dc1e50b0c77b05489cf)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/resources/org/apache/hadoop/yarn/service/conf/examples/external3.json
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/YarnServiceAPI.md
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestServiceAM.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/Component.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/conf/TestAppJsonResolve.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/conf/ExampleAppJson.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/ResourceInformation.java


> [CSI] Extend service spec to allow setting resource attributes
> --
>
> Key: YARN-8877
> URL: https://issues.apache.org/jira/browse/YARN-8877
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8877.001.patch, YARN-8877.002.patch
>
>
> Extend yarn native service spec to support setting resource attributes in the 
> spec file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9008) Extend YARN distributed shell with file localization feature

2018-11-12 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683964#comment-16683964
 ] 

Peter Bacsko commented on YARN-9008:


1. ASF warnings - can be ignored
2. Checkstyle - line length problems, trivial to fix
3. Test failures - for some reason it was not possible to create new threads. 
Not sure why that happened, I'll re-run the tests locally.

> Extend YARN distributed shell with file localization feature
> 
>
> Key: YARN-9008
> URL: https://issues.apache.org/jira/browse/YARN-9008
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9008-001.patch, YARN-9008-002.patch, 
> YARN-9008-003.patch
>
>
> YARN distributed shell is a very handy tool to test various features of YARN.
> However, it lacks support for file localization - that is, the ability to define 
> files on the command line that you wish to be localized remotely. This can be 
> extremely useful in certain scenarios.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8948) PlacementRule interface should be for all YarnSchedulers

2018-11-12 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16681647#comment-16681647
 ] 

Bibin A Chundatt edited comment on YARN-8948 at 11/12/18 3:40 PM:
--

[~suma.shivaprasad]/[~sunilg]/[~cheersyang]  please review patch attached.


was (Author: bibinchundatt):
[~suma.shivaprasad]/[~sunilg]/@weiwei yan  please review patch attached.

> PlacementRule interface should be for all YarnSchedulers
> 
>
> Key: YARN-8948
> URL: https://issues.apache.org/jira/browse/YARN-8948
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8948.001.patch, YARN-8948.002.patch, 
> YARN-8948.003.patch
>
>
> *Issue 1:*
> The intention of YARN-3635 was to add a PlacementRule interface common to all 
> YarnSchedulers.
> {code}
> public abstract boolean initialize(
>     CapacitySchedulerContext schedulerContext) throws IOException;
> {code}
> PlacementRule initialization is done using CapacitySchedulerContext, binding it 
> to the CapacityScheduler.
> *Issue 2:*
> {{yarn.scheduler.queue-placement-rules}} doesn't work as expected in Capacity 
> Scheduler
> {quote}
> * **Queue Mapping Interface based on Default or User Defined Placement 
> Rules** - This feature allows users to map a job to a specific queue based on 
> some default placement rule. For instance based on user & group, or 
> application name. User can also define their own placement rule.
> {quote}
> As per the current code, the UserGroupMapping rule is always added to the placement rules. 
> {{CapacityScheduler#updatePlacementRules}}
> {code}
> // Initialize placement rules
> Collection<String> placementRuleStrs = conf.getStringCollection(
> YarnConfiguration.QUEUE_PLACEMENT_RULES);
> List<PlacementRule> placementRules = new ArrayList<>();
> ...
> // add UserGroupMappingPlacementRule if absent
> distingushRuleSet.add(YarnConfiguration.USER_GROUP_PLACEMENT_RULE);
> {code}
> The PlacementRule configuration order is not maintained.
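
For illustration of Issue 1 only, a hypothetical shape in which a rule is initialized against a generic scheduler context instead of the CapacityScheduler-specific one; this is not the interface proposed in the attached patch:

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public abstract class SchedulerAgnosticPlacementRule {

  /** Hypothetical minimal context that any YARN scheduler could provide. */
  public interface SchedulerContext {
    Collection<String> getConfiguredRuleNames();
  }

  /** Rules no longer depend on CapacitySchedulerContext directly. */
  public abstract boolean initialize(SchedulerContext context) throws IOException;

  /** Issue 2: keep the rules in exactly the order they were configured. */
  static List<String> orderedRuleNames(SchedulerContext context) {
    return new ArrayList<>(context.getConfiguredRuleNames());
  }
}
{code}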



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8877) [CSI] Extend service spec to allow setting resource attributes

2018-11-12 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683959#comment-16683959
 ] 

Sunil Govindan commented on YARN-8877:
--

Thanks [~cheersyang]. Committed to trunk.

Thanks [~leftnoteasy] for additional review.

> [CSI] Extend service spec to allow setting resource attributes
> --
>
> Key: YARN-8877
> URL: https://issues.apache.org/jira/browse/YARN-8877
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8877.001.patch, YARN-8877.002.patch
>
>
> Extend yarn native service spec to support setting resource attributes in the 
> spec file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8948) PlacementRule interface should be for all YarnSchedulers

2018-11-12 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16681647#comment-16681647
 ] 

Bibin A Chundatt edited comment on YARN-8948 at 11/12/18 3:39 PM:
--

[~suma.shivaprasad]/[~sunilg]/@weiwei yan  please review patch attached.


was (Author: bibinchundatt):
[~suma.shivaprasad]/[~sunilg]  please review patch attached.

> PlacementRule interface should be for all YarnSchedulers
> 
>
> Key: YARN-8948
> URL: https://issues.apache.org/jira/browse/YARN-8948
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8948.001.patch, YARN-8948.002.patch, 
> YARN-8948.003.patch
>
>
> *Issue 1:*
> The intention of YARN-3635 was to add a PlacementRule interface common to all 
> YarnSchedulers.
> {code}
> public abstract boolean initialize(
>     CapacitySchedulerContext schedulerContext) throws IOException;
> {code}
> PlacementRule initialization is done using CapacitySchedulerContext, binding it 
> to the CapacityScheduler.
> *Issue 2:*
> {{yarn.scheduler.queue-placement-rules}} doesn't work as expected in Capacity 
> Scheduler
> {quote}
> * **Queue Mapping Interface based on Default or User Defined Placement 
> Rules** - This feature allows users to map a job to a specific queue based on 
> some default placement rule. For instance based on user & group, or 
> application name. User can also define their own placement rule.
> {quote}
> As per the current code, the UserGroupMapping rule is always added to the placement rules. 
> {{CapacityScheduler#updatePlacementRules}}
> {code}
> // Initialize placement rules
> Collection<String> placementRuleStrs = conf.getStringCollection(
> YarnConfiguration.QUEUE_PLACEMENT_RULES);
> List<PlacementRule> placementRules = new ArrayList<>();
> ...
> // add UserGroupMappingPlacementRule if absent
> distingushRuleSet.add(YarnConfiguration.USER_GROUP_PLACEMENT_RULE);
> {code}
> The PlacementRule configuration order is not maintained.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8877) [CSI] Extend service spec to allow setting resource attributes

2018-11-12 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8877:
-
Summary: [CSI] Extend service spec to allow setting resource attributes  
(was: Extend service spec to allow setting resource attributes)

> [CSI] Extend service spec to allow setting resource attributes
> --
>
> Key: YARN-8877
> URL: https://issues.apache.org/jira/browse/YARN-8877
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8877.001.patch, YARN-8877.002.patch
>
>
> Extend yarn native service spec to support setting resource attributes in the 
> spec file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8987) Usability improvements node-attributes CLI

2018-11-12 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683955#comment-16683955
 ] 

Bibin A Chundatt commented on YARN-8987:


Thank you [~cheersyang] for review and commit..

> Usability improvements node-attributes CLI
> --
>
> Key: YARN-8987
> URL: https://issues.apache.org/jira/browse/YARN-8987
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Bibin A Chundatt
>Priority: Critical
> Fix For: 3.3.0, 3.2.1
>
> Attachments: YARN-8987.001.patch, YARN-8987.002.patch, 
> YARN-8987.003.patch
>
>
> I set up a single-node cluster, then tried to add node-attributes with the CLI.
> First I tried:
> {code:java}
> ./bin/yarn nodeattributes -add localhost:hostname(STRING)=localhost
> {code}
> This command returned exit code 0, however the node-attribute was not added.
> Then I tried to replace "localhost" with the host ID, and it worked.
> We need to ensure the command fails with a proper error message when adding does 
> not succeed.
> Similarly, when I remove a node-attribute that doesn't exist, I still get 
> return code 0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8991) nodemanager not cleaning blockmgr directories inside appcache

2018-11-12 Thread Thomas Graves (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683946#comment-16683946
 ] 

Thomas Graves commented on YARN-8991:
-

If it's while it's running, then you should file this with Spark. It's very similar 
to https://issues.apache.org/jira/browse/SPARK-17233.

The Spark external shuffle service doesn't support that at this point.  The 
problem is that you may have a Spark Executor running on one host that 
generates some map output data to shuffle, and then that executor exits as it's 
not needed anymore.  When a reduce starts, it just talks to the YARN 
nodemanager and the external shuffle service to get the map output.  Now there 
is no executor left on the node to clean up the shuffle output.  Support would 
have to be added for, e.g., the driver to tell the Spark external shuffle service 
to clean up.

If you don't use dynamic allocation and the external shuffle service, it should 
clean up properly.

> nodemanager not cleaning blockmgr directories inside appcache 
> --
>
> Key: YARN-8991
> URL: https://issues.apache.org/jira/browse/YARN-8991
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Hidayat Teonadi
>Priority: Major
> Attachments: yarn-nm-log.txt
>
>
> Hi, I'm running spark on yarn and have enabled the Spark Shuffle Service. I'm 
> noticing that during the lifetime of my spark streaming application, the nm 
> appcache folder is building up with blockmgr directories (filled with 
> shuffle_*.data).
> Looking into the nm logs, it seems like the blockmgr directories are not part 
> of the cleanup process of the application. Eventually the disk will fill up and 
> the app will crash. I have both 
> {{yarn.nodemanager.localizer.cache.cleanup.interval-ms}} and 
> {{yarn.nodemanager.localizer.cache.target-size-mb}} set, so I don't think it's 
> a configuration issue.
> What is stumping me is that the executor ID listed by spark during the external 
> shuffle block registration doesn't match the executor ID listed in yarn's nm 
> log. Maybe this executorID disconnect explains why the cleanup is not done? 
> I'm assuming that blockmgr directories are supposed to be cleaned up?
>  
> {noformat}
> 2018-11-05 15:01:21,349 INFO 
> org.apache.spark.network.shuffle.ExternalShuffleBlockResolver: Registered 
> executor AppExecId{appId=application_1541045942679_0193, execId=1299} with 
> ExecutorShuffleInfo{localDirs=[/mnt1/yarn/nm/usercache/auction_importer/appcache/application_1541045942679_0193/blockmgr-b9703ae3-722c-47d1-a374-abf1cc954f42],
>  subDirsPerLocalDir=64, 
> shuffleManager=org.apache.spark.shuffle.sort.SortShuffleManager}
>  {noformat}
>  
> seems similar to https://issues.apache.org/jira/browse/YARN-7070, although 
> I'm not sure if the behavior I'm seeing is spark use related.
> [https://stackoverflow.com/questions/52923386/spark-streaming-job-doesnt-delete-shuffle-files]
>  has a stop gap solution of cleaning up via cron.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8882) Add a shared device mapping manager for device plugin to use

2018-11-12 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8882:
---
Attachment: YARN-8882-trunk.003.patch

> Add a shared device mapping manager for device plugin to use
> 
>
> Key: YARN-8882
> URL: https://issues.apache.org/jira/browse/YARN-8882
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8882-trunk.001.patch, YARN-8882-trunk.002.patch, 
> YARN-8882-trunk.003.patch
>
>
> Since quite a few devices use a FIFO policy to assign devices to the
> container, we use a shared device manager to handle all types of devices.
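
For illustration, a minimal FIFO assignment sketch (the class and method names
below are hypothetical, not the ones introduced by this patch): devices are
handed out in the order they were registered and go back to the tail of the
queue when the container releases them.
{code:java}
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative sketch only; not the actual YARN-8882 classes.
public class FifoDeviceManagerSketch {

  // Devices are assigned in the order they were registered (FIFO).
  private final Deque<String> available = new ArrayDeque<String>();

  public synchronized void register(String deviceId) {
    available.addLast(deviceId);
  }

  // Assign 'count' devices to a container, oldest first.
  public synchronized List<String> assign(int count) {
    if (available.size() < count) {
      throw new IllegalStateException("Not enough devices available");
    }
    List<String> assigned = new ArrayList<String>(count);
    for (int i = 0; i < count; i++) {
      assigned.add(available.removeFirst());
    }
    return assigned;
  }

  // Return devices to the tail of the queue when the container finishes.
  public synchronized void release(List<String> deviceIds) {
    for (String id : deviceIds) {
      available.addLast(id);
    }
  }

  public static void main(String[] args) {
    FifoDeviceManagerSketch m = new FifoDeviceManagerSketch();
    for (int i = 0; i < 4; i++) {
      m.register("/dev/fake" + i);
    }
    System.out.println(m.assign(2)); // prints [/dev/fake0, /dev/fake1]
  }
}
{code}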



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8882) Add a shared device mapping manager for device plugin to use

2018-11-12 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8882:
---
Attachment: YARN-8882-trunk.002.patch

> Add a shared device mapping manager for device plugin to use
> 
>
> Key: YARN-8882
> URL: https://issues.apache.org/jira/browse/YARN-8882
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8882-trunk.001.patch, YARN-8882-trunk.002.patch
>
>
> Since quite a few devices use a FIFO policy to assign devices to the
> container, we use a shared device manager to handle all types of devices.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9008) Extend YARN distributed shell with file localization feature

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683792#comment-16683792
 ] 

Hadoop QA commented on YARN-9008:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell:
 The patch generated 2 new + 205 unchanged - 0 fixed = 207 total (was 205) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 34s{color} 
| {color:red} hadoop-yarn-applications-distributedshell in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
26s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.applications.distributedshell.TestDistributedShell |
|   | 
hadoop.yarn.applications.distributedshell.TestDistributedShellWithNodeLabels |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9008 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947811/YARN-9008-003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a435d810ebb8 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c741109 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22508/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
 |
| unit | 

[jira] [Commented] (YARN-9001) [Submarine] Use AppAdminClient instead of ServiceClient to sumbit jobs

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683791#comment-16683791
 ] 

Hadoop QA commented on YARN-9001:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications: The patch generated 0 
new + 44 unchanged - 1 fixed = 44 total (was 45) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
15s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-submarine in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9001 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947810/YARN-9001.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 180945a46e0e 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c741109 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 

[jira] [Updated] (YARN-9008) Extend YARN distributed shell with file localization feature

2018-11-12 Thread Peter Bacsko (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-9008:
---
Attachment: YARN-9008-003.patch

> Extend YARN distributed shell with file localization feature
> 
>
> Key: YARN-9008
> URL: https://issues.apache.org/jira/browse/YARN-9008
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9008-001.patch, YARN-9008-002.patch, 
> YARN-9008-003.patch
>
>
> YARN distributed shell is a very handy tool to test various features of YARN.
> However, it lacks support for file localization - that is, letting you specify
> files on the command line that you wish to be localized remotely. This can be
> extremely useful in certain scenarios.
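
For illustration, a hypothetical invocation of the extended client (the
-localize_files option name is an assumption for this sketch and may differ in
the actual patch):
{noformat}
yarn org.apache.hadoop.yarn.applications.distributedshell.Client \
  -jar share/hadoop/yarn/hadoop-yarn-applications-distributedshell-*.jar \
  -shell_command "ls -l" \
  -localize_files /tmp/config.properties,/tmp/data.csv
{noformat}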



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9001) [Submarine] Use AppAdminClient instead of ServiceClient to sumbit jobs

2018-11-12 Thread Zac Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683661#comment-16683661
 ] 

Zac Zhou commented on YARN-9001:


The UT error seems unrelated to the patch; resubmitting the patch.

> [Submarine] Use AppAdminClient instead of ServiceClient to sumbit jobs
> --
>
> Key: YARN-9001
> URL: https://issues.apache.org/jira/browse/YARN-9001
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zac Zhou
>Assignee: Zac Zhou
>Priority: Major
> Attachments: YARN-9001.001.patch, YARN-9001.002.patch, 
> YARN-9001.003.patch, YARN-9001.004.patch
>
>
> For now, Submarine submits a service to YARN using ServiceClient. We should
> change it to AppAdminClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9001) [Submarine] Use AppAdminClient instead of ServiceClient to sumbit jobs

2018-11-12 Thread Zac Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zac Zhou updated YARN-9001:
---
Attachment: YARN-9001.004.patch

> [Submarine] Use AppAdminClient instead of ServiceClient to sumbit jobs
> --
>
> Key: YARN-9001
> URL: https://issues.apache.org/jira/browse/YARN-9001
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zac Zhou
>Assignee: Zac Zhou
>Priority: Major
> Attachments: YARN-9001.001.patch, YARN-9001.002.patch, 
> YARN-9001.003.patch, YARN-9001.004.patch
>
>
> For now, Submarine submits a service to YARN using ServiceClient. We should
> change it to AppAdminClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8964) UI2 should use clusters/{cluster name} for all ATSv2 REST APIs

2018-11-12 Thread Akhil PB (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB reassigned YARN-8964:
--

Assignee: Akhil PB

> UI2 should use clusters/{cluster name} for all ATSv2 REST APIs
> --
>
> Key: YARN-8964
> URL: https://issues.apache.org/jira/browse/YARN-8964
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rohith Sharma K S
>Assignee: Akhil PB
>Priority: Major
>
> UI2 makes REST calls to TimelineReader without a cluster name. It is advised
> to make the REST calls with clusters/{cluster name} so that a remote
> TimelineReader daemon could serve different clusters.
> *Example*:
> *Current*: /ws/v2/timeline/flows/
> *Change*: /ws/v2/timeline/*clusters/\{cluster name\}*/flows/
> *yarn.resourcemanager.cluster-id* is configured with the cluster name, so this
> config could be used to get the cluster-id.
> cc: [~sunilg] [~akhilpb]
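
For illustration, assuming the default TimelineReader port (8188) and a cluster
id of yarn-cluster, the two forms would look like:
{noformat}
# current form (no cluster in the path)
curl "http://<timeline-reader-host>:8188/ws/v2/timeline/flows/"
# proposed form, scoped by the cluster id taken from yarn.resourcemanager.cluster-id
curl "http://<timeline-reader-host>:8188/ws/v2/timeline/clusters/yarn-cluster/flows/"
{noformat}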



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8958) Schedulable entities leak in fair ordering policy when recovering containers between remove app attempt and remove app

2018-11-12 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683566#comment-16683566
 ] 

Weiwei Yang commented on YARN-8958:
---

Hi [~Tao Yang]

When FairOrderingPolicy#containerAllocated and #containerReleased are invoked
from \{{LeafQueue}}, they both hold the writeLock of the \{{LeafQueue}};
similarly, #addSchedulableEntity and #removeSchedulableEntity hold the same
writeLock. In this case, how would this race condition happen?

> Schedulable entities leak in fair ordering policy when recovering containers 
> between remove app attempt and remove app
> --
>
> Key: YARN-8958
> URL: https://issues.apache.org/jira/browse/YARN-8958
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.1
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Attachments: YARN-8958.001.patch, YARN-8958.002.patch
>
>
> We found an NPE in ClientRMService#getApplications when querying apps with a
> specified queue. The cause is that one app can no longer be found via
> RMContextImpl#getRMApps (it is finished and swapped out of memory) but can
> still be queried from the fair ordering policy.
> To reproduce the schedulable entities leak in the fair ordering policy:
> (1) create app1 and launch container1 on node1
> (2) restart RM
> (3) remove the app1 attempt; app1 is removed from the schedulable entities.
> (4) recover container1 after node1 reconnects to RM; the state of container1
> changes to COMPLETED and app1 is brought back into entitiesToReorder after the
> container is released, so app1 is added back into the schedulable entities when
> the scheduler calls FairOrderingPolicy#getAssignmentIterator.
> (5) remove app1
> To solve this problem, we should make sure schedulableEntities can only be
> affected by adding or removing an app attempt; a new entity should not be added
> into schedulableEntities by the reordering process.
> {code:java}
>   protected void reorderSchedulableEntity(S schedulableEntity) {
> //remove, update comparable data, and reinsert to update position in order
> schedulableEntities.remove(schedulableEntity);
> updateSchedulingResourceUsage(
>   schedulableEntity.getSchedulingResourceUsage());
> schedulableEntities.add(schedulableEntity);
>   }
> {code}
> The code above can be improved as follows to make sure only an existing
> entity can be re-added into schedulableEntities.
> {code:java}
>   protected void reorderSchedulableEntity(S schedulableEntity) {
> //remove, update comparable data, and reinsert to update position in order
> boolean exists = schedulableEntities.remove(schedulableEntity);
> updateSchedulingResourceUsage(
>   schedulableEntity.getSchedulingResourceUsage());
> if (exists) {
>   schedulableEntities.add(schedulableEntity);
> } else {
>   LOG.info("Skip reordering non-existent schedulable entity: "
>   + schedulableEntity.getId());
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8987) Usability improvements node-attributes CLI

2018-11-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683538#comment-16683538
 ] 

Hudson commented on YARN-8987:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15405 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15405/])
YARN-8987. Usability improvements node-attributes CLI. Contributed by  (wwei: 
rev c741109522d2913b87638957c64b94dee6b51029)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/AdminService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMAdminService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/NodeAttributesCLI.java


> Usability improvements node-attributes CLI
> --
>
> Key: YARN-8987
> URL: https://issues.apache.org/jira/browse/YARN-8987
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Bibin A Chundatt
>Priority: Critical
> Fix For: 3.3.0, 3.2.1
>
> Attachments: YARN-8987.001.patch, YARN-8987.002.patch, 
> YARN-8987.003.patch
>
>
> I set up a single-node cluster, then tried to add node-attributes with the CLI.
> First I tried:
> {code:java}
> ./bin/yarn nodeattributes -add localhost:hostname(STRING)=localhost
> {code}
> This command returns exit code 0, however the node-attribute was not added.
> Then I tried to replace "localhost" with the host ID, and it worked.
> We need to ensure the command fails with a proper error message when adding
> does not succeed.
> Similarly, when I remove a node-attribute that doesn't exist, I still get
> return code 0.
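
For illustration, the form that worked (the host name below is a hypothetical
placeholder for the real node ID); after this fix, the failing form above should
exit non-zero with a clear error message:
{noformat}
./bin/yarn nodeattributes -add "node-1.example.com:hostname(STRING)=node-1.example.com"
{noformat}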



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9012) NPE in scheduler UI when min-capacity is not configured

2018-11-12 Thread tianjuan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tianjuan reassigned YARN-9012:
--

Assignee: tianjuan

> NPE in scheduler UI when min-capacity is not configured
> ---
>
> Key: YARN-9012
> URL: https://issues.apache.org/jira/browse/YARN-9012
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: tianjuan
>Assignee: tianjuan
>Priority: Major
> Fix For: 3.1.1
>
>
> I encountered the following while reviewing and testing branch YARN-5881.
> The design document from YARN-5881 says that for max-capacity:
> {quote} For any queue: If min-resource not set, it is automatically set to 0. 
> (Same as today) 
> {quote}
> When I try leaving blank {{yarn.scheduler.capacity.<queue-path>.capacity}},
> the RMUI scheduler page refuses to render. It looks like it's in
> {{CapacitySchedulerPage$LeafQueueInfoBlock}}:
> {noformat}
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:108)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:97)
>  at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>  at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>  at org.apache.hadoop.yarn.webapp.View.render(View.java:243)
>  at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>  at 
> org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
>  at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$LI.__(Hamlet.java:7709)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueBlock.render(CapacitySchedulerPage.java:342){noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9012) NPE in scheduler UI when min-capacity is not configured

2018-11-12 Thread tianjuan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tianjuan updated YARN-9012:
---
Description: 
I encountered the following while reviewing and testing branch YARN-5881.

The design document from YARN-5881 says that for max-capacity:
{quote} For any queue: If min-resource not set, it is automatically set to 0. 
(Same as today) 
{quote}
When I try leaving blank {{yarn.scheduler.capacity.<queue-path>.capacity}}, the
RMUI scheduler page refuses to render. It looks like it's in
{{CapacitySchedulerPage$LeafQueueInfoBlock}}:
{noformat}
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
 at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
 at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:108)
 at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:97)
 at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
 at 
org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
 at org.apache.hadoop.yarn.webapp.View.render(View.java:243)
 at 
org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
 at 
org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
 at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$LI.__(Hamlet.java:7709)
 at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueBlock.render(CapacitySchedulerPage.java:342){noformat}

  was:
I encountered the following while reviewing and testing branch YARN-5881.

The design document from YARN-5881 says that for max-capacity:
{quote} For any queue: If min-resource not set, it is automatically set to 0. 
(Same as today) 
{quote}
When I try leaving blank {{yarn.scheduler.capacity.< queue-path>.capacity}}, 
the RMUI scheduler page refuses to render. It looks like it's in 
{{CapacitySchedulerPage$ LeafQueueInfoBlock}}:
{noformat}
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
 at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
 at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:108)
 at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:97)
 at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
 at 
org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
 at org.apache.hadoop.yarn.webapp.View.render(View.java:243)
 at 
org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
 at 
org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
 at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$LI.__(Hamlet.java:7709)
 at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueBlock.render(CapacitySchedulerPage.java:342){noformat}


> NPE in scheduler UI when min-capacity is not configured
> ---
>
> Key: YARN-9012
> URL: https://issues.apache.org/jira/browse/YARN-9012
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: tianjuan
>Priority: Major
> Fix For: 3.1.1
>
>
> I encountered the following while reviewing and testing branch YARN-5881.
> The design document from YARN-5881 says that for max-capacity:
> {quote} For any queue: If min-resource not set, it is automatically set to 0. 
> (Same as today) 
> {quote}
> When I try leaving blank {{yarn.scheduler.capacity.<queue-path>.capacity}},
> the RMUI scheduler page refuses to render. It looks like it's in
> {{CapacitySchedulerPage$LeafQueueInfoBlock}}:
> {noformat}
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:108)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:97)
>  at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>  at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>  at org.apache.hadoop.yarn.webapp.View.render(View.java:243)

[jira] [Created] (YARN-9012) NPE in scheduler UI when min-capacity is not configured

2018-11-12 Thread tianjuan (JIRA)
tianjuan created YARN-9012:
--

 Summary: NPE in scheduler UI when min-capacity is not configured
 Key: YARN-9012
 URL: https://issues.apache.org/jira/browse/YARN-9012
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.1.1
Reporter: tianjuan
 Fix For: 3.1.1


I encountered the following while reviewing and testing branch YARN-5881.

The design document from YARN-5881 says that for max-capacity:
{quote} For any queue: If min-resource not set, it is automatically set to 0. 
(Same as today) 
{quote}
When I try leaving blank {{yarn.scheduler.capacity.<queue-path>.capacity}},
the RMUI scheduler page refuses to render. It looks like it's in
{{CapacitySchedulerPage$LeafQueueInfoBlock}}:
{noformat}
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
 at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
 at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:108)
 at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:97)
 at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
 at 
org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
 at org.apache.hadoop.yarn.webapp.View.render(View.java:243)
 at 
org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
 at 
org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
 at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$LI.__(Hamlet.java:7709)
 at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueBlock.render(CapacitySchedulerPage.java:342){noformat}
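
For illustration, a capacity-scheduler.xml fragment that reproduces this (the
queue path root.default is an assumption; it is the empty capacity value that
matters):
{noformat}
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value></value>
</property>
{noformat}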



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9008) Extend YARN distributed shell with file localization feature

2018-11-12 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683529#comment-16683529
 ] 

Peter Bacsko commented on YARN-9008:


Will soon upload patch v3 which includes some tests.

> Extend YARN distributed shell with file localization feature
> 
>
> Key: YARN-9008
> URL: https://issues.apache.org/jira/browse/YARN-9008
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9008-001.patch, YARN-9008-002.patch
>
>
> YARN distributed shell is a very handy tool to test various features of YARN.
> However, it lacks support for file localization - that is, letting you specify
> files on the command line that you wish to be localized remotely. This can be
> extremely useful in certain scenarios.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9011) Race condition during decommissioning

2018-11-12 Thread Peter Bacsko (JIRA)
Peter Bacsko created YARN-9011:
--

 Summary: Race condition during decommissioning
 Key: YARN-9011
 URL: https://issues.apache.org/jira/browse/YARN-9011
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.1.1
Reporter: Peter Bacsko
Assignee: Antal Bálint Steinbach


During internal testing, we found a nasty race condition which occurs during 
decommissioning.

Node manager, incorrect behaviour:
{noformat}
2018-06-18 21:00:17,634 WARN 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Received 
SHUTDOWN signal from Resourcemanager as part of heartbeat, hence shutting down.
2018-06-18 21:00:17,634 WARN 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Message from 
ResourceManager: Disallowed NodeManager nodeId: node-6.hostname.com:8041 
hostname:node-6.hostname.com
{noformat}

Node manager, expected behaviour:
{noformat}
2018-06-18 21:07:37,377 WARN 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Received 
SHUTDOWN signal from Resourcemanager as part of heartbeat, hence shutting down.
2018-06-18 21:07:37,377 WARN 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Message from 
ResourceManager: DECOMMISSIONING  node-6.hostname.com:8041 is ready to be 
decommissioned
{noformat}

Note the two different messages from the RM ("Disallowed NodeManager" vs 
"DECOMMISSIONING"). The problem is that {{ResourceTrackerService}} can see an 
inconsistent state of nodes while they're being updated:

{noformat}
2018-06-18 21:00:17,575 INFO 
org.apache.hadoop.yarn.server.resourcemanager.NodesListManager: hostsReader 
include:{172.26.12.198,node-7.hostname.com,node-2.hostname.com,node-5.hostname.com,172.26.8.205,node-8.hostname.com,172.26.23.76,172.26.22.223,node-6.hostname.com,172.26.9.218,node-4.hostname.com,node-3.hostname.com,172.26.13.167,node-9.hostname.com,172.26.21.221,172.26.10.219}
 exclude:{node-6.hostname.com}
2018-06-18 21:00:17,575 INFO 
org.apache.hadoop.yarn.server.resourcemanager.NodesListManager: Gracefully 
decommission node node-6.hostname.com:8041 with state RUNNING
2018-06-18 21:00:17,575 INFO 
org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: 
Disallowed NodeManager nodeId: node-6.hostname.com:8041 node: 
node-6.hostname.com
2018-06-18 21:00:17,576 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Put Node 
node-6.hostname.com:8041 in DECOMMISSIONING.
2018-06-18 21:00:17,575 INFO 
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=yarn 
IP=172.26.22.115OPERATION=refreshNodes  TARGET=AdminService 
RESULT=SUCCESS
2018-06-18 21:00:17,577 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Preserve 
original total capability: 
2018-06-18 21:00:17,577 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: 
node-6.hostname.com:8041 Node Transitioned from RUNNING to DECOMMISSIONING
{noformat}

When the decommissioning succeeds, there is no output logged from 
{{ResourceTrackerService}}.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8987) Usability improvements node-attributes CLI

2018-11-12 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned YARN-8987:
-

Assignee: Bibin A Chundatt

> Usability improvements node-attributes CLI
> --
>
> Key: YARN-8987
> URL: https://issues.apache.org/jira/browse/YARN-8987
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-8987.001.patch, YARN-8987.002.patch, 
> YARN-8987.003.patch
>
>
> I set up a single-node cluster, then tried to add node-attributes with the CLI.
> First I tried:
> {code:java}
> ./bin/yarn nodeattributes -add localhost:hostname(STRING)=localhost
> {code}
> This command returns exit code 0, however the node-attribute was not added.
> Then I tried to replace "localhost" with the host ID, and it worked.
> We need to ensure the command fails with a proper error message when adding
> does not succeed.
> Similarly, when I remove a node-attribute that doesn't exist, I still get
> return code 0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8987) Usability improvements node-attributes CLI

2018-11-12 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683509#comment-16683509
 ] 

Weiwei Yang commented on YARN-8987:
---

Thanks [~bibinchundatt], the latest patch looks good to me. I tested on my 
cluster, and it resolved the problem.

+1, I will commit this shortly.

> Usability improvements node-attributes CLI
> --
>
> Key: YARN-8987
> URL: https://issues.apache.org/jira/browse/YARN-8987
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Priority: Critical
> Attachments: YARN-8987.001.patch, YARN-8987.002.patch, 
> YARN-8987.003.patch
>
>
> I set up a single-node cluster, then tried to add node-attributes with the CLI.
> First I tried:
> {code:java}
> ./bin/yarn nodeattributes -add localhost:hostname(STRING)=localhost
> {code}
> This command returns exit code 0, however the node-attribute was not added.
> Then I tried to replace "localhost" with the host ID, and it worked.
> We need to ensure the command fails with a proper error message when adding
> does not succeed.
> Similarly, when I remove a node-attribute that doesn't exist, I still get
> return code 0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8882) Add a shared device mapping manager for device plugin to use

2018-11-12 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8882:
---
Attachment: YARN-8882-trunk.001.patch

> Add a shared device mapping manager for device plugin to use
> 
>
> Key: YARN-8882
> URL: https://issues.apache.org/jira/browse/YARN-8882
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8882-trunk.001.patch
>
>
> Since quite a few devices use a FIFO policy to assign devices to the
> container, we use a shared device manager to handle all types of devices.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8882) Add a shared device mapping manager for device plugin to use

2018-11-12 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8882:
---
Attachment: (was: YARN-8882-trunk.001.patch)

> Add a shared device mapping manager for device plugin to use
> 
>
> Key: YARN-8882
> URL: https://issues.apache.org/jira/browse/YARN-8882
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
>
> Since quite a few devices use a FIFO policy to assign devices to the
> container, we use a shared device manager to handle all types of devices.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8882) Add a shared device mapping manager for device plugin to use

2018-11-12 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8882:
---
Attachment: YARN-8882-trunk.001.patch

> Add a shared device mapping manager for device plugin to use
> 
>
> Key: YARN-8882
> URL: https://issues.apache.org/jira/browse/YARN-8882
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8882-trunk.001.patch
>
>
> Since quite a few devices use a FIFO policy to assign devices to the
> container, we use a shared device manager to handle all types of devices.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8882) Add a shared device mapping manager for device plugin to use

2018-11-12 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8882:
---
Attachment: YARN-8882-trunk.001.path

> Add a shared device mapping manager for device plugin to use
> 
>
> Key: YARN-8882
> URL: https://issues.apache.org/jira/browse/YARN-8882
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
>
> Since quite a few devices use a FIFO policy to assign devices to the
> container, we use a shared device manager to handle all types of devices.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8882) Add a shared device mapping manager for device plugin to use

2018-11-12 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8882:
---
Attachment: (was: YARN-8882-trunk.001.path)

> Add a shared device mapping manager for device plugin to use
> 
>
> Key: YARN-8882
> URL: https://issues.apache.org/jira/browse/YARN-8882
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
>
> Since quite a few devices use a FIFO policy to assign devices to the
> container, we use a shared device manager to handle all types of devices.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9008) Extend YARN distributed shell with file localization feature

2018-11-12 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683476#comment-16683476
 ] 

Peter Bacsko commented on YARN-9008:


[~snemeth] [~bsteinbach] [~templedf] please review this.

> Extend YARN distributed shell with file localization feature
> 
>
> Key: YARN-9008
> URL: https://issues.apache.org/jira/browse/YARN-9008
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9008-001.patch, YARN-9008-002.patch
>
>
> YARN distributed shell is a very handy tool to test various features of YARN.
> However, it lacks support for file localization - that is, letting you specify
> files on the command line that you wish to be localized remotely. This can be
> extremely useful in certain scenarios.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8986) publish all exposed ports to random ports when using bridge network

2018-11-12 Thread Charo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charo Zhang updated YARN-8986:
--
Labels: Docker  (was: )

> publish all exposed ports to random ports when using bridge network
> ---
>
> Key: YARN-8986
> URL: https://issues.apache.org/jira/browse/YARN-8986
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.1.1
>Reporter: Charo Zhang
>Priority: Minor
>  Labels: Docker
> Fix For: 3.1.2
>
> Attachments: 20181108155450.png
>
>
> It would be better to publish all exposed ports to random ports, or to support
> port mapping for the bridge network, when using bridge networking for a Docker container.
>  
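
For illustration, the plain Docker behaviour this asks for: -P publishes every
EXPOSEd port to a random high port on the host when the container runs on the
bridge network.
{noformat}
docker run -d -P --network bridge nginx
docker port <container-id>   # shows the randomly chosen host ports
{noformat}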



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8303) YarnClient should contact TimelineReader for application/attempt/container report

2018-11-12 Thread Rohith Sharma K S (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683440#comment-16683440
 ] 

Rohith Sharma K S commented on YARN-8303:
-

[~abmodi] Could you update the patch?

> YarnClient should contact TimelineReader for application/attempt/container 
> report
> -
>
> Key: YARN-8303
> URL: https://issues.apache.org/jira/browse/YARN-8303
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Critical
> Attachments: YARN-8303.001.patch, YARN-8303.002.patch, 
> YARN-8303.poc.patch
>
>
> YarnClient gets app/attempt/container information from the RM. If the RM does
> not have it, the ahsClient is queried. When only ATSv2 is enabled, YarnClient
> will return empty results.
> YarnClient is used by many users, which results in empty information for
> app/attempt/container reports.
> The proposal is to add an adapter in YarnClient so that app/attempt/container
> reports can be generated from an AHSv2Client, which calls the TimelineReader
> REST API, gets the entity, and converts it into an app/attempt/container
> report.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8986) publish all exposed ports to random ports when using bridge network

2018-11-12 Thread Charo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charo Zhang updated YARN-8986:
--
Description: 
It would be better to publish all exposed ports to random ports, or to support
port mapping for the bridge network, when using bridge networking for a Docker container.

 

  was:
it's better to publish all exposed ports to random ports when using bridge 
network for docker container.

 


> publish all exposed ports to random ports when using bridge network
> ---
>
> Key: YARN-8986
> URL: https://issues.apache.org/jira/browse/YARN-8986
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.1.1
>Reporter: Charo Zhang
>Priority: Minor
> Fix For: 3.1.2
>
> Attachments: 20181108155450.png
>
>
> It would be better to publish all exposed ports to random ports, or to support
> port mapping for the bridge network, when using bridge networking for a Docker container.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


