[jira] [Commented] (YARN-8734) Readiness check for remote service

2018-09-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16615551#comment-16615551
 ] 

Hadoop QA commented on YARN-8734:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
39s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m 
53s{color} | {color:red} hadoop-yarn in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 19s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 52 unchanged - 0 fixed = 53 total (was 52) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m  6s{color} 
| {color:red} hadoop-yarn-services-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
48s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.service.TestServiceManager |
|   | hadoop.yarn.service.TestServiceAM |
|   | hadoop.yarn.service.TestYarnNativeServices |
\\
\\
|| Subsystem || Report/Notes ||
| 

[jira] [Commented] (YARN-8777) Container Executor C binary change to execute interactive docker command

2018-09-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16615535#comment-16615535
 ] 

Hadoop QA commented on YARN-8777:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
49s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8777 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939810/YARN-8777.001.patch |
| Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 7eeb6a220832 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b95aa56 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21843/testReport/ |
| Max. process+thread count | 407 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21843/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Container Executor C binary change to execute interactive docker command
> 
>
> Key: YARN-8777
> URL: https://issues.apache.org/jira/browse/YARN-8777
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zian Chen
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8777.001.patch
>
>
> Since Container Executor provides container execution using the native 
> container-executor binary, we also need to make changes to accept a new 
> “dockerExec” method that invokes the corresponding native function to execute 
> the docker exec command against the running container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[jira] [Updated] (YARN-8734) Readiness check for remote service

2018-09-14 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8734:

Attachment: YARN-8734.003.patch

> Readiness check for remote service
> --
>
> Key: YARN-8734
> URL: https://issues.apache.org/jira/browse/YARN-8734
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-native-services
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: Dependency check vs.pdf, YARN-8734.001.patch, 
> YARN-8734.002.patch, YARN-8734.003.patch
>
>
> When a service is deploying, it can have a remote service dependency.  It 
> would be nice to be able to describe ZooKeeper as a dependent service and, 
> once that service has reached a stable state, then deploy HBase.
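
For illustration only (not from any of the attached patches): a minimal sketch
of what such a readiness probe could look like, assuming a hypothetical
RemoteServiceProbe helper that treats a ZooKeeper server answering the "ruok"
four-letter command with "imok" as stable.
{code:java}
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

/** Hypothetical helper: a dependent service is considered "stable" once
 *  its ZooKeeper endpoint answers "ruok" with "imok". */
public final class RemoteServiceProbe {

  public static boolean isZooKeeperReady(String host, int port, int timeoutMs) {
    try (Socket socket = new Socket()) {
      socket.connect(new InetSocketAddress(host, port), timeoutMs);
      OutputStream out = socket.getOutputStream();
      out.write("ruok".getBytes(StandardCharsets.UTF_8));
      out.flush();
      byte[] reply = new byte[4];
      InputStream in = socket.getInputStream();
      int read = in.read(reply);
      return read == 4 && "imok".equals(new String(reply, StandardCharsets.UTF_8));
    } catch (Exception e) {
      return false;  // not reachable yet: dependency is not stable
    }
  }

  public static void main(String[] args) {
    // Poll until ZooKeeper is ready, then the dependent service may deploy.
    System.out.println(isZooKeeperReady("localhost", 2181, 2000));
  }
}
{code}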



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8777) Container Executor C binary change to execute interactive docker command

2018-09-14 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16615506#comment-16615506
 ] 

Eric Yang commented on YARN-8777:
-

To run an interactive test, create a cmd file for container-executor with the 
following content:
{code}
[docker-command-execution]
  docker-command=exec
  name=container_1536945486532_0004_01_09
{code}

Here, {{name}} is the container name or ID.

Then run container-executor with the test.cmd file:
{code}
container-executor --run-docker test.cmd
{code}
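
For illustration, a hedged Java-side sketch of how such a cmd file could be
generated, assuming a hypothetical DockerExecCmdFile helper (the actual patch
may build the file differently):
{code:java}
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;

/** Hypothetical generator for the cmd file format shown above. */
public final class DockerExecCmdFile {

  /** Writes a [docker-command-execution] section with docker-command=exec. */
  public static Path write(String containerName) throws IOException {
    Path cmdFile = Files.createTempFile("docker-exec", ".cmd");
    try (PrintWriter w = new PrintWriter(Files.newBufferedWriter(cmdFile))) {
      w.println("[docker-command-execution]");
      w.println("  docker-command=exec");
      w.println("  name=" + containerName);
    }
    return cmdFile;
  }

  public static void main(String[] args) throws IOException {
    // The resulting file is what container-executor --run-docker consumes.
    System.out.println(write("container_1536945486532_0004_01_09"));
  }
}
{code}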

> Container Executor C binary change to execute interactive docker command
> 
>
> Key: YARN-8777
> URL: https://issues.apache.org/jira/browse/YARN-8777
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zian Chen
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8777.001.patch
>
>
> Since Container Executor provides container execution using the native 
> container-executor binary, we also need to make changes to accept a new 
> “dockerExec” method that invokes the corresponding native function to execute 
> the docker exec command against the running container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8777) Container Executor C binary change to execute interactive docker command

2018-09-14 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reassigned YARN-8777:
---

Assignee: Eric Yang

> Container Executor C binary change to execute interactive docker command
> 
>
> Key: YARN-8777
> URL: https://issues.apache.org/jira/browse/YARN-8777
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zian Chen
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8777.001.patch
>
>
> Since Container Executor provides container execution using the native 
> container-executor binary, we also need to make changes to accept a new 
> “dockerExec” method that invokes the corresponding native function to execute 
> the docker exec command against the running container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8777) Container Executor C binary change to execute interactive docker command

2018-09-14 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8777:

Attachment: YARN-8777.001.patch

> Container Executor C binary change to execute interactive docker command
> 
>
> Key: YARN-8777
> URL: https://issues.apache.org/jira/browse/YARN-8777
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zian Chen
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8777.001.patch
>
>
> Since Container Executor provides container execution using the native 
> container-executor binary, we also need to make changes to accept a new 
> “dockerExec” method that invokes the corresponding native function to execute 
> the docker exec command against the running container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8774) Memory leak when CapacityScheduler allocates from reserved container with non-default label

2018-09-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16615459#comment-16615459
 ] 

Hadoop QA commented on YARN-8774:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 72m 
52s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}127m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8774 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939733/YARN-8774.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8ba820716263 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5470de4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21842/testReport/ |
| Max. process+thread count | 861 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21842/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Memory leak when CapacityScheduler allocates from 

[jira] [Created] (YARN-8779) Fix few discrepancies between YARN Service swagger spec and code

2018-09-14 Thread Gour Saha (JIRA)
Gour Saha created YARN-8779:
---

 Summary: Fix few discrepancies between YARN Service swagger spec 
and code
 Key: YARN-8779
 URL: https://issues.apache.org/jira/browse/YARN-8779
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-native-services
Affects Versions: 3.1.1, 3.1.0
Reporter: Gour Saha


The following issues were identified in the YARN Service swagger definition 
during an effort to integrate with a running service by generating Java and Go 
client-side stubs from the spec -
 
1.
*restartPolicy* is wrong and should be *restart_policy*
 
2.
A DELETE request to a non-existing service (or a previously existing but 
deleted service) throws an ApiException instead of something like 
NotFoundException (the equivalent of a 404). Note that DELETE of an existing 
service behaves fine.
 
3.
The response code of the DELETE request is 200, while the spec says 204. Since 
the response has a payload, the spec should be updated to 200 instead of 204.
 
4.
 _DefaultApi.java_ client's _appV1ServicesServiceNameGetWithHttpInfo_ method 
does not return a Service object. Swagger definition has the below bug in GET 
response of */app/v1/services/\{service_name}* -
{code:java}
type: object
items:
  $ref: '#/definitions/Service'
{code}
It should be -
{code:java}
$ref: '#/definitions/Service'
{code}
 
5.
Serialization issues were seen in all the enum classes - ServiceState.java, 
ContainerState.java, ComponentState.java, PlacementType.java and 
PlacementScope.java (a possible client-side workaround is sketched after this 
list).

Java client threw the below exception for ServiceState -
{code:java}
Caused by: com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot 
construct instance of `org.apache.cb.yarn.service.api.records.ServiceState` 
(although at least one Creator exists): no String-argument constructor/factory 
method to deserialize from String value ('ACCEPTED')
 at [Source: 
(org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$UnCloseableInputStream);
 line: 1, column: 121] (through reference chain: 
org.apache.cb.yarn.service.api.records.Service["state"])
{code}
For Golang we saw this for ContainerState -
{code:java}
ERRO[2018-08-12T23:32:31.851-07:00] During GET request: json: cannot unmarshal 
string into Go struct field Container.state of type yarnmodel.ContainerState 
{code}
 
6.
*launch_time* actually returns an integer, but the swagger definition says 
date. Hence, the following exception is seen on the client side -
{code:java}
Caused by: com.fasterxml.jackson.databind.exc.MismatchedInputException: 
Unexpected token (VALUE_NUMBER_INT), expected START_ARRAY: Expected array or 
string.
 at [Source: 
(org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$UnCloseableInputStream);
 line: 1, column: 477] (through reference chain: 
org.apache.cb.yarn.service.api.records.Service["components"]->java.util.ArrayList[0]->org.apache.cb.yarn.service.api.records.Component["containers"]->java.util.ArrayList[0]->org.apache.cb.yarn.service.api.records.Container["launch_time"])
{code}
 
8.
A *user.name* query param with a valid value is required for all API calls to 
an insecure cluster. This is not defined in the spec.
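
As sketched below for items 5 and 6 (hypothetical, hand-edited stubs; not part
of this report): a Jackson factory method makes the generated enums
deserializable from plain JSON strings, and launch_time can be declared as a
number.
{code:java}
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonValue;

/** Hypothetical hand-edited stub; the real ServiceState has more values. */
enum ServiceState {
  ACCEPTED, STARTED, STABLE, STOPPED, FAILED;

  /** Item 5: lets Jackson build the enum from a plain JSON string. */
  @JsonCreator
  public static ServiceState fromValue(String value) {
    return ServiceState.valueOf(value.toUpperCase());
  }

  @JsonValue
  public String toValue() {
    return name();
  }
}

/** Item 6: declare launch_time as a number instead of a date. */
class ContainerStub {
  @JsonProperty("launch_time")
  private Long launchTime;  // epoch millis as actually returned by the API
}
{code}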

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8637) [GPG] Add FederationStateStore getAppInfo API for GlobalPolicyGenerator

2018-09-14 Thread Subru Krishnan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16615397#comment-16615397
 ] 

Subru Krishnan commented on YARN-8637:
--

+1 on your proposal [~botong] from my side, as I feel we already have too many 
configs, and your approach also ensures that we don't have to change the API.

> [GPG] Add FederationStateStore getAppInfo API for GlobalPolicyGenerator
> ---
>
> Key: YARN-8637
> URL: https://issues.apache.org/jira/browse/YARN-8637
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-8637-YARN-7402.v1.patch
>
>
> The core API for FederationStateStore is provided in _FederationStateStore_. 
> In this patch, we are adding a _FederationGPGStateStore_ API just for GPG. 
> Specifically, we are adding an API to get full application info from the 
> state store, along with the starting timestamp of the app entry, so that the 
> _ApplicationCleaner_ (YARN-7599) in GPG can delete and clean up old entries 
> in the table. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8778) Add Command Line interface to invoke interactive docker shell

2018-09-14 Thread Zian Chen (JIRA)
Zian Chen created YARN-8778:
---

 Summary: Add Command Line interface to invoke interactive docker 
shell
 Key: YARN-8778
 URL: https://issues.apache.org/jira/browse/YARN-8778
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Zian Chen
Assignee: Zian Chen


The CLI will be the mandatory interface we provide for a user to use the 
interactive docker shell feature. We will need to create a new class, 
“InteractiveDockerShellCLI”, to read the command line into the servlet and 
pass it all the way down to the docker executor.
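
As a purely illustrative skeleton of this idea (hypothetical class and method
names; not from any patch):
{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

/** Hypothetical skeleton for the proposed InteractiveDockerShellCLI. */
public final class InteractiveDockerShellCLI {

  public static void main(String[] args) throws Exception {
    if (args.length != 1) {
      System.err.println("Usage: InteractiveDockerShellCLI <containerId>");
      System.exit(1);
    }
    String containerId = args[0];
    // Read the user's shell input line by line and forward it toward the
    // servlet / docker executor (the forwarding transport is out of scope).
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(System.in, StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        forwardToServlet(containerId, line);  // hypothetical hook
      }
    }
  }

  private static void forwardToServlet(String containerId, String line) {
    // Placeholder: a real implementation would write to the WebSocket
    // channel established for this container (see YARN-8776).
    System.out.printf("[%s] %s%n", containerId, line);
  }
}
{code}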



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8777) Container Executor C binary change to execute interactive docker command

2018-09-14 Thread Zian Chen (JIRA)
Zian Chen created YARN-8777:
---

 Summary: Container Executor C binary change to execute interactive 
docker command
 Key: YARN-8777
 URL: https://issues.apache.org/jira/browse/YARN-8777
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Zian Chen


Since Container Executor provides container execution using the native 
container-executor binary, we also need to make changes to accept a new 
“dockerExec” method that invokes the corresponding native function to execute 
the docker exec command against the running container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8776) Container Executor change to create stdin/stdout pipeline

2018-09-14 Thread Zian Chen (JIRA)
Zian Chen created YARN-8776:
---

 Summary: Container Executor change to create stdin/stdout pipeline
 Key: YARN-8776
 URL: https://issues.apache.org/jira/browse/YARN-8776
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Zian Chen
Assignee: Zian Chen


The pipeline is built to connect the stdin/stdout channel from the WebSocket 
servlet through container-executor to the docker executor. So when the 
WebSocket servlet is started, we need to invoke the container-executor 
“dockerExec” method (which will be implemented) to create a new docker 
executor and use the “docker exec -it $ContainerId” command, which executes an 
interactive bash shell on the container.
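
A rough, hedged sketch of the far end of that pipeline, skipping the
container-executor hop and wiring this process's stdin/stdout straight to
{{docker exec -it}} via ProcessBuilder:
{code:java}
import java.io.IOException;

/** Minimal sketch: attach this JVM's stdin/stdout to `docker exec -it`. */
public final class DockerExecPipeline {

  public static int attach(String containerId)
      throws IOException, InterruptedException {
    ProcessBuilder pb = new ProcessBuilder(
        "docker", "exec", "-it", containerId, "bash");
    // inheritIO() connects the child's stdin/stdout/stderr to ours,
    // which is the pipeline behavior the description asks for.
    pb.inheritIO();
    Process p = pb.start();
    return p.waitFor();
  }

  public static void main(String[] args) throws Exception {
    System.exit(attach(args[0]));
  }
}
{code}
Note that -it requires a real terminal; the actual design routes this through
container-executor and a WebSocket channel instead.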



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8734) Readiness check for remote service

2018-09-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16615351#comment-16615351
 ] 

Hadoop QA commented on YARN-8734:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 21s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 45 unchanged - 0 fixed = 47 total (was 45) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m  5s{color} 
| {color:red} hadoop-yarn-services-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
55s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.service.TestYarnNativeServices |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8734 

[jira] [Commented] (YARN-8734) Readiness check for remote service

2018-09-14 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16615259#comment-16615259
 ] 

Eric Yang commented on YARN-8734:
-

Patch 002 generalizes the remote service dependency check into a utility 
class, adds a test case, and includes some style cleanup.

> Readiness check for remote service
> --
>
> Key: YARN-8734
> URL: https://issues.apache.org/jira/browse/YARN-8734
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-native-services
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: Dependency check vs.pdf, YARN-8734.001.patch, 
> YARN-8734.002.patch
>
>
> When a service is deploying, it can have a remote service dependency.  It 
> would be nice to be able to describe ZooKeeper as a dependent service and, 
> once that service has reached a stable state, then deploy HBase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8734) Readiness check for remote service

2018-09-14 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8734:

Attachment: YARN-8734.002.patch

> Readiness check for remote service
> --
>
> Key: YARN-8734
> URL: https://issues.apache.org/jira/browse/YARN-8734
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-native-services
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: Dependency check vs.pdf, YARN-8734.001.patch, 
> YARN-8734.002.patch
>
>
> When a service is deploying, it can have a remote service dependency.  It 
> would be nice to be able to describe ZooKeeper as a dependent service and, 
> once that service has reached a stable state, then deploy HBase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8045) Reduce log output from container status calls

2018-09-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16615112#comment-16615112
 ] 

Hudson commented on YARN-8045:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14960 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14960/])
YARN-8045. Reduce log output from container status calls. Contributed by 
Craig Condit. (skumpf: rev 144a55f0e3ba302327baf2e98d1e07b953dcbbfd)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java


> Reduce log output from container status calls
> -
>
> Key: YARN-8045
> URL: https://issues.apache.org/jira/browse/YARN-8045
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Shane Kumpf
>Assignee: Craig Condit
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8045.001.patch
>
>
> Each time a container's status is returned a log entry is produced in the NM 
> from {{ContainerManagerImpl}}. The container status includes the diagnostics 
> field for the container. If the diagnostics field contains an exception, it 
> can appear as if the exception is logged repeatedly every second. The 
> diagnostics message can also span many lines, which puts pressure on the logs 
> and makes it harder to read.
> For example:
> {code}
> 2018-03-17 22:01:11,632 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Getting container-status for container_e01_1521323860653_0001_01_05
> 2018-03-17 22:01:11,632 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Returning ContainerStatus: [ContainerId: 
> container_e01_1521323860653_0001_01_05, ExecutionType: GUARANTEED, State: 
> RUNNING, Capability: , Diagnostics: [2018-03-17 
> 22:01:00.675]Exception from container-launch.
> Container id: container_e01_1521323860653_0001_01_05
> Exit code: -1
> Exception message: 
> Shell ouput: 
> [2018-03-17 22:01:00.750]Diagnostic message from attempt :
> [2018-03-17 22:01:00.750]Container exited with a non-zero exit code -1.
> , ExitStatus: -1, IP: null, Host: null, ContainerSubState: SCHEDULED]
> {code}
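
The committed change itself is not quoted in this thread; as a hedged
illustration only, one common remedy is to demote the verbose status line to
debug level, for example:
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class ContainerStatusLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(ContainerStatusLogging.class);

  /** Hypothetical shape of the fix: log the full status, including the
   *  multi-line diagnostics field, only when debug logging is enabled. */
  static void logStatus(Object containerStatus) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Returning ContainerStatus: {}", containerStatus);
    }
  }
}
{code}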



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8637) [GPG] Add FederationStateStore getAppInfo API for GlobalPolicyGenerator

2018-09-14 Thread Botong Huang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16615094#comment-16615094
 ] 

Botong Huang commented on YARN-8637:


Thanks [~subru] for the comment! I looked into the other 
_FederationStateStore_ implementations. I realized that since the application 
entry timestamp is not in the _FederationStateStore_ API, none of the other 
implementations have it at all. Only the SQLServer table script added a 
timestamp field to the app table. 

Also keeping [~bibinchundatt]'s comment in YARN-7599 in mind, I am thinking 
about not introducing the timestamp into the application entry in the 
StateStore API, since it would create more source-of-truth confusion about the 
app start time as opposed to the timeline server. Instead, the application 
cleanup in YARN-7599 can simply depend on whether the Router/YarnRM still 
remember the app in their memory. So essentially we can rely on YarnRM's 
cleanup config to clean up the Application table in the StateStore. 

What do you guys think? 
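
To make that proposal concrete (hypothetical facade names throughout; this is
not the YARN-8637 API): the cleaner would diff the state store's application
list against the apps the Router/YarnRM still know, deleting only entries
neither remembers.
{code:java}
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** Hypothetical sketch of the ApplicationCleaner idea from the comment. */
class ApplicationCleanerSketch {

  interface StateStoreFacade {            // stand-in, not the real API
    List<String> getAllApplicationIds();
    void deleteApplication(String appId);
  }

  interface RouterFacade {                // stand-in, not the real API
    Set<String> getKnownApplicationIds();
  }

  static void cleanup(StateStoreFacade store, RouterFacade router) {
    Set<String> live = new HashSet<>(router.getKnownApplicationIds());
    for (String appId : store.getAllApplicationIds()) {
      // If neither the Router nor YarnRM remembers the app, its entry
      // is stale and can be removed from the Application table.
      if (!live.contains(appId)) {
        store.deleteApplication(appId);
      }
    }
  }
}
{code}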

> [GPG] Add FederationStateStore getAppInfo API for GlobalPolicyGenerator
> ---
>
> Key: YARN-8637
> URL: https://issues.apache.org/jira/browse/YARN-8637
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-8637-YARN-7402.v1.patch
>
>
> The core API for FederationStateStore is provided in _FederationStateStore_. 
> In this patch, we are adding a _FederationGPGStateStore_ API just for GPG. 
> Specifically, we are adding an API to get full application info from the 
> state store, along with the starting timestamp of the app entry, so that the 
> _ApplicationCleaner_ (YARN-7599) in GPG can delete and clean up old entries 
> in the table. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8748) Javadoc warnings within the nodemanager package

2018-09-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16615085#comment-16615085
 ] 

Hudson commented on YARN-8748:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14959 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14959/])
YARN-8748. Javadoc warnings within the nodemanager package. Contributed by 
Craig Condit. (skumpf: rev 78902f0250e2c6d3dea7f2b5b1fcf086a80aa727)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TrafficControlBandwidthHandlerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java


> Javadoc warnings within the nodemanager package
> ---
>
> Key: YARN-8748
> URL: https://issues.apache.org/jira/browse/YARN-8748
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Shane Kumpf
>Assignee: Craig Condit
>Priority: Trivial
> Fix For: 3.2.0
>
> Attachments: YARN-8748.001.patch
>
>
> There are a number of javadoc warnings in trunk in classes under the 
> nodemanager package. These should be addressed or suppressed.
> {code:java}
> [WARNING] Javadoc Warnings
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java:93:
>  warning - Tag @see: reference not found: 
> ContainerLaunch.ShellScriptBuilder#listDebugInformation
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java:118:
>  warning - YarnConfiguration#YARN_CONTAINER_SANDBOX (referenced by @value 
> tag) is an unknown reference.
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java:118:
>  warning - YarnConfiguration#YARN_CONTAINER_SANDBOX_FILE_PERMISSIONS 
> (referenced by @value tag) is an unknown reference.
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java:118:
>  warning - YarnConfiguration#YARN_CONTAINER_SANDBOX_POLICY (referenced by 
> @value tag) is an unknown reference.
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java:118:
>  warning - YarnConfiguration#YARN_CONTAINER_SANDBOX_WHITELIST_GROUP 
> (referenced by @value tag) is an unknown reference.
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java:118:
>  warning - YarnConfiguration#YARN_CONTAINER_SANDBOX_POLICY_GROUP_PREFIX 
> (referenced by @value tag) is an unknown reference.
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java:211:
>  warning - YarnConfiguration#YARN_CONTAINER_SANDBOX_WHITELIST_GROUP 
> (referenced by @value tag) is an unknown reference.
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java:211:
>  warning - NMContainerPolicyUtils#SECURITY_FLAG (referenced by @value tag) is 
> an unknown reference.
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TrafficControlBandwidthHandlerImpl.java:248:
>  warning - @return tag has no arguments.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (YARN-8045) Reduce log output from container status calls

2018-09-14 Thread Shane Kumpf (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16615073#comment-16615073
 ] 

Shane Kumpf commented on YARN-8045:
---

Thanks again for the contribution, [~ccondit-target]. Committed to trunk.

> Reduce log output from container status calls
> -
>
> Key: YARN-8045
> URL: https://issues.apache.org/jira/browse/YARN-8045
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Shane Kumpf
>Assignee: Craig Condit
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8045.001.patch
>
>
> Each time a container's status is returned a log entry is produced in the NM 
> from {{ContainerManagerImpl}}. The container status includes the diagnostics 
> field for the container. If the diagnostics field contains an exception, it 
> can appear as if the exception is logged repeatedly every second. The 
> diagnostics message can also span many lines, which puts pressure on the logs 
> and makes it harder to read.
> For example:
> {code}
> 2018-03-17 22:01:11,632 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Getting container-status for container_e01_1521323860653_0001_01_05
> 2018-03-17 22:01:11,632 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Returning ContainerStatus: [ContainerId: 
> container_e01_1521323860653_0001_01_05, ExecutionType: GUARANTEED, State: 
> RUNNING, Capability: , Diagnostics: [2018-03-17 
> 22:01:00.675]Exception from container-launch.
> Container id: container_e01_1521323860653_0001_01_05
> Exit code: -1
> Exception message: 
> Shell ouput: 
> [2018-03-17 22:01:00.750]Diagnostic message from attempt :
> [2018-03-17 22:01:00.750]Container exited with a non-zero exit code -1.
> , ExitStatus: -1, IP: null, Host: null, ContainerSubState: SCHEDULED]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8748) Javadoc warnings within the nodemanager package

2018-09-14 Thread Shane Kumpf (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16615055#comment-16615055
 ] 

Shane Kumpf commented on YARN-8748:
---

Thanks again for the contribution, [~ccondit-target]. Committed to trunk.

> Javadoc warnings within the nodemanager package
> ---
>
> Key: YARN-8748
> URL: https://issues.apache.org/jira/browse/YARN-8748
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Shane Kumpf
>Assignee: Craig Condit
>Priority: Trivial
> Fix For: 3.2.0
>
> Attachments: YARN-8748.001.patch
>
>
> There are a number of javadoc warnings in trunk in classes under the 
> nodemanager package. These should be addressed or suppressed.
> {code:java}
> [WARNING] Javadoc Warnings
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java:93:
>  warning - Tag @see: reference not found: 
> ContainerLaunch.ShellScriptBuilder#listDebugInformation
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java:118:
>  warning - YarnConfiguration#YARN_CONTAINER_SANDBOX (referenced by @value 
> tag) is an unknown reference.
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java:118:
>  warning - YarnConfiguration#YARN_CONTAINER_SANDBOX_FILE_PERMISSIONS 
> (referenced by @value tag) is an unknown reference.
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java:118:
>  warning - YarnConfiguration#YARN_CONTAINER_SANDBOX_POLICY (referenced by 
> @value tag) is an unknown reference.
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java:118:
>  warning - YarnConfiguration#YARN_CONTAINER_SANDBOX_WHITELIST_GROUP 
> (referenced by @value tag) is an unknown reference.
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java:118:
>  warning - YarnConfiguration#YARN_CONTAINER_SANDBOX_POLICY_GROUP_PREFIX 
> (referenced by @value tag) is an unknown reference.
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java:211:
>  warning - YarnConfiguration#YARN_CONTAINER_SANDBOX_WHITELIST_GROUP 
> (referenced by @value tag) is an unknown reference.
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java:211:
>  warning - NMContainerPolicyUtils#SECURITY_FLAG (referenced by @value tag) is 
> an unknown reference.
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TrafficControlBandwidthHandlerImpl.java:248:
>  warning - @return tag has no arguments.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8706) DelayedProcessKiller is executed for Docker containers even though docker stop sends a KILL signal after the specified grace period

2018-09-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16615042#comment-16615042
 ] 

Hudson commented on YARN-8706:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14958 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14958/])
YARN-8706.  Allow additional flag in docker inspect call. (eyang: 
rev 99237607bf73e97b06eeb3455aa1327bfab4d5d2)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c


> DelayedProcessKiller is executed for Docker containers even though docker 
> stop sends a KILL signal after the specified grace period
> ---
>
> Key: YARN-8706
> URL: https://issues.apache.org/jira/browse/YARN-8706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: docker
> Fix For: 3.2.0
>
> Attachments: YARN-8706.001.patch, YARN-8706.002.patch, 
> YARN-8706.003.patch, YARN-8706.004.patch, YARN-8706.addendum.001.patch
>
>
> {{DockerStopCommand}} adds a grace period of 10 seconds.
> 10 seconds is also the default grace time used by docker stop
>  [https://docs.docker.com/engine/reference/commandline/stop/]
> Documentation of docker stop:
> {quote}the main process inside the container will receive {{SIGTERM}}, and 
> after a grace period, {{SIGKILL}}.
> {quote}
> There is a {{DelayedProcessKiller}} in {{ContainerExecutor}} which executes 
> for all containers after a delay when {{sleepDelayBeforeSigKill>0}}. By 
> default this is set to {{250 milliseconds}}, so irrespective of the 
> container type, it will always get executed.
>  
> For a docker container, {{docker stop}} takes care of sending a {{SIGKILL}} 
> after the grace period:
> - when sleepDelayBeforeSigKill > 10 seconds, there is no point in 
> executing DelayedProcessKiller
> - when sleepDelayBeforeSigKill < 1 second, the grace period should be 
> the smallest value, which is 1 second, because we are forcing a kill 
> after 250 ms anyway
>  
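
A hedged sketch of the grace-period clamping the description argues for,
deriving the docker stop grace period from the
yarn.nodemanager.sleep-delay-before-sigkill.ms setting (illustrative names;
the actual patch may differ):
{code:java}
/** Illustrative only: clamp the docker stop grace period as suggested. */
final class DockerStopGracePeriod {

  /** Docker interprets the grace period in whole seconds, minimum 1. */
  static int fromSleepDelayMs(long sleepDelayBeforeSigKillMs) {
    long seconds = sleepDelayBeforeSigKillMs / 1000;
    return (int) Math.max(1, seconds);
  }

  public static void main(String[] args) {
    System.out.println(fromSleepDelayMs(250));    // -> 1 (clamped minimum)
    System.out.println(fromSleepDelayMs(15000));  // -> 15
  }
}
{code}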



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-8706) DelayedProcessKiller is executed for Docker containers even though docker stop sends a KILL signal after the specified grace period

2018-09-14 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved YARN-8706.
-
Resolution: Fixed

Thank you [~csingh] for the addendum patch.  I committed addendum patch 001 to 
trunk.

> DelayedProcessKiller is executed for Docker containers even though docker 
> stop sends a KILL signal after the specified grace period
> ---
>
> Key: YARN-8706
> URL: https://issues.apache.org/jira/browse/YARN-8706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: docker
> Fix For: 3.2.0
>
> Attachments: YARN-8706.001.patch, YARN-8706.002.patch, 
> YARN-8706.003.patch, YARN-8706.004.patch, YARN-8706.addendum.001.patch
>
>
> {{DockerStopCommand}} adds a grace period of 10 seconds.
> 10 seconds is also the default grace time used by docker stop
>  [https://docs.docker.com/engine/reference/commandline/stop/]
> Documentation of docker stop:
> {quote}the main process inside the container will receive {{SIGTERM}}, and 
> after a grace period, {{SIGKILL}}.
> {quote}
> There is a {{DelayedProcessKiller}} in {{ContainerExecutor}} which executes 
> for all containers after a delay when {{sleepDelayBeforeSigKill>0}}. By 
> default this is set to {{250 milliseconds}}, so irrespective of the 
> container type, it will always get executed.
>  
> For a docker container, {{docker stop}} takes care of sending a {{SIGKILL}} 
> after the grace period:
> - when sleepDelayBeforeSigKill > 10 seconds, there is no point in 
> executing DelayedProcessKiller
> - when sleepDelayBeforeSigKill < 1 second, the grace period should be 
> the smallest value, which is 1 second, because we are forcing a kill 
> after 250 ms anyway
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8775) TestDiskFailures.testLocalDirsFailures sometimes can fail on concurrent File modifications

2018-09-14 Thread Antal Bálint Steinbach (JIRA)
Antal Bálint Steinbach created YARN-8775:


 Summary: TestDiskFailures.testLocalDirsFailures sometimes can fail 
on concurrent File modifications
 Key: YARN-8775
 URL: https://issues.apache.org/jira/browse/YARN-8775
 Project: Hadoop YARN
  Issue Type: Bug
  Components: test, yarn
Affects Versions: 3.0.0
Reporter: Antal Bálint Steinbach
Assignee: Antal Bálint Steinbach


The test can sometimes fail when file operations are performed while the 
periodic check by the thread in _LocalDirsHandlerService_ is running.


{code:java}
java.lang.AssertionError: NodeManager could not identify disk failure.
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.yarn.server.TestDiskFailures.verifyDisksHealth(TestDiskFailures.java:239)
at 
org.apache.hadoop.yarn.server.TestDiskFailures.testDirsFailures(TestDiskFailures.java:202)
at 
org.apache.hadoop.yarn.server.TestDiskFailures.testLocalDirsFailures(TestDiskFailures.java:99)

Stderr


2018-09-13 08:21:49,822 INFO [main] server.TestDiskFailures 
(TestDiskFailures.java:prepareDirToFail(277)) - Prepared 
/tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_1
 to fail.
2018-09-13 08:21:49,823 INFO [main] server.TestDiskFailures 
(TestDiskFailures.java:prepareDirToFail(277)) - Prepared 
/tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_3
 to fail.
2018-09-13 08:21:49,823 WARN [DiskHealthMonitor-Timer] 
nodemanager.DirectoryCollection (DirectoryCollection.java:checkDirs(283)) - 
Directory 
/tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_1
 error, Not a directory: 
/tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_1,
 removing from list of valid directories
2018-09-13 08:21:49,824 WARN [DiskHealthMonitor-Timer] 
localizer.ResourceLocalizationService 
(ResourceLocalizationService.java:initializeLogDir(1329)) - Could not 
initialize log dir 
/tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_3
java.io.FileNotFoundException: Destination exists and is not a directory: 
/tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_3
at 
org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:515)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:496)
at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:1081)
at 
org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:178)
at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:205)
at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:747)
at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:743)
at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:743)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.initializeLogDir(ResourceLocalizationService.java:1324)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.initializeLogDirs(ResourceLocalizationService.java:1318)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.access$000(ResourceLocalizationService.java:141)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$2.onDirsChanged(ResourceLocalizationService.java:269)
at 
org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.checkDirs(DirectoryCollection.java:317)
at 
org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.checkDirs(LocalDirsHandlerService.java:452)
at 
org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.access$500(LocalDirsHandlerService.java:52)
at 
org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService$MonitoringTimerTask.run(LocalDirsHandlerService.java:166)
at java.util.TimerThread.mainLoop(Timer.java:555)
at 

[jira] [Deleted] (YARN-8773) Blacklisting support for scheduling AMs

2018-09-14 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan deleted YARN-8773:
-


> Blacklisting support for scheduling AMs 
> 
>
> Key: YARN-8773
> URL: https://issues.apache.org/jira/browse/YARN-8773
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Prabhu Joseph
>Assignee: Wangda Tan
>Priority: Major
>
> MapReduce jobs failed with both AM attempts failing on the same node - the 
> node had some issue. Both AM attempts were placed on the same node because 
> there is no blacklisting feature. The customer is expecting a fix for 
> YARN-2005 + YARN-4389. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8759) Copy of "resource-types.xml" is not deleted if test fails, causes other test failures

2018-09-14 Thread JIRA


[ 
https://issues.apache.org/jira/browse/YARN-8759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614915#comment-16614915
 ] 

Antal Bálint Steinbach commented on YARN-8759:
--

Hi [~maniraj...@gmail.com],

Thanks for your comments.
 # It is not mandatory; I just set it to null because some tests do not 
use/set the File field, so in tearDown the null check is faster than the 
File.exists() call (see the sketch after this list).
 # In TestRMAdminCLI the _File dest_ field is set up in the test setup method 
before every test, so it does not make sense to set it to null or do a null 
check in tearDown.
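
For illustration, a minimal sketch of the tearDown pattern being discussed 
(class and field names are illustrative, not the actual test code):

{code:java}
import java.io.File;
import org.junit.After;

public class ResourceTypesTestSketch {
  // Copy of resource-types.xml; only some tests set this field.
  private File dest;

  @After
  public void tearDown() {
    // Null check first (cheaper than File.exists() for tests that never
    // set the field), then delete so a failing test cannot leak the file.
    if (dest != null && dest.exists()) {
      dest.delete();
    }
  }
}
{code}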

> Copy of "resource-types.xml" is not deleted if test fails, causes other test 
> failures
> -
>
> Key: YARN-8759
> URL: https://issues.apache.org/jira/browse/YARN-8759
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Antal Bálint Steinbach
>Assignee: Antal Bálint Steinbach
>Priority: Major
> Attachments: YARN-8759.001.patch, YARN-8759.002.patch, 
> YARN-8759.003.patch
>
>
> Several tests copy resource-types.xml to the test machine, but it is deleted 
> only at the end of the test. If the test fails, the file is not deleted and 
> other tests fail because of the stale configuration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8047) RMWebApp make external class pluggable

2018-09-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614911#comment-16614911
 ] 

Hadoop QA commented on YARN-8047:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 28s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 218 unchanged - 0 fixed = 221 total (was 218) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
2m  9s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
14s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 80m 
27s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8047 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939708/YARN-8047-002.patch |
| Optional Tests |  dupname  

[jira] [Commented] (YARN-8047) RMWebApp make external class pluggable

2018-09-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614874#comment-16614874
 ] 

Hadoop QA commented on YARN-8047:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 21s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 218 unchanged - 0 fixed = 221 total (was 218) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 49s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 73m 
29s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8047 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925307/YARN-8047-001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ef70dd528219 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (YARN-8774) Memory leak when CapacityScheduler allocates from reserved container with non-default label

2018-09-14 Thread Tao Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614866#comment-16614866
 ] 

Tao Yang commented on YARN-8774:


Attached v1 patch for review.

> Memory leak when CapacityScheduler allocates from reserved container with 
> non-default label
> ---
>
> Key: YARN-8774
> URL: https://issues.apache.org/jira/browse/YARN-8774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Attachments: YARN-8774.001.patch
>
>
> The cause is that the RMContainerImpl instance of a reserved container loses 
> its node label expression; when the scheduler reserves containers for 
> non-default node-label requests, the instance is wrongly added into 
> LeafQueue#ignorePartitionExclusivityRMContainers and never removed.
> To reproduce this memory leak:
> (1) create reserved container
> RegularContainerAllocator#doAllocation:  create RMContainerImpl instanceA 
> (nodeLabelExpression="")
> LeafQueue#allocateResource:  RMContainerImpl instanceA is put into  
> LeafQueue#ignorePartitionExclusivityRMContainers
> (2) allocate from reserved container
> RegularContainerAllocator#doAllocation: create RMContainerImpl instanceB 
> (nodeLabelExpression="test-label")
> (3) From now on, RMContainerImpl instanceA is left in memory (kept in 
> LeafQueue#ignorePartitionExclusivityRMContainers) until the RM is restarted
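
One plausible shape of such a leak, as a hypothetical simplification (this is 
not the actual LeafQueue code): an entry added under one key can only be 
removed under another, so once the label is lost the keys disagree and the 
entry stays in the map until restart.

{code:java}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class PartitionBookkeepingSketch {
  private final Map<String, Set<Object>> containersByPartition = new HashMap<>();

  void add(String nodePartition, Object rmContainer) {
    containersByPartition
        .computeIfAbsent(nodePartition, k -> new HashSet<>())
        .add(rmContainer);             // instanceA filed under "" (lost label)
  }

  void remove(String labelExpression, Object rmContainer) {
    Set<Object> set = containersByPartition.get(labelExpression);
    if (set != null) {
      set.remove(rmContainer);         // looked up under "test-label": miss
    }
  }
}
{code}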



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8774) Memory leak when CapacityScheduler allocates from reserved container with non-default label

2018-09-14 Thread Tao Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-8774:
---
Attachment: YARN-8774.001.patch

> Memory leak when CapacityScheduler allocates from reserved container with 
> non-default label
> ---
>
> Key: YARN-8774
> URL: https://issues.apache.org/jira/browse/YARN-8774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Attachments: YARN-8774.001.patch
>
>
> The cause is that the RMContainerImpl instance of a reserved container loses 
> its node label expression; when the scheduler reserves containers for 
> non-default node-label requests, the instance is wrongly added into 
> LeafQueue#ignorePartitionExclusivityRMContainers and never removed.
> To reproduce this memory leak:
> (1) create reserved container
> RegularContainerAllocator#doAllocation:  create RMContainerImpl instanceA 
> (nodeLabelExpression="")
> LeafQueue#allocateResource:  RMContainerImpl instanceA is put into  
> LeafQueue#ignorePartitionExclusivityRMContainers
> (2) allocate from reserved container
> RegularContainerAllocator#doAllocation: create RMContainerImpl instanceB 
> (nodeLabelExpression="test-label")
> (3) From now on, RMContainerImpl instanceA is left in memory (kept in 
> LeafQueue#ignorePartitionExclusivityRMContainers) until the RM is restarted



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8774) Memory leak when CapacityScheduler allocates from reserved container with non-default label

2018-09-14 Thread Tao Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-8774:
---
Description: 
The cause is that the RMContainerImpl instance of a reserved container loses 
its node label expression; when the scheduler reserves containers for 
non-default node-label requests, the instance is wrongly added into 
LeafQueue#ignorePartitionExclusivityRMContainers and never removed.

To reproduce this memory leak:
(1) create reserved container
RegularContainerAllocator#doAllocation:  create RMContainerImpl instanceA 
(nodeLabelExpression="")
LeafQueue#allocateResource:  RMContainerImpl instanceA is put into  
LeafQueue#ignorePartitionExclusivityRMContainers
(2) allocate from reserved container
RegularContainerAllocator#doAllocation: create RMContainerImpl instanceB 
(nodeLabelExpression="test-label")
(3) From now on, RMContainerImpl instanceA is left in memory (kept in 
LeafQueue#ignorePartitionExclusivityRMContainers) until the RM is restarted

  was:
Reproduce memory leak:
(1) create reserved container
RegularContainerAllocator#doAllocation:  create RMContainerImpl instanceA 
(nodeLabelExpression="")
LeafQueue#allocateResource:  RMContainerImpl instanceA is put into  
LeafQueue#ignorePartitionExclusivityRMContainers
(2) allocate from reserved container
RegularContainerAllocator#doAllocation: create RMContainerImpl instanceB 
(nodeLabelExpression="test-label")
(3) From now on, RMContainerImpl instanceA will be left in memory (be kept in 
LeafQueue#ignorePartitionExclusivityRMContainers) forever until RM restarted


> Memory leak when CapacityScheduler allocates from reserved container with 
> non-default label
> ---
>
> Key: YARN-8774
> URL: https://issues.apache.org/jira/browse/YARN-8774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
>
> The cause is that the RMContainerImpl instance of a reserved container loses 
> its node label expression; when the scheduler reserves containers for 
> non-default node-label requests, the instance is wrongly added into 
> LeafQueue#ignorePartitionExclusivityRMContainers and never removed.
> To reproduce this memory leak:
> (1) create reserved container
> RegularContainerAllocator#doAllocation:  create RMContainerImpl instanceA 
> (nodeLabelExpression="")
> LeafQueue#allocateResource:  RMContainerImpl instanceA is put into  
> LeafQueue#ignorePartitionExclusivityRMContainers
> (2) allocate from reserved container
> RegularContainerAllocator#doAllocation: create RMContainerImpl instanceB 
> (nodeLabelExpression="test-label")
> (3) From now on, RMContainerImpl instanceA is left in memory (kept in 
> LeafQueue#ignorePartitionExclusivityRMContainers) until the RM is restarted



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8774) Memory leak when CapacityScheduler allocates from reserved container with non-default label

2018-09-14 Thread Tao Yang (JIRA)
Tao Yang created YARN-8774:
--

 Summary: Memory leak when CapacityScheduler allocates from 
reserved container with non-default label
 Key: YARN-8774
 URL: https://issues.apache.org/jira/browse/YARN-8774
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler
Reporter: Tao Yang
Assignee: Tao Yang


Reproduce memory leak:
(1) create reserved container
RegularContainerAllocator#doAllocation:  create RMContainerImpl instanceA 
(nodeLabelExpression="")
LeafQueue#allocateResource:  RMContainerImpl instanceA is put into  
LeafQueue#ignorePartitionExclusivityRMContainers
(2) allocate from reserved container
RegularContainerAllocator#doAllocation: create RMContainerImpl instanceB 
(nodeLabelExpression="test-label")
(3) From now on, RMContainerImpl instanceA will be left in memory (be kept in 
LeafQueue#ignorePartitionExclusivityRMContainers) forever until RM restarted



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8720) CapacityScheduler does not enforce max resource allocation check at queue level

2018-09-14 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8720:
--
Fix Version/s: 2.8.6

> CapacityScheduler does not enforce max resource allocation check at queue 
> level
> ---
>
> Key: YARN-8720
> URL: https://issues.apache.org/jira/browse/YARN-8720
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Major
> Fix For: 3.2.0, 2.9.2, 3.0.4, 3.1.2, 2.8.6
>
> Attachments: YARN-8720-branch-2.8.001.patch, YARN-8720.001.patch, 
> YARN-8720.002.patch
>
>
> The value of 
> yarn.scheduler.capacity..maximum-allocation-mb/vcores is not 
> strictly enforced when applications request containers. An 
> InvalidResourceRequestException is thrown only when the ResourceRequest is 
> greater than the global value of yarn.scheduler.maximum-allocation-mb/vcores. 
> So for an example configuration such as the one below,
>  
> {code:java}
> yarn.scheduler.maximum-allocation-mb=4096
> yarn.scheduler.capacity.root.test.maximum-allocation-mb=2048
> {code}
>  
> The DSShell command below runs successfully and asks for an AM container of 
> 4096 MB, which is greater than the 2048 MB maximum configured for the test 
> queue.
> {code:java}
> yarn jar $YARN_HOME/hadoop-yarn-applications-distributedshell.jar 
> -num_containers 1 -jar 
> $YARN_HOME/hadoop-yarn-applications-distributedshell.jar -shell_command 
> "sleep 60" -container_memory=4096 -master_memory=4096 -queue=test{code}
> Instead, the application should fail to launch with an 
> InvalidResourceRequestException. The child container, however, will be 
> requested with 2048 MB, as the DSShell AppMaster does the check below before 
> sending the ResourceRequest to the RM.
> {code:java}
> // A resource ask cannot exceed the max.
> if (containerMemory > maxMem) {
>  LOG.info("Container memory specified above max threshold of cluster."
>  + " Using max value." + ", specified=" + containerMemory + ", max="
>  + maxMem);
>  containerMemory = maxMem;
> }{code}
>  
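
A minimal sketch of the missing queue-level check (class, method, and message 
are illustrative, not the committed patch):

{code:java}
import org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException;

public class QueueMaxCheckSketch {
  // Validate a request against the queue's maximum allocation, not only
  // the global yarn.scheduler.maximum-allocation-mb value.
  static void validateAgainstQueueMax(long requestedMb, long queueMaxMb,
      String queue) throws InvalidResourceRequestException {
    if (requestedMb > queueMaxMb) {
      throw new InvalidResourceRequestException("Requested memory "
          + requestedMb + " MB exceeds maximum allocation " + queueMaxMb
          + " MB configured for queue " + queue);
    }
  }
}
{code}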



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8720) CapacityScheduler does not enforce max resource allocation check at queue level

2018-09-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614699#comment-16614699
 ] 

Hadoop QA commented on YARN-8720:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m  
6s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
51s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} branch-2.8 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 19s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
20s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ae3769f |
| JIRA Issue | YARN-8720 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939688/YARN-8720-branch-2.8.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 36662255b494 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.8 / 7f7a3c8 |
| maven | version: Apache Maven 3.0.5 |
| Default Java | 1.7.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/21838/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21838/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/21838/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 641 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21838/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> CapacityScheduler does not enforce max resource allocation check at 

[jira] [Issue Comment Deleted] (YARN-8645) Yarn NM fail to start when remount cpu control group

2018-09-14 Thread Bilwa S T (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-8645:

Comment: was deleted

(was: Hi [~yangjiandan]
just disabling yarn.nodemanager.linux-container-executor.cgroups.mount would 
work fine .)

> Yarn NM fail to start when remount cpu control group
> 
>
> Key: YARN-8645
> URL: https://issues.apache.org/jira/browse/YARN-8645
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Jiandan Yang 
>Priority: Major
>
> NM failed to start when we updated YARN to the latest version. NM logs are 
> as follows:
> {code:java}
> 2018-08-08 16:07:01,244 INFO [main] 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsHandlerImpl:
>  Mounting controller cpu at /sys/fs/cgroup/cpu
> 2018-08-08 16:07:01,246 WARN [main] 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor:
>  Shell execution returned exit code: 32. Privileged Execution Operation 
> Stderr:
> Feature disabled: mount cgroup
> Stdout:
> Full command array for failed execution:
> [/home/hadoop/hadoop_hbase/hadoop-current/bin/container-executor, 
> --mount-cgroups, hadoop-yarn, cpu,cpuset,cpuacct=/sys/fs/cgroup/cpu]
> 2018-08-08 16:07:01,247 ERROR [main] 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsHandlerImpl:
>  Failed to mount controller: cpu
> 2018-08-08 16:07:01,247 ERROR [main] 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Failed to 
> bootstrap configured resource subsystems!
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandlerException:
>  Failed to mount controller: cpu
>  {code}
> The cause of the error is that 351cf87c92872d90f62c476f85ae4d02e485769c 
> disabled mounting cgroups by default in container-executor, which makes 
> container-executor return a non-zero exit code when executing mount-cgroups



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8645) Yarn NM fail to start when remount cpu control group

2018-09-14 Thread Bilwa S T (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614679#comment-16614679
 ] 

Bilwa S T commented on YARN-8645:
-

Hi [~yangjiandan]
just disabling yarn.nodemanager.linux-container-executor.cgroups.mount would 
work fine; the setting is sketched below.
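
The workaround amounts to the following yarn-site setting (shown in the same 
property=value shorthand used elsewhere in this digest; set it in 
yarn-site.xml):

{code:java}
yarn.nodemanager.linux-container-executor.cgroups.mount=false
{code}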

> Yarn NM fail to start when remount cpu control group
> 
>
> Key: YARN-8645
> URL: https://issues.apache.org/jira/browse/YARN-8645
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Jiandan Yang 
>Priority: Major
>
> NM failed to start when we updated YARN to the latest version. NM logs are 
> as follows:
> {code:java}
> 2018-08-08 16:07:01,244 INFO [main] 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsHandlerImpl:
>  Mounting controller cpu at /sys/fs/cgroup/cpu
> 2018-08-08 16:07:01,246 WARN [main] 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor:
>  Shell execution returned exit code: 32. Privileged Execution Operation 
> Stderr:
> Feature disabled: mount cgroup
> Stdout:
> Full command array for failed execution:
> [/home/hadoop/hadoop_hbase/hadoop-current/bin/container-executor, 
> --mount-cgroups, hadoop-yarn, cpu,cpuset,cpuacct=/sys/fs/cgroup/cpu]
> 2018-08-08 16:07:01,247 ERROR [main] 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsHandlerImpl:
>  Failed to mount controller: cpu
> 2018-08-08 16:07:01,247 ERROR [main] 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Failed to 
> bootstrap configured resource subsystems!
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandlerException:
>  Failed to mount controller: cpu
>  {code}
> The cause of the error is that 351cf87c92872d90f62c476f85ae4d02e485769c 
> disabled mounting cgroups by default in container-executor, which makes 
> container-executor return a non-zero exit code when executing mount-cgroups



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8047) RMWebApp make external class pluggable

2018-09-14 Thread Bilwa S T (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-8047:

Attachment: YARN-8047-002.patch

> RMWebApp make external class pluggable
> --
>
> Key: YARN-8047
> URL: https://issues.apache.org/jira/browse/YARN-8047
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Minor
> Attachments: YARN-8047-001.patch, YARN-8047-002.patch
>
>
> This JIRA should make sure we are able to plug in webservices and web pages 
> of the scheduler in the ResourceManager:
> * RMWebApp should allow binding external classes
> * RMController should allow plugging in scheduler classes
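
As a hypothetical sketch of what the pluggability could look like (the class 
name and configuration key are assumptions for illustration, not from the 
patch):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.webapp.WebApp;

// Hypothetical: an RM web app whose setup() binds an externally configured
// webservice class in addition to the built-in ones.
public class PluggableRMWebAppSketch extends WebApp {
  private final Configuration conf;

  public PluggableRMWebAppSketch(Configuration conf) {
    this.conf = conf;
  }

  @Override
  public void setup() {
    // "yarn.resourcemanager.webapp.external-webservices.class" is an
    // assumed key for illustration only.
    Class<?> external = conf.getClass(
        "yarn.resourcemanager.webapp.external-webservices.class", null);
    if (external != null) {
      bind(external);  // Guice binding, as RMWebApp does for its own services
    }
  }
}
{code}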



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8771) CapacityScheduler fails to unreserve when cluster resource contains empty resource type

2018-09-14 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614673#comment-16614673
 ] 

Weiwei Yang commented on YARN-8771:
---

[~Tao Yang], good catch and nice UT. I will help to review.

+[~sunilg] too

> CapacityScheduler fails to unreserve when cluster resource contains empty 
> resource type
> ---
>
> Key: YARN-8771
> URL: https://issues.apache.org/jira/browse/YARN-8771
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.0
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Attachments: YARN-8771.001.patch, YARN-8771.002.patch
>
>
> We found this problem when the cluster was almost but not fully exhausted 
> (93% used): the scheduler kept allocating for an app but always failed to 
> commit. This can block requests from other apps, and parts of the cluster 
> resource cannot be used.
> To reproduce this problem:
> (1) use DominantResourceCalculator
> (2) cluster resource has an empty resource type, for example: gpu=0
> (3) scheduler allocates a container for app1, which has reserved containers 
> and whose queue limit or user limit is reached (used + required > limit). 
> Reference code in RegularContainerAllocator#assignContainer:
> {code:java}
> boolean needToUnreserve =
>     Resources.greaterThan(rc, clusterResource,
>         resourceNeedToUnReserve, Resources.none());
> {code}
> The value of resourceNeedToUnReserve can be <8GB, -6 cores, 0 gpu>, and the 
> result of {{Resources#greaterThan}} will be false when using 
> DominantResourceCalculator.
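
One assumed mechanism for how an empty resource type can defeat the 
comparison (a simplification, not the real DominantResourceCalculator code): 
a 0/0 share is NaN, and every comparison against NaN is false.

{code:java}
public class DominantShareSketch {
  // Illustrative dominant-share comparison. With needed = {8 GB, -6 vcores,
  // 0 gpu} and a cluster gpu total of 0, the gpu share is 0.0/0.0 = NaN;
  // Math.max propagates NaN and "NaN > 0.0" is false, so the method returns
  // false even though 8 GB of memory clearly needs to be unreserved.
  static boolean greaterThanDominant(double[] needed, double[] clusterTotal) {
    double maxShare = Double.NEGATIVE_INFINITY;
    for (int i = 0; i < needed.length; i++) {
      maxShare = Math.max(maxShare, needed[i] / clusterTotal[i]);
    }
    return maxShare > 0.0;
  }
}
{code}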



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8047) RMWebApp make external class pluggable

2018-09-14 Thread Bilwa S T (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-8047:

Attachment: (was: YARN-8047-002.patch)

> RMWebApp make external class pluggable
> --
>
> Key: YARN-8047
> URL: https://issues.apache.org/jira/browse/YARN-8047
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Minor
> Attachments: YARN-8047-001.patch
>
>
> This JIRA should make sure we are able to plug in webservices and web pages 
> of the scheduler in the ResourceManager:
> * RMWebApp should allow binding external classes
> * RMController should allow plugging in scheduler classes



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8715) Make allocation tags in the placement spec optional for node-attributes

2018-09-14 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614665#comment-16614665
 ] 

Sunil Govindan commented on YARN-8715:
--

bq. If the expression contains a single spec, the sourceTags can be optional 
for a node-attribute constraint but are enforced for an allocation-tag 
constraint

Yes, this makes sense [~cheersyang]. In such cases, we can enforce for a single 
constraint.

I am fine with this patch, will get this in later today if no objections.

> Make allocation tags in the placement spec optional for node-attributes
> ---
>
> Key: YARN-8715
> URL: https://issues.apache.org/jira/browse/YARN-8715
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8715.001.patch
>
>
> YARN-7863 adds support to specify constraints targeting node-attributes, 
> including support in the distributed shell, but it still needs to specify 
> {{allocationTags=numOfContainers}} in the spec. We should make this optional 
> as it is not required for node-attribute expressions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8715) Make allocation tags in the placement spec optional for node-attributes

2018-09-14 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614657#comment-16614657
 ] 

Weiwei Yang commented on YARN-8715:
---

Hi [~sunilg]
{quote}When will this check hit?
{quote}
If sourceTags (the foo=3 field) is not specified in a spec, this check ensures 
it must be a SINGLE node-attribute constraint. In other words, an expression 
like "rm.yarn.io/foo=true:xyz=1,notin,node,xyz" is not allowed (you can take a 
look at the test code I added in testParseNodeAttributeSpec), because such an 
expression doesn't make much sense. So the convention is:
 * If the expression contains multiple specs, then each of them MUST have 
sourceTags defined;
 * If the expression contains a single spec, the sourceTags can be optional 
for a node-attribute constraint but are enforced for an allocation-tag 
constraint
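
To make the convention concrete, a few illustrative expressions (the exact 
strings are illustrative, modeled on the examples in this thread):

{code:java}
// Allowed: a single node-attribute spec may omit sourceTags.
//   rm.yarn.io/foo=true
// Allowed: multiple specs, each with its own sourceTags.
//   foo=3,notin,node,foo:bar=2,notin,node,bar
// Not allowed: a tag-less node-attribute spec combined with a tagged spec.
//   rm.yarn.io/foo=true:xyz=1,notin,node,xyz
{code}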

With this check, we won't run into the problem you mentioned in the 1st comment
{quote}So if we have 2 or more specs, the total number of containers will not 
match, and I think the DS application won't exit due to this
{quote}
because if there are 2 specs, they must have sourceTags specified, so the 
count won't get reset to "-num_containers".

Hope it makes sense.

Thanks

 

> Make allocation tags in the placement spec optional for node-attributes
> ---
>
> Key: YARN-8715
> URL: https://issues.apache.org/jira/browse/YARN-8715
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8715.001.patch
>
>
> YARN-7863 adds support to specify constraints targeting node-attributes, 
> including support in the distributed shell, but it still needs to specify 
> {{allocationTags=numOfContainers}} in the spec. We should make this optional 
> as it is not required for node-attribute expressions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8715) Make allocation tags in the placement spec optional for node-attributes

2018-09-14 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614641#comment-16614641
 ] 

Sunil Govindan commented on YARN-8715:
--

Hi [~cheersyang]

Thanks for the patch
{code:java}
// Use global num of containers when the spec doesn't specify
// source tags. This is allowed when using node-attribute constraints.
if (Strings.isNullOrEmpty(pSpec.sourceTag)
    && pSpec.getNumContainers() == 0
    && globalNumOfContainers > 0) {
  pSpec.setNumContainers(globalNumOfContainers);
}{code}
Here, numContainers is set on each spec. So if we have 2 or more specs, the 
total number of containers will not match, and I think the DS application 
won't exit due to this. Could you please check once?

 
{code:java}
if (sourceTagSet.stream()
    .filter(sourceTags -> sourceTags.isEmpty())
    .findAny()
    .isPresent()) {{code}
When will this check hit?

> Make allocation tags in the placement spec optional for node-attributes
> ---
>
> Key: YARN-8715
> URL: https://issues.apache.org/jira/browse/YARN-8715
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8715.001.patch
>
>
> YARN-7863 adds support to specify constraints targeting node-attributes, 
> including support in the distributed shell, but it still needs to specify 
> {{allocationTags=numOfContainers}} in the spec. We should make this optional 
> as it is not required for node-attribute expressions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8715) Make allocation tags in the placement spec optional for node-attributes

2018-09-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614625#comment-16614625
 ] 

Hadoop QA commented on YARN-8715:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
5s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
50s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
24s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8715 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939680/YARN-8715.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6479ab9c5255 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 568ebec |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21837/testReport/ |
| Max. process+thread count | 620 (vs. ulimit of 

[jira] [Commented] (YARN-8765) Extend YARN to support SET type of resources

2018-09-14 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614610#comment-16614610
 ] 

Weiwei Yang commented on YARN-8765:
---

Hi [~suma.shivaprasad]

Thanks for the comments. I can see there are 2 points:

1) Declaring a resource as a range

At the resource layer everything must be specific, so a RANGE would be hard 
to achieve at this level. However, we can tentatively achieve that at a 
higher layer. E.g. on the NM side, we can allow the user to specify a range 
of IP addresses; the NM translates that into specific IP addresses and then 
reports them back to the RM (a sketch follows).
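
For instance, a minimal sketch of that NM-side translation (hypothetical; 
the range format and helper are illustrative and only handle a last-octet 
range):

{code:java}
import java.util.ArrayList;
import java.util.List;

public class IpRangeSketch {
  // Expand "10.0.0.10-10.0.0.20" into the specific addresses the NM would
  // report back to the RM as a SET resource (last-octet ranges only).
  static List<String> expandLastOctetRange(String range) {
    String[] ends = range.split("-");
    int dot = ends[0].lastIndexOf('.');
    String prefix = ends[0].substring(0, dot + 1);            // "10.0.0."
    int from = Integer.parseInt(ends[0].substring(dot + 1));  // 10
    int to = Integer.parseInt(
        ends[1].substring(ends[1].lastIndexOf('.') + 1));     // 20
    List<String> ips = new ArrayList<>();
    for (int i = from; i <= to; i++) {
      ips.add(prefix + i);
    }
    return ips;
  }
}
{code}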

2) How such a resource can be managed

In the proposal, a SET resource is managed by the NM like the rest of the 
resources. But you are raising the point of having a central resource 
provider to manage them (internally in the RM or externally in a plugin).

#1 can be done on top of this proposal; #2 exceeds the scope here, but we can 
discuss that in a separate ticket.

What do you think?

 

> Extend YARN to support SET type of resources
> 
>
> Key: YARN-8765
> URL: https://issues.apache.org/jira/browse/YARN-8765
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Weiwei Yang
>Priority: Major
>
> YARN-3926 evolves a new resource model in YARN by providing a general 
> resource definition mechanism. However right now only COUNTABLE type is 
> supported. To support resources that cannot be declared with a single value, 
> we propose to add a SET type. This will extend YARN to manage IP address 
> resources. Design doc is attached 
> [here|https://docs.google.com/document/d/1U9hj1xX9a3c_xT_X4EP_YC0fZ7ItD5X-rB0bYne-waU/edit?usp=sharing].
> This feature is split from YARN-8446.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8720) CapacityScheduler does not enforce max resource allocation check at queue level

2018-09-14 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614605#comment-16614605
 ] 

Weiwei Yang commented on YARN-8720:
---

Re-opening to trigger a Jenkins build for branch-2.8.

> CapacityScheduler does not enforce max resource allocation check at queue 
> level
> ---
>
> Key: YARN-8720
> URL: https://issues.apache.org/jira/browse/YARN-8720
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Major
> Fix For: 3.2.0, 2.9.2, 3.0.4, 3.1.2
>
> Attachments: YARN-8720-branch-2.8.001.patch, YARN-8720.001.patch, 
> YARN-8720.002.patch
>
>
> The value of 
> yarn.scheduler.capacity..maximum-allocation-mb/vcores is not 
> strictly enforced when applications request containers. An 
> InvalidResourceRequestException is thrown only when the ResourceRequest is 
> greater than the global value of yarn.scheduler.maximum-allocation-mb/vcores. 
> So for an example configuration such as the one below,
>  
> {code:java}
> yarn.scheduler.maximum-allocation-mb=4096
> yarn.scheduler.capacity.root.test.maximum-allocation-mb=2048
> {code}
>  
> The DSShell command below runs successfully and asks for an AM container of 
> 4096 MB, which is greater than the 2048 MB maximum configured for the test 
> queue.
> {code:java}
> yarn jar $YARN_HOME/hadoop-yarn-applications-distributedshell.jar 
> -num_containers 1 -jar 
> $YARN_HOME/hadoop-yarn-applications-distributedshell.jar -shell_command 
> "sleep 60" -container_memory=4096 -master_memory=4096 -queue=test{code}
> Instead, the application should fail to launch with an 
> InvalidResourceRequestException. The child container, however, will be 
> requested with 2048 MB, as the DSShell AppMaster does the check below before 
> sending the ResourceRequest to the RM.
> {code:java}
> // A resource ask cannot exceed the max.
> if (containerMemory > maxMem) {
>  LOG.info("Container memory specified above max threshold of cluster."
>  + " Using max value." + ", specified=" + containerMemory + ", max="
>  + maxMem);
>  containerMemory = maxMem;
> }{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-8720) CapacityScheduler does not enforce max resource allocation check at queue level

2018-09-14 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reopened YARN-8720:
---

> CapacityScheduler does not enforce max resource allocation check at queue 
> level
> ---
>
> Key: YARN-8720
> URL: https://issues.apache.org/jira/browse/YARN-8720
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Major
> Fix For: 3.2.0, 2.9.2, 3.0.4, 3.1.2
>
> Attachments: YARN-8720-branch-2.8.001.patch, YARN-8720.001.patch, 
> YARN-8720.002.patch
>
>
> The value of 
> yarn.scheduler.capacity..maximum-allocation-mb/vcores is not 
> strictly enforced when applications request containers. An 
> InvalidResourceRequestException is thrown only when the ResourceRequest is 
> greater than the global value of yarn.scheduler.maximum-allocation-mb/vcores. 
> So for an example configuration such as the one below,
>  
> {code:java}
> yarn.scheduler.maximum-allocation-mb=4096
> yarn.scheduler.capacity.root.test.maximum-allocation-mb=2048
> {code}
>  
> The DSShell command below runs successfully and asks for an AM container of 
> 4096 MB, which is greater than the 2048 MB maximum configured for the test 
> queue.
> {code:java}
> yarn jar $YARN_HOME/hadoop-yarn-applications-distributedshell.jar 
> -num_containers 1 -jar 
> $YARN_HOME/hadoop-yarn-applications-distributedshell.jar -shell_command 
> "sleep 60" -container_memory=4096 -master_memory=4096 -queue=test{code}
> Instead, the application should fail to launch with an 
> InvalidResourceRequestException. The child container, however, will be 
> requested with 2048 MB, as the DSShell AppMaster does the check below before 
> sending the ResourceRequest to the RM.
> {code:java}
> // A resource ask cannot exceed the max.
> if (containerMemory > maxMem) {
>  LOG.info("Container memory specified above max threshold of cluster."
>  + " Using max value." + ", specified=" + containerMemory + ", max="
>  + maxMem);
>  containerMemory = maxMem;
> }{code}
>  






[jira] [Commented] (YARN-8767) TestStreamingStatus fails

2018-09-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614581#comment-16614581
 ] 

Hadoop QA commented on YARN-8767:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-tools_hadoop-streaming generated 0 new + 78 
unchanged - 5 fixed = 78 total (was 83) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-tools/hadoop-streaming: The patch 
generated 3 new + 62 unchanged - 3 fixed = 65 total (was 65) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m  
0s{color} | {color:green} hadoop-streaming in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8767 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939560/YARN-8767.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e8d3e4c8f268 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 568ebec |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/21836/artifact/out/diff-checkstyle-hadoop-tools_hadoop-streaming.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21836/testReport/ |
| Max. process+thread count | 713 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-streaming U: hadoop-tools/hadoop-streaming |
| Console output | 

[jira] [Updated] (YARN-8720) CapacityScheduler does not enforce max resource allocation check at queue level

2018-09-14 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8720:
--
Fix Version/s: 2.9.2

> CapacityScheduler does not enforce max resource allocation check at queue 
> level
> ---
>
> Key: YARN-8720
> URL: https://issues.apache.org/jira/browse/YARN-8720
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Major
> Fix For: 3.2.0, 2.9.2, 3.0.4, 3.1.2
>
> Attachments: YARN-8720-branch-2.8.001.patch, YARN-8720.001.patch, 
> YARN-8720.002.patch
>






[jira] [Commented] (YARN-8720) CapacityScheduler does not enforce max resource allocation check at queue level

2018-09-14 Thread Tarun Parimi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614560#comment-16614560
 ] 

Tarun Parimi commented on YARN-8720:


Attached a patch for branch-2.8.

> CapacityScheduler does not enforce max resource allocation check at queue 
> level
> ---
>
> Key: YARN-8720
> URL: https://issues.apache.org/jira/browse/YARN-8720
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: YARN-8720-branch-2.8.001.patch, YARN-8720.001.patch, 
> YARN-8720.002.patch
>






[jira] [Updated] (YARN-8720) CapacityScheduler does not enforce max resource allocation check at queue level

2018-09-14 Thread Tarun Parimi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tarun Parimi updated YARN-8720:
---
Attachment: YARN-8720-branch-2.8.001.patch

> CapacityScheduler does not enforce max resource allocation check at queue 
> level
> ---
>
> Key: YARN-8720
> URL: https://issues.apache.org/jira/browse/YARN-8720
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: YARN-8720-branch-2.8.001.patch, YARN-8720.001.patch, 
> YARN-8720.002.patch
>






[jira] [Commented] (YARN-8720) CapacityScheduler does not enforce max resource allocation check at queue level

2018-09-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614547#comment-16614547
 ] 

Hudson commented on YARN-8720:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14956 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14956/])
YARN-8720. CapacityScheduler does not enforce max resource allocation (wwei: 
rev f1a893fdbc2dbe949cae786f08bdb2651b88d673)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAppManager.java


> CapacityScheduler does not enforce max resource allocation check at queue 
> level
> ---
>
> Key: YARN-8720
> URL: https://issues.apache.org/jira/browse/YARN-8720
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: YARN-8720.001.patch, YARN-8720.002.patch
>






[jira] [Commented] (YARN-8720) CapacityScheduler does not enforce max resource allocation check at queue level

2018-09-14 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614537#comment-16614537
 ] 

Weiwei Yang commented on YARN-8720:
---

Hi [~tarunparimi]

Sure, it would be nice if you could help provide a patch for branch-2.8.

Meanwhile, let me also cherry-pick this to branch-2.9.

Thanks

> CapacityScheduler does not enforce max resource allocation check at queue 
> level
> ---
>
> Key: YARN-8720
> URL: https://issues.apache.org/jira/browse/YARN-8720
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: YARN-8720.001.patch, YARN-8720.002.patch
>






[jira] [Commented] (YARN-8720) CapacityScheduler does not enforce max resource allocation check at queue level

2018-09-14 Thread Tarun Parimi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614534#comment-16614534
 ] 

Tarun Parimi commented on YARN-8720:


[~cheersyang], [~sunilg], do you think we also need a patch for branch-2.8 and
earlier versions, since DefaultAMSProcessor, introduced in YARN-6776, is not
present there? I can attach a branch-2.8 patch that makes the change in the
ApplicationMasterService class instead.

> CapacityScheduler does not enforce max resource allocation check at queue 
> level
> ---
>
> Key: YARN-8720
> URL: https://issues.apache.org/jira/browse/YARN-8720
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: YARN-8720.001.patch, YARN-8720.002.patch
>






[jira] [Commented] (YARN-8720) CapacityScheduler does not enforce max resource allocation check at queue level

2018-09-14 Thread Tarun Parimi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614529#comment-16614529
 ] 

Tarun Parimi commented on YARN-8720:


Thanks for the reviews, [~cheersyang] and [~sunilg].

> CapacityScheduler does not enforce max resource allocation check at queue 
> level
> ---
>
> Key: YARN-8720
> URL: https://issues.apache.org/jira/browse/YARN-8720
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: YARN-8720.001.patch, YARN-8720.002.patch
>






[jira] [Commented] (YARN-8720) CapacityScheduler does not enforce max resource allocation check at queue level

2018-09-14 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614528#comment-16614528
 ] 

Weiwei Yang commented on YARN-8720:
---

Pushed to trunk and cherry-picked to all 3.x branch lines. Thanks for the
contribution, [~tarunparimi], and for the review, [~sunilg].

> CapacityScheduler does not enforce max resource allocation check at queue 
> level
> ---
>
> Key: YARN-8720
> URL: https://issues.apache.org/jira/browse/YARN-8720
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: YARN-8720.001.patch, YARN-8720.002.patch
>






[jira] [Updated] (YARN-8720) CapacityScheduler does not enforce max resource allocation check at queue level

2018-09-14 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8720:
--
Fix Version/s: 3.0.4

> CapacityScheduler does not enforce max resource allocation check at queue 
> level
> ---
>
> Key: YARN-8720
> URL: https://issues.apache.org/jira/browse/YARN-8720
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: YARN-8720.001.patch, YARN-8720.002.patch
>






[jira] [Updated] (YARN-8715) Make allocation tags in the placement spec optional for node-attributes

2018-09-14 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8715:
--
Attachment: YARN-8715.001.patch

> Make allocation tags in the placement spec optional for node-attributes
> ---
>
> Key: YARN-8715
> URL: https://issues.apache.org/jira/browse/YARN-8715
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8715.001.patch
>
>
> YARN-7863 adds support for specifying constraints that target
> node-attributes, including support in the distributed shell, but the
> placement spec still requires {{allocationTags=numOfContainers}}. We should
> make this optional, as it is not required for node-attribute expressions.
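
For reference, node-attribute constraints make no use of allocation tags at
the API level, which is why the tag portion of the spec can be optional. A
rough sketch with the PlacementConstraints DSL (the attribute name and value
"host-type"/"gpu" are made up for illustration):

{code:java}
// Sketch only: a tag-based constraint vs. an attribute-based one, to show
// that allocation tags are not inherent to node-attribute expressions.
import org.apache.hadoop.yarn.api.records.NodeAttributeOpCode;
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;

import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetIn;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetNodeAttribute;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.nodeAttribute;

public class ConstraintSketch {
  public static void main(String[] args) {
    // Tag-based constraint: allocation tags are central to the expression.
    PlacementConstraint tagBased =
        targetIn(NODE, allocationTag("hbase-m")).build();

    // Attribute-based constraint: no allocation tag appears anywhere.
    PlacementConstraint attributeBased =
        targetNodeAttribute(NODE, NodeAttributeOpCode.EQ,
            nodeAttribute("host-type", "gpu")).build();

    System.out.println(tagBased);
    System.out.println(attributeBased);
  }
}
{code}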






[jira] [Commented] (YARN-8772) Annotation javax.annotation.Generated has moved

2018-09-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614417#comment-16614417
 ] 

Hudson commented on YARN-8772:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14955 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14955/])
YARN-8772. Annotation javax.annotation.Generated has moved (aajisaka: rev 
568ebecdf49d0919db1a8d856043c10b76326c34)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/PlacementScope.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/ReadinessCheck.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/PlacementConstraint.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/ServiceStatus.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/Error.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/PlacementType.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/Configuration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/ServiceState.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/Artifact.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/Component.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/ResourceInformation.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/KerberosPrincipal.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/Resource.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/ConfigFile.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/Container.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/PlacementPolicy.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/Service.java


> Annotation javax.annotation.Generated has moved
> ---
>
> Key: YARN-8772
> URL: https://issues.apache.org/jira/browse/YARN-8772
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.1.1
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: YARN-8772.patch
>
>
> YARN compilation with Java 11 fails because the annotation
> javax.annotation.Generated has moved; it is now
> javax.annotation.processing.Generated. A simple substitution would break
> compilation with older JDKs, so it seems best to remove the annotations,
> which are documentation only, not functional.
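
A minimal illustration of the incompatibility (the class name and generator
value below are illustrative, not taken from the patch):

{code:java}
// Compiles on Java 8, but fails on Java 11: javax.annotation.Generated
// belonged to the java.xml.ws.annotation module, which was removed in 11.
import javax.annotation.Generated;

// The Java 9+ replacement does not exist on Java 8, so a plain substitution
// breaks older JDKs instead:
// import javax.annotation.processing.Generated;

@Generated("swagger-codegen")
public class GeneratedRecordExample {
  // Dropping the annotation altogether keeps the class compiling on both
  // Java 8 and Java 11; the marker is documentation only.
}
{code}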






[jira] [Updated] (YARN-8720) CapacityScheduler does not enforce max resource allocation check at queue level

2018-09-14 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8720:
--
Summary: CapacityScheduler does not enforce max resource allocation check 
at queue level  (was: CapacityScheduler does not enforce 
yarn.scheduler.capacity.<queue-path>.maximum-allocation-mb/vcores when 
configured)

> CapacityScheduler does not enforce max resource allocation check at queue 
> level
> ---
>
> Key: YARN-8720
> URL: https://issues.apache.org/jira/browse/YARN-8720
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8720.001.patch, YARN-8720.002.patch
>






[jira] [Commented] (YARN-8772) Annotation javax.annotation.Generated has moved

2018-09-14 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614405#comment-16614405
 ] 

Akira Ajisaka commented on YARN-8772:
-

LGTM, +1

> Annotation javax.annotation.Generated has moved
> ---
>
> Key: YARN-8772
> URL: https://issues.apache.org/jira/browse/YARN-8772
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.1.1
>Reporter: Andrew Purtell
>Priority: Minor
> Attachments: YARN-8772.patch
>






[jira] [Assigned] (YARN-8772) Annotation javax.annotation.Generated has moved

2018-09-14 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned YARN-8772:
---

Assignee: Andrew Purtell

> Annotation javax.annotation.Generated has moved
> ---
>
> Key: YARN-8772
> URL: https://issues.apache.org/jira/browse/YARN-8772
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.1.1
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Attachments: YARN-8772.patch
>


