[jira] [Commented] (YARN-1151) Ability to configure auxiliary services from HDFS-based JAR files

2018-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429244#comment-16429244
 ] 

Hudson commented on YARN-1151:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13937 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13937/])
YARN-1151. Ability to configure auxiliary services from HDFS-based JAR (wangda: 
rev 00ebec89f101347a5da44657e388b30c57ed9deb)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/AuxServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestAuxServices.java


> Ability to configure auxiliary services from HDFS-based JAR files
> -
>
> Key: YARN-1151
> URL: https://issues.apache.org/jira/browse/YARN-1151
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.1.0-beta, 2.9.0
>Reporter: john lilley
>Assignee: Xuan Gong
>Priority: Major
>  Labels: auxiliary-service, yarn
> Fix For: 3.2.0
>
> Attachments: YARN-1151.1.patch, YARN-1151.2.patch, YARN-1151.3.patch, 
> YARN-1151.4.patch, YARN-1151.5.patch, YARN-1151.6.patch, 
> YARN-1151.branch-2.poc.2.patch, YARN-1151.branch-2.poc.3.patch, 
> YARN-1151.branch-2.poc.patch, [YARN-1151] [Design] Configure auxiliary 
> services from HDFS-based JAR files.pdf
>
>
> I would like to install an auxiliary service in Hadoop YARN without actually 
> installing files/services on every node in the system.  Discussions on the 
> user@ list indicate that this is not easily done.  The reason we want an 
> auxiliary service is that our application has some persistent-data components 
> that are not appropriate for HDFS.  In fact, they are somewhat analogous to 
> the mapper output of MapReduce's shuffle, which is what led me to 
> auxiliary-services in the first place.  It would be much easier if we could 
> just place our service's JARs in HDFS.
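As an illustration of what the committed change enables, below is a minimal yarn-site.xml sketch for a NodeManager auxiliary service whose implementation JAR is pulled from HDFS rather than pre-installed on every node. The service name, class, and path are made up, and the property keys (in particular remote-classpath) are assumptions that should be checked against the patch and updated documentation.

{code:xml}
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>my_aux_service</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.my_aux_service.class</name>
  <value>com.example.MyAuxService</value>
</property>
<!-- JAR localized from HDFS instead of being installed on every node (assumed key) -->
<property>
  <name>yarn.nodemanager.aux-services.my_aux_service.remote-classpath</name>
  <value>hdfs:///apps/aux/my-aux-service.jar</value>
</property>
{code}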



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8048) Support auto-spawning of admin configured services during bootstrap of rm/apiserver

2018-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429243#comment-16429243
 ] 

Hudson commented on YARN-8048:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13937 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13937/])
YARN-8048. Support auto-spawning of admin configured services during (wangda: 
rev d4e63ccca0763b452e4a0169dd932b3f32066281)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/test/resources/users/sync/user2/example-app1.yarnfile
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/test/resources/users/sync/user1/example-app2.yarnfile
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/conf/YarnServiceConf.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/service/SystemServiceManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/test/resources/users/sync/user1/example-app3.json
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/client/TestSystemServiceImpl.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestSystemServiceManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/SystemServiceManagerImpl.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/test/resources/users/sync/user2/example-app2.yarnfile
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/service/package-info.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/test/resources/users/sync/user1/example-app1.yarnfile
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/pom.xml


> Support auto-spawning of admin configured services during bootstrap of 
> rm/apiserver
> ---
>
> Key: YARN-8048
> URL: https://issues.apache.org/jira/browse/YARN-8048
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8048.001.patch, YARN-8048.002.patch, 
> YARN-8048.003.patch, YARN-8048.004.patch, YARN-8048.005.patch, 
> YARN-8048.006.patch
>
>
> Goal is to support auto-spawning of admin-configured services during 
> bootstrap of the resourcemanager/apiserver. 
> *Requirement:* Some services may need to be consumed by YARN itself, e.g. 
> HBase for ATSv2. Rather than depending on a user-installed HBase (the user 
> may not want to install HBase at all), running an HBase app on YARN helps 
> ATSv2 in such cases.
> Before the YARN cluster is started, the admin configures these service specs 
> and places them in a common location in HDFS. At RM/apiserver bootstrap, 
> these services are submitted.
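For illustration, a hedged sketch of the configuration the description implies: an HDFS directory of JSON service specs (the .yarnfile files in the commit above) that the RM/apiserver scans and submits at bootstrap. The property name is taken from YarnServiceConf but should be verified against the patch, and the path and layout here are assumptions.

{code:xml}
<!-- HDFS location scanned at RM/apiserver bootstrap; specs are expected under
     a per-user layout such as <dir>/sync/<user>/<service>.yarnfile (assumed,
     mirroring the test resources above) -->
<property>
  <name>yarn.service.system-service.dir</name>
  <value>hdfs:///services/system-services</value>
</property>
{code}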



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8123) Skip compiling old hamlet package when the Java version is 10 or upper

2018-04-06 Thread Dinesh Chitlangia (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429227#comment-16429227
 ] 

Dinesh Chitlangia edited comment on YARN-8123 at 4/7/18 4:22 AM:
-

[~ajisakaa] - I think we can use the Range Specification to do that.

 
{code:xml|title=hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml|borderStyle=solid}
<profile>
  <id>java10</id>
  <activation>
    <jdk>[10,)</jdk>
  </activation>
</profile>
{code}


was (Author: dineshchitlangia):
[~ajisakaa] - I think we can use the Range Specification to do that.

 

{{{code:title=hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml|borderStyle=solid}
 }}

{{<profile>}}
{{ <id>java10</id>}}
{{  <activation>}}
{{   <jdk>[10,)</jdk>}}
{{  </activation>}}
{{</profile>}}

{{{code}}}







 

> Skip compiling old hamlet package when the Java version is 10 or upper
> --
>
> Key: YARN-8123
> URL: https://issues.apache.org/jira/browse/YARN-8123
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp
> Environment: Java 10 or upper
>Reporter: Akira Ajisaka
>Priority: Major
>  Labels: newbie
>
> HADOOP-11423 skipped compiling old hamlet package when the Java version is 9, 
> however, it is not skipped with Java 10+. We need to fix it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8123) Skip compiling old hamlet package when the Java version is 10 or upper

2018-04-06 Thread Dinesh Chitlangia (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429227#comment-16429227
 ] 

Dinesh Chitlangia edited comment on YARN-8123 at 4/7/18 4:21 AM:
-

[~ajisakaa] - I think we can use the Range Specification to do that.

 

{{{code:title=hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml|borderStyle=solid}
 }}

{{<profile>}}
{{ <id>java10</id>}}
{{  <activation>}}
{{   <jdk>[10,)</jdk>}}
{{  </activation>}}
{{</profile>}}

{{{code}}}







 


was (Author: dineshchitlangia):
[~ajisakaa] - I think we can use the Range Specification to do that.
*hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml*

<profile>
  <id>java10</id>
  <activation>
    <jdk>[10,)</jdk>
  </activation>
</profile>

> Skip compiling old hamlet package when the Java version is 10 or upper
> --
>
> Key: YARN-8123
> URL: https://issues.apache.org/jira/browse/YARN-8123
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp
> Environment: Java 10 or upper
>Reporter: Akira Ajisaka
>Priority: Major
>  Labels: newbie
>
> HADOOP-11423 skipped compiling old hamlet package when the Java version is 9, 
> however, it is not skipped with Java 10+. We need to fix it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8123) Skip compiling old hamlet package when the Java version is 10 or upper

2018-04-06 Thread Dinesh Chitlangia (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429227#comment-16429227
 ] 

Dinesh Chitlangia commented on YARN-8123:
-

[~ajisakaa] - I think we can use the Range Specification to do that.
*hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml*

<profile>
  <id>java10</id>
  <activation>
    <jdk>[10,)</jdk>
  </activation>
</profile>

> Skip compiling old hamlet package when the Java version is 10 or upper
> --
>
> Key: YARN-8123
> URL: https://issues.apache.org/jira/browse/YARN-8123
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp
> Environment: Java 10 or upper
>Reporter: Akira Ajisaka
>Priority: Major
>  Labels: newbie
>
> HADOOP-11423 skipped compiling old hamlet package when the Java version is 9, 
> however, it is not skipped with Java 10+. We need to fix it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8124) Service Application Master log file can't be found.

2018-04-06 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429179#comment-16429179
 ] 

Rohith Sharma K S commented on YARN-8124:
-

I am seeing this error on OSX with the sample sleep service. I also see from 
the process status that -DLOG_DIR is set to the container userlog dir, so I'm 
not sure why it isn't resolving. Need to debug this more. 

> Service Application Master log file can't be found. 
> 
>
> Key: YARN-8124
> URL: https://issues.apache.org/jira/browse/YARN-8124
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Priority: Critical
>
> It is observed that the service AM log file can't be found in the log folder. 
> On inspection, _yarnservice-log4j.properties_ has the entry 
> log4j.appender.amlog.File=*${LOG_DIR}/serviceam.log*, where LOG_DIR is not 
> resolving. 
> When the value is changed to log4j.appender.amlog.File=*./serviceam.log*, the 
> log is visible. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429141#comment-16429141
 ] 

genericqa commented on YARN-7574:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 17s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 0 unchanged - 489 fixed = 1 total (was 489) 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
57s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 18s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7574 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917917/YARN-7574.11.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 24a1c151564e 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 024d7c0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/20260/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| compile | 

[jira] [Commented] (YARN-8110) AMRMProxy recover should catch for all throwable to avoid premature exit

2018-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429140#comment-16429140
 ] 

Hudson commented on YARN-8110:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13936 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13936/])
YARN-8110. AMRMProxy recover should catch for all throwable to avoid (subru: 
rev 00905efab22edd9857e0a3828c201bf70f03cb96)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/AMRMProxyService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestAMRMProxyService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/BaseAMRMProxyTest.java


> AMRMProxy recover should catch for all throwable to avoid premature exit
> 
>
> Key: YARN-8110
> URL: https://issues.apache.org/jira/browse/YARN-8110
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-8110.v1.patch
>
>
> In NM work-preserving restart, when AMRMProxy recovers applications one by 
> one, the current code only catches IOException. If one app recovery throws 
> something else (e.g. RuntimeException), it will fail the entire AMRMProxy 
> recovery. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8110) AMRMProxy recover should catch for all throwable to avoid premature exit

2018-04-06 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-8110:
-
Summary: AMRMProxy recover should catch for all throwable to avoid 
premature exit  (was: AMRMProxy recover should catch for all throwable retrying 
to recover apps)

> AMRMProxy recover should catch for all throwable to avoid premature exit
> 
>
> Key: YARN-8110
> URL: https://issues.apache.org/jira/browse/YARN-8110
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-8110.v1.patch
>
>
> In NM work-preserving restart, when AMRMProxy recovers applications one by 
> one, the current code only catches IOException. If one app recovery throws 
> something else (e.g. RuntimeException), it will fail the entire AMRMProxy 
> recovery. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-04-06 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429116#comment-16429116
 ] 

Suma Shivaprasad commented on YARN-7574:


Thanks [~sunilg]. Replies inline.

{quote}1. RMAppAttemptImpl changes seems like we could do in RMAppImpl itself 
during Start or Recover call flow. Any advantage in doing it in 
RMAppAttemptImpl?{quote}

The scheduler creates it during the addApplication event, so we won't be able 
to do it in the RMAppImpl start flow.

{quote}2. GuaranteedOrZeroCapacityOverTimePolicy#init code comments are not 
complete {{//Should this be used inste}}{quote}

Removed the comment.

{quote}3. In same above method {{Set parentQueueLabels = 
parentQueue.getNodeLabelsForQueue();}} could be outside for loop.{quote}

Fixed.

{quote}4. In {{initializeLeafQueueTemplate}}, all calculations are done per 
label. For non-exclusive label, given there is a demand from default label, it 
can borrow resource from other labels. How could we handle this here? cc/ 
[~leftnoteasy]{quote}

Discussed with Wangda. We can take this up as a follow-up item since it will 
involve major changes to the current policy implementation.

{quote}5. You might need to review the checkstyle failures as possible.{quote}

Fixed most of them.

{quote}6. In TestAppManager, we pass null now. I think it should be an empty 
string, right?{quote}

Fixed.

Also added a dispatcher event to mark the application attempt, and subsequently 
the application, as failed if the queue is not found. However, there are 
multiple earlier validations of queue existence before we reach this point, so 
this code block is not hit in the normal flow.

 

 

 

 

> Add support for Node Labels on Auto Created Leaf Queue Template
> ---
>
> Key: YARN-7574
> URL: https://issues.apache.org/jira/browse/YARN-7574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7574.1.patch, YARN-7574.10.patch, 
> YARN-7574.11.patch, YARN-7574.2.patch, YARN-7574.3.patch, YARN-7574.4.patch, 
> YARN-7574.5.patch, YARN-7574.6.patch, YARN-7574.7.patch, YARN-7574.8.patch, 
> YARN-7574.9.patch
>
>
> YARN-7473 adds support for auto-created leaf queues to inherit node label 
> capacities from parent queues. However, there is no support in the leaf queue 
> template for configuring different capacities for different node labels. 
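For context, a hedged sketch of the kind of configuration this JIRA aims to allow, i.e. per-node-label capacities on the leaf queue template of a managed parent queue. The label-related template property names are assumptions to be confirmed against the final patch; queue and label names are made up.

{code:xml}
<property>
  <name>yarn.scheduler.capacity.root.parent.auto-create-child-queue.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.parent.leaf-queue-template.capacity</name>
  <value>10</value>
</property>
<!-- per-label capacity on the template (assumed keys) -->
<property>
  <name>yarn.scheduler.capacity.root.parent.leaf-queue-template.accessible-node-labels</name>
  <value>gpu</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.parent.leaf-queue-template.accessible-node-labels.gpu.capacity</name>
  <value>50</value>
</property>
{code}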



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-04-06 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7574:
---
Attachment: YARN-7574.11.patch

> Add support for Node Labels on Auto Created Leaf Queue Template
> ---
>
> Key: YARN-7574
> URL: https://issues.apache.org/jira/browse/YARN-7574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7574.1.patch, YARN-7574.10.patch, 
> YARN-7574.11.patch, YARN-7574.2.patch, YARN-7574.3.patch, YARN-7574.4.patch, 
> YARN-7574.5.patch, YARN-7574.6.patch, YARN-7574.7.patch, YARN-7574.8.patch, 
> YARN-7574.9.patch
>
>
> YARN-7473 adds support for auto-created leaf queues to inherit node label 
> capacities from parent queues. However, there is no support in the leaf queue 
> template for configuring different capacities for different node labels. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8120) JVM can crash with SIGSEGV when exiting due to custom leveldb logger

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429055#comment-16429055
 ] 

genericqa commented on YARN-8120:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
1s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
14s{color} | {color:green} root: The patch generated 0 new + 87 unchanged - 1 
fixed = 87 total (was 88) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
16s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 65m 
20s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-mapreduce-client-shuffle in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
20s{color} | {color:green} hadoop-mapreduce-client-hs in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}223m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8120 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917879/YARN-8120.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  

[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-04-06 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429031#comment-16429031
 ] 

Eric Yang commented on YARN-7221:
-

Hi [~ebadger] [~jlowe], do we agree on the last change to check submitting user 
for sudo privileges instead of 
yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user?

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch, 
> YARN-7221.006.patch, YARN-7221.007.patch, YARN-7221.008.patch, 
> YARN-7221.009.patch, YARN-7221.010.patch, YARN-7221.011.patch, 
> YARN-7221.012.patch, YARN-7221.013.patch, YARN-7221.014.patch, 
> YARN-7221.015.patch, YARN-7221.016.patch, YARN-7221.017.patch, 
> YARN-7221.018.patch, YARN-7221.019.patch, YARN-7221.020.patch
>
>
> When a docker container runs with privileges, the majority use case is to have 
> some program start as root and then drop privileges to another user, e.g. 
> httpd starting privileged to bind to port 80, then dropping privileges to the 
> www user.  
> # We should add a security check for submitting users, to verify they have 
> "sudo" access before running a privileged container.  
> # We should remove --user=uid:gid for privileged containers.  
>  
> Docker can be launched with both --privileged=true and the --user=uid:gid 
> flag. With this parameter combination, the user will not be able to become 
> root; every docker exec command is dropped to the uid:gid user instead of 
> being granted privileges. The user can still gain root privileges if the 
> container file system contains files that grant extra power, but such an 
> image is considered dangerous, and a non-privileged user can launch a 
> container with special bits to acquire the same level of root power. Hence, 
> we lose control of which images should run with --privileged and who has 
> sudo rights to use privileged container images. As a result, we should check 
> for sudo access and then decide whether to pass --privileged=true OR 
> --user=uid:gid. This will avoid leading developers down the wrong path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-06 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16428917#comment-16428917
 ] 

Eric Badger commented on YARN-8064:
---

[~shaneku...@gmail.com], yea I only omitted it the first time around because I 
started to change the method and it looked like a pretty big pain. I would've 
kept going, but I misread it as the deprecated {{DockerContainerExecutor}} and 
didn't think it was worthwhile to change a whole bunch of deprecated code. I 
forgot that that had been removed from trunk. Now that I have to fix 
{{DockerCommandExecutor}}, though, it looks like this patch will get a bit 
uglier. Hopefully I can find a decent way to get access to the nmContext and 
container in {{DockerCommandExecutor}}

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch, 
> YARN-8064.003.patch, YARN-8064.004.patch, YARN-8064.005.patch
>
>
> Currently all of the docker command files are being put into 
> {{hadoop.tmp.dir}}, which doesn't get cleaned up. So, eventually all of the 
> inodes will fill up and no more tasks will be able to run



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-06 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16428911#comment-16428911
 ] 

Shane Kumpf commented on YARN-8064:
---

Thanks for the patch [~ebadger]! I'm glad to see this being addressed.

Adding to [~Jim_Brennan]'s comment, the current patch isn't relocating all of 
the _.cmd_ files to _nmPrivate_ because of the differences between the two 
{{DockerClient#writeCommandToTempFile}} methods. Anything using 
{{DockerCommandExecutor}} will still put the _.cmd_ files in 
{{hadoop.tmp.dir}}. I think we'll need to consolidate these two methods and fix 
the callers.

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch, 
> YARN-8064.003.patch, YARN-8064.004.patch, YARN-8064.005.patch
>
>
> Currently all of the docker command files are being put into 
> {{hadoop.tmp.dir}}, which doesn't get cleaned up. So, eventually all of the 
> inodes will fill up and no more tasks will be able to run



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8125) YARNUIV2 does not failover from standby to active endpoint like previous YARN UI

2018-04-06 Thread Phil Zampino (JIRA)
Phil Zampino created YARN-8125:
--

 Summary: YARNUIV2 does not failover from standby to active 
endpoint like previous YARN UI
 Key: YARN-8125
 URL: https://issues.apache.org/jira/browse/YARN-8125
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
Reporter: Phil Zampino


If the YARN UI is accessed via the standby resource manager endpoint, it 
automatically redirects the requests to the active resource manager endpoint. 
YARNUIV2 should behave the same way.

Apache Knox 1.0.0 introduced the ability to dynamically determine proxied 
RESOURCEMANAGER and YARNUI service endpoints based on YARN configuration from 
Ambari. This functionality works for RM and YARNUI because even though the YARN 
config may reference the standby RM endpoint, requests are automatically 
redirected to the active endpoint.

If YARNUIV2 behaves differently, then Knox will not be able to support its own 
dynamic configuration behavior when proxying YARNUIV2.

KNOX-1212 adds the integration with Knox, but KNOX-1236 is blocked by this 
issue.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-06 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16428863#comment-16428863
 ] 

Eric Badger commented on YARN-8064:
---

[~Jim_Brennan], good catch. I missed this as I thought it was a remnant of the 
old {{DockerContainerExecutor}}. However, it's a call from 
{{DockerCommandExecutor}}. So yes, I should fix that. I will put up another 
patch.

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch, 
> YARN-8064.003.patch, YARN-8064.004.patch, YARN-8064.005.patch
>
>
> Currently all of the docker command files are being put into 
> {{hadoop.tmp.dir}}, which doesn't get cleaned up. So, eventually all of the 
> inodes will fill up and no more tasks will be able to run



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-06 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16428841#comment-16428841
 ] 

Jim Brennan commented on YARN-8064:
---

[~ebadger], one question - why are we retaining the old version of 
writeCommandToTempFile(), which is still being used by executeDockerCommand()?  
Might be good to have comments that describe under which conditions each 
version should be used.

 

 

 

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch, 
> YARN-8064.003.patch, YARN-8064.004.patch, YARN-8064.005.patch
>
>
> Currently all of the docker command files are being put into 
> {{hadoop.tmp.dir}}, which doesn't get cleaned up. So, eventually all of the 
> inodes will fill up and no more tasks will be able to run



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5881) [Umbrella] Enable configuration of queue capacity in terms of absolute resources

2018-04-06 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-5881:
--
Summary: [Umbrella] Enable configuration of queue capacity in terms of 
absolute resources  (was: Enable configuration of queue capacity in terms of 
absolute resources)

> [Umbrella] Enable configuration of queue capacity in terms of absolute 
> resources
> 
>
> Key: YARN-5881
> URL: https://issues.apache.org/jira/browse/YARN-5881
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Sean Po
>Assignee: Sunil G
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: 
> YARN-5881.Support.Absolute.Min.Max.Resource.In.Capacity.Scheduler.design-doc.v1.pdf,
>  YARN-5881.v0.patch, YARN-5881.v1.patch
>
>
> Currently, Yarn RM supports the configuration of queue capacity in terms of a 
> proportion to cluster capacity. In the context of Yarn being used as a public 
> cloud service, it makes more sense if queues can be configured absolutely. 
> This will allow administrators to set usage limits more concretely and 
> simplify customer expectations for cluster allocation.
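For illustration, a minimal capacity-scheduler.xml sketch of configuring a queue with absolute resources rather than percentages, following the syntax documented for this feature; the queue name and values are made up.

{code:xml}
<property>
  <name>yarn.scheduler.capacity.root.sales.capacity</name>
  <value>[memory=10240,vcores=10]</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.sales.maximum-capacity</name>
  <value>[memory=20480,vcores=20]</value>
</property>
{code}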



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7117) [Umbrella] Capacity Scheduler: Support Auto Creation of Leaf Queues While Doing Queue Mapping

2018-04-06 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-7117:
--
Summary: [Umbrella] Capacity Scheduler: Support Auto Creation of Leaf 
Queues While Doing Queue Mapping  (was: Capacity Scheduler: Support Auto 
Creation of Leaf Queues While Doing Queue Mapping)

> [Umbrella] Capacity Scheduler: Support Auto Creation of Leaf Queues While 
> Doing Queue Mapping
> -
>
> Key: YARN-7117
> URL: https://issues.apache.org/jira/browse/YARN-7117
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: 
> YARN-7117.Capacity.Scheduler.Support.Auto.Creation.Of.Leaf.Queue.pdf, 
> YARN-7117.poc.1.patch, YARN-7117.poc.patch, YARN-7117_Workflow.pdf
>
>
> Currently the Capacity Scheduler doesn't support auto-creation of queues when 
> doing queue mapping. We see more and more use cases with complex queue 
> mapping policies configured to handle application-to-queue mapping. 
> The most common use case of CapacityScheduler queue mapping is to create one 
> queue for each user/group. However, {{capacity-scheduler.xml}} must be updated 
> and {{RMAdmin:refreshQueues}} run whenever a new user/group onboards. One 
> option to solve the problem is to automatically create queues when a new 
> user/group arrives.
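For illustration, a capacity-scheduler.xml sketch of the mapping-plus-auto-creation flow described above; queue names are made up and the keys should be checked against the CapacityScheduler documentation.

{code:xml}
<!-- map each user to a leaf queue named after them under the "users" parent -->
<property>
  <name>yarn.scheduler.capacity.queue-mappings</name>
  <value>u:%user:users.%user</value>
</property>
<!-- create the leaf queue automatically when a new user's first app arrives -->
<property>
  <name>yarn.scheduler.capacity.root.users.auto-create-child-queue.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.users.leaf-queue-template.capacity</name>
  <value>5</value>
</property>
{code}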



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-6223) [Umbrella] Natively support GPU configuration/discovery/scheduling/isolation on YARN

2018-04-06 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli reopened YARN-6223:
---

> [Umbrella] Natively support GPU configuration/discovery/scheduling/isolation 
> on YARN
> 
>
> Key: YARN-6223
> URL: https://issues.apache.org/jira/browse/YARN-6223
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-6223.Natively-support-GPU-on-YARN-v1.pdf, 
> YARN-6223.wip.1.patch, YARN-6223.wip.2.patch, YARN-6223.wip.3.patch
>
>
> As a variety of workloads move to YARN, including machine learning / deep 
> learning which can be sped up by leveraging GPU computation power, workloads 
> should be able to request GPU from YARN as simply as CPU and memory.
> *To make a complete GPU story, we should support the following pieces:*
> 1) GPU discovery/configuration: the admin can either configure GPU resources 
> and architectures on each node, or, more advanced, the NodeManager can 
> automatically discover GPU resources and architectures and report them to the 
> ResourceManager. 
> 2) GPU scheduling: the YARN scheduler should account for GPU as a resource 
> type just like CPU and memory.
> 3) GPU isolation/monitoring: once a task is launched with GPU resources, the 
> NodeManager should properly isolate and monitor the task's resource usage.
> For #2, YARN-3926 can support it natively. For #3, YARN-3611 has introduced 
> an extensible framework to support isolation for different resource types and 
> different runtimes.
> *Related JIRAs:*
> There are a couple of JIRAs (YARN-4122/YARN-5517) filed with similar goals but 
> different solutions:
> For scheduling:
> - YARN-4122/YARN-5517 both add a new GPU resource type to the Resource 
> protocol instead of leveraging YARN-3926.
> For isolation:
> - YARN-4122 proposes using CGroups for isolation, which cannot solve the 
> problems listed at 
> https://github.com/NVIDIA/nvidia-docker/wiki/GPU-isolation#challenges such as 
> minor device number mapping, loading the nvidia_uvm module, and mismatches of 
> CUDA/driver versions.
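For reference, a hedged sketch of the NodeManager-side configuration shape this work leads to (roughly as documented for GPU support in Hadoop 3.1; exact keys and values should be checked against that documentation):

{code:xml}
<!-- enable the GPU resource plugin on the NodeManager -->
<property>
  <name>yarn.nodemanager.resource-plugins</name>
  <value>yarn.io/gpu</value>
</property>
<!-- let the NM auto-discover GPUs instead of listing devices explicitly -->
<property>
  <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
  <value>auto</value>
</property>
{code}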



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-5881) Enable configuration of queue capacity in terms of absolute resources

2018-04-06 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli reopened YARN-5881:
---

> Enable configuration of queue capacity in terms of absolute resources
> -
>
> Key: YARN-5881
> URL: https://issues.apache.org/jira/browse/YARN-5881
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Sean Po
>Assignee: Sunil G
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: 
> YARN-5881.Support.Absolute.Min.Max.Resource.In.Capacity.Scheduler.design-doc.v1.pdf,
>  YARN-5881.v0.patch, YARN-5881.v1.patch
>
>
> Currently, Yarn RM supports the configuration of queue capacity in terms of a 
> proportion to cluster capacity. In the context of Yarn being used as a public 
> cloud service, it makes more sense if queues can be configured absolutely. 
> This will allow administrators to set usage limits more concretely and 
> simplify customer expectations for cluster allocation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-6223) [Umbrella] Natively support GPU configuration/discovery/scheduling/isolation on YARN

2018-04-06 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli resolved YARN-6223.
---
Resolution: Fixed

> [Umbrella] Natively support GPU configuration/discovery/scheduling/isolation 
> on YARN
> 
>
> Key: YARN-6223
> URL: https://issues.apache.org/jira/browse/YARN-6223
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-6223.Natively-support-GPU-on-YARN-v1.pdf, 
> YARN-6223.wip.1.patch, YARN-6223.wip.2.patch, YARN-6223.wip.3.patch
>
>
> As a variety of workloads move to YARN, including machine learning / deep 
> learning which can be sped up by leveraging GPU computation power, workloads 
> should be able to request GPU from YARN as simply as CPU and memory.
> *To make a complete GPU story, we should support the following pieces:*
> 1) GPU discovery/configuration: the admin can either configure GPU resources 
> and architectures on each node, or, more advanced, the NodeManager can 
> automatically discover GPU resources and architectures and report them to the 
> ResourceManager. 
> 2) GPU scheduling: the YARN scheduler should account for GPU as a resource 
> type just like CPU and memory.
> 3) GPU isolation/monitoring: once a task is launched with GPU resources, the 
> NodeManager should properly isolate and monitor the task's resource usage.
> For #2, YARN-3926 can support it natively. For #3, YARN-3611 has introduced 
> an extensible framework to support isolation for different resource types and 
> different runtimes.
> *Related JIRAs:*
> There are a couple of JIRAs (YARN-4122/YARN-5517) filed with similar goals but 
> different solutions:
> For scheduling:
> - YARN-4122/YARN-5517 both add a new GPU resource type to the Resource 
> protocol instead of leveraging YARN-3926.
> For isolation:
> - YARN-4122 proposes using CGroups for isolation, which cannot solve the 
> problems listed at 
> https://github.com/NVIDIA/nvidia-docker/wiki/GPU-isolation#challenges such as 
> minor device number mapping, loading the nvidia_uvm module, and mismatches of 
> CUDA/driver versions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5983) [Umbrella] Support for FPGA as a Resource in YARN

2018-04-06 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli resolved YARN-5983.
---
Resolution: Fixed

> [Umbrella] Support for FPGA as a Resource in YARN
> -
>
> Key: YARN-5983
> URL: https://issues.apache.org/jira/browse/YARN-5983
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-5983-Support-FPGA-resource-on-NM-side_v1.pdf, 
> YARN-5983-implementation-notes.pdf, YARN-5983_end-to-end_test_report.pdf
>
>
> As various big data workloads move to YARN, CPU will eventually no longer 
> scale and heterogeneous systems will become more important. ML/DL has been a 
> rising star in recent years, and applications focused on these areas have to 
> utilize GPUs or FPGAs to boost performance. Hardware vendors such as Intel 
> are also investing in such hardware. It is likely that FPGAs will become as 
> common in data centers as CPUs in the near future.
> So it would be great for YARN, as a resource-managing and scheduling system, 
> to evolve to support this. This JIRA proposes making FPGA a first-class 
> citizen. The changes roughly include:
> 1. FPGA resource detection and heartbeat
> 2. Scheduler changes (YARN-3926 involved)
> 3. FPGA-related preparation and isolation before launching the container
> We know that YARN-3926 is trying to extend the current resource model, but we 
> can still keep some FPGA-related discussion here.
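Analogously to GPU support, the FPGA plugin would be switched on per NodeManager; a hedged sketch, with keys assumed to mirror the GPU ones and to be verified against the FPGA documentation:

{code:xml}
<property>
  <name>yarn.nodemanager.resource-plugins</name>
  <value>yarn.io/fpga</value>
</property>
<!-- auto-discover FPGA devices on the node (assumed key) -->
<property>
  <name>yarn.nodemanager.resource-plugins.fpga.allowed-fpga-devices</name>
  <value>auto</value>
</property>
{code}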



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5881) Enable configuration of queue capacity in terms of absolute resources

2018-04-06 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli resolved YARN-5881.
---
Resolution: Fixed

> Enable configuration of queue capacity in terms of absolute resources
> -
>
> Key: YARN-5881
> URL: https://issues.apache.org/jira/browse/YARN-5881
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Sean Po
>Assignee: Sunil G
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: 
> YARN-5881.Support.Absolute.Min.Max.Resource.In.Capacity.Scheduler.design-doc.v1.pdf,
>  YARN-5881.v0.patch, YARN-5881.v1.patch
>
>
> Currently, Yarn RM supports the configuration of queue capacity in terms of a 
> proportion to cluster capacity. In the context of Yarn being used as a public 
> cloud service, it makes more sense if queues can be configured absolutely. 
> This will allow administrators to set usage limits more concretely and 
> simplify customer expectations for cluster allocation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-5983) [Umbrella] Support for FPGA as a Resource in YARN

2018-04-06 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli reopened YARN-5983:
---

> [Umbrella] Support for FPGA as a Resource in YARN
> -
>
> Key: YARN-5983
> URL: https://issues.apache.org/jira/browse/YARN-5983
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-5983-Support-FPGA-resource-on-NM-side_v1.pdf, 
> YARN-5983-implementation-notes.pdf, YARN-5983_end-to-end_test_report.pdf
>
>
> As various big data workloads move to YARN, CPU will eventually no longer 
> scale and heterogeneous systems will become more important. ML/DL has been a 
> rising star in recent years, and applications focused on these areas have to 
> utilize GPUs or FPGAs to boost performance. Hardware vendors such as Intel 
> are also investing in such hardware. It is likely that FPGAs will become as 
> common in data centers as CPUs in the near future.
> So it would be great for YARN, as a resource-managing and scheduling system, 
> to evolve to support this. This JIRA proposes making FPGA a first-class 
> citizen. The changes roughly include:
> 1. FPGA resource detection and heartbeat
> 2. Scheduler changes (YARN-3926 involved)
> 3. FPGA-related preparation and isolation before launching the container
> We know that YARN-3926 is trying to extend the current resource model, but we 
> can still keep some FPGA-related discussion here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-06 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16428704#comment-16428704
 ] 

Eric Badger commented on YARN-8064:
---

[~jlowe], [~shaneku...@gmail.com], [~Jim_Brennan], I believe this JIRA is ready 
for review as well

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch, 
> YARN-8064.003.patch, YARN-8064.004.patch, YARN-8064.005.patch
>
>
> Currently all of the docker command files are being put into 
> {{hadoop.tmp.dir}}, which doesn't get cleaned up. So, eventually all of the 
> inodes will fill up and no more tasks will be able to run



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16428701#comment-16428701
 ] 

genericqa commented on YARN-8064:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
25s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8064 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917875/YARN-8064.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9ed429434ed4 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 024d7c0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20258/testReport/ |
| Max. process+thread count | 302 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20258/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Docker ".cmd" files should not be put in hadoop.tmp.dir
> 

[jira] [Updated] (YARN-8120) JVM can crash with SIGSEGV when exiting due to custom leveldb logger

2018-04-06 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-8120:
-
Component/s: (was: timelineserver)

Patch that removes the use of a custom logger from leveldb instances that were 
using one.

> JVM can crash with SIGSEGV when exiting due to custom leveldb logger
> 
>
> Key: YARN-8120
> URL: https://issues.apache.org/jira/browse/YARN-8120
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, resourcemanager
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Major
> Attachments: YARN-8120.001.patch
>
>
> The JVM can crash upon exit with a SIGSEGV when leveldb is configured with a 
> custom user logger as is done with LeveldbLogger.  See 
> https://github.com/fusesource/leveldbjni/issues/36 for details.
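For illustration only (this is not the attached patch): the idea is simply to
stop registering a JVM-side logger with the native leveldb library, so native
code never calls back into Java during JVM shutdown. A hedged sketch, assuming
the usual leveldbjni open path:

{code:java}
import java.io.File;
import java.io.IOException;
import org.fusesource.leveldbjni.JniDBFactory;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.Options;

public class LeveldbOpenSketch {
  static DB openWithoutCustomLogger(File storageDir) throws IOException {
    Options options = new Options().createIfMissing(true);
    // Previously a custom logger (e.g. a LeveldbLogger instance) would be
    // attached to the options here; leaving the default native logger in
    // place avoids the callback into the JVM that can SIGSEGV at exit.
    return JniDBFactory.factory.open(storageDir, options);
  }
}
{code}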



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8120) JVM can crash with SIGSEGV when exiting due to custom leveldb logger

2018-04-06 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-8120:
-
Attachment: YARN-8120.001.patch

> JVM can crash with SIGSEGV when exiting due to custom leveldb logger
> 
>
> Key: YARN-8120
> URL: https://issues.apache.org/jira/browse/YARN-8120
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, resourcemanager
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Major
> Attachments: YARN-8120.001.patch
>
>
> The JVM can crash upon exit with a SIGSEGV when leveldb is configured with a 
> custom user logger as is done with LeveldbLogger.  See 
> https://github.com/fusesource/leveldbjni/issues/36 for details.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8060) Create default readiness check for service components

2018-04-06 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428626#comment-16428626
 ] 

Shane Kumpf edited comment on YARN-8060 at 4/6/18 5:21 PM:
---

Thanks for the patch, [~billie.rinaldi]! This is a much needed check given the 
delay we see with the IP address becoming available. I have tested the IP and 
DNS portions of this readiness check and am getting the desired results.

Few suggestions:
 # There is one case where I think having a default readiness check that 
depends on IP might be an issue, which is when {{\-\-net=none}}. In that case 
the container will never get an IP address. While {{\-\-net=none}} is the only 
case I can come up with, there may be others where this check could be 
problematic. Could we consider a configuration that would allow for disabling 
the default check?
 # With this patch, the container correctly stays in a {{RUNNING_BUT_UNREADY}} 
state until the default readiness check passes, but as a user it's unclear why 
the container is still in that state. Could we add logging to the AM that shows 
the status of the readiness checks?
 # {{ServiceRegistryUtils.registryDNSLookupExists}} could use additional 
comments. Can you elaborate on the need for the second lookup?
 # The service API docs were updated, but the description for the HTTP 
readiness check doesn't mention that the DEFAULT checks will also be executed. 
I'd like to see the DEFAULT check behavior outlined somewhere in the docs.
 # The checkstyle issues look valid if you could address those.


was (Author: shaneku...@gmail.com):
Thanks for the patch, [~billie.rinaldi]! This is a much needed check given the 
delay we see with the IP address becoming available. I have tested the IP and 
DNS portions of this readiness check and am getting the desired results.

Few suggestions:
 # There is one case where I think having a default readiness check that 
depends on IP might be an issue, which is when {{--net=none}}. In that case the 
container will never get an IP address. While {{--net=none}} is the only case I 
can come up with, there may be others where this check could be problematic. 
Could we consider a configuration that would allow for disabling the default 
check?
 # With this patch, the container correctly stays in a RUNNING_BUT_UNREADY 
state until the default readiness check passes, but as a user it's unclear why 
the container is still in that state. Could we add logging to the AM that shows 
the status of the readiness checks?
 # {{ServiceRegistryUtils.registryDNSLookupExists}} could use additional 
comments. Can you elaborate on the need for the second lookup?
 # The service API docs were updated, but the description for the HTTP 
readiness check doesn't mention that the DEFAULT checks will also be executed. 
I'd like to see the DEFAULT check mentioned somewhere in the docs.
 # The checkstyle issues look valid if you could address those.

> Create default readiness check for service components
> -
>
> Key: YARN-8060
> URL: https://issues.apache.org/jira/browse/YARN-8060
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-8060.1.patch
>
>
> It is currently possible for a component instance to have READY status before 
> the AM retrieves an IP for the container. We should make sure the IP has been 
> retrieved before marking the instance as READY.
> This default probe could also have an option to check for a DNS entry for the 
> instance's hostname if a DNS address is provided.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8060) Create default readiness check for service components

2018-04-06 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428626#comment-16428626
 ] 

Shane Kumpf commented on YARN-8060:
---

Thanks for the patch, [~billie.rinaldi]! This is a much needed check given the 
delay we see with the IP address becoming available. I have tested the IP and 
DNS portions of this readiness check and am getting the desired results.

Few suggestions:
 # There is one case where I think having a default readiness check that 
depends on IP might be an issue, which is when {{--net=none}}. In that case the 
container will never get an IP address. While {{--net=none}} is the only case I 
can come up with, there may be others where this check could be problematic. 
Could we consider a configuration that would allow for disabling the default 
check?
 # With this patch, the container correctly stays in a RUNNING_BUT_UNREADY 
state until the default readiness check passes, but as a user it's unclear why 
the container is still in that state. Could we add logging to the AM that shows 
the status of the readiness checks?
 # {{ServiceRegistryUtils.registryDNSLookupExists}} could use additional 
comments. Can you elaborate on the need for the second lookup?
 # The service API docs were updated, but the description for the HTTP 
readiness check doesn't mention that the DEFAULT checks will also be executed. 
I'd like to see the DEFAULT check mentioned somewhere in the docs.
 # The checkstyle issues look valid if you could address those.

> Create default readiness check for service components
> -
>
> Key: YARN-8060
> URL: https://issues.apache.org/jira/browse/YARN-8060
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-8060.1.patch
>
>
> It is currently possible for a component instance to have READY status before 
> the AM retrieves an IP for the container. We should make sure the IP has been 
> retrieved before marking the instance as READY.
> This default probe could also have an option to check for a DNS entry for the 
> instance's hostname if a DNS address is provided.
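A minimal sketch of the idea behind such a default probe (the method and
parameter names below are assumptions for illustration, not the attached
patch): stay unready until an IP has been reported, optionally also requiring
the instance's hostname to resolve in DNS.

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.List;

public class DefaultReadinessSketch {
  static boolean isReady(List<String> containerIPs, String hostname,
      boolean requireDns) {
    if (containerIPs == null || containerIPs.isEmpty()) {
      return false; // the AM has not retrieved an IP for the container yet
    }
    if (requireDns) {
      try {
        InetAddress.getByName(hostname); // hostname must resolve before READY
      } catch (UnknownHostException e) {
        return false;
      }
    }
    return true;
  }
}
{code}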



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-04-06 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428606#comment-16428606
 ] 

Sunil G commented on YARN-7574:
---

Hi [~suma.shivaprasad]

Thanks for the patch. Few comments.

1. The RMAppAttemptImpl changes seem like they could be done in RMAppImpl itself 
during the Start or Recover call flow. Is there any advantage to doing it in 
RMAppAttemptImpl?

2. The GuaranteedOrZeroCapacityOverTimePolicy#init code comments are not complete: 
{{//Should this be used inste}}

3. In the same method, {{Set<String> parentQueueLabels = 
parentQueue.getNodeLabelsForQueue();}} could be moved outside the for loop.

4. In {{initializeLeafQueueTemplate}}, all calculations are done per label. For a 
non-exclusive label, given there is demand from the default label, it can borrow 
resources from other labels. How could we handle this here? cc/ [~leftnoteasy]

5. Please review the checkstyle failures where possible.

6. In TestAppManager, we pass null now. I think it should be an empty string, right?

> Add support for Node Labels on Auto Created Leaf Queue Template
> ---
>
> Key: YARN-7574
> URL: https://issues.apache.org/jira/browse/YARN-7574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7574.1.patch, YARN-7574.10.patch, 
> YARN-7574.2.patch, YARN-7574.3.patch, YARN-7574.4.patch, YARN-7574.5.patch, 
> YARN-7574.6.patch, YARN-7574.7.patch, YARN-7574.8.patch, YARN-7574.9.patch
>
>
> YARN-7473 adds support for auto created leaf queues to inherit node label 
> capacities from parent queues. However, there is no support in the leaf queue 
> template for configuring different capacities for different node labels. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8107) Give an informative message when incorrect format is used in ATSv2 filter attributes

2018-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428595#comment-16428595
 ] 

Hudson commented on YARN-8107:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13935 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13935/])
YARN-8107. Give an informative message when incorrect format is used in 
(haibochen: rev 024d7c08704e6a5fcc1f53a8f56a44c84c8d5fa0)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineParserForCompareExpr.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServicesUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineParserForEqualityExpr.java


> Give an informative message when incorrect format is used in ATSv2 filter 
> attributes
> 
>
> Key: YARN-8107
> URL: https://issues.apache.org/jira/browse/YARN-8107
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: ATSv2
>Reporter: Charan Hebri
>Assignee: Rohith Sharma K S
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 3.0.3
>
> Attachments: YARN-8107.001.patch, YARN-8107.002.patch
>
>
> Using an incorrect format for infofilters, conffilters and metricfilters 
> throws a NPE with no clear message to the caller. This should be tagged as a 
> 400 Bad Request with an informative message. Below is the timeline reader log.
> {noformat}
> 2018-04-02 06:44:10,451 INFO  reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:handleException(173)) - Processed URL 
> /ws/v2/timeline/users/hrt_qa/flows/flow4/runs/1/apps?infofilters=UIDeq but 
> encountered exception (Took 0 ms.)
> 2018-04-02 06:44:10,451 ERROR reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:handleException(188)) - Error while 
> processing REST request
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterUtils.createHBaseFilterList(TimelineFilterUtils.java:276)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.ApplicationEntityReader.constructFilterListBasedOnFilters(ApplicationEntityReader.java:126)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.createFilterList(TimelineEntityReader.java:157)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.readEntities(TimelineEntityReader.java:277)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl.getEntities(HBaseTimelineReaderImpl.java:87)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderManager.getEntities(TimelineReaderManager.java:143)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getEntities(TimelineReaderWebServices.java:605)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getFlowRunApps(TimelineReaderWebServices.java:1962)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at 
> 

[jira] [Updated] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-06 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-8064:
--
Attachment: YARN-8064.005.patch

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch, 
> YARN-8064.003.patch, YARN-8064.004.patch, YARN-8064.005.patch
>
>
> Currently all of the docker command files are being put into 
> {{hadoop.tmp.dir}}, which doesn't get cleaned up. So, eventually all of the 
> inodes will fill up and no more tasks will be able to run



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-06 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428583#comment-16428583
 ] 

Eric Badger commented on YARN-8064:
---

More checkstyle fixes in 005

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch, 
> YARN-8064.003.patch, YARN-8064.004.patch, YARN-8064.005.patch
>
>
> Currently all of the docker command files are being put into 
> {{hadoop.tmp.dir}}, which doesn't get cleaned up. So, eventually all of the 
> inodes will fill up and no more tasks will be able to run



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8107) Give an informative message when incorrect format is used in ATSv2 filter attributes

2018-04-06 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-8107:
-
Summary: Give an informative message when incorrect format is used in ATSv2 
filter attributes  (was: NPE when incorrect format is used in ATSv2 filter 
attributes)

> Give an informative message when incorrect format is used in ATSv2 filter 
> attributes
> 
>
> Key: YARN-8107
> URL: https://issues.apache.org/jira/browse/YARN-8107
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: ATSv2
>Reporter: Charan Hebri
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8107.001.patch, YARN-8107.002.patch
>
>
> Using an incorrect format for infofilters, conffilters and metricfilters 
> throws a NPE with no clear message to the caller. This should be tagged as a 
> 400 Bad Request with an informative message. Below is the timeline reader log.
> {noformat}
> 2018-04-02 06:44:10,451 INFO  reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:handleException(173)) - Processed URL 
> /ws/v2/timeline/users/hrt_qa/flows/flow4/runs/1/apps?infofilters=UIDeq but 
> encountered exception (Took 0 ms.)
> 2018-04-02 06:44:10,451 ERROR reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:handleException(188)) - Error while 
> processing REST request
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterUtils.createHBaseFilterList(TimelineFilterUtils.java:276)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.ApplicationEntityReader.constructFilterListBasedOnFilters(ApplicationEntityReader.java:126)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.createFilterList(TimelineEntityReader.java:157)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.readEntities(TimelineEntityReader.java:277)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl.getEntities(HBaseTimelineReaderImpl.java:87)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderManager.getEntities(TimelineReaderManager.java:143)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getEntities(TimelineReaderWebServices.java:605)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getFlowRunApps(TimelineReaderWebServices.java:1962)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> 

[jira] [Commented] (YARN-8107) NPE when incorrect format is used in ATSv2 filter attributes

2018-04-06 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428553#comment-16428553
 ] 

Haibo Chen commented on YARN-8107:
--

+1, checking this in shortly

> NPE when incorrect format is used in ATSv2 filter attributes
> 
>
> Key: YARN-8107
> URL: https://issues.apache.org/jira/browse/YARN-8107
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: ATSv2
>Reporter: Charan Hebri
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8107.001.patch, YARN-8107.002.patch
>
>
> Using an incorrect format for infofilters, conffilters and metricfilters 
> throws a NPE with no clear message to the caller. This should be tagged as a 
> 400 Bad Request with an informative message. Below is the timeline reader log.
> {noformat}
> 2018-04-02 06:44:10,451 INFO  reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:handleException(173)) - Processed URL 
> /ws/v2/timeline/users/hrt_qa/flows/flow4/runs/1/apps?infofilters=UIDeq but 
> encountered exception (Took 0 ms.)
> 2018-04-02 06:44:10,451 ERROR reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:handleException(188)) - Error while 
> processing REST request
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterUtils.createHBaseFilterList(TimelineFilterUtils.java:276)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.ApplicationEntityReader.constructFilterListBasedOnFilters(ApplicationEntityReader.java:126)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.createFilterList(TimelineEntityReader.java:157)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.readEntities(TimelineEntityReader.java:277)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl.getEntities(HBaseTimelineReaderImpl.java:87)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderManager.getEntities(TimelineReaderManager.java:143)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getEntities(TimelineReaderWebServices.java:605)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getFlowRunApps(TimelineReaderWebServices.java:1962)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.security.TimelineReaderWhitelistAuthorizationFilter.doFilter(TimelineReaderWhitelistAuthorizationFilter.java:85)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> 

[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428536#comment-16428536
 ] 

genericqa commented on YARN-8064:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
59s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
16s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
|  |  Null passed for non-null parameter of new java.io.File(String) in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker.DockerClient.writeCommandToTempFile(DockerCommand,
 Container, Context)  Method invoked at DockerClient.java:of new 
java.io.File(String) in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker.DockerClient.writeCommandToTempFile(DockerCommand,
 Container, Context)  Method invoked at DockerClient.java:[line 122] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8064 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917863/YARN-8064.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux db558248794a 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (YARN-8083) [UI2] All YARN related configurations are paged together in conf page

2018-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428534#comment-16428534
 ] 

Hudson commented on YARN-8083:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #13934 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13934/])
YARN-8083. [UI2] All YARN related configurations are paged together in (sunilg: 
rev b17dc9f5f54fd91defc1d8646f8229da5fe7ccbb)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-tools/yarn-conf.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-tools/yarn-conf.js


> [UI2] All YARN related configurations are paged together in conf page
> -
>
> Key: YARN-8083
> URL: https://issues.apache.org/jira/browse/YARN-8083
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Zoltan Haindrich
>Assignee: Gergely Novák
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8083.001.patch, conf_browse.png
>
>
> there are 3 configs displayed on the same page; however all of the viewer 
> components respond to all page controllers...
> http://172.22.78.179:8088/ui2/#/yarn-tools/yarn-conf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8083) [UI2] All YARN related configurations are paged together in conf page

2018-04-06 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-8083:
--
Fix Version/s: (was: 3.0.3)
   3.1.1

> [UI2] All YARN related configurations are paged together in conf page
> -
>
> Key: YARN-8083
> URL: https://issues.apache.org/jira/browse/YARN-8083
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Zoltan Haindrich
>Assignee: Gergely Novák
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8083.001.patch, conf_browse.png
>
>
> there are 3 configs displayed on the same page; however all of the viewer 
> components respond to all page controllers...
> http://172.22.78.179:8088/ui2/#/yarn-tools/yarn-conf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7667) Docker Stop grace period should be configurable

2018-04-06 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428497#comment-16428497
 ] 

Jim Brennan commented on YARN-7667:
---

Patch looks good to me.

> Docker Stop grace period should be configurable
> ---
>
> Key: YARN-7667
> URL: https://issues.apache.org/jira/browse/YARN-7667
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-7667.001.patch, YARN-7667.002.patch, 
> YARN-7667.003.patch
>
>
> {{DockerStopCommand}} has a {{setGracePeriod}} method, but it is never 
> called. So, the stop uses the 10 second default grace period from docker
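A minimal sketch of what making it configurable could look like (the
configuration key name below is an assumption for illustration, and the
DockerStopCommand usage is shown as commonly constructed with the container
name; this is not necessarily what the attached patches do):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker.DockerStopCommand;

public class DockerStopSketch {
  static DockerStopCommand buildStopCommand(Configuration conf,
      String containerName) {
    // Hypothetical property name; 10 seconds remains the docker default.
    int gracePeriod = conf.getInt(
        "yarn.nodemanager.runtime.linux.docker.stop.grace-period", 10);
    DockerStopCommand stopCommand = new DockerStopCommand(containerName);
    stopCommand.setGracePeriod(gracePeriod);
    return stopCommand;
  }
}
{code}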



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428492#comment-16428492
 ] 

genericqa commented on YARN-8064:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
2s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 30s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
|  |  Null passed for non-null parameter of 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker.DockerClient.writeCommandToTempFile(DockerCommand,
 Container, Context) in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
  Method invoked at DockerLinuxContainerRuntime.java:of 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker.DockerClient.writeCommandToTempFile(DockerCommand,
 Container, Context) in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
  Method invoked at DockerLinuxContainerRuntime.java:[line 891] |
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8064 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-8124) Service Application Master log file can't be found.

2018-04-06 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428466#comment-16428466
 ] 

Billie Rinaldi commented on YARN-8124:
--

I haven't seen this issue when testing trunk. Can you tell me a bit more about 
your environment? The AM should be getting a system property LOG_DIR set to the 
container log directory, which is expanded in ContainerLaunch.

> Service Application Master log file can't be found. 
> 
>
> Key: YARN-8124
> URL: https://issues.apache.org/jira/browse/YARN-8124
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Priority: Critical
>
> It is observed that the service AM log file can't be found in the log folder. 
> When inspected, _yarnservice-log4j.properties_ has an entry 
> log4j.appender.amlog.File=*${LOG_DIR}/serviceam.log* where LOG_DIR is not 
> resolving. 
> When the above value is changed to log4j.appender.amlog.File=*./serviceam.log*, 
> the log can be seen. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-06 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428426#comment-16428426
 ] 

Eric Badger commented on YARN-8064:
---

Missed the findbugs error in the last patch. Patch 004 should fix that

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch, 
> YARN-8064.003.patch, YARN-8064.004.patch
>
>
> Currently all of the docker command files are being put into 
> {{hadoop.tmp.dir}}, which doesn't get cleaned up. So, eventually all of the 
> inodes will fill up and no more tasks will be able to run



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-06 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-8064:
--
Attachment: YARN-8064.004.patch

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch, 
> YARN-8064.003.patch, YARN-8064.004.patch
>
>
> Currently all of the docker command files are being put into 
> {{hadoop.tmp.dir}}, which doesn't get cleaned up. So, eventually all of the 
> inodes will fill up and no more tasks will be able to run



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7931) [atsv2 read acls] Include domain table creation as part of schema creator

2018-04-06 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428411#comment-16428411
 ] 

Haibo Chen commented on YARN-7931:
--

I have one question. DomainRowKeyConverter.encode() and decode() are not 
symmetric. By that I mean the domainId is not separator-encoded at encoding 
time, but it will be separator-decoded at decoding time. Will that cause an 
issue if the domainId happens to contain bytes of the escape separators?
{code:java}
    @Override
    public byte[] encode(DomainRowKey rowKey) {
      if (rowKey == null) {
        return Separator.EMPTY_BYTES;
      }
      return Separator.QUALIFIERS.join(
          Separator.encode(rowKey.getClusterId(), Separator.SPACE,
              Separator.TAB, Separator.QUALIFIERS),
          Bytes.toBytes(rowKey.getDomainId()));
    }
{code}
{code:java}
    @Override
    public DomainRowKey decode(byte[] rowKey) {
      byte[][] rowKeyComponents =
          Separator.QUALIFIERS.split(rowKey, SEGMENT_SIZES);
      if (rowKeyComponents.length != 2) {
        throw new IllegalArgumentException("the row key is not valid for "
            + "a domain id");
      }
      String clusterId =
          Separator.decode(Bytes.toString(rowKeyComponents[0]),
              Separator.QUALIFIERS, Separator.TAB, Separator.SPACE);
      String domainId =
          Separator.decode(Bytes.toString(rowKeyComponents[1]),
              Separator.QUALIFIERS, Separator.TAB, Separator.SPACE);
      return new DomainRowKey(clusterId, domainId);
    }
{code}
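For illustration, a symmetric variant of encode() (not the patch's code, just 
mirroring the Separator.encode call already applied to the cluster id) would 
run the domain id through the same encoding so it round-trips through decode():

{code:java}
    // Illustration only: encode the domainId with the same separators that
    // decode() strips, so ids containing separator bytes round-trip intact.
    @Override
    public byte[] encode(DomainRowKey rowKey) {
      if (rowKey == null) {
        return Separator.EMPTY_BYTES;
      }
      return Separator.QUALIFIERS.join(
          Separator.encode(rowKey.getClusterId(), Separator.SPACE,
              Separator.TAB, Separator.QUALIFIERS),
          Separator.encode(rowKey.getDomainId(), Separator.SPACE,
              Separator.TAB, Separator.QUALIFIERS));
    }
{code}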

> [atsv2 read acls] Include domain table creation as part of schema creator
> -
>
> Key: YARN-7931
> URL: https://issues.apache.org/jira/browse/YARN-7931
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Major
> Attachments: YARN-7391.0001.patch, YARN-7391.0002.patch, 
> YARN-7391.0003.patch
>
>
>  
> Update the schema creator to create a domain table to store timeline entity 
> domain info. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-04-06 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428400#comment-16428400
 ] 

Billie Rinaldi commented on YARN-7221:
--

Thanks [~eyang], I am +1 for patch 020.

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch, 
> YARN-7221.006.patch, YARN-7221.007.patch, YARN-7221.008.patch, 
> YARN-7221.009.patch, YARN-7221.010.patch, YARN-7221.011.patch, 
> YARN-7221.012.patch, YARN-7221.013.patch, YARN-7221.014.patch, 
> YARN-7221.015.patch, YARN-7221.016.patch, YARN-7221.017.patch, 
> YARN-7221.018.patch, YARN-7221.019.patch, YARN-7221.020.patch
>
>
> When a docker container is running with privileges, the majority of use cases 
> involve a program starting as root and then dropping privileges to another 
> user, e.g. httpd starting privileged to bind to port 80, then dropping 
> privileges to the www user.  
> # We should add a security check for submitting users, to verify that they 
> have "sudo" access to run privileged containers.  
> # We should remove --user=uid:gid for privileged containers.  
>  
> Docker can be launched with the --privileged=true and --user=uid:gid flags.  
> With this parameter combination, the user will not have access to become the 
> root user.  All docker exec commands will be dropped to the uid:gid user 
> instead of being granted privileges.  The user can gain root privileges if the 
> container file system contains files that give the user extra power, but this 
> type of image is considered dangerous.  A non-privileged user can launch a 
> container with special bits to acquire the same level of root power.  Hence, 
> we lose control of which images should be run with --privileged, and who has 
> sudo rights to use privileged container images.  As a result, we should check 
> for sudo access and then decide whether to parameterize --privileged=true OR 
> --user=uid:gid.  This will avoid leading developers down the wrong path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5151) [YARN-3368] Support kill application from new YARN UI

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428398#comment-16428398
 ] 

genericqa commented on YARN-5151:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
35m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-5151 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917853/YARN-5151.007.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux 5a1ded347c04 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ea3849f |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 325 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20255/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Support kill application from new YARN UI
> -
>
> Key: YARN-5151
> URL: https://issues.apache.org/jira/browse/YARN-5151
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Gergely Novák
>Priority: Major
> Attachments: YARN-5151.001.patch, YARN-5151.002.patch, 
> YARN-5151.003.patch, YARN-5151.004.patch, YARN-5151.005.patch, 
> YARN-5151.007.patch, screenshot-1.png, screenshot-2.png
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-06 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-8064:
--
Attachment: YARN-8064.003.patch

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch, 
> YARN-8064.003.patch
>
>
> Currently all of the Docker command files are being put into 
> {{hadoop.tmp.dir}}, which doesn't get cleaned up. So, eventually all of the 
> inodes will fill up and no more tasks will be able to run.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-06 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428392#comment-16428392
 ] 

Eric Badger commented on YARN-8064:
---

New patch fixes checkstyle

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch, 
> YARN-8064.003.patch
>
>
> Currently all of the Docker command files are being put into 
> {{hadoop.tmp.dir}}, which doesn't get cleaned up. So, eventually all of the 
> inodes will fill up and no more tasks will be able to run.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7667) Docker Stop grace period should be configurable

2018-04-06 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428386#comment-16428386
 ] 

Eric Badger commented on YARN-7667:
---

[~jlowe], [~shaneku...@gmail.com], [~Jim_Brennan], I think this is ready for 
review

> Docker Stop grace period should be configurable
> ---
>
> Key: YARN-7667
> URL: https://issues.apache.org/jira/browse/YARN-7667
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-7667.001.patch, YARN-7667.002.patch, 
> YARN-7667.003.patch
>
>
> {{DockerStopCommand}} has a {{setGracePeriod}} method, but it is never 
> called. So, the stop uses the 10-second default grace period from Docker.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5151) [YARN-3368] Support kill application from new YARN UI

2018-04-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428341#comment-16428341
 ] 

Gergely Novák commented on YARN-5151:
-

Patch #007: Rebased to trunk.

> [YARN-3368] Support kill application from new YARN UI
> -
>
> Key: YARN-5151
> URL: https://issues.apache.org/jira/browse/YARN-5151
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Gergely Novák
>Priority: Major
> Attachments: YARN-5151.001.patch, YARN-5151.002.patch, 
> YARN-5151.003.patch, YARN-5151.004.patch, YARN-5151.005.patch, 
> YARN-5151.007.patch, screenshot-1.png, screenshot-2.png
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5151) [YARN-3368] Support kill application from new YARN UI

2018-04-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-5151:

Attachment: YARN-5151.007.patch

> [YARN-3368] Support kill application from new YARN UI
> -
>
> Key: YARN-5151
> URL: https://issues.apache.org/jira/browse/YARN-5151
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Gergely Novák
>Priority: Major
> Attachments: YARN-5151.001.patch, YARN-5151.002.patch, 
> YARN-5151.003.patch, YARN-5151.004.patch, YARN-5151.005.patch, 
> YARN-5151.007.patch, screenshot-1.png, screenshot-2.png
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5888) [YARN-3368] Improve unit tests for YARN UI

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428303#comment-16428303
 ] 

genericqa commented on YARN-5888:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-5888 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-5888 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843218/YARN-5888.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20254/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Improve unit tests for YARN UI
> --
>
> Key: YARN-5888
> URL: https://issues.apache.org/jira/browse/YARN-5888
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Minor
> Attachments: YARN-5888.001.patch, YARN-5888.002.patch
>
>
> - Add missing test cases in new YARN UI
> - Fix test cases errors in new YARN UI 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4518) [YARN-3368] Support rendering statistic-by-node-label for queues/apps page

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428302#comment-16428302
 ] 

genericqa commented on YARN-4518:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-4518 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-4518 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859523/YARN-4518.0005.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20253/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Support rendering statistic-by-node-label for queues/apps page
> --
>
> Key: YARN-4518
> URL: https://issues.apache.org/jira/browse/YARN-4518
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Akhil PB
>Priority: Major
> Attachments: YARN-4518-YARN-3368.1.patch, YARN-4518.0001.patch, 
> YARN-4518.0002.patch, YARN-4518.0003.patch, YARN-4518.0004.patch, 
> YARN-4518.0005.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5888) [YARN-3368] Improve unit tests for YARN UI

2018-04-06 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428291#comment-16428291
 ] 

Sunil G commented on YARN-5888:
---

[~akhilpb] Thanks for the patch.

This seems to be fine with the existing layouts. Could you please rebase and 
check whether the patch is still valid? Thanks.

> [YARN-3368] Improve unit tests for YARN UI
> --
>
> Key: YARN-5888
> URL: https://issues.apache.org/jira/browse/YARN-5888
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Minor
> Attachments: YARN-5888.001.patch, YARN-5888.002.patch
>
>
> - Add missing test cases in new YARN UI
> - Fix test cases errors in new YARN UI 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4518) [YARN-3368] Support rendering statistic-by-node-label for queues/apps page

2018-04-06 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428289#comment-16428289
 ] 

Sunil G commented on YARN-4518:
---

This needs to be revisited based on the new Nodes page layout change.

[~akhilpb], do you have some bandwidth to check this? Thanks.

> [YARN-3368] Support rendering statistic-by-node-label for queues/apps page
> --
>
> Key: YARN-4518
> URL: https://issues.apache.org/jira/browse/YARN-4518
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Akhil PB
>Priority: Major
> Attachments: YARN-4518-YARN-3368.1.patch, YARN-4518.0001.patch, 
> YARN-4518.0002.patch, YARN-4518.0003.patch, YARN-4518.0004.patch, 
> YARN-4518.0005.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8124) Service Application Master log file can't be found.

2018-04-06 Thread Rohith Sharma K S (JIRA)
Rohith Sharma K S created YARN-8124:
---

 Summary: Service Application Master log file can't be found. 
 Key: YARN-8124
 URL: https://issues.apache.org/jira/browse/YARN-8124
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Rohith Sharma K S


It is observed that the service AM log file can't be found in the log folder. On 
inspection, _yarnservice-log4j.properties_ has the entry 
log4j.appender.amlog.File=*${LOG_DIR}/serviceam.log*, where LOG_DIR is not being 
resolved. 
When the above value is changed to log4j.appender.amlog.File=*./serviceam.log*, the 
log can be seen. 
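
For reference, a minimal sketch of the two appender settings being contrasted above 
(the property name is taken from the quoted file; everything else is illustrative and 
not the full configuration):

{code:title=yarnservice-log4j.properties (sketch)}
# Broken case: LOG_DIR is never substituted, so the appender points at a
# literal "${LOG_DIR}" path and the log file is not found where expected.
log4j.appender.amlog.File=${LOG_DIR}/serviceam.log

# Workaround described above: write into the AM's working directory instead.
# (Only one of these two lines would be present in the real file.)
log4j.appender.amlog.File=./serviceam.log
{code}

The workaround only sidesteps the substitution; a proper fix would presumably ensure 
that LOG_DIR is exported or substituted when the service AM is launched.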




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6091) the AppMaster register failed when use Docker on LinuxContainer

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428161#comment-16428161
 ] 

genericqa commented on YARN-6091:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-6091 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6091 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884361/YARN-6091.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20252/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> the AppMaster register failed when use Docker on LinuxContainer 
> 
>
> Key: YARN-6091
> URL: https://issues.apache.org/jira/browse/YARN-6091
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, yarn
>Affects Versions: 2.8.1
> Environment: CentOS
>Reporter: zhengchenyu
>Assignee: Eric Badger
>Priority: Critical
> Attachments: YARN-6091.001.patch, YARN-6091.002.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> On some servers, when I use Docker on LinuxContainer, the AppMaster fails to 
> register with the ResourceManager. This does not happen on other 
> servers. 
> I found that pclose (in container-executor.c) returns different values on 
> different servers, even though the process launched by popen is 
> running normally. Some servers return 0, and others return 13. 
> Because YARN regards the application as failed when pclose returns 
> nonzero, it removes the AMRMToken, and the AppMaster registration then 
> fails because the ResourceManager has already removed this application's token. 
> In container-executor.c, the judgement condition is whether the return code 
> is zero. But according to the pclose man page, only a return value of -1 
> indicates an error. So I changed the judgement condition, which solves this 
> problem. 
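
The distinction the reporter is drawing can be illustrated with a minimal C sketch 
(this is not the actual container-executor.c code; the function and variable names 
are illustrative only):

{code:title=pclose check (illustrative sketch)}
#include <stdio.h>

/* Launches a command with popen() and decides whether it "failed".
 * Per the pclose() man page, -1 means pclose itself failed; any other
 * value is the command's termination status as returned by waitpid(). */
static int run_and_check(const char *cmd) {
  FILE *fp = popen(cmd, "r");
  if (fp == NULL) {
    return -1;                /* could not launch the command at all */
  }
  int rc = pclose(fp);

  /* Original style of check described above: any nonzero value is a failure. */
  /* if (rc != 0) { return -1; } */

  /* Check proposed in the description: only a pclose() error is a failure. */
  if (rc == -1) {
    return -1;
  }
  return 0;
}
{code}

Note that checking only for -1 also ignores the command's real exit status; whether 
that is acceptable here is presumably part of the review of the attached patches.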



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6091) the AppMaster register failed when use Docker on LinuxContainer

2018-04-06 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428154#comment-16428154
 ] 

Junping Du commented on YARN-6091:
--

Moving to 2.8.4 as 2.8.3 was released last year.

> the AppMaster register failed when use Docker on LinuxContainer 
> 
>
> Key: YARN-6091
> URL: https://issues.apache.org/jira/browse/YARN-6091
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, yarn
>Affects Versions: 2.8.1
> Environment: CentOS
>Reporter: zhengchenyu
>Assignee: Eric Badger
>Priority: Critical
> Attachments: YARN-6091.001.patch, YARN-6091.002.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> On some servers, when I use Docker on LinuxContainer, the AppMaster fails to 
> register with the ResourceManager. This does not happen on other 
> servers. 
> I found that pclose (in container-executor.c) returns different values on 
> different servers, even though the process launched by popen is 
> running normally. Some servers return 0, and others return 13. 
> Because YARN regards the application as failed when pclose returns 
> nonzero, it removes the AMRMToken, and the AppMaster registration then 
> fails because the ResourceManager has already removed this application's token. 
> In container-executor.c, the judgement condition is whether the return code 
> is zero. But according to the pclose man page, only a return value of -1 
> indicates an error. So I changed the judgement condition, which solves this 
> problem. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6091) the AppMaster register failed when use Docker on LinuxContainer

2018-04-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-6091:
-
Target Version/s: 2.8.4  (was: 2.8.3)

> the AppMaster register failed when use Docker on LinuxContainer 
> 
>
> Key: YARN-6091
> URL: https://issues.apache.org/jira/browse/YARN-6091
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, yarn
>Affects Versions: 2.8.1
> Environment: CentOS
>Reporter: zhengchenyu
>Assignee: Eric Badger
>Priority: Critical
> Attachments: YARN-6091.001.patch, YARN-6091.002.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> On some servers, when I use Docker on LinuxContainer, the AppMaster fails to 
> register with the ResourceManager. This does not happen on other 
> servers. 
> I found that pclose (in container-executor.c) returns different values on 
> different servers, even though the process launched by popen is 
> running normally. Some servers return 0, and others return 13. 
> Because YARN regards the application as failed when pclose returns 
> nonzero, it removes the AMRMToken, and the AppMaster registration then 
> fails because the ResourceManager has already removed this application's token. 
> In container-executor.c, the judgement condition is whether the return code 
> is zero. But according to the pclose man page, only a return value of -1 
> indicates an error. So I changed the judgement condition, which solves this 
> problem. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8083) RM/UI2: all configurations are paged together

2018-04-06 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428123#comment-16428123
 ] 

Sunil G commented on YARN-8083:
---

+1 on the patch.

Committing shortly.

> RM/UI2: all configurations are paged together
> -
>
> Key: YARN-8083
> URL: https://issues.apache.org/jira/browse/YARN-8083
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Zoltan Haindrich
>Assignee: Gergely Novák
>Priority: Major
> Attachments: YARN-8083.001.patch, conf_browse.png
>
>
> There are 3 configs displayed on the same page; however, all of the viewer 
> components respond to all page controllers...
> http://172.22.78.179:8088/ui2/#/yarn-tools/yarn-conf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8083) [UI2] All YARN related configurations are paged together in conf page

2018-04-06 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-8083:
--
Summary: [UI2] All YARN related configurations are paged together in conf 
page  (was: RM/UI2: all configurations are paged together)

> [UI2] All YARN related configurations are paged together in conf page
> -
>
> Key: YARN-8083
> URL: https://issues.apache.org/jira/browse/YARN-8083
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Zoltan Haindrich
>Assignee: Gergely Novák
>Priority: Major
> Attachments: YARN-8083.001.patch, conf_browse.png
>
>
> There are 3 configs displayed on the same page; however, all of the viewer 
> components respond to all page controllers...
> http://172.22.78.179:8088/ui2/#/yarn-tools/yarn-conf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8107) NPE when incorrect format is used in ATSv2 filter attributes

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428059#comment-16428059
 ] 

genericqa commented on YARN-8107:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
57s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8107 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917823/YARN-8107.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c16739f345a5 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ea3849f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20251/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20251/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> NPE when incorrect format is used in ATSv2 filter 

[jira] [Commented] (YARN-8123) Skip compiling old hamlet package when the Java version is 10 or upper

2018-04-06 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428030#comment-16428030
 ] 

Akira Ajisaka commented on YARN-8123:
-

{code:title=hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml}
<profile>
  <id>java9</id>
  <activation>
    <jdk>9</jdk>
  </activation>
{code}
The code should be updated.
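
A possible direction (a sketch only; this is not a committed change, and the real 
profile contains more than shown here) is to activate the profile with a JDK version 
range instead of the single value:

{code:title=possible activation change (sketch)}
<profile>
  <id>java9</id>
  <activation>
    <!-- "[9,)" matches JDK 9 and every later version, whereas the plain
         value "9" is a prefix match that no longer applies on Java 10+. -->
    <jdk>[9,)</jdk>
  </activation>
{code}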

> Skip compiling old hamlet package when the Java version is 10 or upper
> --
>
> Key: YARN-8123
> URL: https://issues.apache.org/jira/browse/YARN-8123
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp
> Environment: Java 10 or upper
>Reporter: Akira Ajisaka
>Priority: Major
>  Labels: newbie
>
> HADOOP-11423 skipped compiling the old hamlet package when the Java version is 9; 
> however, it is not skipped with Java 10+. We need to fix this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8123) Skip compiling old hamlet package when the Java version is 10 or upper

2018-04-06 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-8123:

Labels: newbie  (was: )

> Skip compiling old hamlet package when the Java version is 10 or upper
> --
>
> Key: YARN-8123
> URL: https://issues.apache.org/jira/browse/YARN-8123
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp
> Environment: Java 10 or upper
>Reporter: Akira Ajisaka
>Priority: Major
>  Labels: newbie
>
> HADOOP-11423 skipped compiling the old hamlet package when the Java version is 9; 
> however, it is not skipped with Java 10+. We need to fix this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8123) Skip compiling old hamlet package when the Java version is 10 or upper

2018-04-06 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created YARN-8123:
---

 Summary: Skip compiling old hamlet package when the Java version 
is 10 or upper
 Key: YARN-8123
 URL: https://issues.apache.org/jira/browse/YARN-8123
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: webapp
 Environment: Java 10 or upper
Reporter: Akira Ajisaka


HADOOP-11423 skipped compiling the old hamlet package when the Java version is 9; 
however, it is not skipped with Java 10+. We need to fix this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7931) [atsv2 read acls] Include domain table creation as part of schema creator

2018-04-06 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428021#comment-16428021
 ] 

Rohith Sharma K S commented on YARN-7931:
-

+1 lgtm

> [atsv2 read acls] Include domain table creation as part of schema creator
> -
>
> Key: YARN-7931
> URL: https://issues.apache.org/jira/browse/YARN-7931
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Major
> Attachments: YARN-7391.0001.patch, YARN-7391.0002.patch, 
> YARN-7391.0003.patch
>
>
>  
> Update the schema creator to create a domain table to store timeline entity 
> domain info. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8107) NPE when incorrect format is used in ATSv2 filter attributes

2018-04-06 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428013#comment-16428013
 ] 

Rohith Sharma K S commented on YARN-8107:
-

Updated the patch to add tests that verify handling of invalid filter expressions.

> NPE when incorrect format is used in ATSv2 filter attributes
> 
>
> Key: YARN-8107
> URL: https://issues.apache.org/jira/browse/YARN-8107
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: ATSv2
>Reporter: Charan Hebri
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8107.001.patch, YARN-8107.002.patch
>
>
> Using an incorrect format for infofilters, conffilters, and metricfilters 
> throws an NPE with no clear message to the caller. This should be tagged as a 
> 400 Bad Request with an informative message. Below is the timeline reader log.
> {noformat}
> 2018-04-02 06:44:10,451 INFO  reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:handleException(173)) - Processed URL 
> /ws/v2/timeline/users/hrt_qa/flows/flow4/runs/1/apps?infofilters=UIDeq but 
> encountered exception (Took 0 ms.)
> 2018-04-02 06:44:10,451 ERROR reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:handleException(188)) - Error while 
> processing REST request
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterUtils.createHBaseFilterList(TimelineFilterUtils.java:276)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.ApplicationEntityReader.constructFilterListBasedOnFilters(ApplicationEntityReader.java:126)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.createFilterList(TimelineEntityReader.java:157)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.readEntities(TimelineEntityReader.java:277)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl.getEntities(HBaseTimelineReaderImpl.java:87)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderManager.getEntities(TimelineReaderManager.java:143)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getEntities(TimelineReaderWebServices.java:605)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getFlowRunApps(TimelineReaderWebServices.java:1962)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.security.TimelineReaderWhitelistAuthorizationFilter.doFilter(TimelineReaderWhitelistAuthorizationFilter.java:85)
> at 
> 

[jira] [Updated] (YARN-8107) NPE when incorrect format is used in ATSv2 filter attributes

2018-04-06 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-8107:

Attachment: YARN-8107.002.patch

> NPE when incorrect format is used in ATSv2 filter attributes
> 
>
> Key: YARN-8107
> URL: https://issues.apache.org/jira/browse/YARN-8107
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: ATSv2
>Reporter: Charan Hebri
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8107.001.patch, YARN-8107.002.patch
>
>
> Using an incorrect format for infofilters, conffilters, and metricfilters 
> throws an NPE with no clear message to the caller. This should be tagged as a 
> 400 Bad Request with an informative message. Below is the timeline reader log.
> {noformat}
> 2018-04-02 06:44:10,451 INFO  reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:handleException(173)) - Processed URL 
> /ws/v2/timeline/users/hrt_qa/flows/flow4/runs/1/apps?infofilters=UIDeq but 
> encountered exception (Took 0 ms.)
> 2018-04-02 06:44:10,451 ERROR reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:handleException(188)) - Error while 
> processing REST request
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterUtils.createHBaseFilterList(TimelineFilterUtils.java:276)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.ApplicationEntityReader.constructFilterListBasedOnFilters(ApplicationEntityReader.java:126)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.createFilterList(TimelineEntityReader.java:157)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.readEntities(TimelineEntityReader.java:277)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl.getEntities(HBaseTimelineReaderImpl.java:87)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderManager.getEntities(TimelineReaderManager.java:143)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getEntities(TimelineReaderWebServices.java:605)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getFlowRunApps(TimelineReaderWebServices.java:1962)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.security.TimelineReaderWhitelistAuthorizationFilter.doFilter(TimelineReaderWhitelistAuthorizationFilter.java:85)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> 

[jira] [Commented] (YARN-7142) Support placement policy in yarn native services

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428008#comment-16428008
 ] 

genericqa commented on YARN-7142:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.1 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
26s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
54s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
23s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
1s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
5s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} branch-3.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 18s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 10 new + 149 unchanged - 9 fixed = 159 total (was 158) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
46s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
10s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m  
2s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 

[jira] [Commented] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-04-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428005#comment-16428005
 ] 

genericqa commented on YARN-7574:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 62 new + 545 unchanged - 23 fixed = 607 total (was 568) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 67m 
41s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7574 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917810/YARN-7574.10.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux dcf9935cd357 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ea3849f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20249/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20249/testReport/ |
| Max. process+thread count | 812 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: