[jira] [Updated] (YARN-7822) Constraint satisfaction checker support for composite OR and AND constraints

2018-01-25 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7822:
--
Summary: Constraint satisfaction checker support for composite OR and AND 
constraints  (was: Fix constraint satisfaction checker to handle composite OR 
and AND constraints)

> Constraint satisfaction checker support for composite OR and AND constraints
> 
>
> Key: YARN-7822
> URL: https://issues.apache.org/jira/browse/YARN-7822
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Priority: Major
>
> JIRA to track changes to {{PlacementConstraintsUtil#canSatisfyConstraints}} 
> to handle OR and AND composite constraints.
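
For illustration, a minimal sketch of how the checker could recurse into composite constraints; {{canSatisfySingleConstraint}} is a hypothetical stand-in for the existing single-constraint logic, and the exact method signatures on the YARN-6592 branch may differ:

{code:java}
// Hedged sketch only: recursive evaluation of AND/OR composite constraints.
private static boolean canSatisfyConstraintExpr(ApplicationId appId,
    SchedulingRequest request, SchedulerNode node,
    PlacementConstraint.AbstractConstraint constraint,
    AllocationTagsManager tagsManager)
    throws InvalidAllocationTagsQueryException {
  if (constraint instanceof PlacementConstraint.And) {
    // AND: every child constraint must hold on this node.
    for (PlacementConstraint.AbstractConstraint child :
        ((PlacementConstraint.And) constraint).getChildren()) {
      if (!canSatisfyConstraintExpr(appId, request, node, child, tagsManager)) {
        return false;
      }
    }
    return true;
  }
  if (constraint instanceof PlacementConstraint.Or) {
    // OR: at least one child constraint must hold on this node.
    for (PlacementConstraint.AbstractConstraint child :
        ((PlacementConstraint.Or) constraint).getChildren()) {
      if (canSatisfyConstraintExpr(appId, request, node, child, tagsManager)) {
        return true;
      }
    }
    return false;
  }
  // Leaf constraint: delegate to the existing single-constraint check
  // (hypothetical helper name).
  return canSatisfySingleConstraint(appId, request, node, constraint, tagsManager);
}
{code}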



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7821) Constraint satisfaction checker support for inter-app constraints

2018-01-25 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7821:
--
Summary: Constraint satisfaction checker support for inter-app constraints  
(was: Fix constraint satisfaction checker to handle inter-app constraints)

> Constraint satisfaction checker support for inter-app constraints
> -
>
> Key: YARN-7821
> URL: https://issues.apache.org/jira/browse/YARN-7821
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Priority: Major
>
> JIRA to track changes to {{PlacementConstraintsUtil#canSatisfyConstraints}} 
> to handle inter-app constraints.
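
As a rough sketch of the direction (not the actual patch), the cardinality query would need to be scoped to the application(s) the constraint targets rather than only the requesting application; {{resolveTargetApplications}} below is a hypothetical helper and the {{AllocationTagsManager}} signature is approximate:

{code:java}
// Hedged sketch: evaluate a target expression against tags that may belong
// to other applications. resolveTargetApplications(...) is hypothetical.
private static long getCardinalityForTarget(NodeId nodeId,
    ApplicationId requestingApp, PlacementConstraint.TargetExpression target,
    AllocationTagsManager tagsManager)
    throws InvalidAllocationTagsQueryException {
  long cardinality = 0;
  for (ApplicationId targetApp :
      resolveTargetApplications(requestingApp, target)) {
    // Sum the occurrences of the target tags on this node for every
    // application the inter-app constraint refers to.
    cardinality += tagsManager.getNodeCardinalityByOp(nodeId, targetApp,
        target.getTargetValues(), Long::sum);
  }
  return cardinality;
}
{code}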






[jira] [Created] (YARN-7822) Fix constraint satisfaction checker to handle composite OR and AND constraints

2018-01-25 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-7822:
-

 Summary: Fix constraint satisfaction checker to handle composite 
OR and AND constraints
 Key: YARN-7822
 URL: https://issues.apache.org/jira/browse/YARN-7822
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun Suresh


JIRA to track changes to {{PlacementConstraintsUtil#canSatisfyConstraints}} 
to handle OR and AND composite constraints.






[jira] [Created] (YARN-7821) Fix constraint satisfaction checker to handle inter-app constraints

2018-01-25 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-7821:
-

 Summary: Fix constraint satisfaction checker to handle inter-app 
constraints
 Key: YARN-7821
 URL: https://issues.apache.org/jira/browse/YARN-7821
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun Suresh


JIRA to track changes to {{PlacementConstraintsUtil#canSatisfyConstraints}} 
to handle inter-app constraints.






[jira] [Updated] (YARN-7760) [UI2] Clicking 'Master Node' or link next to 'AM Node Web UI' under application's appAttempt page goes to OLD RM UI

2018-01-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7760:
-
Summary: [UI2] Clicking 'Master Node' or link next to 'AM Node Web UI' 
under application's appAttempt page goes to OLD RM UI  (was: [UI2]Clicking 
'Master Node' or link next to 'AM Node Web UI' under application's appAttempt 
page goes to OLD RM UI)

> [UI2] Clicking 'Master Node' or link next to 'AM Node Web UI' under 
> application's appAttempt page goes to OLD RM UI
> ---
>
> Key: YARN-7760
> URL: https://issues.apache.org/jira/browse/YARN-7760
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Vasudevan Skm
>Priority: Major
> Attachments: YARN-7760.001.patch, YARN-7760.002.patch
>
>
> Clicking 'Master Node' or link next to 'AM Node Web UI' under application's 
> appAttempt page goes to OLD RM UI






[jira] [Updated] (YARN-7819) Allow PlacementProcessor to be used with the FairScheduler

2018-01-25 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7819:
--
Parent Issue: YARN-7812  (was: YARN-6592)

> Allow PlacementProcessor to be used with the FairScheduler
> --
>
> Key: YARN-7819
> URL: https://issues.apache.org/jira/browse/YARN-7819
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-7819-YARN-6592.001.patch
>
>
> The FairScheduler needs to implement the 
> {{ResourceScheduler#attemptAllocationOnNode}} function for the processor to 
> support the FairScheduler.
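
A minimal sketch of the kind of override involved, assuming the {{attemptAllocationOnNode}} signature used by the processor on the YARN-6592 branch; {{tryAllocateOnNode}} is hypothetical shorthand for FairScheduler's own allocation/commit path:

{code:java}
// Hedged sketch, not the actual patch.
@Override
public boolean attemptAllocationOnNode(SchedulerApplicationAttempt appAttempt,
    SchedulingRequest schedulingRequest, SchedulerNode schedulerNode) {
  writeLock.lock();
  try {
    FSAppAttempt fsApp = getSchedulerApp(appAttempt.getApplicationAttemptId());
    if (fsApp == null) {
      return false;
    }
    // Place exactly this request on exactly this node; the
    // PlacementProcessor has already validated the placement constraints.
    return tryAllocateOnNode(fsApp, schedulerNode, schedulingRequest);
  } finally {
    writeLock.unlock();
  }
}
{code}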






[jira] [Updated] (YARN-7752) Handle AllocationTags for Opportunistic containers.

2018-01-25 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7752:
--
Parent Issue: YARN-7812  (was: YARN-6592)

> Handle AllocationTags for Opportunistic containers.
> ---
>
> Key: YARN-7752
> URL: https://issues.apache.org/jira/browse/YARN-7752
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Priority: Major
>
> JIRA to track how opportunistic containers are handled w.r.t 
> AllocationTagsManager creation and removal of tags.






[jira] [Updated] (YARN-7698) A misleading variable's name in ApplicationAttemptEventDispatcher

2018-01-25 Thread Jinjiang Ling (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinjiang Ling updated YARN-7698:

Priority: Major  (was: Minor)

> A misleading variable's name in ApplicationAttemptEventDispatcher
> -
>
> Key: YARN-7698
> URL: https://issues.apache.org/jira/browse/YARN-7698
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Jinjiang Ling
>Assignee: Jinjiang Ling
>Priority: Major
> Attachments: YARN-7698.001.patch, YARN-7698.002.patch, 
> YARN-7698.002.patch
>
>
> I find there are two variables named "appAttemptId" in 
> ApplicationAttemptEventDispatcher.
> {code:java}
> public static final class ApplicationAttemptEventDispatcher implements
>     EventHandler<RMAppAttemptEvent> {
> 
>   public void handle(RMAppAttemptEvent event) {
>     ApplicationAttemptId appAttemptID = event.getApplicationAttemptId();
>     ApplicationId appAttemptId = appAttemptID.getApplicationId();
> 
>   }
> }
> {code}
> The first one is named "{color:red}appAttemptID{color}", which is the actual 
> attempt id. 
> The other one is named "{color:red}appAttemptId{color}", but I think its 
> correct name should be "appId".
> I'm not sure whether there is any reason to name the application id 
> "appAttemptId", but two "appAttemptId" variables in one method can be 
> misleading, so it's better to rename the second one to "appId".
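
For clarity, the proposed rename would read roughly as follows (sketch only):

{code:java}
public void handle(RMAppAttemptEvent event) {
  // Attempt id from the event, and the application id derived from it.
  ApplicationAttemptId appAttemptId = event.getApplicationAttemptId();
  ApplicationId appId = appAttemptId.getApplicationId();
  // ... rest of the handler unchanged
}
{code}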






[jira] [Updated] (YARN-7820) Fix the currentAppAttemptId error in AHS when an application is running

2018-01-25 Thread Jinjiang Ling (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinjiang Ling updated YARN-7820:

Attachment: YARN-7820.001.patch

> Fix the currentAppAttemptId error in AHS when an application is running
> ---
>
> Key: YARN-7820
> URL: https://issues.apache.org/jira/browse/YARN-7820
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Reporter: Jinjiang Ling
>Assignee: Jinjiang Ling
>Priority: Major
> Attachments: YARN-7820.001.patch, image-2018-01-26-14-35-09-796.png
>
>
> When I use the REST API of the AHS to get a running app's latest attempt 
> id, it always returns an invalid id like 
> *appattempt_1516873125047_0013_{color:#FF}-01{color}*. 
> But when the app is finished, the RM pushes a finished event containing the 
> latest attempt id to the TimelineServer, so the id transitions to a correct 
> one at the end of the application. 
> I think this value should be correct while the app is running, so I add the 
> latest attempt id to the other info of the app's entity when the app 
> transitions to the RUNNING state. Then the AHS will use this value to set 
> the currentAppAttemptId.
>  
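
A hedged sketch of the approach described above; the other-info key name and the publisher hook are assumptions, not the attached patch:

{code:java}
// Hedged sketch: when the app transitions to RUNNING, publish the latest
// attempt id in the application entity's "other info" so the AHS can fill
// currentAppAttemptId for running apps. The key name is an assumption.
TimelineEntity entity = createApplicationEntity(app.getApplicationId()); // hypothetical helper
entity.addOtherInfo("YARN_APPLICATION_LATEST_APP_ATTEMPT",
    app.getCurrentAppAttempt().getAppAttemptId().toString());
// The AHS would then read this field back instead of returning an attempt
// number of -1 while the application is still running.
{code}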






[jira] [Created] (YARN-7820) Fix the currentAppAttemptId error in AHS when an application is running

2018-01-25 Thread Jinjiang Ling (JIRA)
Jinjiang Ling created YARN-7820:
---

 Summary: Fix the currentAppAttemptId error in AHS when an 
application is running
 Key: YARN-7820
 URL: https://issues.apache.org/jira/browse/YARN-7820
 Project: Hadoop YARN
  Issue Type: Bug
  Components: timelineserver
Reporter: Jinjiang Ling
Assignee: Jinjiang Ling
 Attachments: image-2018-01-26-14-35-09-796.png

When I use the REST API of the AHS to get a running app's latest attempt id, 
it always returns an invalid id like 
*appattempt_1516873125047_0013_{color:#FF}-01{color}*. 

But when the app is finished, the RM pushes a finished event containing the 
latest attempt id to the TimelineServer, so the id transitions to a correct 
one at the end of the application. 

I think this value should be correct while the app is running, so I add the 
latest attempt id to the other info of the app's entity when the app 
transitions to the RUNNING state. Then the AHS will use this value to set the 
currentAppAttemptId.

 






[jira] [Updated] (YARN-6597) Add RMContainer recovery test to verify tag population in the AllocationTagsManager

2018-01-25 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6597:
--
Summary: Add RMContainer recovery test to verify tag population in the 
AllocationTagsManager  (was: Wrapping up allocationTags support under 
RMContainer state transitions)

> Add RMContainer recovery test to verify tag population in the 
> AllocationTagsManager
> ---
>
> Key: YARN-6597
> URL: https://issues.apache.org/jira/browse/YARN-6597
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Attachments: YARN-6597-YARN-6592.001.patch
>
>
> Each allocation can have a set of allocation tags associated with it.
> For example, an allocation can be marked as hbase, hbase-master, spark, etc.
> These allocation tags are active in the cluster only while that container is 
> active (from the moment it gets allocated until the moment it finishes its 
> execution).
> This JIRA is responsible for storing and updating the active allocation tags 
> in the cluster in the {{PlacementConstraintManager}}.
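
A compact sketch of what the recovery test would assert; the {{AllocationTagsManager}} method names below are approximate to the YARN-6592 branch API:

{code:java}
// Hedged sketch: tags become active when the RMContainer starts (or is
// recovered) and are dropped when it finishes.
AllocationTagsManager atm = rm.getRMContext().getAllocationTagsManager();

// On container allocation/recovery the tags should be registered...
atm.addContainer(node.getNodeID(), container.getContainerId(),
    ImmutableSet.of("hbase", "hbase-master"));
Assert.assertEquals(1, atm.getNodeCardinality(node.getNodeID(),
    app.getApplicationId(), "hbase-master"));

// ...and removed once the container completes.
atm.removeContainer(node.getNodeID(), container.getContainerId(),
    ImmutableSet.of("hbase", "hbase-master"));
Assert.assertEquals(0, atm.getNodeCardinality(node.getNodeID(),
    app.getApplicationId(), "hbase-master"));
{code}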






[jira] [Commented] (YARN-7817) Add Resource reference to RM's NodeInfo object so REST API can get non memory/vcore resource usages.

2018-01-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340614#comment-16340614
 ] 

genericqa commented on YARN-7817:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 5 new + 22 unchanged - 0 fixed = 27 total (was 22) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 59s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 40s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
26s{color} | {color:green} hadoop-yarn-ui in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel |
|   | hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy 
|
\\
\\
|| Subsystem || Report/Notes ||

[jira] [Updated] (YARN-7698) A misleading variable's name in ApplicationAttemptEventDispatcher

2018-01-25 Thread Jinjiang Ling (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinjiang Ling updated YARN-7698:

Attachment: YARN-7698.002.patch

> A misleading variable's name in ApplicationAttemptEventDispatcher
> -
>
> Key: YARN-7698
> URL: https://issues.apache.org/jira/browse/YARN-7698
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Jinjiang Ling
>Assignee: Jinjiang Ling
>Priority: Minor
> Attachments: YARN-7698.001.patch, YARN-7698.002.patch, 
> YARN-7698.002.patch
>
>
> I find there are two variables named "appAttemptId" in 
> ApplicationAttemptEventDispatcher.
> {code:java}
> public static final class ApplicationAttemptEventDispatcher implements
>     EventHandler<RMAppAttemptEvent> {
> 
>   public void handle(RMAppAttemptEvent event) {
>     ApplicationAttemptId appAttemptID = event.getApplicationAttemptId();
>     ApplicationId appAttemptId = appAttemptID.getApplicationId();
> 
>   }
> }
> {code}
> The first one is named "{color:red}appAttemptID{color}", which is the actual 
> attempt id. 
> The other one is named "{color:red}appAttemptId{color}", but I think its 
> correct name should be "appId".
> I'm not sure whether there is any reason to name the application id 
> "appAttemptId", but two "appAttemptId" variables in one method can be 
> misleading, so it's better to rename the second one to "appId".






[jira] [Updated] (YARN-7784) Fix Cluster metrics when placement processor is enabled

2018-01-25 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7784:
--
Fix Version/s: YARN-6592

> Fix Cluster metrics when placement processor is enabled
> ---
>
> Key: YARN-7784
> URL: https://issues.apache.org/jira/browse/YARN-7784
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: metrics, RM
>Reporter: Weiwei Yang
>Assignee: Arun Suresh
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-7784-YARN-6592.001.patch
>
>
> Reproducing steps
>  # Set up a cluster and set 
> {{yarn.resourcemanager.placement-constraints.enabled}} to true
>  # Submit a DS job with a placement constraint, such as {{-placement_spec 
> foo=2,NOTIN,NODE,foo}}
>  # Check cluster metrics from http:///cluster/apps
> While the job is running, {{Containers Running}}, {{Memory Used}} and 
> {{VCores Used}} are not updated (except for the AM); metrics from containers 
> allocated by the PlacementProcessor are not accumulated into the cluster 
> metrics. However, when the job is done, the resources are deducted, so the 
> UI then displays:
>  * Containers Running: -2
>  * Memory Used: -400
>  * VCores Used: -2
> Looks like {{AppSchedulingInfo#updateMetricsForAllocatedContainer}} was not 
> called when allocating a container.
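
For reference, a hedged sketch of the kind of assertion the updated test can make once the fix is in (the container counts and sizes are illustrative only):

{code:java}
// Hedged sketch: after the PlacementProcessor places the containers, the
// cluster/queue metrics should account for them while the job is running.
QueueMetrics metrics = rm.getResourceScheduler().getRootQueueMetrics();
Assert.assertEquals(4, metrics.getAllocatedContainers());
Assert.assertEquals(4 * 1024, metrics.getAllocatedMB());
Assert.assertEquals(4, metrics.getAllocatedVirtualCores());
{code}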






[jira] [Updated] (YARN-7784) Fix Cluster metrics when placement processor is enabled

2018-01-25 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7784:
--
Summary: Fix Cluster metrics when placement processor is enabled  (was: 
Cluster metrics is inaccurate when placement constraint is enabled)

> Fix Cluster metrics when placement processor is enabled
> ---
>
> Key: YARN-7784
> URL: https://issues.apache.org/jira/browse/YARN-7784
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: metrics, RM
>Reporter: Weiwei Yang
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-7784-YARN-6592.001.patch
>
>
> Reproducing steps
>  # Set up a cluster and set 
> {{yarn.resourcemanager.placement-constraints.enabled}} to true
>  # Submit a DS job with a placement constraint, such as {{-placement_spec 
> foo=2,NOTIN,NODE,foo}}
>  # Check cluster metrics from http:///cluster/apps
> While the job is running, {{Containers Running}}, {{Memory Used}} and 
> {{VCores Used}} are not updated (except for the AM); metrics from containers 
> allocated by the PlacementProcessor are not accumulated into the cluster 
> metrics. However, when the job is done, the resources are deducted, so the 
> UI then displays:
>  * Containers Running: -2
>  * Memory Used: -400
>  * VCores Used: -2
> Looks like {{AppSchedulingInfo#updateMetricsForAllocatedContainer}} was not 
> called when allocating a container.






[jira] [Commented] (YARN-7784) Cluster metrics is inaccurate when placement constraint is enabled

2018-01-25 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340589#comment-16340589
 ] 

Weiwei Yang commented on YARN-7784:
---

Sure, looks good. +1. Please remove the unused import while committing it, 
thanks.

> Cluster metrics is inaccurate when placement constraint is enabled
> --
>
> Key: YARN-7784
> URL: https://issues.apache.org/jira/browse/YARN-7784
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: metrics, RM
>Reporter: Weiwei Yang
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-7784-YARN-6592.001.patch
>
>
> Reproducing steps
>  # Set up a cluster and set 
> {{yarn.resourcemanager.placement-constraints.enabled}} to true
>  # Submit a DS job with a placement constraint, such as {{-placement_spec 
> foo=2,NOTIN,NODE,foo}}
>  # Check cluster metrics from http:///cluster/apps
> While the job is running, {{Containers Running}}, {{Memory Used}} and 
> {{VCores Used}} are not updated (except for the AM); metrics from containers 
> allocated by the PlacementProcessor are not accumulated into the cluster 
> metrics. However, when the job is done, the resources are deducted, so the 
> UI then displays:
>  * Containers Running: -2
>  * Memory Used: -400
>  * VCores Used: -2
> Looks like {{AppSchedulingInfo#updateMetricsForAllocatedContainer}} was not 
> called when allocating a container.






[jira] [Commented] (YARN-7784) Cluster metrics is inaccurate when placement constraint is enabled

2018-01-25 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340586#comment-16340586
 ] 

Arun Suresh commented on YARN-7784:
---

Both the failed tests run fine locally for me.
I can remove the unused import when I commit. [~cheersyang], can I get a +1?

> Cluster metrics is inaccurate when placement constraint is enabled
> --
>
> Key: YARN-7784
> URL: https://issues.apache.org/jira/browse/YARN-7784
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: metrics, RM
>Reporter: Weiwei Yang
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-7784-YARN-6592.001.patch
>
>
> Reproducing steps
>  # Set up a cluster and set 
> {{yarn.resourcemanager.placement-constraints.enabled}} to true
>  # Submit a DS job with a placement constraint, such as {{-placement_spec 
> foo=2,NOTIN,NODE,foo}}
>  # Check cluster metrics from http:///cluster/apps
> While the job is running, {{Containers Running}}, {{Memory Used}} and 
> {{VCores Used}} are not updated (except for the AM); metrics from containers 
> allocated by the PlacementProcessor are not accumulated into the cluster 
> metrics. However, when the job is done, the resources are deducted, so the 
> UI then displays:
>  * Containers Running: -2
>  * Memory Used: -400
>  * VCores Used: -2
> Looks like {{AppSchedulingInfo#updateMetricsForAllocatedContainer}} was not 
> called when allocating a container.






[jira] [Commented] (YARN-7784) Cluster metrics is inaccurate when placement constraint is enabled

2018-01-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340581#comment-16340581
 ] 

genericqa commented on YARN-7784:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
24s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 24 unchanged - 0 fixed = 25 total (was 24) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 17s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.constraint.TestPlacementProcessor |
|   | hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7784 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907808/YARN-7784-YARN-6592.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ab6329286d57 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-6592 / 13d37ce |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19487/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
http

[jira] [Commented] (YARN-7817) Add Resource reference to RM's NodeInfo object so REST API can get non memory/vcore resource usages.

2018-01-25 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340579#comment-16340579
 ] 

Bibin A Chundatt commented on YARN-7817:


[~sunilg]
Should we mark the {{ResourceInformation}} class as public? Since the variable 
change will affect REST compatibility from here on, could you share your 
thoughts?

Apart from the above comment, the latest patch looks good to me too.

> Add Resource reference to RM's NodeInfo object so REST API can get non 
> memory/vcore resource usages.
> 
>
> Key: YARN-7817
> URL: https://issues.apache.org/jira/browse/YARN-7817
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sumana Sathish
>Assignee: Sunil G
>Priority: Major
> Attachments: Screen Shot 2018-01-25 at 11.59.31 PM.png, 
> YARN-7817.001.patch, YARN-7817.002.patch, YARN-7817.003.patch, 
> YARN-7817.004.patch, YARN-7817.005.patch
>
>







[jira] [Updated] (YARN-6648) [GPG] Add SubClusterCleaner in Global Policy Generator

2018-01-25 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-6648:
---
Attachment: YARN-6648-YARN-7402.v8.patch

> [GPG] Add SubClusterCleaner in Global Policy Generator
> --
>
> Key: YARN-6648
> URL: https://issues.apache.org/jira/browse/YARN-6648
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
>  Labels: federation, gpg
> Attachments: YARN-6648-YARN-2915.v1.patch, 
> YARN-6648-YARN-7402.v2.patch, YARN-6648-YARN-7402.v3.patch, 
> YARN-6648-YARN-7402.v4.patch, YARN-6648-YARN-7402.v5.patch, 
> YARN-6648-YARN-7402.v6.patch, YARN-6648-YARN-7402.v7.patch, 
> YARN-6648-YARN-7402.v8.patch
>
>







[jira] [Commented] (YARN-7817) Add Resource reference to RM's NodeInfo object so REST API can get non memory/vcore resource usages.

2018-01-25 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340563#comment-16340563
 ] 

Wangda Tan commented on YARN-7817:
--

Reassigned to [~sunilg] since Sunil has done most of the work. :)

Latest patch looks good, +1. Pending Jenkins.

> Add Resource reference to RM's NodeInfo object so REST API can get non 
> memory/vcore resource usages.
> 
>
> Key: YARN-7817
> URL: https://issues.apache.org/jira/browse/YARN-7817
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sumana Sathish
>Assignee: Sunil G
>Priority: Major
> Attachments: Screen Shot 2018-01-25 at 11.59.31 PM.png, 
> YARN-7817.001.patch, YARN-7817.002.patch, YARN-7817.003.patch, 
> YARN-7817.004.patch, YARN-7817.005.patch
>
>







[jira] [Assigned] (YARN-7817) Add Resource reference to RM's NodeInfo object so REST API can get non memory/vcore resource usages.

2018-01-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reassigned YARN-7817:


Assignee: Sunil G  (was: Wangda Tan)

> Add Resource reference to RM's NodeInfo object so REST API can get non 
> memory/vcore resource usages.
> 
>
> Key: YARN-7817
> URL: https://issues.apache.org/jira/browse/YARN-7817
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sumana Sathish
>Assignee: Sunil G
>Priority: Major
> Attachments: Screen Shot 2018-01-25 at 11.59.31 PM.png, 
> YARN-7817.001.patch, YARN-7817.002.patch, YARN-7817.003.patch, 
> YARN-7817.004.patch, YARN-7817.005.patch
>
>







[jira] [Commented] (YARN-7817) Add Resource reference to RM's NodeInfo object so REST API can get non memory/vcore resource usages.

2018-01-25 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340548#comment-16340548
 ] 

Sunil G commented on YARN-7817:
---

Fixed the UI. cc [~leftnoteasy]

> Add Resource reference to RM's NodeInfo object so REST API can get non 
> memory/vcore resource usages.
> 
>
> Key: YARN-7817
> URL: https://issues.apache.org/jira/browse/YARN-7817
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sumana Sathish
>Assignee: Wangda Tan
>Priority: Major
> Attachments: Screen Shot 2018-01-25 at 11.59.31 PM.png, 
> YARN-7817.001.patch, YARN-7817.002.patch, YARN-7817.003.patch, 
> YARN-7817.004.patch, YARN-7817.005.patch
>
>







[jira] [Updated] (YARN-7817) Add Resource reference to RM's NodeInfo object so REST API can get non memory/vcore resource usages.

2018-01-25 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7817:
--
Attachment: YARN-7817.005.patch

> Add Resource reference to RM's NodeInfo object so REST API can get non 
> memory/vcore resource usages.
> 
>
> Key: YARN-7817
> URL: https://issues.apache.org/jira/browse/YARN-7817
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sumana Sathish
>Assignee: Wangda Tan
>Priority: Major
> Attachments: Screen Shot 2018-01-25 at 11.59.31 PM.png, 
> YARN-7817.001.patch, YARN-7817.002.patch, YARN-7817.003.patch, 
> YARN-7817.004.patch, YARN-7817.005.patch
>
>







[jira] [Commented] (YARN-7064) Use cgroup to get container resource utilization

2018-01-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340542#comment-16340542
 ] 

genericqa commented on YARN-7064:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
5s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
48s{color} | {color:green} root: The patch generated 0 new + 266 unchanged - 4 
fixed = 266 total (was 270) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 25s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
10s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
3s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m  3s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage |
|   | 
hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestCGroupsResourceCalculator
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05

[jira] [Assigned] (YARN-7784) Cluster metrics is inaccurate when placement constraint is enabled

2018-01-25 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned YARN-7784:
-

Assignee: Arun Suresh

> Cluster metrics is inaccurate when placement constraint is enabled
> --
>
> Key: YARN-7784
> URL: https://issues.apache.org/jira/browse/YARN-7784
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: metrics, RM
>Reporter: Weiwei Yang
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-7784-YARN-6592.001.patch
>
>
> Reproducing steps
>  # Set up a cluster and set 
> {{yarn.resourcemanager.placement-constraints.enabled}} to true
>  # Submit a DS job with a placement constraint, such as {{-placement_spec 
> foo=2,NOTIN,NODE,foo}}
>  # Check cluster metrics from http:///cluster/apps
> While the job is running, {{Containers Running}}, {{Memory Used}} and 
> {{VCores Used}} are not updated (except for the AM); metrics from containers 
> allocated by the PlacementProcessor are not accumulated into the cluster 
> metrics. However, when the job is done, the resources are deducted, so the 
> UI then displays:
>  * Containers Running: -2
>  * Memory Used: -400
>  * VCores Used: -2
> Looks like {{AppSchedulingInfo#updateMetricsForAllocatedContainer}} was not 
> called when allocating a container.






[jira] [Commented] (YARN-7784) Cluster metrics is inaccurate when placement constraint is enabled

2018-01-25 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340512#comment-16340512
 ] 

Weiwei Yang commented on YARN-7784:
---

Thanks [~asuresh]. Looks good to me; one minor thing: 
{{TestPlacementProcessor}} seems to have an unused import, 
{{AbstractYarnScheduler}}. 

> Cluster metrics is inaccurate when placement constraint is enabled
> --
>
> Key: YARN-7784
> URL: https://issues.apache.org/jira/browse/YARN-7784
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: metrics, RM
>Reporter: Weiwei Yang
>Priority: Major
> Attachments: YARN-7784-YARN-6592.001.patch
>
>
> Reproducing steps
>  # Set up a cluster and set 
> {{yarn.resourcemanager.placement-constraints.enabled}} to true
>  # Submit a DS job with a placement constraint, such as {{-placement_spec 
> foo=2,NOTIN,NODE,foo}}
>  # Check cluster metrics from http:///cluster/apps
> While the job is running, {{Containers Running}}, {{Memory Used}} and 
> {{VCores Used}} are not updated (except for the AM); metrics from containers 
> allocated by the PlacementProcessor are not accumulated into the cluster 
> metrics. However, when the job is done, the resources are deducted, so the 
> UI then displays:
>  * Containers Running: -2
>  * Memory Used: -400
>  * VCores Used: -2
> Looks like {{AppSchedulingInfo#updateMetricsForAllocatedContainer}} was not 
> called when allocating a container.






[jira] [Commented] (YARN-7784) Cluster metrics is inaccurate when placement constraint is enabled

2018-01-25 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340507#comment-16340507
 ] 

Arun Suresh commented on YARN-7784:
---

Updated patch - and updated the tests to verify metrics.
[~cheersyang], do take a look.

> Cluster metrics is inaccurate when placement constraint is enabled
> --
>
> Key: YARN-7784
> URL: https://issues.apache.org/jira/browse/YARN-7784
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: metrics, RM
>Reporter: Weiwei Yang
>Priority: Major
> Attachments: YARN-7784-YARN-6592.001.patch
>
>
> Reproducing steps
>  # Set up a cluster and set 
> {{yarn.resourcemanager.placement-constraints.enabled}} to true
>  # Submit a DS job with a placement constraint, such as {{-placement_spec 
> foo=2,NOTIN,NODE,foo}}
>  # Check cluster metrics from http:///cluster/apps
> While the job is running, {{Containers Running}}, {{Memory Used}} and 
> {{VCores Used}} are not updated (except for the AM); metrics from containers 
> allocated by the PlacementProcessor are not accumulated into the cluster 
> metrics. However, when the job is done, the resources are deducted, so the 
> UI then displays:
>  * Containers Running: -2
>  * Memory Used: -400
>  * VCores Used: -2
> Looks like {{AppSchedulingInfo#updateMetricsForAllocatedContainer}} was not 
> called when allocating a container.






[jira] [Commented] (YARN-7780) Documentation for Placement Constraints

2018-01-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340506#comment-16340506
 ] 

genericqa commented on YARN-7780:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
44s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
30s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
7s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-6592 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 212 unchanged - 0 fixed = 213 total (was 212) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Ser

[jira] [Updated] (YARN-7784) Cluster metrics is inaccurate when placement constraint is enabled

2018-01-25 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7784:
--
Attachment: YARN-7784-YARN-6592.001.patch

> Cluster metrics is inaccurate when placement constraint is enabled
> --
>
> Key: YARN-7784
> URL: https://issues.apache.org/jira/browse/YARN-7784
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: metrics, RM
>Reporter: Weiwei Yang
>Priority: Major
> Attachments: YARN-7784-YARN-6592.001.patch
>
>
> Reproducing steps
>  # Set up a cluster and set 
> {{yarn.resourcemanager.placement-constraints.enabled}} to true
>  # Submit a DS job with a placement constraint, such as {{-placement_spec 
> foo=2,NOTIN,NODE,foo}}
>  # Check cluster metrics from http:///cluster/apps
> While the job is running, {{Containers Running}}, {{Memory Used}} and 
> {{VCores Used}} are not updated (except for the AM); metrics from containers 
> allocated by the PlacementProcessor are not accumulated into the cluster 
> metrics. However, when the job is done, the resources are deducted, so the 
> UI then displays:
>  * Containers Running: -2
>  * Memory Used: -400
>  * VCores Used: -2
> Looks like {{AppSchedulingInfo#updateMetricsForAllocatedContainer}} was not 
> called when allocating a container.






[jira] [Commented] (YARN-7732) Support Pluggable AM Simulator

2018-01-25 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340505#comment-16340505
 ] 

Wangda Tan commented on YARN-7732:
--

Thanks [~youchen] for working on this JIRA, could you update the 
title/description to better describe what the patch does? 

I see the existing AMSimulator is already pluggable: 
{code} 
// <AMType, Class> map
for (Map.Entry e : tempConf) {
  String key = e.getKey().toString();
  if (key.startsWith(SLSConfiguration.AM_TYPE_PREFIX)) {
    String amType = key.substring(SLSConfiguration.AM_TYPE_PREFIX.length());
    amClassMap.put(amType, Class.forName(tempConf.get(key)));
  }
}
{code} 

I haven't reviewed the details of the patch yet; did you mean to add the 
params to the AMSimulator init method?

Also, this patch updated SynthJob and included the new StreamAMSimulator, could 
you elaborate these changes as well? 
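
For context, a new simulator type can be wired in purely through configuration; a hedged example follows, where the literal {{yarn.sls.am.type.}} prefix is assumed to be what {{SLSConfiguration.AM_TYPE_PREFIX}} resolves to and the StreamAMSimulator package name is a guess:

{code:java}
// Hedged example: registering AM simulator implementations by job type.
Configuration conf = new Configuration(false);
conf.set("yarn.sls.am.type.mapreduce",
    "org.apache.hadoop.yarn.sls.appmaster.MRAMSimulator");
// A new job type only needs its own AMSimulator subclass on the classpath:
conf.set("yarn.sls.am.type.stream",
    "org.apache.hadoop.yarn.sls.appmaster.StreamAMSimulator");
{code}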

> Support Pluggable AM Simulator
> --
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7732-YARN-7798.01.patch, 
> YARN-7732-YARN-7798.02.patch, YARN-7732.01.patch, YARN-7732.02.patch, 
> YARN-7732.03.patch
>
>
> Extract the MapReduce-specific setup in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators.






[jira] [Commented] (YARN-7626) Allow regular expression matching in container-executor.cfg for devices and named docker volumes mount

2018-01-25 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340496#comment-16340496
 ] 

Wangda Tan commented on YARN-7626:
--

Thanks [~Zian Chen] for working on this issue and for testing it. I just updated the 
title a bit to better reflect what the patch does.

[~ebadger]/[~miklos.szeg...@cloudera.com]/[~shaneku...@gmail.com], could you 
help to review the patch?



> Allow regular expression matching in container-executor.cfg for devices and 
> named docker volumes mount
> --
>
> Key: YARN-7626
> URL: https://issues.apache.org/jira/browse/YARN-7626
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-7626.001.patch, YARN-7626.002.patch, 
> YARN-7626.003.patch, YARN-7626.004.patch, YARN-7626.005.patch
>
>
> Currently, when we configure some of the GPU device related fields (like ) in 
> container-executor.cfg, these fields are generated based on different driver 
> versions or GPU device names. We want to enable regular expression matching 
> so that users don't need to manually set up these fields when configuring 
> container-executor.cfg.
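
As a rough illustration of the matching idea only (the real check lives in the native 
container-executor, and the pattern and device paths below are made up), a whitelist 
entry expressed as a regular expression could cover a whole family of devices:

{code}
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class DeviceWhitelistSketch {
  public static void main(String[] args) {
    // Hypothetical whitelist entry: one pattern instead of enumerating
    // /dev/nvidia0, /dev/nvidia1, ... for every driver/device combination.
    Pattern allowedDevices = Pattern.compile("/dev/nvidia[0-9]+");

    List<String> requested = Arrays.asList("/dev/nvidia0", "/dev/nvidia3", "/dev/sda");
    for (String dev : requested) {
      boolean allowed = allowedDevices.matcher(dev).matches();
      System.out.println(dev + " -> " + (allowed ? "allowed" : "rejected"));
    }
  }
}
{code}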



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7626) Allow regular expression matching in container-executor.cfg for devices and named docker volumes mount

2018-01-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7626:
-
Summary: Allow regular expression matching in container-executor.cfg for 
devices and named docker volumes mount  (was: allow regular expression matching 
in container-executor.cfg for devices and volumes)

> Allow regular expression matching in container-executor.cfg for devices and 
> named docker volumes mount
> --
>
> Key: YARN-7626
> URL: https://issues.apache.org/jira/browse/YARN-7626
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-7626.001.patch, YARN-7626.002.patch, 
> YARN-7626.003.patch, YARN-7626.004.patch, YARN-7626.005.patch
>
>
> Currently, when we configure some of the GPU device related fields (like ) in 
> container-executor.cfg, these fields are generated based on different driver 
> versions or GPU device names. We want to enable regular expression matching 
> so that users don't need to manually set up these fields when configuring 
> container-executor.cfg.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7784) Cluster metrics is inaccurate when placement constraint is enabled

2018-01-25 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340476#comment-16340476
 ] 

Weiwei Yang commented on YARN-7784:
---

Hi [~asuresh]

I think you can split that fix out of YARN-7819 and post it here; as the change 
is straightforward, we should be able to get it done here quickly. The YARN-7819 
review may need some more time. Does that make sense?

> Cluster metrics is inaccurate when placement constraint is enabled
> --
>
> Key: YARN-7784
> URL: https://issues.apache.org/jira/browse/YARN-7784
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: metrics, RM
>Reporter: Weiwei Yang
>Priority: Major
>
> Reproducing steps
>  # Set up a cluster and set 
> {{yarn.resourcemanager.placement-constraints.enabled}} to true
>  # Submit a DS job with a placement constraint, such as {{-placement_spec 
> foo=2,NOTIN,NODE,foo}}
>  # Check cluster metrics from http:///cluster/apps
> While the job is running, {{Containers Running}}, {{Memory Used}} and {{VCore 
> Used}} were not updated (except for the AM); metrics from containers allocated by 
> the PlacementProcessor were not accumulated into the cluster metrics. However, when 
> the job finished, the resources were deducted, so the UI displayed the following:
>  * Containers Running: -2
>  * Memory Used: -400
>  * VCores Used: -2
> Looks like {{AppSchedulingInfo#updateMetricsForAllocatedContainer}} was not 
> called when allocating a container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7784) Cluster metrics is inaccurate when placement constraint is enabled

2018-01-25 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340465#comment-16340465
 ] 

Arun Suresh commented on YARN-7784:
---

[~cheersyang], you are right.
I've actually included the fix as part of YARN-7819. Check 
[this|https://issues.apache.org/jira/secure/attachment/12907783/YARN-7819-YARN-6592.001.patch#file-1]
 change.
Maybe I should pull it out and post it here. Thoughts?


> Cluster metrics is inaccurate when placement constraint is enabled
> --
>
> Key: YARN-7784
> URL: https://issues.apache.org/jira/browse/YARN-7784
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: metrics, RM
>Reporter: Weiwei Yang
>Priority: Major
>
> Reproducing steps
>  # Set up a cluster and set 
> {{yarn.resourcemanager.placement-constraints.enabled}} to true
>  # Submit a DS job with a placement constraint, such as {{-placement_spec 
> foo=2,NOTIN,NODE,foo}}
>  # Check cluster metrics from http:///cluster/apps
> While the job is running, {{Containers Running}}, {{Memory Used}} and {{VCore 
> Used}} were not updated (except for the AM); metrics from containers allocated by 
> the PlacementProcessor were not accumulated into the cluster metrics. However, when 
> the job finished, the resources were deducted, so the UI displayed the following:
>  * Containers Running: -2
>  * Memory Used: -400
>  * VCores Used: -2
> Looks like {{AppSchedulingInfo#updateMetricsForAllocatedContainer}} was not 
> called when allocating a container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7784) Cluster metrics is inaccurate when placement constraint is enabled

2018-01-25 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340454#comment-16340454
 ] 

Weiwei Yang commented on YARN-7784:
---

Hi [~asuresh]

Since YARN-7670, FiCaSchedulerApp#accept accepts a boolean 
{{checkPending}}. When placement-constraints.enabled is true, this value is 
false, which causes {{appSchedulingInfo.allocate}} to be skipped, so the 
metrics update there is also skipped, causing this inaccurate-metrics problem. 
What was the intention of this change? As long as a container is allocated, we 
need to keep the metrics updated no matter which approach it takes. Please suggest 
how to fix this.

Thanks
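
To make the symptom concrete, here is a tiny self-contained sketch (not RM code; the 
class and method names are invented) of why incrementing the running-container count 
on only one allocation path, while always decrementing on release, produces the 
negative numbers quoted in the description:

{code}
public class ClusterMetricsSketch {
  private int containersRunning;

  // The increment must happen on every allocation path; skipping it on the
  // constraint-based path (as described above) is the bug.
  void onAllocate(boolean viaPlacementProcessor) {
    if (!viaPlacementProcessor) {   // buggy guard: constraint path skipped
      containersRunning++;
    }
  }

  void onRelease() {
    containersRunning--;            // release always decrements
  }

  public static void main(String[] args) {
    ClusterMetricsSketch m = new ClusterMetricsSketch();
    // Two containers allocated by the PlacementProcessor, then released:
    m.onAllocate(true);
    m.onAllocate(true);
    m.onRelease();
    m.onRelease();
    System.out.println("Containers Running: " + m.containersRunning); // prints -2
  }
}
{code}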

> Cluster metrics is inaccurate when placement constraint is enabled
> --
>
> Key: YARN-7784
> URL: https://issues.apache.org/jira/browse/YARN-7784
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: metrics, RM
>Reporter: Weiwei Yang
>Priority: Major
>
> Reproducing steps
>  # Set up a cluster and set 
> {{yarn.resourcemanager.placement-constraints.enabled}} to true
>  # Submit a DS job with a placement constraint, such as {{-placement_spec 
> foo=2,NOTIN,NODE,foo}}
>  # Check cluster metrics from http:///cluster/apps
> While the job is running, {{Containers Running}}, {{Memory Used}} and {{VCore 
> Used}} were not updated (except for the AM); metrics from containers allocated by 
> the PlacementProcessor were not accumulated into the cluster metrics. However, when 
> the job finished, the resources were deducted, so the UI displayed the following:
>  * Containers Running: -2
>  * Memory Used: -400
>  * VCores Used: -2
> Looks like {{AppSchedulingInfo#updateMetricsForAllocatedContainer}} was not 
> called when allocating a container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7780) Documentation for Placement Constraints

2018-01-25 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340453#comment-16340453
 ] 

Konstantinos Karanasos commented on YARN-7780:
--

Thanks for the review, [~cheersyang]. I addressed your comments and uploaded a 
new version of the patch.

> Documentation for Placement Constraints
> ---
>
> Key: YARN-7780
> URL: https://issues.apache.org/jira/browse/YARN-7780
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
>Priority: Major
> Attachments: YARN-7780-YARN-6592.001.patch, 
> YARN-7780-YARN-6592.002.patch
>
>
> JIRA to track documentation for the feature.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7780) Documentation for Placement Constraints

2018-01-25 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-7780:
-
Attachment: YARN-7780-YARN-6592.002.patch

> Documentation for Placement Constraints
> ---
>
> Key: YARN-7780
> URL: https://issues.apache.org/jira/browse/YARN-7780
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
>Priority: Major
> Attachments: YARN-7780-YARN-6592.001.patch, 
> YARN-7780-YARN-6592.002.patch
>
>
> JIRA to track documentation for the feature.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7814) Remove automatic mounting of the cgroups root directory into Docker containers

2018-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340452#comment-16340452
 ] 

Hudson commented on YARN-7814:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13562 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13562/])
YARN-7814. Remove automatic mounting of the cgroups root directory into 
(szegedim: rev 2e5865606b7701ee79d0d297238ab58a07a9f61f)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/TestDockerContainerRuntime.java


> Remove automatic mounting of the cgroups root directory into Docker containers
> --
>
> Key: YARN-7814
> URL: https://issues.apache.org/jira/browse/YARN-7814
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7814.001.patch, YARN-7814.002.patch
>
>
> Currently, all Docker containers launched by {{DockerLinuxContainerRuntime}} 
> get /sys/fs/cgroup automatically mounted. Now that user supplied mounts 
> (YARN-5534) are in, containers that require this mount can request it (with a 
> properly configured mount whitelist).
> I propose we remove the automatic mounting of /sys/fs/cgroup into Docker 
> containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7780) Documentation for Placement Constraints

2018-01-25 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340439#comment-16340439
 ] 

Weiwei Yang commented on YARN-7780:
---

Hi [~kkaranasos]

The doc looks pretty illustrative; just two minor comments:
 # Can we remove the last two "To be added" parts in the patch? We can add them 
in the next few patches, which is probably better.
 # Do we need to mention in the doc that when constraints are specified at multiple 
levels, the lower-level ones override the higher-level ones? I am OK with adding 
that when YARN-7778 is done.

I am +1 on the patch and totally fine with getting 1 and 2 addressed after the merge.

Thanks

> Documentation for Placement Constraints
> ---
>
> Key: YARN-7780
> URL: https://issues.apache.org/jira/browse/YARN-7780
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
>Priority: Major
> Attachments: YARN-7780-YARN-6592.001.patch
>
>
> JIRA to track documentation for the feature.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7064) Use cgroup to get container resource utilization

2018-01-25 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7064:
-
Attachment: YARN-7064.013.patch

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-7064
> URL: https://issues.apache.org/jira/browse/YARN-7064
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-7064.000.patch, YARN-7064.001.patch, 
> YARN-7064.002.patch, YARN-7064.003.patch, YARN-7064.004.patch, 
> YARN-7064.005.patch, YARN-7064.007.patch, YARN-7064.008.patch, 
> YARN-7064.009.patch, YARN-7064.010.patch, YARN-7064.011.patch, 
> YARN-7064.012.patch, YARN-7064.013.patch
>
>
> This is an addendum to YARN-6668. What happens is that that jira always wants 
> to rebase patches against YARN-1011 instead of trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7797) Docker host network can not obtain IP address for RegistryDNS

2018-01-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340433#comment-16340433
 ] 

genericqa commented on YARN-7797:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 58s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
57s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7797 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907797/YARN-7797.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a58d8e9702de 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ff8378e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19484/testReport/ |
| Max. process+thread count | 398 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19484/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT 

[jira] [Commented] (YARN-7814) Remove automatic mounting of the cgroups root directory into Docker containers

2018-01-25 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340423#comment-16340423
 ] 

Miklos Szegedi commented on YARN-7814:
--

+1 LGTM. I will commit this shortly.

> Remove automatic mounting of the cgroups root directory into Docker containers
> --
>
> Key: YARN-7814
> URL: https://issues.apache.org/jira/browse/YARN-7814
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
> Attachments: YARN-7814.001.patch, YARN-7814.002.patch
>
>
> Currently, all Docker containers launched by {{DockerLinuxContainerRuntime}} 
> get /sys/fs/cgroup automatically mounted. Now that user supplied mounts 
> (YARN-5534) are in, containers that require this mount can request it (with a 
> properly configured mount whitelist).
> I propose we remove the automatic mounting of /sys/fs/cgroup into Docker 
> containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7742) [UI2] Duplicated containers are rendered per attempt

2018-01-25 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340420#comment-16340420
 ] 

Sunil G commented on YARN-7742:
---

Committing shortly.

> [UI2] Duplicated containers are rendered per attempt
> 
>
> Key: YARN-7742
> URL: https://issues.apache.org/jira/browse/YARN-7742
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Vasudevan Skm
>Priority: Major
> Attachments: Screen Shot 2018-01-12 at 5.10.48 PM.png, YARN-7742 
> .001.patch, YARN-7742.002.patch, YARN-7742.003.patch
>
>
> In UI2, containers are rendered twice with different start and end time.
> Attached the screen shot of UI2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7797) Docker host network can not obtain IP address for RegistryDNS

2018-01-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340398#comment-16340398
 ] 

genericqa commented on YARN-7797:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
35s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7797 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907789/YARN-7797.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 77c209ae1830 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ff8378e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19483/testReport/ |
| Max. process+thread count | 302 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19483/console |
| Powered by | Apache Yetus 0.8.0-SNAPSH

[jira] [Commented] (YARN-2185) Use pipes when localizing archives

2018-01-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340393#comment-16340393
 ] 

genericqa commented on YARN-2185:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} root: The patch generated 0 new + 151 unchanged - 8 
fixed = 151 total (was 159) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
38s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
2s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-2185 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907788/YARN-2185.012.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4adb008b94a1 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 16be42d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/19482/artifact/out/

[jira] [Commented] (YARN-7819) Allow PlacementProcessor to be used with the FairScheduler

2018-01-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340378#comment-16340378
 ] 

genericqa commented on YARN-7819:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 1s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
35s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 3 new + 63 unchanged - 0 fixed = 66 total (was 63) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
18s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 61m 
56s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Unchecked/unconfirmed cast from 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt
 to org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt 
in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.attemptAllocationOnNode(SchedulerApplicationAttempt,
 SchedulingRequest, SchedulerNode)  At 
FairScheduler.java:org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt
 in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.attemptAllocationOnNode(SchedulerApplicationAttempt,
 SchedulingRequest, SchedulerNode)  At FairScheduler.java:[line 1882] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7819 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure

[jira] [Commented] (YARN-7797) Docker host network can not obtain IP address for RegistryDNS

2018-01-25 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340362#comment-16340362
 ] 

Eric Yang commented on YARN-7797:
-

[~shaneku...@gmail.com] Thank you for the review. Patch 005 includes a 
null-pointer handling improvement.

> Docker host network can not obtain IP address for RegistryDNS
> -
>
> Key: YARN-7797
> URL: https://issues.apache.org/jira/browse/YARN-7797
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7797.001.patch, YARN-7797.002.patch, 
> YARN-7797.003.patch, YARN-7797.004.patch, YARN-7797.005.patch
>
>
> When docker is configured to use the host network, the docker inspect command does 
> not return the IP address of the container. This prevents IP information from being 
> collected for RegistryDNS to register a hostname entry for the docker 
> container.
> The proposed solution is to intelligently detect the docker network 
> deployment method and report back the host IP address for RegistryDNS.
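
A rough sketch of the detection idea (assuming the docker CLI is on the PATH; the 
container id is a placeholder, and the actual logic in the patch may differ):

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class HostNetworkIpSketch {
  public static void main(String[] args) throws Exception {
    String containerId = args.length > 0 ? args[0] : "container_e01_0001"; // placeholder
    // Ask docker which network mode the container uses.
    Process p = new ProcessBuilder("docker", "inspect",
        "--format", "{{.HostConfig.NetworkMode}}", containerId).start();
    String mode;
    try (BufferedReader r = new BufferedReader(
        new InputStreamReader(p.getInputStream(), StandardCharsets.UTF_8))) {
      mode = r.readLine();
    }
    p.waitFor();

    // With host networking, docker inspect reports no container IP,
    // so fall back to the address of the host itself.
    String ip = "host".equals(mode)
        ? InetAddress.getLocalHost().getHostAddress()
        : "(use the container IP from docker inspect .NetworkSettings)";
    System.out.println("IP to publish for RegistryDNS: " + ip);
  }
}
{code}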



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7797) Docker host network can not obtain IP address for RegistryDNS

2018-01-25 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7797:

Attachment: YARN-7797.005.patch

> Docker host network can not obtain IP address for RegistryDNS
> -
>
> Key: YARN-7797
> URL: https://issues.apache.org/jira/browse/YARN-7797
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7797.001.patch, YARN-7797.002.patch, 
> YARN-7797.003.patch, YARN-7797.004.patch, YARN-7797.005.patch
>
>
> When docker is configured to use the host network, the docker inspect command does 
> not return the IP address of the container. This prevents IP information from being 
> collected for RegistryDNS to register a hostname entry for the docker 
> container.
> The proposed solution is to intelligently detect the docker network 
> deployment method and report back the host IP address for RegistryDNS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7780) Documentation for Placement Constraints

2018-01-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340351#comment-16340351
 ] 

genericqa commented on YARN-7780:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 0s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
26s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
21s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-6592 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 57s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 212 unchanged - 0 fixed = 213 total (was 212) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 50s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17

[jira] [Commented] (YARN-7626) allow regular expression matching in container-executor.cfg for devices and volumes

2018-01-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340343#comment-16340343
 ] 

genericqa commented on YARN-7626:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 23m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
26m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m  2s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.TestContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7626 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907780/YARN-7626.005.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 27841858be2a 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 16be42d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19479/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19479/testReport/ |
| Max. process+thread count | 408 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19479/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> allow regular expression matching in container-executor.cfg for devices and 
> volumes
> ---
>
> Key: YARN-7626
> URL: https://issues.apache.org/jira/browse/YARN-7626
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-7626.001.patch, YARN-7626

[jira] [Commented] (YARN-7102) NM heartbeat stuck when responseId overflows MAX_INT

2018-01-25 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340337#comment-16340337
 ] 

Botong Huang commented on YARN-7102:


Awesome, thanks [~jlowe] for the review and helpful feedback! 

> NM heartbeat stuck when responseId overflows MAX_INT
> 
>
> Key: YARN-7102
> URL: https://issues.apache.org/jira/browse/YARN-7102
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Critical
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4
>
> Attachments: YARN-7102-branch-2.8.v10.patch, 
> YARN-7102-branch-2.8.v11.patch, YARN-7102-branch-2.8.v14.patch, 
> YARN-7102-branch-2.8.v14.patch, YARN-7102-branch-2.8.v17.patch, 
> YARN-7102-branch-2.8.v9.patch, YARN-7102-branch-2.v14.patch, 
> YARN-7102-branch-2.v14.patch, YARN-7102-branch-2.v17.patch, 
> YARN-7102-branch-2.v17.patch, YARN-7102-branch-2.v9.patch, 
> YARN-7102-branch-2.v9.patch, YARN-7102-branch-2.v9.patch, YARN-7102.v1.patch, 
> YARN-7102.v12.patch, YARN-7102.v13.patch, YARN-7102.v14.patch, 
> YARN-7102.v15.patch, YARN-7102.v16.patch, YARN-7102.v17.patch, 
> YARN-7102.v17.patch, YARN-7102.v17.patch, YARN-7102.v2.patch, 
> YARN-7102.v3.patch, YARN-7102.v4.patch, YARN-7102.v5.patch, 
> YARN-7102.v6.patch, YARN-7102.v7.patch, YARN-7102.v8.patch, YARN-7102.v9.patch
>
>
> ResponseId overflow problem in the NM-RM heartbeat. This is the same as the AM-RM 
> heartbeat issue in YARN-6640; please refer to YARN-6640 for details. 
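
To make the failure mode concrete: naively computing {{responseId + 1}} at 
{{Integer.MAX_VALUE}} overflows to a negative number, which breaks the heartbeat's 
id handling. A minimal, illustrative sketch of an overflow-safe increment (not 
necessarily the exact expression used in the patch) is:

{code}
public class ResponseIdSketch {
  // Advance the heartbeat responseId without ever going negative:
  // Integer.MAX_VALUE wraps back to 0 instead of overflowing to MIN_VALUE.
  static int nextResponseId(int responseId) {
    return (responseId + 1) & Integer.MAX_VALUE;
  }

  public static void main(String[] args) {
    System.out.println(nextResponseId(5));                  // 6
    System.out.println(nextResponseId(Integer.MAX_VALUE));  // 0, not -2147483648
  }
}
{code}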



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7102) NM heartbeat stuck when responseId overflows MAX_INT

2018-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340330#comment-16340330
 ] 

Hudson commented on YARN-7102:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13560 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13560/])
YARN-7102. NM heartbeat stuck when responseId overflows MAX_INT. (jlowe: rev 
ff8378eb1b960c72d18a984c7e5d145b407ca11a)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/logaggregationstatus/TestRMAppLogAggregationStatus.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeStatusEvent.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceTrackerService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMNodeTransitions.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNM.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NMSimulator.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNodes.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceTrackerService.java


> NM heartbeat stuck when responseId overflows MAX_INT
> 
>
> Key: YARN-7102
> URL: https://issues.apache.org/jira/browse/YARN-7102
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Critical
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4
>
> Attachments: YARN-7102-branch-2.8.v10.patch, 
> YARN-7102-branch-2.8.v11.patch, YARN-7102-branch-2.8.v14.patch, 
> YARN-7102-branch-2.8.v14.patch, YARN-7102-branch-2.8.v17.patch, 
> YARN-7102-branch-2.8.v9.patch, YARN-7102-branch-2.v14.patch, 
> YARN-7102-branch-2.v14.patch, YARN-7102-branch-2.v17.patch, 
> YARN-7102-branch-2.v17.patch, YARN-7102-branch-2.v9.patch, 
> YARN-7102-branch-2.v9.patch, YARN-7102-branch-2.v9.patch, YARN-7102.v1.patch, 
> YARN-7102.v12.patch, YARN-7102.v13.patch, YARN-7102.v14.patch, 
> YARN-7102.v15.patch, YARN-7102.v16.patch, YARN-7102.v17.patch, 
> YARN-7102.v17.patch, YARN-7102.v17.patch, YARN-7102.v2.patch, 
> YARN-7102.v3.patch, YARN-7102.v4.patch, YARN-7102.v5.patch, 
> YARN-7102.v6.patch, YARN-7102.v7.patch, YARN-7102.v8.patch, YARN-7102.v9.patch
>
>
> ResponseId overflow problem in the NM-RM heartbeat. This is the same as the AM-RM 
> heartbeat issue in YARN-6640; please refer to YARN-6640 for details. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7797) Docker host network can not obtain IP address for RegistryDNS

2018-01-25 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7797:

Attachment: YARN-7797.004.patch

> Docker host network can not obtain IP address for RegistryDNS
> -
>
> Key: YARN-7797
> URL: https://issues.apache.org/jira/browse/YARN-7797
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7797.001.patch, YARN-7797.002.patch, 
> YARN-7797.003.patch, YARN-7797.004.patch
>
>
> When docker is configured to use the host network, the docker inspect command 
> does not return the IP address of the container.  This prevents IP information 
> from being collected for RegistryDNS to register a hostname entry for the 
> docker container.
> The proposed solution is to intelligently detect the docker network 
> deployment method and report back the host IP address for RegistryDNS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3895) Support ACLs in ATSv2

2018-01-25 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340325#comment-16340325
 ] 

Vrushali C commented on YARN-3895:
--

Hi [~jlowe]

We discussed this among [~rohithsharma], [~lohit], [~varun_saxena] and me. It 
basically comes down to whether we want to take a performance hit at read time 
or at write time. Given that writing out extra details at write time seems like 
the worse option when running at scale, we thought of taking the approach 
which may be a slight hit on the read path but has some optimizations.

Here is our proposal. 

Extremely short summary:

We will go with the domain concept that comes with ATSv1. So each entity is 
written with a domain id. At read time, the check is made to ensure the 
querying user has permissions to read the data based on domain id.

  

Design Details:

Domain ID storage:

- domains are published by the AM, just as they are done in ATSv1.

- subsequent entity writes include the domain id per write, same as ATSv1.

- domain ids are written to two tables in hbase.

- one table is user_domain table and the other is groups_domain table.

- the user_domain table has the rowkey as cluster id + username and a column 
whose value is the list of domain ids for that user. 

- Similarly, the groups_domain table has a rowkey of cluster id + group name and a 
column whose value stores the list of domain ids for that group. 

So, for each user or group in the timeline domain object who is a reader or the 
owner, the domain id is added to that user's row in the user_domain or 
groups_domain table. The domain id is first written to the cell with tags. Now, 
there will be a coprocessor which checks if the domain id already exists in the 
value in the domain column. If yes, no-op, nothing to do. If the domain id does 
not already exist, meaning it is a new one, it will be appended to the value 
list.

- Expiration/ removal of domain ids.

If this list of domain ids has the potential to grow very big, we can consider 
storing a TTL for each domain id. We can store the TTLs per domain id in these 
user_domain and groups_domain tables and have the coprocessor handle cleanup at 
the time of major compaction.

If the list of domain ids is small enough, expiration / TTL is not required to 
be implemented.  What do you think? How many domains would there be?

 

Read Query time:

We propose to have the reader API authorization work in the following 
fashion.

- A read query for an entity comes in from a user.

- The timeline reader will create 3 threads and issue three parallel requests 
to hbase.

- One request is a Get from the user_domain table for this querying user. Gets 
back a list of domain ids this user has permissions for.

- Another request is a Get from the  groups_domain table for the group that 
this querying user belongs to. Gets back a list of domain ids this group has 
permissions for. This may be pretty big?

- The third request is to get the entities that are being asked for. 

Now, given the domain ids in the entity response, a check is made whether each 
domain id exists in the user_domain response or the groups_domain response.

This dataset is accordingly returned as the query response. I believe ATSv1 
does a get-all-entities and then queries the domain table to see if the domain 
id relates to the querying user. This model may not work efficiently in hbase 
in the case of multiple domain ids; doing too many gets will make the timeline 
reader response slow.

But, as an additional api option, if the domain id is passed into the query, we 
can check for existence of that domain id directly in the user_domain or 
groups_domain table and proceed accordingly.

Also, if the user who is querying is an admin user, we can skip all the checks 
and just get the entities. And of course, if security is not enabled, no 
additional gets from user_domain and groups_domain table are required. 

What do you think of this approach? 

> Support ACLs in ATSv2
> -
>
> Key: YARN-3895
> URL: https://issues.apache.org/jira/browse/YARN-3895
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>Priority: Major
>  Labels: YARN-5355
>
> This JIRA is to keep track of authorization support design discussions for 
> both readers and collectors. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7792) Merge work for YARN-6592

2018-01-25 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340310#comment-16340310
 ] 

Arun Suresh commented on YARN-7792:
---

Yup - looks good to me too. Thanks [~sunilg]

> Merge work for YARN-6592
> 
>
> Key: YARN-7792
> URL: https://issues.apache.org/jira/browse/YARN-7792
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Priority: Blocker
> Attachments: YARN-6592.001.patch, YARN-7792.002.patch, 
> YARN-7792.003.patch
>
>
> This Jira is to run aggregated YARN-6592 branch patch against trunk and check 
> for any jenkins issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7102) NM heartbeat stuck when responseId overflows MAX_INT

2018-01-25 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340308#comment-16340308
 ] 

Jason Lowe commented on YARN-7102:
--

I agree the unit tests failures on branch-2 and branch-2.8 appear to be 
unrelated.

+1 for the branch-2 and branch-2.8 patches as well.  Committing this.

> NM heartbeat stuck when responseId overflows MAX_INT
> 
>
> Key: YARN-7102
> URL: https://issues.apache.org/jira/browse/YARN-7102
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Critical
> Attachments: YARN-7102-branch-2.8.v10.patch, 
> YARN-7102-branch-2.8.v11.patch, YARN-7102-branch-2.8.v14.patch, 
> YARN-7102-branch-2.8.v14.patch, YARN-7102-branch-2.8.v17.patch, 
> YARN-7102-branch-2.8.v9.patch, YARN-7102-branch-2.v14.patch, 
> YARN-7102-branch-2.v14.patch, YARN-7102-branch-2.v17.patch, 
> YARN-7102-branch-2.v17.patch, YARN-7102-branch-2.v9.patch, 
> YARN-7102-branch-2.v9.patch, YARN-7102-branch-2.v9.patch, YARN-7102.v1.patch, 
> YARN-7102.v12.patch, YARN-7102.v13.patch, YARN-7102.v14.patch, 
> YARN-7102.v15.patch, YARN-7102.v16.patch, YARN-7102.v17.patch, 
> YARN-7102.v17.patch, YARN-7102.v17.patch, YARN-7102.v2.patch, 
> YARN-7102.v3.patch, YARN-7102.v4.patch, YARN-7102.v5.patch, 
> YARN-7102.v6.patch, YARN-7102.v7.patch, YARN-7102.v8.patch, YARN-7102.v9.patch
>
>
> ResponseId overflow problem in the NM-RM heartbeat. This is the same issue as 
> the AM-RM heartbeat in YARN-6640; please refer to YARN-6640 for details. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2185) Use pipes when localizing archives

2018-01-25 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-2185:
-
Attachment: YARN-2185.012.patch

> Use pipes when localizing archives
> --
>
> Key: YARN-2185
> URL: https://issues.apache.org/jira/browse/YARN-2185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-2185.000.patch, YARN-2185.001.patch, 
> YARN-2185.002.patch, YARN-2185.003.patch, YARN-2185.004.patch, 
> YARN-2185.005.patch, YARN-2185.006.patch, YARN-2185.007.patch, 
> YARN-2185.008.patch, YARN-2185.009.patch, YARN-2185.010.patch, 
> YARN-2185.011.patch, YARN-2185.012.patch, YARN-2185.012.patch
>
>
> Currently the nodemanager downloads an archive to a local file, unpacks it, 
> and then removes it.  It would be more efficient to stream the data as it's 
> being unpacked to avoid both the extra disk space requirements and the 
> additional disk activity from storing the archive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2185) Use pipes when localizing archives

2018-01-25 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340296#comment-16340296
 ] 

Jason Lowe commented on YARN-2185:
--

Thanks for updating the patch!  Looks good to me.  I noticed the QA bot 
commented on patch 11 twice instead of patch 12 for some reason.  I'll upload 
patch 12 again to get a Jenkins run on it.


> Use pipes when localizing archives
> --
>
> Key: YARN-2185
> URL: https://issues.apache.org/jira/browse/YARN-2185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-2185.000.patch, YARN-2185.001.patch, 
> YARN-2185.002.patch, YARN-2185.003.patch, YARN-2185.004.patch, 
> YARN-2185.005.patch, YARN-2185.006.patch, YARN-2185.007.patch, 
> YARN-2185.008.patch, YARN-2185.009.patch, YARN-2185.010.patch, 
> YARN-2185.011.patch, YARN-2185.012.patch
>
>
> Currently the nodemanager downloads an archive to a local file, unpacks it, 
> and then removes it.  It would be more efficient to stream the data as it's 
> being unpacked to avoid both the extra disk space requirements and the 
> additional disk activity from storing the archive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7780) Documentation for Placement Constraints

2018-01-25 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-7780:
-
Attachment: YARN-7780-YARN-6592.001.patch

> Documentation for Placement Constraints
> ---
>
> Key: YARN-7780
> URL: https://issues.apache.org/jira/browse/YARN-7780
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
>Priority: Major
> Attachments: YARN-7780-YARN-6592.001.patch
>
>
> JIRA to track documentation for the feature.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7797) Docker host network can not obtain IP address for RegistryDNS

2018-01-25 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340280#comment-16340280
 ] 

Shane Kumpf commented on YARN-7797:
---

Thanks for addressing my comments, [~eyang]! I did find one issue. If the env 
{{YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_NETWORK}} is not set, the value is 
supposed to come from yarn-site (or yarn-default if not set in yarn-site). Take 
a look at the network handling in 
{{DockerLinuxContainerRuntime#launchContainer}}; I think something similar is 
needed here to check both the env and the config. I believe the current env 
check can NPE as well, so it might be good to add some null checks there. I 
think this is ready once that is fixed.

> Docker host network can not obtain IP address for RegistryDNS
> -
>
> Key: YARN-7797
> URL: https://issues.apache.org/jira/browse/YARN-7797
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7797.001.patch, YARN-7797.002.patch, 
> YARN-7797.003.patch
>
>
> When docker is configured to use the host network, the docker inspect command 
> does not return the IP address of the container.  This prevents IP information 
> from being collected for RegistryDNS to register a hostname entry for the 
> docker container.
> The proposed solution is to intelligently detect the docker network 
> deployment method and report back the host IP address for RegistryDNS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7732) Support Pluggable AM Simulator

2018-01-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340277#comment-16340277
 ] 

genericqa commented on YARN-7732:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-sls: The patch generated 1 
new + 50 unchanged - 1 fixed = 51 total (was 51) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
19s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
20s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7732 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907773/YARN-7732-YARN-7798.02.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux a79d4d2ec248 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 16be42d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19478/artifact/out/diff-checkstyle-hadoop-tools_hadoop-sls.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19478/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/19478/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 457 (v

[jira] [Commented] (YARN-7728) Expose container preemptions related information in Capacity Scheduler queue metrics

2018-01-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340279#comment-16340279
 ] 

genericqa commented on YARN-7728:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 21m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
38s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} branch-2.8 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 164 unchanged - 0 fixed = 166 total (was 164) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 38s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:c2d96dd |
| JIRA Issue | YARN-7728 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907766/YARN-7728.branch-2.8.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e784363b9937 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.8 / c328ab4 |
| maven | version: Apache Maven 3.0.5 |
| Default Java | 1.7.0_151 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19477/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19477/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19477/testReport/ |
| Max. process+thread count | 651 (vs. ulimit of 5000) |
|

[jira] [Comment Edited] (YARN-7819) Allow PlacementProcessor to be used with the FairScheduler

2018-01-25 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340254#comment-16340254
 ] 

Arun Suresh edited comment on YARN-7819 at 1/25/18 11:04 PM:
-

Uploading initial patch.

* Implement the {{attemptAllocationOnNode}} method for the FairScheduler.
* Parametrized the {{TestPlacementProcessor}} to work with the FairScheduler as 
well

cc [~templedf] / [~haibo.chen]


was (Author: asuresh):
Uploading initial patch.

> Allow PlacementProcessor to be used with the FairScheduler
> --
>
> Key: YARN-7819
> URL: https://issues.apache.org/jira/browse/YARN-7819
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-7819-YARN-6592.001.patch
>
>
> The FairScheduler needs to implement the 
> {{ResourceScheduler#attemptAllocationOnNode}} function for the processor to 
> support the FairScheduler.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7819) Allow PlacementProcessor to be used with the FairScheduler

2018-01-25 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340254#comment-16340254
 ] 

Arun Suresh commented on YARN-7819:
---

Uploading initial patch.

> Allow PlacementProcessor to be used with the FairScheduler
> --
>
> Key: YARN-7819
> URL: https://issues.apache.org/jira/browse/YARN-7819
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-7819-YARN-6592.001.patch
>
>
> The FairScheduler needs to implement the 
> {{ResourceScheduler#attemptAllocationOnNode}} function for the processor to 
> support the FairScheduler.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7819) Allow PlacementProcessor to be used with the FairScheduler

2018-01-25 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7819:
--
Attachment: YARN-7819-YARN-6592.001.patch

> Allow PlacementProcessor to be used with the FairScheduler
> --
>
> Key: YARN-7819
> URL: https://issues.apache.org/jira/browse/YARN-7819
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-7819-YARN-6592.001.patch
>
>
> The FairScheduler needs to implement the 
> {{ResourceScheduler#attemptAllocationOnNode}} function for the processor to 
> support the FairScheduler.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7819) Allow PlacementProcessor to be used with the FairScheduler

2018-01-25 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-7819:
-

 Summary: Allow PlacementProcessor to be used with the FairScheduler
 Key: YARN-7819
 URL: https://issues.apache.org/jira/browse/YARN-7819
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun Suresh
Assignee: Arun Suresh


The FairScheduler needs to implement the 
{{ResourceScheduler#attemptAllocationOnNode}} function for the processor to 
support the FairScheduler.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7626) allow regular expression matching in container-executor.cfg for devices and volumes

2018-01-25 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340243#comment-16340243
 ] 

Zian Chen commented on YARN-7626:
-

Tested patch 005 on Ycloud cluster with GPU. Looks good. Upload the patch. 

> allow regular expression matching in container-executor.cfg for devices and 
> volumes
> ---
>
> Key: YARN-7626
> URL: https://issues.apache.org/jira/browse/YARN-7626
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-7626.001.patch, YARN-7626.002.patch, 
> YARN-7626.003.patch, YARN-7626.004.patch, YARN-7626.005.patch
>
>
> Currently, when we configure some of the GPU device related fields (like ) in 
> container-executor.cfg, these fields are generated based on different driver 
> versions or GPU device names. We want to enable regular expression matching 
> so that users don't need to manually set up these fields when configuring 
> container-executor.cfg.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7626) allow regular expression matching in container-executor.cfg for devices and volumes

2018-01-25 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340243#comment-16340243
 ] 

Zian Chen edited comment on YARN-7626 at 1/25/18 10:52 PM:
---

Tested patch 005 on a cluster with GPU. Looks good. Uploading the patch. 


was (Author: zian chen):
Tested patch 005 on Ycloud cluster with GPU. Looks good. Upload the patch. 

> allow regular expression matching in container-executor.cfg for devices and 
> volumes
> ---
>
> Key: YARN-7626
> URL: https://issues.apache.org/jira/browse/YARN-7626
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-7626.001.patch, YARN-7626.002.patch, 
> YARN-7626.003.patch, YARN-7626.004.patch, YARN-7626.005.patch
>
>
> Currently, when we configure some of the GPU device related fields (like ) in 
> container-executor.cfg, these fields are generated based on different driver 
> versions or GPU device names. We want to enable regular expression matching 
> so that users don't need to manually set up these fields when configuring 
> container-executor.cfg.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7626) allow regular expression matching in container-executor.cfg for devices and volumes

2018-01-25 Thread Zian Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zian Chen updated YARN-7626:

Attachment: YARN-7626.005.patch

> allow regular expression matching in container-executor.cfg for devices and 
> volumes
> ---
>
> Key: YARN-7626
> URL: https://issues.apache.org/jira/browse/YARN-7626
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-7626.001.patch, YARN-7626.002.patch, 
> YARN-7626.003.patch, YARN-7626.004.patch, YARN-7626.005.patch
>
>
> Currently, when we configure some of the GPU device related fields (like ) in 
> container-executor.cfg, these fields are generated based on different driver 
> versions or GPU device names. We want to enable regular expression matching 
> so that users don't need to manually set up these fields when configuring 
> container-executor.cfg.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7516) Security check for trusted docker image

2018-01-25 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7516:

Summary: Security check for trusted docker image  (was: Security check for 
untrusted docker image)

> Security check for trusted docker image
> ---
>
> Key: YARN-7516
> URL: https://issues.apache.org/jira/browse/YARN-7516
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7516.001.patch, YARN-7516.002.patch, 
> YARN-7516.003.patch, YARN-7516.004.patch, YARN-7516.005.patch, 
> YARN-7516.006.patch, YARN-7516.007.patch, YARN-7516.008.patch, 
> YARN-7516.009.patch, YARN-7516.010.patch, YARN-7516.011.patch, 
> YARN-7516.012.patch, YARN-7516.013.patch, YARN-7516.014.patch, 
> YARN-7516.015.patch
>
>
> Hadoop YARN Services can support using a private docker registry image or a 
> docker image from docker hub.  In the current implementation, Hadoop security 
> is enforced through username and group membership, and enforces uid:gid 
> consistency between the docker container and the distributed file system.  
> There is a cloud use case for having the ability to run untrusted docker 
> images on the same cluster for testing.  
> The basic requirement for an untrusted container is to ensure that all kernel 
> and root privileges are dropped, and that there is no interaction with the 
> distributed file system, to avoid contamination.  We can probably enforce 
> detection of untrusted docker images by checking the following:
> # If the docker image is from a public docker hub repository, the container is 
> automatically flagged as insecure, disk volume mounts are disabled 
> automatically, and all kernel capabilities are dropped.
> # If the docker image is from a private repository in docker hub, and there is 
> a whitelist that allows the private repository, disk volume mounts are allowed 
> and kernel capabilities follow the allowed list.
> # If the docker image is from a private trusted registry, with an image name 
> like "private.registry.local:5000/centos", and the whitelist allows this 
> private trusted repository, disk volume mounts are allowed and kernel 
> capabilities follow the allowed list.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7787) Yarn service can not be launched with User Principal

2018-01-25 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340186#comment-16340186
 ] 

Eric Yang commented on YARN-7787:
-

YARN Service code contains one implementation of ApplicationMaster code that 
extends org.apache.hadoop.service.AbstractService.  The AM's responsibility is 
to report service status and handle other application logic.  The Hadoop RPC 
setup by the ApplicationMaster must follow basic Hadoop security practice.  
HADOOP-9698 added logic to make sure saslRPCClient verifies the server side 
credential against a list of configuration-defined principal names.  The goal 
is to prevent man-in-the-middle and replay attacks.  This is hard coded into 
the Hadoop security design for services that are statically deployed on a 
cluster of nodes. 

Therefore, the user must use a server principal in the YARN Service definition 
to launch a YARN service:
{code:java}
  "kerberos_principal" : {
    "principal_name" : "hbase/_h...@example.com",
    "keytab" : "file:///etc/security/keytabs/hbase.service.keytab"
  },{code}
 

This ticket is to discuss whether there is any wiggle room to relax security 
and allow an end user principal to be used for starting a service.  The 
ApplicationMaster can run on any node in the YARN cluster, so this security 
check makes it cumbersome to generate a keytab that contains the proper server 
principals for the ApplicationMaster.  In a large scale cluster, using a 
server principal is definitely preferred to prevent man-in-the-middle attacks 
even within a trusted security perimeter.  This request can have a profound 
impact on the Hadoop security design for the SASL RPC client and is worthy of 
discussion.  The alternative is to reimplement the AM not based on Hadoop RPC, 
and a new implementation would need to solve man-in-the-middle attacks in some 
other shape or form.  It seems like there are a lot of disadvantages to 
enabling an end user principal to run the ApplicationMaster.  Thoughts?

> Yarn service can not be launched with User Principal
> 
>
> Key: YARN-7787
> URL: https://issues.apache.org/jira/browse/YARN-7787
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Priority: Critical
>
> Steps:
> 1) update httpd.json by adding below block.
> {code:java}
> "kerberos_principal" : {
> "principal_name" : "hrt...@example.com",
> "keytab" : "file:///home/hrt_qa/hadoopqa/keytabs/hrt_qa.headless.keytab"
>   }{code}
> 2) Launch http example as hrt_qa user
> {code:java}
> 2018-01-19 22:00:37,238|INFO|MainThread|machine.py:150 - 
> run()||GUID=6b0714d0-1377-43ee-8959-9ae380e1486c|RUNNING: 
> /usr/hdp/current/hadoop-yarn-client/bin/yarn app -launch httpd-hrt-qa httpd
> 2018-01-19 22:00:37,295|INFO|WARNING: YARN_LOG_DIR has been replaced by 
> HADOOP_LOG_DIR. Using value of YARN_LOG_DIR.
> 2018-01-19 22:00:37,295|INFO|WARNING: YARN_LOGFILE has been replaced by 
> HADOOP_LOGFILE. Using value of YARN_LOGFILE.
> 2018-01-19 22:00:37,295|INFO|WARNING: YARN_PID_DIR has been replaced by 
> HADOOP_PID_DIR. Using value of YARN_PID_DIR.
> 2018-01-19 22:00:37,296|INFO|WARNING: YARN_OPTS has been replaced by 
> HADOOP_OPTS. Using value of YARN_OPTS.
> 2018-01-19 22:00:38,173|INFO|18/01/19 22:00:38 WARN util.NativeCodeLoader: 
> Unable to load native-hadoop library for your platform... using builtin-java 
> classes where applicable
> 2018-01-19 22:00:39,530|INFO|18/01/19 22:00:39 WARN 
> shortcircuit.DomainSocketFactory: The short-circuit local reads feature 
> cannot be used because libhadoop cannot be loaded.
> 2018-01-19 22:00:39,545|INFO|18/01/19 22:00:39 INFO client.ServiceClient: 
> Loading service definition from local FS: 
> /usr/hdp/3.0.0.0-xx/hadoop-yarn/yarn-service-examples/httpd/httpd.json
> 2018-01-19 22:00:40,186|INFO|18/01/19 22:00:40 INFO 
> client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
> 2018-01-19 22:00:40,492|INFO|18/01/19 22:00:40 INFO client.ServiceClient: 
> Persisted service httpd-hrt-qa at 
> hdfs://mycluster/user/hrt_qa/.yarn/services/httpd-hrt-qa/httpd-hrt-qa.json
> 2018-01-19 22:00:40,589|INFO|18/01/19 22:00:40 INFO conf.Configuration: found 
> resource resource-types.xml at 
> file:/etc/hadoop/3.0.0.0-xx/0/resource-types.xml
> 2018-01-19 22:00:40,719|INFO|18/01/19 22:00:40 INFO client.ServiceClient: 
> Uploading all dependency jars to HDFS. For faster submission of apps, 
> pre-upload dependency jars to HDFS using command: yarn app -enableFastLaunch
> 2018-01-19 22:00:48,253|INFO|18/01/19 22:00:48 INFO hdfs.DFSClient: Created 
> token for hrt_qa: HDFS_DELEGATION_TOKEN owner=hrt...@example.com, 
> renewer=yarn, realUser=, issueDate=1516399248244, maxDate=1517004048244, 
> sequenceNumber=4, masterKeyId=4 on ha-hdfs:mycluster
> 2018-01-19 22:00:49,463|INFO|18/01/19 22:00:49 INFO impl.YarnClientImpl: 
> Submitted application application_1516398459631_0001{code}
> 3) Run "yarn applica

[jira] [Commented] (YARN-7732) Support Pluggable AM Simulator

2018-01-25 Thread Young Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340169#comment-16340169
 ] 

Young Chen commented on YARN-7732:
--

YARN-7798 just checked in; resubmitting the patch. Thanks [~yufeigu]!

> Support Pluggable AM Simulator
> --
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7732-YARN-7798.01.patch, 
> YARN-7732-YARN-7798.02.patch, YARN-7732.01.patch, YARN-7732.02.patch, 
> YARN-7732.03.patch
>
>
> Extract the MapReduce specific set-up in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7732) Support Pluggable AM Simulator

2018-01-25 Thread Young Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Young Chen updated YARN-7732:
-
Attachment: YARN-7732-YARN-7798.02.patch

> Support Pluggable AM Simulator
> --
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7732-YARN-7798.01.patch, 
> YARN-7732-YARN-7798.02.patch, YARN-7732.01.patch, YARN-7732.02.patch, 
> YARN-7732.03.patch
>
>
> Extract the MapReduce specific set-up in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6721) container-executor should have stack checking

2018-01-25 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340132#comment-16340132
 ] 

Jim Brennan commented on YARN-6721:
---

The segmentation fault in container-executor reported in -YARN-7796-  appears 
to be due to a binary compatibility issue with the -fstack-check flag that was 
added in this patch.

Based on my testing, a container-executor (without the patch from this Jira) 
compiled on RHEL 6 with the -fstack-check flag always hits this segmentation 
fault when run on RHEL 7.  But if you compile without this flag, the 
container-executor runs on RHEL 7 with no problems.  I also verified this with 
a simple program that just does the copy_file.

This redhat link suggests that there are problems with stack-check: 
[https://access.redhat.com/security/vulnerabilities/stackguard]
{noformat}
To avoid stack guard page jumping, every stack allocation primitive needs to 
implement freshly allocated memory probing with the stack guard gap size 
granularity. The existing gcc -fstack-check implementation aims to do exactly 
that, but currently it is not working correctly. Before the gcc -fstack-check 
implementation is fixed and all of the exposed binaries are rebuilt, we have a 
combination of kernel and glibc mitigations that addresses all known 
reporter-provided exploits available:{noformat}
 

I've also verified this with a simple test program that just does the file_copy 
call.
{noformat}
[jbrennan02@imposeenclose test]$ ./copy_file_test-rhel7 /etc/services /tmp/foo

copy /etc/services to /tmp/foo

[jbrennan02@imposeenclose test]$ ./copy_file_test-rhel7-stack-check 
/etc/services /tmp/foo

copy /etc/services to /tmp/foo

[jbrennan02@imposeenclose test]$ ./copy_file_test-rhel6 /etc/services /tmp/foo

copy /etc/services to /tmp/foo

[jbrennan02@imposeenclose test]$ ./copy_file_test-rhel6-stack-check 
/etc/services /tmp/foo

copy /etc/services to /tmp/foo

Segmentation fault



The RHEL 6 versions were compiled on this system:

[jbrennan02@goalssoles test]$ hostname

goalssoles.corp.ne1.yahoo.com

[jbrennan02@goalssoles test]$ cat /etc/redhat-release 

Red Hat Enterprise Linux Server release 6.8 (Santiago)

[jbrennan02@goalssoles test]$ gcc --version

gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)



The RHEL 7 versions were compiled on this system:

[jbrennan02@imposeenclose test]$ hostname

imposeenclose.corp.ne1.yahoo.com

[jbrennan02@imposeenclose test]$ cat /etc/redhat-release 

Red Hat Enterprise Linux Server release 7.4 (Maipo)

[jbrennan02@imposeenclose test]$ gcc --version

gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)


Compiled with: gcc [-fstack-check] -o copy_file_test 
copy_file_test.c{noformat}
 

I propose that we remove the -fstack-check flag, and possibly replace it with 
-fstack-protector, although that does not provide the same protection.

 

> container-executor should have stack checking
> -
>
> Key: YARN-6721
> URL: https://issues.apache.org/jira/browse/YARN-6721
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, security
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
>  Labels: security
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6721.00.patch, YARN-6721.01.patch, 
> YARN-6721.02.patch
>
>
> As per https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt and 
> given that container-executor is setuid, it should be compiled with stack 
> checking if the compiler supports such features.  (-fstack-check on gcc, 
> -fsanitize=safe-stack on clang, -xcheck=stkovf on "Oracle Solaris Studio", 
> others as we find them, ...)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7814) Remove automatic mounting of the cgroups root directory into Docker containers

2018-01-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340117#comment-16340117
 ] 

genericqa commented on YARN-7814:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
43s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7814 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907762/YARN-7814.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5f8c14ae54e8 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0c139d5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19476/testReport/ |
| Max. process+thread count | 410 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19476/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove automatic mounting of the cgroups root dir

[jira] [Updated] (YARN-7787) Yarn service can not be launched with User Principal

2018-01-25 Thread Yesha Vora (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yesha Vora updated YARN-7787:
-
Description: 
Steps:

1) update httpd.json by adding below block.
{code:java}
"kerberos_principal" : {
"principal_name" : "hrt...@example.com",
"keytab" : "file:///home/hrt_qa/hadoopqa/keytabs/hrt_qa.headless.keytab"
  }{code}
2) Launch http example as hrt_qa user
{code:java}
2018-01-19 22:00:37,238|INFO|MainThread|machine.py:150 - 
run()||GUID=6b0714d0-1377-43ee-8959-9ae380e1486c|RUNNING: 
/usr/hdp/current/hadoop-yarn-client/bin/yarn app -launch httpd-hrt-qa httpd
2018-01-19 22:00:37,295|INFO|WARNING: YARN_LOG_DIR has been replaced by 
HADOOP_LOG_DIR. Using value of YARN_LOG_DIR.
2018-01-19 22:00:37,295|INFO|WARNING: YARN_LOGFILE has been replaced by 
HADOOP_LOGFILE. Using value of YARN_LOGFILE.
2018-01-19 22:00:37,295|INFO|WARNING: YARN_PID_DIR has been replaced by 
HADOOP_PID_DIR. Using value of YARN_PID_DIR.
2018-01-19 22:00:37,296|INFO|WARNING: YARN_OPTS has been replaced by 
HADOOP_OPTS. Using value of YARN_OPTS.
2018-01-19 22:00:38,173|INFO|18/01/19 22:00:38 WARN util.NativeCodeLoader: 
Unable to load native-hadoop library for your platform... using builtin-java 
classes where applicable
2018-01-19 22:00:39,530|INFO|18/01/19 22:00:39 WARN 
shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot 
be used because libhadoop cannot be loaded.
2018-01-19 22:00:39,545|INFO|18/01/19 22:00:39 INFO client.ServiceClient: 
Loading service definition from local FS: 
/usr/hdp/3.0.0.0-xx/hadoop-yarn/yarn-service-examples/httpd/httpd.json
2018-01-19 22:00:40,186|INFO|18/01/19 22:00:40 INFO 
client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
2018-01-19 22:00:40,492|INFO|18/01/19 22:00:40 INFO client.ServiceClient: 
Persisted service httpd-hrt-qa at 
hdfs://mycluster/user/hrt_qa/.yarn/services/httpd-hrt-qa/httpd-hrt-qa.json
2018-01-19 22:00:40,589|INFO|18/01/19 22:00:40 INFO conf.Configuration: found 
resource resource-types.xml at file:/etc/hadoop/3.0.0.0-xx/0/resource-types.xml
2018-01-19 22:00:40,719|INFO|18/01/19 22:00:40 INFO client.ServiceClient: 
Uploading all dependency jars to HDFS. For faster submission of apps, 
pre-upload dependency jars to HDFS using command: yarn app -enableFastLaunch
2018-01-19 22:00:48,253|INFO|18/01/19 22:00:48 INFO hdfs.DFSClient: Created 
token for hrt_qa: HDFS_DELEGATION_TOKEN owner=hrt...@example.com, renewer=yarn, 
realUser=, issueDate=1516399248244, maxDate=1517004048244, sequenceNumber=4, 
masterKeyId=4 on ha-hdfs:mycluster
2018-01-19 22:00:49,463|INFO|18/01/19 22:00:49 INFO impl.YarnClientImpl: 
Submitted application application_1516398459631_0001{code}
3) Run "yarn application -status "
{code:java}
2018-01-19 22:01:05,570|INFO|RUNNING: 
/usr/hdp/current/hadoop-yarn-client/bin/yarn application -status httpd-hrt-qa
2018-01-19 22:01:05,626|INFO|WARNING: YARN_LOG_DIR has been replaced by 
HADOOP_LOG_DIR. Using value of YARN_LOG_DIR.
2018-01-19 22:01:05,626|INFO|WARNING: YARN_LOGFILE has been replaced by 
HADOOP_LOGFILE. Using value of YARN_LOGFILE.
2018-01-19 22:01:05,626|INFO|WARNING: YARN_PID_DIR has been replaced by 
HADOOP_PID_DIR. Using value of YARN_PID_DIR.
2018-01-19 22:01:05,626|INFO|WARNING: YARN_OPTS has been replaced by 
HADOOP_OPTS. Using value of YARN_OPTS.
2018-01-19 22:01:06,529|INFO|18/01/19 22:01:06 WARN util.NativeCodeLoader: 
Unable to load native-hadoop library for your platform... using builtin-java 
classes where applicable
2018-01-19 22:01:07,851|INFO|18/01/19 22:01:07 WARN 
shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot 
be used because libhadoop cannot be loaded.
2018-01-19 22:01:08,003|INFO|18/01/19 22:01:08 INFO utils.ServiceApiUtil: 
Loading service definition from 
hdfs://mycluster/user/hrt_qa/.yarn/services/httpd-hrt-qa/httpd-hrt-qa.json
2018-01-19 22:01:08,563|INFO|18/01/19 22:01:08 INFO 
client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
2018-01-19 22:01:08,787|INFO|Exception in thread "main" java.io.IOException: 
Failed on local exception: java.io.IOException: Couldn't set up IO streams: 
java.lang.IllegalArgumentException: Kerberos principal name does NOT have the 
expected hostname part: hrt...@example.com; Host Details : local host is: 
“host1/xx.xx.xx.xx"; destination host is: “host1”:40318;
2018-01-19 22:01:08,788|INFO|at 
org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:808)
2018-01-19 22:01:08,788|INFO|at 
org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1495)
2018-01-19 22:01:08,788|INFO|at 
org.apache.hadoop.ipc.Client.call(Client.java:1437)
2018-01-19 22:01:08,788|INFO|at 
org.apache.hadoop.ipc.Client.call(Client.java:1347)
2018-01-19 22:01:08,789|INFO|at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
2018-01-19 22:01:08,789|INFO|at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:

[jira] [Updated] (YARN-7787) Yarn service can not be launched with User Principal

2018-01-25 Thread Yesha Vora (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yesha Vora updated YARN-7787:
-
Summary: Yarn service can not be launched with User Principal  (was: 
Application status cmd fails if principal does not have hostname part)

> Yarn service can not be launched with User Principal
> 
>
> Key: YARN-7787
> URL: https://issues.apache.org/jira/browse/YARN-7787
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Priority: Critical
>
> 1) update httpd.json by adding below block.
> {code}
> "kerberos_principal" : {
> "principal_name" : "hrt...@example.com",
> "keytab" : "file:///home/hrt_qa/hadoopqa/keytabs/hrt_qa.headless.keytab"
>   }{code}
> 2) Launch http example as hrt_qa user
> {code}2018-01-19 22:00:37,238|INFO|MainThread|machine.py:150 - 
> run()||GUID=6b0714d0-1377-43ee-8959-9ae380e1486c|RUNNING: 
> /usr/hdp/current/hadoop-yarn-client/bin/yarn app -launch httpd-hrt-qa httpd
> 2018-01-19 22:00:37,295|INFO|WARNING: YARN_LOG_DIR has been replaced by 
> HADOOP_LOG_DIR. Using value of YARN_LOG_DIR.
> 2018-01-19 22:00:37,295|INFO|WARNING: YARN_LOGFILE has been replaced by 
> HADOOP_LOGFILE. Using value of YARN_LOGFILE.
> 2018-01-19 22:00:37,295|INFO|WARNING: YARN_PID_DIR has been replaced by 
> HADOOP_PID_DIR. Using value of YARN_PID_DIR.
> 2018-01-19 22:00:37,296|INFO|WARNING: YARN_OPTS has been replaced by 
> HADOOP_OPTS. Using value of YARN_OPTS.
> 2018-01-19 22:00:38,173|INFO|18/01/19 22:00:38 WARN util.NativeCodeLoader: 
> Unable to load native-hadoop library for your platform... using builtin-java 
> classes where applicable
> 2018-01-19 22:00:39,530|INFO|18/01/19 22:00:39 WARN 
> shortcircuit.DomainSocketFactory: The short-circuit local reads feature 
> cannot be used because libhadoop cannot be loaded.
> 2018-01-19 22:00:39,545|INFO|18/01/19 22:00:39 INFO client.ServiceClient: 
> Loading service definition from local FS: 
> /usr/hdp/3.0.0.0-xx/hadoop-yarn/yarn-service-examples/httpd/httpd.json
> 2018-01-19 22:00:40,186|INFO|18/01/19 22:00:40 INFO 
> client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
> 2018-01-19 22:00:40,492|INFO|18/01/19 22:00:40 INFO client.ServiceClient: 
> Persisted service httpd-hrt-qa at 
> hdfs://mycluster/user/hrt_qa/.yarn/services/httpd-hrt-qa/httpd-hrt-qa.json
> 2018-01-19 22:00:40,589|INFO|18/01/19 22:00:40 INFO conf.Configuration: found 
> resource resource-types.xml at 
> file:/etc/hadoop/3.0.0.0-xx/0/resource-types.xml
> 2018-01-19 22:00:40,719|INFO|18/01/19 22:00:40 INFO client.ServiceClient: 
> Uploading all dependency jars to HDFS. For faster submission of apps, 
> pre-upload dependency jars to HDFS using command: yarn app -enableFastLaunch
> 2018-01-19 22:00:48,253|INFO|18/01/19 22:00:48 INFO hdfs.DFSClient: Created 
> token for hrt_qa: HDFS_DELEGATION_TOKEN owner=hrt...@example.com, 
> renewer=yarn, realUser=, issueDate=1516399248244, maxDate=1517004048244, 
> sequenceNumber=4, masterKeyId=4 on ha-hdfs:mycluster
> 2018-01-19 22:00:49,463|INFO|18/01/19 22:00:49 INFO impl.YarnClientImpl: 
> Submitted application application_1516398459631_0001{code}
> 3) Run "yarn application -status "
> {code}
> 2018-01-19 22:01:05,570|INFO|RUNNING: 
> /usr/hdp/current/hadoop-yarn-client/bin/yarn application -status httpd-hrt-qa
> 2018-01-19 22:01:05,626|INFO|WARNING: YARN_LOG_DIR has been replaced by 
> HADOOP_LOG_DIR. Using value of YARN_LOG_DIR.
> 2018-01-19 22:01:05,626|INFO|WARNING: YARN_LOGFILE has been replaced by 
> HADOOP_LOGFILE. Using value of YARN_LOGFILE.
> 2018-01-19 22:01:05,626|INFO|WARNING: YARN_PID_DIR has been replaced by 
> HADOOP_PID_DIR. Using value of YARN_PID_DIR.
> 2018-01-19 22:01:05,626|INFO|WARNING: YARN_OPTS has been replaced by 
> HADOOP_OPTS. Using value of YARN_OPTS.
> 2018-01-19 22:01:06,529|INFO|18/01/19 22:01:06 WARN util.NativeCodeLoader: 
> Unable to load native-hadoop library for your platform... using builtin-java 
> classes where applicable
> 2018-01-19 22:01:07,851|INFO|18/01/19 22:01:07 WARN 
> shortcircuit.DomainSocketFactory: The short-circuit local reads feature 
> cannot be used because libhadoop cannot be loaded.
> 2018-01-19 22:01:08,003|INFO|18/01/19 22:01:08 INFO utils.ServiceApiUtil: 
> Loading service definition from 
> hdfs://mycluster/user/hrt_qa/.yarn/services/httpd-hrt-qa/httpd-hrt-qa.json
> 2018-01-19 22:01:08,563|INFO|18/01/19 22:01:08 INFO 
> client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
> 2018-01-19 22:01:08,787|INFO|Exception in thread "main" java.io.IOException: 
> Failed on local exception: java.io.IOException: Couldn't set up IO streams: 
> java.lang.IllegalArgumentException: Kerberos principal name does NOT have the 
> expected hostname part: hrt...@example.com; Host Details : local host is: 
> “host1/

[jira] [Commented] (YARN-7798) Refactor SLS Reservation Creation

2018-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340103#comment-16340103
 ] 

Hudson commented on YARN-7798:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13559 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13559/])
YARN-7798. Refactor SLS Reservation Creation. Contributed by Young Chen. 
(yufei: rev 16be42d3097c13b17d704e5b6dc8d66bd5ff6d9a)
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/appmaster/MRAMSimulator.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/appmaster/AMSimulator.java
* (edit) 
hadoop-tools/hadoop-sls/src/test/java/org/apache/hadoop/yarn/sls/TestSLSRunner.java
* (edit) 
hadoop-tools/hadoop-sls/src/test/java/org/apache/hadoop/yarn/sls/appmaster/TestAMSimulator.java


> Refactor SLS Reservation Creation
> -
>
> Key: YARN-7798
> URL: https://issues.apache.org/jira/browse/YARN-7798
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: YARN-7798.01.patch, YARN-7798.02.patch, 
> YARN-7798.03.patch
>
>
> Move the reservation request creation out of SLSRunner and delegate to the 
> AMSimulator instance.
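
As a hedged sketch of the refactoring described above, delegating reservation creation to the simulator could look roughly like the following; the class and method names (BaseAMSimulator, buildReservationRequest) are invented for illustration and are not the actual SLS code.

{code:java}
import org.apache.hadoop.yarn.api.protocolrecords.ReservationSubmissionRequest;

// Illustrative sketch only: each AM simulator builds its own reservation
// request instead of SLSRunner creating it centrally. Names are hypothetical.
public abstract class BaseAMSimulator {

  // Return null if the simulated job does not use a reservation.
  protected abstract ReservationSubmissionRequest buildReservationRequest();

  public void start() {
    ReservationSubmissionRequest request = buildReservationRequest();
    if (request != null) {
      submitReservation(request); // submit before requesting containers
    }
    // ... continue with AM registration and container allocation ...
  }

  protected void submitReservation(ReservationSubmissionRequest request) {
    // In the real SLS this would go through the RM's reservation API.
  }
}
{code}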



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7796) Container-executor fails with segfault on certain OS configurations

2018-01-25 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340101#comment-16340101
 ] 

Jim Brennan commented on YARN-7796:
---

We ran into this same issue when running a container-executor that was compiled 
on RHEL 6 on a RHEL 7 system.  While we have verified that the patch in this 
Jira does avoid the segmentation fault, we are concerned that the root cause of 
the problem remains, and may bite us later.

The -fstack-check flag was added to the compiler command line in YARN-6721.

Based on my testing, a container-executor (without the patch from this Jira) 
compiled on RHEL 6 with the -fstack-check flag always hits this segmentation 
fault when run on RHEL 7.  But if you compile without this flag, the 
container-executor runs on RHEL 7 with no problems.  I also verified this with 
a simple program that just does the copy_file.

[~grepas] - was this the case for you? Were you running a container-executor 
that was compiled on an earlier Red Hat release?

> Container-executor fails with segfault on certain OS configurations
> ---
>
> Key: YARN-7796
> URL: https://issues.apache.org/jira/browse/YARN-7796
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0
>Reporter: Gergo Repas
>Assignee: Gergo Repas
>Priority: Major
> Fix For: 3.1.0, 3.0.1
>
> Attachments: YARN-7796.000.patch, YARN-7796.001.patch, 
> YARN-7796.002.patch
>
>
> There is a relatively big (128K) buffer allocated on the stack in 
> container-executor.c for the purpose of copying files. As indicated by the 
> below gdb stack trace, this allocation can fail with SIGSEGV. This happens 
> only on certain OS configurations - I can reproduce this issue on RHEL 6.9:
> {code:java}
> [Thread debugging using libthread_db enabled]
> main : command provided 0
> main : run as user is ***
> main : requested yarn user is ***
> Program received signal SIGSEGV, Segmentation fault.
> 0x004069bc in copy_file (input=7, in_filename=0x7ffd669fd2d6 
> "/yarn/nm/nmPrivate/container_1516711246952_0001_02_01.tokens", 
> out_filename=0x932930 
> "/yarn/nm/usercache/systest/appcache/application_1516711246952_0001/container_1516711246952_0001_02_01.tokens",
>  perm=384)
> at 
> /root/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c:966
> 966 char buffer[buffer_size];
> (gdb) bt
> #0  0x004069bc in copy_file (input=7, in_filename=0x7ffd669fd2d6 
> "/yarn/nm/nmPrivate/container_1516711246952_0001_02_01.tokens", 
> out_filename=0x932930 
> "/yarn/nm/usercache/systest/appcache/application_1516711246952_0001/container_1516711246952_0001_02_01.tokens",
>  perm=384)
> at 
> /root/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c:966
> #1  0x00409a81 in initialize_app (user=, 
> app_id=0x7ffd669fd2b7 "application_1516711246952_0001", 
> nmPrivate_credentials_file=0x7ffd669fd2d6 
> "/yarn/nm/nmPrivate/container_1516711246952_0001_02_01.tokens", 
> local_dirs=0x9331c8, log_roots=, args=0x7ffd669fb168)
> at 
> /root/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c:1122
> #2  0x00403f90 in main (argc=, argv= optimized out>) at 
> /root/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c:558
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7728) Expose container preemptions related information in Capacity Scheduler queue metrics

2018-01-25 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-7728:
-
Attachment: YARN-7728.branch-2.8.002.patch

> Expose container preemptions related information in Capacity Scheduler queue 
> metrics
> 
>
> Key: YARN-7728
> URL: https://issues.apache.org/jira/browse/YARN-7728
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0, 2.8.3, 3.0.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: YARN-7728.001.patch, YARN-7728.002.patch, 
> YARN-7728.branch-2.8.002.patch
>
>
> YARN-1047 exposed queue metrics for the number of preempted containers to the 
> fair scheduler. I would like to also expose these to the capacity scheduler 
> and add metrics for the amount of lost memory seconds and vcore seconds.
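
For reference, queue metrics counters of this kind are usually declared with the Hadoop metrics2 annotations along the lines of the sketch below; the class and field names are assumptions for illustration, not the contents of the attached patches.

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Illustrative sketch of preemption counters in a queue-metrics style class.
// The annotated fields are instantiated when the source is registered with the
// metrics system; field names here are assumptions, not the actual patch.
public class PreemptionQueueMetricsSketch {

  @Metric("# of containers preempted")
  MutableCounterLong aggregateContainersPreempted;

  @Metric("Aggregate memory-seconds lost to preemption")
  MutableCounterLong aggregateMemoryMBSecondsPreempted;

  @Metric("Aggregate vcore-seconds lost to preemption")
  MutableCounterLong aggregateVcoreSecondsPreempted;

  // Called when the scheduler preempts a container; the resource-seconds
  // reflect how long the container held its resources before being killed.
  public void preemptContainer(long memoryMBSeconds, long vcoreSeconds) {
    aggregateContainersPreempted.incr();
    aggregateMemoryMBSecondsPreempted.incr(memoryMBSeconds);
    aggregateVcoreSecondsPreempted.incr(vcoreSeconds);
  }
}
{code}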



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-7728) Expose container preemptions related information in Capacity Scheduler queue metrics

2018-01-25 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne reopened YARN-7728:
--

> Expose container preemptions related information in Capacity Scheduler queue 
> metrics
> 
>
> Key: YARN-7728
> URL: https://issues.apache.org/jira/browse/YARN-7728
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0, 2.8.3, 3.0.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: YARN-7728.001.patch, YARN-7728.002.patch
>
>
> YARN-1047 exposed queue metrics for the number of preempted containers to the 
> fair scheduler. I would like to also expose these to the capacity scheduler 
> and add metrics for the amount of lost memory seconds and vcore seconds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7798) Refactor SLS Reservation Creation

2018-01-25 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340085#comment-16340085
 ] 

Yufei Gu commented on YARN-7798:


Committed to trunk. Thanks [~youchen] for the patch.

> Refactor SLS Reservation Creation
> -
>
> Key: YARN-7798
> URL: https://issues.apache.org/jira/browse/YARN-7798
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: YARN-7798.01.patch, YARN-7798.02.patch, 
> YARN-7798.03.patch
>
>
> Move the reservation request creation out of SLSRunner and delegate to the 
> AMSimulator instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7798) Refactor SLS Reservation Creation

2018-01-25 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340081#comment-16340081
 ] 

Yufei Gu commented on YARN-7798:


+1. Will commit soon.

> Refactor SLS Reservation Creation
> -
>
> Key: YARN-7798
> URL: https://issues.apache.org/jira/browse/YARN-7798
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7798.01.patch, YARN-7798.02.patch, 
> YARN-7798.03.patch
>
>
> Move the reservation request creation out of SLSRunner and delegate to the 
> AMSimulator instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7811) Service AM should use configured default docker network

2018-01-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339833#comment-16339833
 ] 

genericqa commented on YARN-7811:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 27m  
4s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core:
 The patch generated 2 new + 24 unchanged - 3 fixed = 26 total (was 27) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
32s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7811 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907756/YARN-7811.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8ee8b0b061fb 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 82cc6f6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19475/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19475/testReport/ |
| 

[jira] [Updated] (YARN-7814) Remove automatic mounting of the cgroups root directory into Docker containers

2018-01-25 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-7814:
--
Attachment: YARN-7814.002.patch

> Remove automatic mounting of the cgroups root directory into Docker containers
> --
>
> Key: YARN-7814
> URL: https://issues.apache.org/jira/browse/YARN-7814
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
> Attachments: YARN-7814.001.patch, YARN-7814.002.patch
>
>
> Currently, all Docker containers launched by {{DockerLinuxContainerRuntime}} 
> get /sys/fs/cgroup automatically mounted. Now that user supplied mounts 
> (YARN-5534) are in, containers that require this mount can request it (with a 
> properly configured mount whitelist).
> I propose we remove the automatic mounting of /sys/fs/cgroup into Docker 
> containers.
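
For containers that still need the cgroup filesystem after this change, a user-supplied mount request would look roughly like the sketch below. It assumes the YARN_CONTAINER_RUNTIME_DOCKER_MOUNTS environment variable from YARN-5534 with a source:dest:mode format and an admin whitelist that allows /sys/fs/cgroup, so treat it as an illustration rather than the exact syntax.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: request the cgroup mount explicitly via the user-supplied
// mounts mechanism (YARN-5534) instead of relying on the automatic mount.
// The env variable names and "source:dest:mode" format are assumptions based
// on the Docker runtime docs; the admin whitelist must allow /sys/fs/cgroup.
public class CgroupMountExample {
  public static Map<String, String> containerEnv() {
    Map<String, String> env = new HashMap<>();
    env.put("YARN_CONTAINER_RUNTIME_TYPE", "docker");
    env.put("YARN_CONTAINER_RUNTIME_DOCKER_IMAGE", "centos:7");
    env.put("YARN_CONTAINER_RUNTIME_DOCKER_MOUNTS",
        "/sys/fs/cgroup:/sys/fs/cgroup:ro");
    return env;
  }
}
{code}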



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7814) Remove automatic mounting of the cgroups root directory into Docker containers

2018-01-25 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339829#comment-16339829
 ] 

Shane Kumpf commented on YARN-7814:
---

Attached a new patch to address the unused import.

> Remove automatic mounting of the cgroups root directory into Docker containers
> --
>
> Key: YARN-7814
> URL: https://issues.apache.org/jira/browse/YARN-7814
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
> Attachments: YARN-7814.001.patch, YARN-7814.002.patch
>
>
> Currently, all Docker containers launched by {{DockerLinuxContainerRuntime}} 
> get /sys/fs/cgroup automatically mounted. Now that user supplied mounts 
> (YARN-5534) are in, containers that require this mount can request it (with a 
> properly configured mount whitelist).
> I propose we remove the automatic mounting of /sys/fs/cgroup into Docker 
> containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6597) Wrapping up allocationTags support under RMContainer state transitions

2018-01-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339826#comment-16339826
 ] 

genericqa commented on YARN-6597:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
36s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 57 unchanged - 2 fixed = 57 total (was 59) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 25s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 4 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6597 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907751/YARN-6597-YARN-6592.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 948139ab41a8 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-6592 / 13d37ce |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19472/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19472/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/19472/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 83

[jira] [Commented] (YARN-7798) Refactor SLS Reservation Creation

2018-01-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339801#comment-16339801
 ] 

genericqa commented on YARN-7798:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} hadoop-tools/hadoop-sls: The patch generated 1 
new + 50 unchanged - 2 fixed = 51 total (was 52) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
49s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7798 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907755/YARN-7798.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f42d332c82a7 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 82cc6f6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19474/artifact/out/diff-checkstyle-hadoop-tools_hadoop-sls.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19474/testReport/ |
| Max. process+thread count | 457 (vs. ulimit of 5000) |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19474/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (YARN-7765) [Atsv2] GSSException: No valid credentials provided - Failed to find any Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons

2018-01-25 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339680#comment-16339680
 ] 

Vrushali C commented on YARN-7765:
--

Related discussions 

[https://community.hortonworks.com/questions/117897/hbase-connection-expiration-on-kerberized-cluster.html]

https://issues.apache.org/jira/browse/HADOOP-10786

 

> [Atsv2] GSSException: No valid credentials provided - Failed to find any 
> Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons
> -
>
> Key: YARN-7765
> URL: https://issues.apache.org/jira/browse/YARN-7765
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Priority: Critical
>
> A secure cluster is deployed and all YARN services start successfully. When 
> an application is submitted, the app collectors, which run as an aux-service, 
> throw the below exception. This exception is *NOT* observed from the RM 
> TimelineCollector. 
> The cluster is deployed with Hadoop-3.0 and a secure HBase-1.2.6 cluster. All 
> the YARN and HBase services start and work fine. After 24 hours, i.e. when 
> the token lifetime expires, the HBaseClient in the NM and the HDFSClient in 
> the HMaster and HRegionServer start getting this error. After some time, the 
> HBase daemons shut down. In the NM, the JVM does not shut down, but none of 
> the events get published.
> {noformat}
> 2018-01-17 11:04:48,017 FATAL ipc.RpcClientImpl (RpcClientImpl.java:run(684)) 
> - SASL authentication failed. The most likely cause is missing or invalid 
> credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> {noformat}
> cc :/ [~vrushalic] [~varun_saxena] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7811) Service AM should use configured default docker network

2018-01-25 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339677#comment-16339677
 ] 

Billie Rinaldi commented on YARN-7811:
--

The patch makes the Service AM only set the network environment variable when a 
network is specified. It also makes the Service AM only set the privileged 
container environment variable when the value is true, since false is an 
invalid value for that variable.
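
In spirit the change amounts to guarding the env assignments instead of always writing them; a rough sketch is below, where the helper method is invented and the two env variable names are assumed to be the documented Docker runtime ones.

{code:java}
import java.util.Map;

// Hypothetical sketch of the behavior described above: only export the Docker
// network / privileged env vars when they carry meaningful values.
public class DockerEnvHelper {
  static void addDockerEnv(Map<String, String> env,
                           String network, boolean privileged) {
    if (network != null && !network.isEmpty()) {
      // otherwise fall back to the cluster's configured default network
      env.put("YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_NETWORK", network);
    }
    if (privileged) {
      // "false" is not a valid value for this variable, so omit it entirely
      env.put("YARN_CONTAINER_RUNTIME_DOCKER_RUN_PRIVILEGED_CONTAINER", "true");
    }
  }
}
{code}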

> Service AM should use configured default docker network
> ---
>
> Key: YARN-7811
> URL: https://issues.apache.org/jira/browse/YARN-7811
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-7811.01.patch
>
>
> Currently the DockerProviderService used by the Service AM hardcodes a 
> default of bridge for the docker network. We already have a YARN 
> configuration property for default network, so the Service AM should honor 
> that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7765) [Atsv2] GSSException: No valid credentials provided - Failed to find any Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons

2018-01-25 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339674#comment-16339674
 ] 

Vrushali C commented on YARN-7765:
--

It looks like this is happening to long-running HBase connections, such as the 
one from the timeline collector in the NodeManager to HBase. The HBase 
connection does not "automatically" pick up the new Kerberos credentials after 
the token lifetime expires. Perhaps we can consider adding a connection expiry 
when we set up the HBase connection in the timeline collector.
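
One way to approximate such an expiry, sketched under the assumption that the writer owns its HBase Connection and can recreate it periodically (the class here is invented for illustration and is not the actual timeline collector code):

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.security.UserGroupInformation;

// Hedged sketch: recycle the HBase connection on a fixed interval so a fresh
// connection (and fresh Kerberos credentials) is used after token expiry.
public class ExpiringHBaseConnection {
  private final Configuration conf;
  private final long maxAgeMillis;
  private Connection connection;
  private long createdAt;

  public ExpiringHBaseConnection(Configuration conf, long maxAgeMillis) {
    this.conf = conf;
    this.maxAgeMillis = maxAgeMillis;
  }

  public synchronized Connection get() throws IOException {
    long now = System.currentTimeMillis();
    if (connection == null || now - createdAt > maxAgeMillis) {
      if (connection != null) {
        connection.close();
      }
      // re-login from the keytab if the TGT is close to expiring
      UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab();
      connection = ConnectionFactory.createConnection(conf);
      createdAt = now;
    }
    return connection;
  }
}
{code}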

> [Atsv2] GSSException: No valid credentials provided - Failed to find any 
> Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons
> -
>
> Key: YARN-7765
> URL: https://issues.apache.org/jira/browse/YARN-7765
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Priority: Critical
>
> A secure cluster is deployed and all YARN services start successfully. When 
> an application is submitted, the app collectors, which run as an aux-service, 
> throw the below exception. This exception is *NOT* observed from the RM 
> TimelineCollector. 
> The cluster is deployed with Hadoop-3.0 and a secure HBase-1.2.6 cluster. All 
> the YARN and HBase services start and work fine. After 24 hours, i.e. when 
> the token lifetime expires, the HBaseClient in the NM and the HDFSClient in 
> the HMaster and HRegionServer start getting this error. After some time, the 
> HBase daemons shut down. In the NM, the JVM does not shut down, but none of 
> the events get published.
> {noformat}
> 2018-01-17 11:04:48,017 FATAL ipc.RpcClientImpl (RpcClientImpl.java:run(684)) 
> - SASL authentication failed. The most likely cause is missing or invalid 
> credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> {noformat}
> cc :/ [~vrushalic] [~varun_saxena] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7811) Service AM should use configured default docker network

2018-01-25 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-7811:
-
Attachment: YARN-7811.01.patch

> Service AM should use configured default docker network
> ---
>
> Key: YARN-7811
> URL: https://issues.apache.org/jira/browse/YARN-7811
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-7811.01.patch
>
>
> Currently the DockerProviderService used by the Service AM hardcodes a 
> default of bridge for the docker network. We already have a YARN 
> configuration property for default network, so the Service AM should honor 
> that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5148) [YARN-3368] Add page to new YARN UI to view server side configurations/logs/JVM-metrics

2018-01-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339669#comment-16339669
 ] 

genericqa commented on YARN-5148:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
31m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
38s{color} | {color:green} hadoop-yarn-ui in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-5148 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907742/YARN-5148.15.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux ebdbd4333e8b 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 82cc6f6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19471/testReport/ |
| Max. process+thread count | 435 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19471/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Add page to new YARN UI to view server side 
> configurations/logs/JVM-metrics
> ---
>
> Key: YARN-5148
> URL: https://issues.apache.org/jira/browse/YARN-5148
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp, yarn-ui-v2
>Reporter: Wangda Tan
> 

[jira] [Updated] (YARN-7798) Refactor SLS Reservation Creation

2018-01-25 Thread Young Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Young Chen updated YARN-7798:
-
Attachment: YARN-7798.03.patch

> Refactor SLS Reservation Creation
> -
>
> Key: YARN-7798
> URL: https://issues.apache.org/jira/browse/YARN-7798
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7798.01.patch, YARN-7798.02.patch, 
> YARN-7798.03.patch
>
>
> Move the reservation request creation out of SLSRunner and delegate to the 
> AMSimulator instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-3841) [Storage implementation] Adding retry semantics to HDFS backing storage

2018-01-25 Thread Abhishek Modi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi reassigned YARN-3841:
---

Assignee: Abhishek Modi  (was: Tsuyoshi Ozawa)

> [Storage implementation] Adding retry semantics to HDFS backing storage
> ---
>
> Key: YARN-3841
> URL: https://issues.apache.org/jira/browse/YARN-3841
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Abhishek Modi
>Priority: Major
>  Labels: YARN-5355
> Attachments: YARN-3841-YARN-7055.002.patch, YARN-3841.001.patch, 
> YARN-3841.002.patch, YARN-3841.003.patch
>
>
> HDFS backing storage is useful for the following scenarios.
> 1. For Hadoop clusters which don't run HBase.
> 2. As a fallback from HBase when the HBase cluster is temporarily unavailable. 
> Quoting the ATS design document of YARN-2928:
> {quote}
> In the case the HBase
> storage is not available, the plugin should buffer the writes temporarily 
> (e.g. HDFS), and flush
> them once the storage comes back online. Reading and writing to hdfs as the 
> the backup storage
> could potentially use the HDFS writer plugin unless the complexity of 
> generalizing the HDFS
> writer plugin for this purpose exceeds the benefits of reusing it here.
> {quote}
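
A bare-bones shape for the retry semantics being asked for here might look like the sketch below; the write interface and backoff numbers are assumptions for illustration, not the actual HDFS storage plugin.

{code:java}
import java.io.IOException;

// Hedged sketch of retry-with-backoff around a storage write. The
// TimelineWrite interface and the retry limits are illustrative assumptions.
public final class RetryingWriter {

  @FunctionalInterface
  public interface TimelineWrite {
    void run() throws IOException;
  }

  public static void writeWithRetries(TimelineWrite write, int maxRetries,
                                      long initialBackoffMs)
      throws IOException, InterruptedException {
    long backoff = initialBackoffMs;
    for (int attempt = 0; ; attempt++) {
      try {
        write.run();
        return;                       // success
      } catch (IOException e) {
        if (attempt >= maxRetries) {
          throw e;                    // exhausted retries, surface the error
        }
        Thread.sleep(backoff);        // wait before retrying
        backoff = Math.min(backoff * 2, 60_000L);
      }
    }
  }
}
{code}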



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7806) Distributed Shell should use timeline async api's

2018-01-25 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-7806:

Reporter: Sumana Sathish  (was: Rohith Sharma K S)

> Distributed Shell should use timeline async api's
> -
>
> Key: YARN-7806
> URL: https://issues.apache.org/jira/browse/YARN-7806
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: distributed-shell
>Reporter: Sumana Sathish
>Assignee: Rohith Sharma K S
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.1
>
> Attachments: YARN-7806.01.patch
>
>
> DS publishes container start/stop events using the sync API. If the back end 
> is down for some reason, DS will hang until the container start/stop events 
> are published. By default, the retry count is 30 and the interval is 1 sec.
> Publishing a single entity with the sync API can take about 1 minute to 
> return. In the case of DS, with 10 containers that is 10 minutes for the 
> start events and 10 minutes for the stop events, an overall wait of about 
> 20 minutes.
>  
> DS should publish container events using the async APIs.
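
The switch the description asks for is essentially the difference sketched below, assuming the TimelineV2Client that distributed shell already holds; entity construction and error handling are elided.

{code:java}
import java.io.IOException;
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
import org.apache.hadoop.yarn.client.api.TimelineV2Client;
import org.apache.hadoop.yarn.exceptions.YarnException;

// Hedged sketch: publish container events without blocking the AM thread.
public class TimelinePublishSketch {

  // Sync publish: blocks until the write (including retries) completes.
  static void publishSync(TimelineV2Client client, TimelineEntity entity)
      throws IOException, YarnException {
    client.putEntities(entity);
  }

  // Async publish: hands the entity to the client and returns immediately,
  // so a slow or unreachable backend does not stall container handling.
  static void publishAsync(TimelineV2Client client, TimelineEntity entity)
      throws IOException, YarnException {
    client.putEntitiesAsync(entity);
  }
}
{code}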



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6597) Wrapping up allocationTags support under RMContainer state transitions

2018-01-25 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339630#comment-16339630
 ] 

Arun Suresh commented on YARN-6597:
---

Thanks [~pgaref], +1
Will push this in after we merge with trunk.

> Wrapping up allocationTags support under RMContainer state transitions
> --
>
> Key: YARN-6597
> URL: https://issues.apache.org/jira/browse/YARN-6597
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Attachments: YARN-6597-YARN-6592.001.patch
>
>
> Each allocation can have a set of allocation tags associated with it.
> For example, an allocation can be marked as hbase, hbase-master, spark, etc.
> These allocation tags are active in the cluster only while that container is 
> active (from the moment it gets allocated until the moment it finishes its 
> execution).
> This JIRA is responsible for storing and updating the active allocation tags 
> in the cluster in the {{PlacementConstraintManager}}.
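
To make the lifecycle concrete, a toy in-memory tag store tied to container start/finish might look like the following; it is an invented illustration of the bookkeeping described above, not the actual PlacementConstraintManager or AllocationTagsManager code.

{code:java}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Invented illustration: per-node allocation tags that exist only while the
// corresponding container is active.
public class InMemoryAllocationTagStore {
  // nodeId -> tag -> number of active containers carrying that tag
  private final Map<String, Map<String, Integer>> tagsPerNode = new HashMap<>();

  public synchronized void containerAllocated(String nodeId, Set<String> tags) {
    Map<String, Integer> nodeTags =
        tagsPerNode.computeIfAbsent(nodeId, k -> new HashMap<>());
    for (String tag : tags) {
      nodeTags.merge(tag, 1, Integer::sum);
    }
  }

  public synchronized void containerFinished(String nodeId, Set<String> tags) {
    Map<String, Integer> nodeTags = tagsPerNode.get(nodeId);
    if (nodeTags == null) {
      return;
    }
    for (String tag : tags) {
      // drop the tag entirely once its last container finishes
      nodeTags.computeIfPresent(tag, (t, c) -> c > 1 ? c - 1 : null);
    }
    if (nodeTags.isEmpty()) {
      tagsPerNode.remove(nodeId);
    }
  }

  public synchronized Set<String> activeTags(String nodeId) {
    return new HashSet<>(
        tagsPerNode.getOrDefault(nodeId, new HashMap<>()).keySet());
  }
}
{code}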



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6597) Wrapping up allocationTags support under RMContainer state transitions

2018-01-25 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-6597:
-
Summary: Wrapping up allocationTags support under RMContainer state 
transitions  (was: Store and update allocation tags in the Placement Constraint 
Manager)

> Wrapping up allocationTags support under RMContainer state transitions
> --
>
> Key: YARN-6597
> URL: https://issues.apache.org/jira/browse/YARN-6597
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Attachments: YARN-6597-YARN-6592.001.patch
>
>
> Each allocation can have a set of allocation tags associated with it.
> For example, an allocation can be marked as hbase, hbase-master, spark, etc.
> These allocation tags are active in the cluster only while that container is 
> active (from the moment it gets allocated until the moment it finishes its 
> execution).
> This JIRA is responsible for storing and updating the active allocation tags 
> in the cluster in the {{PlacementConstraintManager}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7818) DistributedShell Container fails with exitCode=143 when NM restarts and recovers

2018-01-25 Thread Yesha Vora (JIRA)
Yesha Vora created YARN-7818:


 Summary: DistributedShell Container fails with exitCode=143 when 
NM restarts and recovers
 Key: YARN-7818
 URL: https://issues.apache.org/jira/browse/YARN-7818
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Yesha Vora


steps:
1) Run Dshell Application
{code}
yarn  org.apache.hadoop.yarn.applications.distributedshell.Client -jar 
/usr/hdp/3.0.0.0-751/hadoop-yarn/hadoop-yarn-applications-distributedshell-*.jar
 -keep_containers_across_application_attempts -timeout 90 -shell_command 
"sleep 110" -num_containers 4{code}
2) Find the host where the AM is running. 
3) Find the containers launched by the application.
4) Restart the NM where the AM is running.
5) Validate that a new attempt is not started and that the containers launched 
before the restart remain in the RUNNING state.

In this test, step #5 fails because the containers failed to launch with exit code 143.
{code}
2018-01-24 09:48:30,547 INFO  container.ContainerImpl 
(ContainerImpl.java:handle(2108)) - Container 
container_e04_1516787230461_0001_01_03 transitioned from RUNNING to KILLING
2018-01-24 09:48:30,547 INFO  launcher.ContainerLaunch 
(ContainerLaunch.java:cleanupContainer(668)) - Cleaning up container 
container_e04_1516787230461_0001_01_03
2018-01-24 09:48:30,552 WARN  privileged.PrivilegedOperationExecutor 
(PrivilegedOperationExecutor.java:executePrivilegedOperation(174)) - Shell 
execution returned exit code: 143. Privileged Execution Operation Stderr:

Stdout: main : command provided 1
main : run as user is hrt_qa
main : requested yarn user is hrt_qa
Getting exit code file...
Creating script paths...
Writing pid file...
Writing to tmp file 
/grid/0/hadoop/yarn/local/nmPrivate/application_1516787230461_0001/container_e04_1516787230461_0001_01_03/container_e04_1516787230461_0001_01_03.pid.tmp
Writing to cgroup task files...
Creating local dirs...
Launching container...
Getting exit code file...
Creating script paths...

Full command array for failed execution:
[/usr/hdp/3.0.0.0-751/hadoop-yarn/bin/container-executor, hrt_qa, hrt_qa, 1, 
application_1516787230461_0001, container_e04_1516787230461_0001_01_03, 
/grid/0/hadoop/yarn/local/usercache/hrt_qa/appcache/application_1516787230461_0001/container_e04_1516787230461_0001_01_03,
 
/grid/0/hadoop/yarn/local/nmPrivate/application_1516787230461_0001/container_e04_1516787230461_0001_01_03/launch_container.sh,
 
/grid/0/hadoop/yarn/local/nmPrivate/application_1516787230461_0001/container_e04_1516787230461_0001_01_03/container_e04_1516787230461_0001_01_03.tokens,
 
/grid/0/hadoop/yarn/local/nmPrivate/application_1516787230461_0001/container_e04_1516787230461_0001_01_03/container_e04_1516787230461_0001_01_03.pid,
 /grid/0/hadoop/yarn/local, /grid/0/hadoop/yarn/log, cgroups=none]
2018-01-24 09:48:30,553 WARN  runtime.DefaultLinuxContainerRuntime 
(DefaultLinuxContainerRuntime.java:launchContainer(127)) - Launch container 
failed. Exception:
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException:
 ExitCodeException exitCode=143:
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:180)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime.launchContainer(DefaultLinuxContainerRuntime.java:124)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime.launchContainer(DelegatingLinuxContainerRuntime.java:152)
at 
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:549)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:465)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:285)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:95)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: ExitCodeException exitCode=143:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:1009)
at org.apache.hadoop.util.Shell.run(Shell.java:902)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1227)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:152)
... 10 more
2018-01-24 09
