[jira] [Commented] (YARN-6596) Introduce Placement Constraint Manager module

2017-12-20 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299659#comment-16299659
 ] 

Weiwei Yang commented on YARN-6596:
---

Hi [~kkaranasos]

I have some general comments. I thought the main purpose of 
{{PlacementConstraintManager}} was to store the mapping of 
allocation tags/node attributes to nodes, so that this information can be used to 
calculate placements based on constraints. However, I did not see any data 
structure for this. Instead it only stores {{Map> appConstraints}}, and I am not 
sure how useful that is. Each allocation request already includes its 
{{PlacementConstraint}}, so why do we need to store it here?
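
For illustration, a hypothetical sketch of the kind of tag-to-node index described above; the class and its methods are made up for this comment and are not part of the attached patch:

{code}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.yarn.api.records.NodeId;

// Illustrative only: an in-memory index from allocation tag to the nodes
// (and per-node container counts) carrying that tag, which a constraint
// evaluator could query when deciding where a new request may be placed.
public class AllocationTagsIndexSketch {
  private final Map<String, Map<NodeId, Long>> tagToNodeCount = new HashMap<>();

  public synchronized void addContainer(NodeId node, String tag) {
    tagToNodeCount.computeIfAbsent(tag, t -> new HashMap<>())
        .merge(node, 1L, Long::sum);
  }

  public synchronized long getCardinalityOnNode(NodeId node, String tag) {
    return tagToNodeCount.getOrDefault(tag, Collections.emptyMap())
        .getOrDefault(node, 0L);
  }
}
{code}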

Thanks

> Introduce Placement Constraint Manager module
> -
>
> Key: YARN-6596
> URL: https://issues.apache.org/jira/browse/YARN-6596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6596-YARN-6592.001.patch
>
>
> This RM module will be responsible for storing placement constraints, 
> allocation tags, and node attributes.
> It will be used when determining the placement of SchedulingRequests with 
> constraints.






[jira] [Commented] (YARN-3895) Support ACLs in ATSv2

2017-12-20 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299641#comment-16299641
 ] 

Vrushali C commented on YARN-3895:
--

We can consider using the application ACLs in the submission context. These ACLs 
will be at the application level (not applicable for offline collectors).

We can allow all writes (since only authorized users can write anyway), but only 
permitted readers will be able to read.

Let us try to target 3.1 for this.
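
For illustration, a minimal sketch of how an application can declare view ACLs in its submission context today; how the ATSv2 readers would enforce them is exactly the open design question of this JIRA, so treat the snippet as an assumption rather than the proposed implementation:

{code}
import java.util.Collections;
import org.apache.hadoop.yarn.api.records.ApplicationAccessType;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;

public class SubmitWithViewAclSketch {
  // Attaches a VIEW_APP ACL to the AM container spec; a timeline reader
  // could consult the same ACL to decide who may read the app's entities.
  static void setViewAcl(ApplicationSubmissionContext appContext,
      ContainerLaunchContext amContainer, String allowedReaders) {
    amContainer.setApplicationACLs(
        Collections.singletonMap(ApplicationAccessType.VIEW_APP, allowedReaders));
    appContext.setAMContainerSpec(amContainer);
  }
}
{code}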

> Support ACLs in ATSv2
> -
>
> Key: YARN-3895
> URL: https://issues.apache.org/jira/browse/YARN-3895
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355
>
> This JIRA is to keep track of authorization support design discussions for 
> both readers and collectors. 






[jira] [Commented] (YARN-7669) API and interface modifications for placement constraint processor

2017-12-20 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299634#comment-16299634
 ] 

Arun Suresh commented on YARN-7669:
---

Thanks [~sunilg] and [~cheersyang] for the reviews as well. Even though this 
has been committed, feel free to raise any additional concerns regarding the 
changes here. I am more than happy to reopen this or raise follow-up JIRAs.

> API and interface modifications for placement constraint processor
> --
>
> Key: YARN-7669
> URL: https://issues.apache.org/jira/browse/YARN-7669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: 3.1.0
>
> Attachments: YARN-7669-YARN-6592.001.patch, 
> YARN-7669-YARN-6592.002.patch, YARN-7669-YARN-6592.003.patch, 
> YARN-7669-YARN-6592.004.patch, YARN-7669-YARN-6592.005.patch
>
>
> As per discussions in YARN-7612. This JIRA will introduce the generic 
> interfaces which will be implemented in YARN-7612






[jira] [Commented] (YARN-7669) API and interface modifications for placement constraint processor

2017-12-20 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299622#comment-16299622
 ] 

Arun Suresh commented on YARN-7669:
---

Thanks [~kkaranasos]. About the naming: yes, we use "placement" in other places 
as well, and the terms might clash if considered out of context, but given that 
this pertains to SchedulingRequests etc., it should be self-explanatory.
Committing this to the YARN-6592 branch.

> API and interface modifications for placement constraint processor
> --
>
> Key: YARN-7669
> URL: https://issues.apache.org/jira/browse/YARN-7669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7669-YARN-6592.001.patch, 
> YARN-7669-YARN-6592.002.patch, YARN-7669-YARN-6592.003.patch, 
> YARN-7669-YARN-6592.004.patch, YARN-7669-YARN-6592.005.patch
>
>
> As per discussions in YARN-7612. This JIRA will introduce the generic 
> interfaces which will be implemented in YARN-7612






[jira] [Updated] (YARN-7669) API and interface modifications for placement constraint processor

2017-12-20 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7669:
--
Summary: API and interface modifications for placement constraint processor 
 (was: [API] Introduce interfaces for placement constraint processing)

> API and interface modifications for placement constraint processor
> --
>
> Key: YARN-7669
> URL: https://issues.apache.org/jira/browse/YARN-7669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7669-YARN-6592.001.patch, 
> YARN-7669-YARN-6592.002.patch, YARN-7669-YARN-6592.003.patch, 
> YARN-7669-YARN-6592.004.patch, YARN-7669-YARN-6592.005.patch
>
>
> As per discussions in YARN-7612. This JIRA will introduce the generic 
> interfaces which will be implemented in YARN-7612






[jira] [Commented] (YARN-7605) Implement doAs for Api Service REST API

2017-12-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299557#comment-16299557
 ] 

genericqa commented on YARN-7605:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  6s{color} | {color:orange} root: The patch generated 2 new + 158 unchanged 
- 2 fixed = 160 total (was 160) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 48s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
4s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 63m  
1s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
49s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}178m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem |
|   | hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7605 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-6596) Introduce Placement Constraint Manager module

2017-12-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299542#comment-16299542
 ] 

genericqa commented on YARN-6596:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
58s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 3 new + 55 unchanged - 0 fixed = 58 total (was 55) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
16s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 59s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  String is incompatible with expected argument type 
org.apache.hadoop.yarn.api.records.ApplicationId in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.MemoryPlacementConstraintManager.removeGlobalConstraint(Set)
  At MemoryPlacementConstraintManager.java:argument type 
org.apache.hadoop.yarn.api.records.ApplicationId in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.MemoryPlacementConstraintManager.removeGlobalConstraint(Set)
  At MemoryPlacementConstraintManager.java:[line 252] |
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | 

[jira] [Commented] (YARN-6596) Introduce Placement Constraint Manager module

2017-12-20 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299462#comment-16299462
 ] 

Konstantinos Karanasos commented on YARN-6596:
--

bq. Would it not be better if we just expose addConstraint(sourceTags, 
constraint, appId) and getConstraint(sourceTags, appId) and let the 
PlacementConstraintManager decide from the tags if it is app specific or global 
and perform the appropriate operation?
I like what you say about getConstraint. Indeed, we should probably not have 
a getGlobalConstraint. We can have a single getConstraint that returns only the 
global constraints when the appId is empty, or something along those lines. Also, 
what I had in mind was that even if you request the constraint for a specific 
sourceTag and appId, it should be merged with any global constraint too. So if a 
specific appId says you should not have more than 5 HBase containers per rack, 
while a global constraint says no more than 3 per rack, you should merge these 
two (the global one wins here, as it is more restrictive). I am planning to add 
some transformation rules to the constraints to handle this soon; a sketch of the 
merge rule follows below.
For addConstraint, I think it is fine to distinguish between app-specific 
and global, as the global constraints should be used only by the admin API. But 
again, we can have a single addConstraint where an empty appId means the 
constraint is global.
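
To make the merge rule concrete, here is a hypothetical sketch that uses a single max-containers-per-rack bound instead of the real {{PlacementConstraint}} expression tree; the transformation rules mentioned above would generalize this:

{code}
// Hypothetical: merge an app-specific and a global max-cardinality bound by
// keeping the more restrictive one. The real constraints are expression
// trees, so the actual merge would be done via transformation rules.
final class CardinalityMergeSketch {
  static Integer mergeMaxPerRack(Integer appMax, Integer globalMax) {
    if (appMax == null) {
      return globalMax;                  // only a global bound exists
    }
    if (globalMax == null) {
      return appMax;                     // only an app-specific bound exists
    }
    return Math.min(appMax, globalMax);  // more restrictive bound wins
  }

  public static void main(String[] args) {
    // App: at most 5 HBase containers per rack; global: at most 3 per rack.
    System.out.println(mergeMaxPerRack(5, 3)); // prints 3 -- the global wins
  }
}
{code}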

> Introduce Placement Constraint Manager module
> -
>
> Key: YARN-6596
> URL: https://issues.apache.org/jira/browse/YARN-6596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6596-YARN-6592.001.patch
>
>
> This RM module will be responsible for storing placement constraints, 
> allocation tags, and node attributes.
> It will be used when determining the placement of SchedulingRequests with 
> constraints.






[jira] [Commented] (YARN-4227) FairScheduler: RM quits processing expired container from a removed node

2017-12-20 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299461#comment-16299461
 ] 

Wilfred Spiegelenburg commented on YARN-4227:
-

Tests pass locally.
The test failures also cannot be related, because they are hard-coded to only 
test the CapacityScheduler.

> FairScheduler: RM quits processing expired container from a removed node
> 
>
> Key: YARN-4227
> URL: https://issues.apache.org/jira/browse/YARN-4227
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.3.0, 2.5.0, 2.7.1
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Critical
> Attachments: YARN-4227.2.patch, YARN-4227.3.patch, YARN-4227.4.patch, 
> YARN-4227.5.patch, YARN-4227.patch
>
>
> Under some circumstances the node is removed before an expired container 
> event is processed causing the RM to exit:
> {code}
> 2015-10-04 21:14:01,063 INFO 
> org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: 
> Expired:container_1436927988321_1307950_01_12 Timed out after 600 secs
> 2015-10-04 21:14:01,063 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_1436927988321_1307950_01_12 Container Transitioned from 
> ACQUIRED to EXPIRED
> 2015-10-04 21:14:01,063 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerApp: 
> Completed container: container_1436927988321_1307950_01_12 in state: 
> EXPIRED event:EXPIRE
> 2015-10-04 21:14:01,063 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=system_op   
>OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS  
> APPID=application_1436927988321_1307950 
> CONTAINERID=container_1436927988321_1307950_01_12
> 2015-10-04 21:14:01,063 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type CONTAINER_EXPIRED to the scheduler
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.completedContainer(FairScheduler.java:849)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1273)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:122)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:585)
>   at java.lang.Thread.run(Thread.java:745)
> 2015-10-04 21:14:01,063 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {code}
> The stack trace is from 2.3.0 but the same issue has been observed in 2.5.0 
> and 2.6.0 by different customers.






[jira] [Commented] (YARN-7590) Improve container-executor validation check

2017-12-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299455#comment-16299455
 ] 

genericqa commented on YARN-7590:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
28m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
30s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7590 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12903145/YARN-7590.004.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 3f04492624ad 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5ab632b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19002/testReport/ |
| Max. process+thread count | 330 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19002/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7590.001.patch, YARN-7590.002.patch, 
> YARN-7590.003.patch, YARN-7590.004.patch
>
>
> There is minimum check for prefix path for container-executor.  If YARN is 
> compromised, attacker  can use container-executor to change system files 
> ownership:
> {code}
> 

[jira] [Commented] (YARN-6596) Introduce Placement Constraint Manager module

2017-12-20 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299454#comment-16299454
 ] 

Konstantinos Karanasos commented on YARN-6596:
--

Thanks [~asuresh] for the comments.

bq. Do we need both registerApplication() and addApplicationConstraint()? The 
former takes a map and the latter takes individual entries of the map. I'd say 
the add should be good enough, given that it exposes the replace parameter.
I was thinking that the first one would be used when the application gets 
registered, so that the application-level constraints can be added directly 
without the external code having to do any iteration. This also avoids checking 
each time whether the application is registered in the PCM. The second would be 
used when a SchedulingRequest comes in with additional constraints (see the 
sketch of the two paths after this comment).
bq. Nit: Looks like the patch introduced some empty lines.
Yep, I have already removed them locally; I am waiting for Jenkins so that I can 
fix more things before uploading the next version.
bq. Given that we are disallowing an allocation tag set key with more than 1 
entry at the moment, we would also need to ensure that an incoming scheduling 
request be validated to ensure it is associated with a single tag as well, right?
I think so, yes.
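
For reference, a hypothetical sketch of the two registration paths discussed above; the names and signatures are illustrative, not the ones in the attached patch:

{code}
import java.util.Map;
import java.util.Set;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;

// Illustrative only: the bulk path used at application registration and the
// incremental path used when a SchedulingRequest carries extra constraints.
public interface PlacementConstraintManagerSketch {

  // Bulk path: add all application-level constraints in one call, without
  // the caller iterating or re-checking whether the app is registered.
  void registerApplication(ApplicationId appId,
      Map<Set<String>, PlacementConstraint> constraints);

  // Incremental path: add (or replace) the constraint attached to one set
  // of source allocation tags when a SchedulingRequest arrives.
  void addApplicationConstraint(ApplicationId appId, Set<String> sourceTags,
      PlacementConstraint constraint, boolean replace);
}
{code}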

> Introduce Placement Constraint Manager module
> -
>
> Key: YARN-6596
> URL: https://issues.apache.org/jira/browse/YARN-6596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6596-YARN-6592.001.patch
>
>
> This RM module will be responsible for storing placement constraints, 
> allocation tags, and node attributes.
> It will be used when determining the placement of SchedulingRequests with 
> constraints.






[jira] [Comment Edited] (YARN-6596) Introduce Placement Constraint Manager module

2017-12-20 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299449#comment-16299449
 ] 

Arun Suresh edited comment on YARN-6596 at 12/21/17 2:49 AM:
-

Also, just a thought.
A client of the API may not necessarily know whether it is dealing with a global 
constraint or an application constraint (at least until we have a 
cluster-admin-level API). For example, the processor would have to parse the 
source tags and decide which API to call. Would it not be better if we just 
expose addConstraint(sourceTags, constraint, appId) and getConstraint(sourceTags, 
appId) and let the PlacementConstraintManager decide from the tags whether it is 
app-specific or global and perform the appropriate operation?

 


was (Author: asuresh):
Also just a thought.
Clients of the API might not probably if it is dealing with a global constraint 
or an application constraint (atleast until we have a cluster admin level API). 
Would it not be better if we just expose addConstraint(sourceTags, constraint, 
appId) and getConstraint(sourceTags, appId) and let the 
PlacementConstaintManager decide from the tags if it is app specific or global 
and perform the appropriate operation ?

 

> Introduce Placement Constraint Manager module
> -
>
> Key: YARN-6596
> URL: https://issues.apache.org/jira/browse/YARN-6596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6596-YARN-6592.001.patch
>
>
> This RM module will be responsible for storing placement constraints, 
> allocation tags, and node attributes.
> It will be used when determining the placement of SchedulingRequests with 
> constraints.






[jira] [Commented] (YARN-6596) Introduce Placement Constraint Manager module

2017-12-20 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299449#comment-16299449
 ] 

Arun Suresh commented on YARN-6596:
---

Also, just a thought.
A client of the API may not necessarily know whether it is dealing with a global 
constraint or an application constraint (at least until we have a 
cluster-admin-level API). Would it not be better if we just expose 
addConstraint(sourceTags, constraint, appId) and getConstraint(sourceTags, appId) 
and let the PlacementConstraintManager decide from the tags whether it is 
app-specific or global and perform the appropriate operation? A possible shape of 
such a unified API is sketched below.
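
A hypothetical sketch of the unified entry points being suggested here; treating a null/absent appId as "global" is one possible convention, not the committed API:

{code}
import java.util.Set;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;

// Illustrative only: a single add/get pair where the manager decides from
// the appId (null => global, admin-only path) which store a constraint
// belongs to, instead of the caller picking between app and global APIs.
public interface UnifiedConstraintApiSketch {

  void addConstraint(Set<String> sourceTags, PlacementConstraint constraint,
      ApplicationId appId);

  PlacementConstraint getConstraint(Set<String> sourceTags,
      ApplicationId appId);
}
{code}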

 

> Introduce Placement Constraint Manager module
> -
>
> Key: YARN-6596
> URL: https://issues.apache.org/jira/browse/YARN-6596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6596-YARN-6592.001.patch
>
>
> This RM module will be responsible for storing placement constraints, 
> allocation tags, and node attributes.
> It will be used when determining the placement of SchedulingRequests with 
> constraints.






[jira] [Commented] (YARN-7669) [API] Introduce interfaces for placement constraint processing

2017-12-20 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299445#comment-16299445
 ] 

Konstantinos Karanasos commented on YARN-7669:
--

Patch looks good to me, thanks [~asuresh].

PS: In the latest names of the {{RejectionReason}} you make the assumption that 
"place" is what the processor does and "schedule" is what the scheduler does 
when trying to commit the resource. In other parts of the code, placement is 
used differently (e.g., {{PlacementManager}} and {{PlacementRule}} -- this is 
what I was referring to in [my 
comment|https://issues.apache.org/jira/browse/YARN-7612?focusedCommentId=16296117=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16296117]
 in YARN-7612). In any case, the javadoc is clear now, so I guess we can rename 
the enums later.

> [API] Introduce interfaces for placement constraint processing
> --
>
> Key: YARN-7669
> URL: https://issues.apache.org/jira/browse/YARN-7669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7669-YARN-6592.001.patch, 
> YARN-7669-YARN-6592.002.patch, YARN-7669-YARN-6592.003.patch, 
> YARN-7669-YARN-6592.004.patch, YARN-7669-YARN-6592.005.patch
>
>
> As per discussions in YARN-7612. This JIRA will introduce the generic 
> interfaces which will be implemented in YARN-7612






[jira] [Commented] (YARN-6596) Introduce Placement Constraint Manager module

2017-12-20 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299436#comment-16299436
 ] 

Arun Suresh commented on YARN-6596:
---

Thanks for the patch [~kkaranasos].

Skimmed through it. A couple of comments:
* Do we need both {{registerApplication()}} and {{addApplicationConstraint()}}? 
The former takes a map and the latter takes individual entries of the map. I'd 
say the add should be good enough, given that it exposes the replace parameter.
* Nit: looks like the patch introduced some empty lines.
* Given that we are disallowing an allocation tag set key with more than one 
entry at the moment, we would also need to ensure that an incoming scheduling 
request is validated to be associated with a single tag as well, right? (See the 
validation sketch after this list.)
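
As a concrete illustration of the last point, a hypothetical validation step for incoming requests; the surrounding class and method are made up, while {{SchedulingRequest#getAllocationTags()}} is the existing accessor:

{code}
import java.util.Set;
import org.apache.hadoop.yarn.api.records.SchedulingRequest;
import org.apache.hadoop.yarn.exceptions.YarnException;

// Illustrative only: reject SchedulingRequests that carry more than one
// source allocation tag, mirroring the single-entry tag-set restriction.
public final class SingleTagValidatorSketch {
  public static void validate(SchedulingRequest request) throws YarnException {
    Set<String> tags = request.getAllocationTags();
    if (tags == null || tags.size() != 1) {
      throw new YarnException(
          "Expected exactly one allocation tag per SchedulingRequest, got: "
              + tags);
    }
  }
}
{code}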

> Introduce Placement Constraint Manager module
> -
>
> Key: YARN-6596
> URL: https://issues.apache.org/jira/browse/YARN-6596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6596-YARN-6592.001.patch
>
>
> This RM module will be responsible for storing placement constraints, 
> allocation tags, and node attributes.
> It will be used when determining the placement of SchedulingRequests with 
> constraints.






[jira] [Updated] (YARN-6596) Introduce Placement Constraint Manager module

2017-12-20 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-6596:
-
Attachment: YARN-6596-YARN-6592.001.patch

Attaching first version of the patch.
I have not included tests yet -- I will add some tomorrow.

> Introduce Placement Constraint Manager module
> -
>
> Key: YARN-6596
> URL: https://issues.apache.org/jira/browse/YARN-6596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6596-YARN-6592.001.patch
>
>
> This RM module will be responsible for storing placement constraints, 
> allocation tags, and node attributes.
> It will be used when determining the placement of SchedulingRequests with 
> constraints.






[jira] [Commented] (YARN-7673) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster

2017-12-20 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299414#comment-16299414
 ] 

Bharat Viswanadham commented on YARN-7673:
--

The yarn-server-common module is missing, which is why this 
ClassNotFoundException is seen.
Attached patch v00.

[~djp], could you please review the changes?

> ClassNotFoundException: 
> org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using 
> hadoop-client-minicluster
> --
>
> Key: YARN-7673
> URL: https://issues.apache.org/jira/browse/YARN-7673
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jeff Zhang
> Attachments: YARN-7673.00.patch
>
>
> I'd like to use hadoop-client-minicluster for a hadoop downstream project, but 
> I encounter the following exception when starting the hadoop minicluster.  I 
> checked hadoop-client-minicluster, and it indeed does not have this class. Is 
> this something that was missed when packaging the published jar?
> {code}
> java.lang.NoClassDefFoundError: 
> org/apache/hadoop/yarn/server/api/DistributedSchedulingAMProtocol
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.createResourceManager(MiniYARNCluster.java:851)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:285)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> {code}






[jira] [Commented] (YARN-6596) Introduce Placement Constraint Manager module

2017-12-20 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299412#comment-16299412
 ] 

Arun Suresh commented on YARN-6596:
---

Sure.. assigning to you.

> Introduce Placement Constraint Manager module
> -
>
> Key: YARN-6596
> URL: https://issues.apache.org/jira/browse/YARN-6596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>
> This RM module will be responsible for storing placement constraints, 
> allocation tags, and node attributes.
> It will be used when determining the placement of SchedulingRequests with 
> constraints.






[jira] [Assigned] (YARN-6596) Introduce Placement Constraint Manager module

2017-12-20 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh reassigned YARN-6596:
-

Assignee: Konstantinos Karanasos  (was: Arun Suresh)

> Introduce Placement Constraint Manager module
> -
>
> Key: YARN-6596
> URL: https://issues.apache.org/jira/browse/YARN-6596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>
> This RM module will be responsible for storing placement constraints, 
> allocation tags, and node attributes.
> It will be used when determining the placement of SchedulingRequests with 
> constraints.






[jira] [Updated] (YARN-7673) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster

2017-12-20 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated YARN-7673:
-
Attachment: YARN-7673.00.patch

> ClassNotFoundException: 
> org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using 
> hadoop-client-minicluster
> --
>
> Key: YARN-7673
> URL: https://issues.apache.org/jira/browse/YARN-7673
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jeff Zhang
> Attachments: YARN-7673.00.patch
>
>
> I'd like to use hadoop-client-minicluster for a hadoop downstream project, but 
> I encounter the following exception when starting the hadoop minicluster.  I 
> checked hadoop-client-minicluster, and it indeed does not have this class. Is 
> this something that was missed when packaging the published jar?
> {code}
> java.lang.NoClassDefFoundError: 
> org/apache/hadoop/yarn/server/api/DistributedSchedulingAMProtocol
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.createResourceManager(MiniYARNCluster.java:851)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:285)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> {code}






[jira] [Commented] (YARN-6596) Introduce Placement Constraint Manager module

2017-12-20 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299401#comment-16299401
 ] 

Konstantinos Karanasos commented on YARN-6596:
--

[~asuresh], I have put some work into the Placement Constraint Manager, so I will 
take over this JIRA, if you don't mind.
In particular, its API is required for finalizing YARN-7612. I will post a first 
patch soon.

> Introduce Placement Constraint Manager module
> -
>
> Key: YARN-6596
> URL: https://issues.apache.org/jira/browse/YARN-6596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Arun Suresh
>
> This RM module will be responsible for storing placement constraints, 
> allocation tags, and node attributes.
> It will be used when determining the placement of SchedulingRequests with 
> constraints.






[jira] [Updated] (YARN-7605) Implement doAs for Api Service REST API

2017-12-20 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7605:

Attachment: YARN-7605.008.patch

> Implement doAs for Api Service REST API
> ---
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch, 
> YARN-7605.008.patch
>
>
> In YARN-7540, all client entry points for the API service were centralized to 
> use the REST API instead of making direct file system and resource manager RPC 
> calls.  This change helped to centralize yarn metadata under the yarn user 
> instead of crawling through every user's home directory to find metadata.  
> The next step is to make sure "doAs" calls work properly for the API Service.  
> The metadata is stored by the YARN user, but the actual workload still needs to 
> be performed as the end user, hence the API service must authenticate the end 
> user's kerberos credential and perform a doAs call when requesting containers 
> via ServiceClient.
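
For context, a minimal sketch of the proxy-user ("doAs") pattern referred to above, using the standard {{UserGroupInformation}} API; the method and action body are placeholders, not the code in the attached patches:

{code}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class DoAsSketch {
  // Runs an action as the authenticated end user while the service itself
  // is logged in as the yarn user (the proxy/real user).
  static <T> T runAsEndUser(String endUser, PrivilegedExceptionAction<T> action)
      throws Exception {
    UserGroupInformation proxyUser = UserGroupInformation.createProxyUser(
        endUser, UserGroupInformation.getLoginUser());
    return proxyUser.doAs(action); // e.g. submit the service via ServiceClient
  }
}
{code}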






[jira] [Updated] (YARN-7590) Improve container-executor validation check

2017-12-20 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7590:

Attachment: YARN-7590.004.patch

- Fix a segmentation fault in test case.

> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7590.001.patch, YARN-7590.002.patch, 
> YARN-7590.003.patch, YARN-7590.004.patch
>
>
> There is minimum check for prefix path for container-executor.  If YARN is 
> compromised, attacker  can use container-executor to change system files 
> ownership:
> {code}
> /usr/local/hadoop/bin/container-executor spark yarn 0 etc /home/yarn/tokens 
> /home/spark / ls
> {code}
> This will change /etc to be owned by spark user:
> {code}
> # ls -ld /etc
> drwxr-s---. 110 spark hadoop 8192 Nov 21 20:00 /etc
> {code}
> Spark user can rewrite /etc files to gain more access.  We can improve this 
> with additional check in container-executor:
> # Make sure the prefix path is owned by the same user as the caller to 
> container-executor.
> # Make sure the log directory prefix is owned by the same user as the caller.






[jira] [Updated] (YARN-7612) Add Placement Processor Framework

2017-12-20 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7612:
--
Attachment: YARN-7612-YARN-6592.008.patch

Updating patch after rebasing and applying latest version of YARN-7669.

> Add Placement Processor Framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, 
> YARN-7612-YARN-6592.008.patch, YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and places 
> them on nodes.
> The actual planning algorithm(s) will be handled in a YARN-7613.






[jira] [Updated] (YARN-7670) Modifications to the ResourceScheduler to support SchedulingRequests

2017-12-20 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7670:
--
Attachment: YARN-7670-YARN-6592.addendum.patch

Attaching an addendum patch - with a minor change that I had missed staging.

> Modifications to the ResourceScheduler to support SchedulingRequests
> 
>
> Key: YARN-7670
> URL: https://issues.apache.org/jira/browse/YARN-7670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: 3.1.0
>
> Attachments: YARN-7670-YARN-6592.001.patch, 
> YARN-7670-YARN-6592.002.patch, YARN-7670-YARN-6592.003.patch, 
> YARN-7670-YARN-6592.addendum.patch
>
>
> As per discussions in YARN-7612. This JIRA tracks the changes to the 
> ResourceScheduler interface and implementation in CapacityScheduler to 
> support SchedulingRequests






[jira] [Commented] (YARN-7590) Improve container-executor validation check

2017-12-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299272#comment-16299272
 ] 

genericqa commented on YARN-7590:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
27m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 12s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7590 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12903122/YARN-7590.003.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 079354060f13 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 382215c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19001/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19001/testReport/ |
| Max. process+thread count | 303 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19001/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7590.001.patch, YARN-7590.002.patch, 
> YARN-7590.003.patch
>
>
> There is minimum check for prefix path for container-executor.  

[jira] [Commented] (YARN-7673) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster

2017-12-20 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299201#comment-16299201
 ] 

Junping Du commented on YARN-7673:
--

Thanks [~zjffdu] for trying the hadoop shaded jar in a downstream project and 
reporting the issue. I discussed this with [~bharatviswa]; we think we may have 
missed some classes when wrapping up these shaded jars. If there are no 
objections, I will go ahead and create an umbrella JIRA to track the unfinished 
work for the hadoop shaded client, in case more classes are found to be missing 
in real-world testing.

> ClassNotFoundException: 
> org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using 
> hadoop-client-minicluster
> --
>
> Key: YARN-7673
> URL: https://issues.apache.org/jira/browse/YARN-7673
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jeff Zhang
>
> I'd like to use hadoop-client-minicluster for a hadoop downstream project, but 
> I encounter the following exception when starting the hadoop minicluster. I 
> checked hadoop-client-minicluster, and it indeed does not contain this class. 
> Is this something that was missed when packaging the published jar?
> {code}
> java.lang.NoClassDefFoundError: 
> org/apache/hadoop/yarn/server/api/DistributedSchedulingAMProtocol
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.createResourceManager(MiniYARNCluster.java:851)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:285)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2185) Use pipes when localizing archives

2017-12-20 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299199#comment-16299199
 ] 

Miklos Szegedi commented on YARN-2185:
--

Based on our discussion offline I can spend a few cycles on this.

> Use pipes when localizing archives
> --
>
> Key: YARN-2185
> URL: https://issues.apache.org/jira/browse/YARN-2185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>
> Currently the nodemanager downloads an archive to a local file, unpacks it, 
> and then removes it.  It would be more efficient to stream the data as it's 
> being unpacked to avoid both the extra disk space requirements and the 
> additional disk activity from storing the archive.
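A minimal sketch of the streaming idea, using {{java.util.zip}} as a stand-in 
for the archive formats the localizer actually handles (the real NM code paths 
and the pipe plumbing are not shown):

{code}
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public final class StreamingUnpack {
  /** Unpacks a zip stream into destDir without staging the archive on disk. */
  public static void unpack(InputStream download, Path destDir) throws IOException {
    Path base = destDir.normalize();
    try (ZipInputStream zin = new ZipInputStream(download)) {
      ZipEntry entry;
      while ((entry = zin.getNextEntry()) != null) {
        Path target = base.resolve(entry.getName()).normalize();
        if (!target.startsWith(base)) {
          throw new IOException("Blocked path traversal: " + entry.getName());
        }
        if (entry.isDirectory()) {
          Files.createDirectories(target);
        } else {
          Files.createDirectories(target.getParent());
          // Entry bytes are read straight from the download stream.
          Files.copy(zin, target);
        }
      }
    }
  }
}
{code}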



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-2185) Use pipes when localizing archives

2017-12-20 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi reassigned YARN-2185:


Assignee: Miklos Szegedi

> Use pipes when localizing archives
> --
>
> Key: YARN-2185
> URL: https://issues.apache.org/jira/browse/YARN-2185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
>
> Currently the nodemanager downloads an archive to a local file, unpacks it, 
> and then removes it.  It would be more efficient to stream the data as it's 
> being unpacked to avoid both the extra disk space requirements and the 
> additional disk activity from storing the archive.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7590) Improve container-executor validation check

2017-12-20 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7590:

Attachment: YARN-7590.003.patch

- Renamed uid to caller_uid.
- Used a global variable for caller_uid to minimize code changes.
- Added a check for the log directory prefix.


> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7590.001.patch, YARN-7590.002.patch, 
> YARN-7590.003.patch
>
>
> There is minimal validation of the prefix path in container-executor.  If YARN 
> is compromised, an attacker can use container-executor to change the ownership 
> of system files:
> {code}
> /usr/local/hadoop/bin/container-executor spark yarn 0 etc /home/yarn/tokens 
> /home/spark / ls
> {code}
> This will change /etc to be owned by spark user:
> {code}
> # ls -ld /etc
> drwxr-s---. 110 spark hadoop 8192 Nov 21 20:00 /etc
> {code}
> The spark user can then rewrite files under /etc to gain more access.  We can 
> improve this with additional checks in container-executor:
> # Make sure the prefix path is owned by the same user as the caller to 
> container-executor.
> # Make sure the log directory prefix is owned by the same user as the caller.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7557) It should be possible to specify resource types in the fair scheduler increment value

2017-12-20 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299168#comment-16299168
 ] 

Robert Kanter commented on YARN-7557:
-

Thanks for the patch, [~grepas].  A few comments:
# After the {{Matcher}}, instead of a series of if statements checking for 
{{MEMORY_MB}} and {{VCORES}} on every match, I think we could unify and simplify 
the checks by adding everything to the {{others}} {{HashMap}}.  After all of 
that is done, we can simply look up {{MEMORY_MB}} and {{VCORES}} in {{others}} 
and remove them (see the sketch after these comments).
# {{A_CUSTOM_RESOURCE}} should be used in the XML in 
{{getConfigurationInputStream}} instead of the String {{a-custom-resource}}
# I don't have all of the context on this, but it sounds like we should be 
deprecating 
{{RM_SCHEDULER_INCREMENT_ALLOCATION_MB}}/{{yarn.scheduler.increment-allocation-mb}}
 and 
{{RM_SCHEDULER_INCREMENT_ALLOCATION_VCORES}}/{{yarn.scheduler.increment-allocation-vcores}},
 right?
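A minimal sketch of the lookup-then-remove approach from comment 1 (the regex 
and the resource-name strings are simplified placeholders, not the actual 
FairScheduler configuration code):

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class IncrementAllocationParser {
  // Simplified format: "<value> <resource-name>" pairs, e.g.
  // "512 memory-mb, 1 vcores, 2 a-custom-resource".
  private static final Pattern PAIR = Pattern.compile("(\\d+)\\s*([\\w.-]+)");

  public static Map<String, Long> parse(String increment) {
    Map<String, Long> others = new HashMap<>();
    Matcher m = PAIR.matcher(increment);
    while (m.find()) {
      // No per-match special casing: everything goes into the map first.
      others.put(m.group(2), Long.parseLong(m.group(1)));
    }
    // A single lookup afterwards for the two mandatory resources.
    Long memoryMb = others.remove("memory-mb");
    Long vcores = others.remove("vcores");
    System.out.println("memory-mb=" + memoryMb + ", vcores=" + vcores
        + ", others=" + others);
    return others;
  }
}
{code}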

> It should be possible to specify resource types in the fair scheduler 
> increment value
> -
>
> Key: YARN-7557
> URL: https://issues.apache.org/jira/browse/YARN-7557
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 3.0.0-beta1
>Reporter: Daniel Templeton
>Assignee: Gergo Repas
>Priority: Critical
> Attachments: YARN-7557.000.patch, YARN-7557.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7676) Fix inconsistent priority ordering in Priority and SchedulerRequestKey

2017-12-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299165#comment-16299165
 ] 

genericqa commented on YARN-7676:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
6s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 18 unchanged - 1 fixed = 18 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
16s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 34s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.monitor.capacity.TestProportionalCapacityPreemptionPolicyIntraQueueUserLimit
 |
|   | 
hadoop.yarn.server.resourcemanager.monitor.capacity.TestProportionalCapacityPreemptionPolicyIntraQueueWithDRF
 |
|   | 
hadoop.yarn.server.resourcemanager.monitor.capacity.TestProportionalCapacityPreemptionPolicyIntraQueue
 |
|   | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.policy.TestFifoOrderingPolicyForPendingApps
 

[jira] [Commented] (YARN-7577) Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart

2017-12-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299160#comment-16299160
 ] 

Hudson commented on YARN-7577:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13412 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13412/])
YARN-7577. Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart (rkanter: 
rev 382215c72b93d6a97d813f407cf6496a7c3f2a4a)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRestart.java


> Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart
> --
>
> Key: YARN-7577
> URL: https://issues.apache.org/jira/browse/YARN-7577
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Fix For: 3.1.0
>
> Attachments: YARN-7577.000.patch, YARN-7577.001.patch, 
> YARN-7577.002.patch, YARN-7577.003.patch, YARN-7577.004.patch, 
> YARN-7577.005.patch, YARN-7577.006.patch
>
>
> This happens if Fair Scheduler is the default. The test should run with both 
> schedulers.
> {code}
> java.lang.AssertionError: 
> Expected :-102
> Actual   :-106
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testPreemptedAMRestartOnRMRestart(TestAMRestart.java:583)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
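A common way to run a test against both schedulers is JUnit parameterization; 
a minimal sketch (the class below and the way the scheduler is wired into the 
configuration are illustrative, not the actual TestAMRestart change):

{code}
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class BothSchedulersTest {
  @Parameters(name = "{0}")
  public static Collection<Object[]> schedulers() {
    return Arrays.asList(new Object[][] {
        {"org.apache.hadoop.yarn.server.resourcemanager"
            + ".scheduler.capacity.CapacityScheduler"},
        {"org.apache.hadoop.yarn.server.resourcemanager"
            + ".scheduler.fair.FairScheduler"}
    });
  }

  private final String schedulerClass;

  public BothSchedulersTest(String schedulerClass) {
    this.schedulerClass = schedulerClass;
  }

  @Test
  public void runsAgainstEachScheduler() {
    // The real test would set yarn.resourcemanager.scheduler.class to
    // schedulerClass on the RM configuration before starting the RM.
    System.out.println("Running against " + schedulerClass);
  }
}
{code}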



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7669) [API] Introduce interfaces for placement constraint processing

2017-12-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299159#comment-16299159
 ] 

genericqa commented on YARN-7669:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 8s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
41s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
10s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-6592 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 54s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 279 unchanged - 1 fixed = 280 total (was 280) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 40s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
11s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 16s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7669 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12903091/YARN-7669-YARN-6592.005.patch
 |
| Optional Tests |  

[jira] [Commented] (YARN-7577) Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart

2017-12-20 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299125#comment-16299125
 ] 

Robert Kanter commented on YARN-7577:
-

+1

> Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart
> --
>
> Key: YARN-7577
> URL: https://issues.apache.org/jira/browse/YARN-7577
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7577.000.patch, YARN-7577.001.patch, 
> YARN-7577.002.patch, YARN-7577.003.patch, YARN-7577.004.patch, 
> YARN-7577.005.patch, YARN-7577.006.patch
>
>
> This happens if Fair Scheduler is the default. The test should run with both 
> schedulers.
> {code}
> java.lang.AssertionError: 
> Expected :-102
> Actual   :-106
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testPreemptedAMRestartOnRMRestart(TestAMRestart.java:583)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5367) HDFS delegation tokens in ApplicationSubmissionContext should be added to systemCrednetials

2017-12-20 Thread Martin Serrano (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299124#comment-16299124
 ] 

Martin Serrano commented on YARN-5367:
--

Is this the same as YARN-5305?  It seems to be.

> HDFS delegation tokens in ApplicationSubmissionContext should be added to 
> systemCrednetials
> ---
>
> Key: YARN-5367
> URL: https://issues.apache.org/jira/browse/YARN-5367
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Xianyin Xin
>Assignee: Xianyin Xin
> Attachments: YARN-5367.001.patch
>
>
> App log aggregation may fail because of the flow below:
> 0) suppose the token.max-lifetime is 7 days and the renew interval is 1 day;
> 1) start a long-running job, like sparkJDBC, whose AM acts as a service. When 
> submitting the job, HDFS token A in ApplicationSubmissionContext will be added 
> to DelegationTokenRenewer, but not added to systemCredentials;
> 2) after 1 day, submit a spark query. After receiving the query, the AM will 
> request containers and start tasks. When starting the containers, a new HDFS 
> token B is used;
> 3) after 1 day, kill the job; during log aggregation an exception occurs 
> showing that token B is not in the HDFS token cache, so connecting to HDFS 
> fails;
> We should add token A to systemCredentials to make sure token A can be 
> delivered to NMs in time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7590) Improve container-executor validation check

2017-12-20 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7590:

Description: 
There is minimal validation of the prefix path in container-executor.  If YARN 
is compromised, an attacker can use container-executor to change the ownership 
of system files:

{code}
/usr/local/hadoop/bin/container-executor spark yarn 0 etc /home/yarn/tokens 
/home/spark / ls
{code}

This will change /etc to be owned by spark user:

{code}
# ls -ld /etc
drwxr-s---. 110 spark hadoop 8192 Nov 21 20:00 /etc
{code}

The spark user can then rewrite files under /etc to gain more access.  We can 
improve this with additional checks in container-executor:

# Make sure the prefix path is owned by the same user as the caller to 
container-executor.
# Make sure the log directory prefix is owned by the same user as the caller.

  was:
There is minimal validation of the prefix path in container-executor.  If YARN 
is compromised, an attacker can use container-executor to change the ownership 
of system files:

{code}
/usr/local/hadoop/bin/container-executor spark yarn 0 etc /home/yarn/tokens 
/home/spark / ls
{code}

This will change /etc to be owned by spark user:

{code}
# ls -ld /etc
drwxr-s---. 110 spark hadoop 8192 Nov 21 20:00 /etc
{code}

The spark user can then rewrite files under /etc to gain more access.  We can 
improve this with additional checks in container-executor:

# Make sure the prefix path is same as the one in yarn-site.xml, and 
yarn-site.xml is owned by root, 644, and marked as final in property.
# Make sure the user path is not a symlink, usercache is not a symlink.


> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7590.001.patch, YARN-7590.002.patch
>
>
> There is minimal validation of the prefix path in container-executor.  If YARN 
> is compromised, an attacker can use container-executor to change the ownership 
> of system files:
> {code}
> /usr/local/hadoop/bin/container-executor spark yarn 0 etc /home/yarn/tokens 
> /home/spark / ls
> {code}
> This will change /etc to be owned by spark user:
> {code}
> # ls -ld /etc
> drwxr-s---. 110 spark hadoop 8192 Nov 21 20:00 /etc
> {code}
> The spark user can then rewrite files under /etc to gain more access.  We can 
> improve this with additional checks in container-executor:
> # Make sure the prefix path is owned by the same user as the caller to 
> container-executor.
> # Make sure the log directory prefix is owned by the same user as the caller.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7676) Fix inconsistent priority ordering in Priority and SchedulerRequestKey

2017-12-20 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299050#comment-16299050
 ] 

Botong Huang commented on YARN-7676:


Yes I agree. If there's no other proposal to avoid this confusing code, I guess 
we will just leave it as is. Thanks [~asuresh] and [~jlowe] for the fast 
response! 

> Fix inconsistent priority ordering in Priority and SchedulerRequestKey
> --
>
> Key: YARN-7676
> URL: https://issues.apache.org/jira/browse/YARN-7676
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-7676.v1.patch
>
>
> Today the priority ordering in _Priority.compareTo()_ and 
> _SchedulerRequestKey.compareTo()_ is inconsistent. Both _compareTo_ methods 
> try to reverse the order: 
> P0.compareTo(P1) > 0 means that priority-wise P0 < P1. However, 
> SK(P0).compareTo(SK(P1)) < 0 means that priority-wise SK(P0) > SK(P1). 
> This patch attempts to fix that by undoing both reversing logics, so that 
> priority-wise P0 > P1 and SK(P0) > SK(P1). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7669) [API] Introduce interfaces for placement constraint processing

2017-12-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299045#comment-16299045
 ] 

genericqa commented on YARN-7669:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
34s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
7s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
14s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
59s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-6592 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
43s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 49s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 280 unchanged - 1 fixed = 281 total (was 281) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
50s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 10s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}212m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel |
|   | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesForCSWithPartitions |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | 

[jira] [Commented] (YARN-7605) Implement doAs for Api Service REST API

2017-12-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299025#comment-16299025
 ] 

genericqa commented on YARN-7605:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 17m  
3s{color} | {color:red} root in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 11m 
10s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
59s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 14m 
59s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-common in trunk failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-yarn-common in trunk failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-yarn-services-core in trunk failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-yarn-services-api in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-common in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-common in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-yarn-services-core in trunk failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-yarn-services-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-yarn-services-core in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
27s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 27s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  8s{color} | {color:orange} root: The patch generated 2 new + 158 unchanged 
- 2 fixed = 160 total (was 160) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-yarn-services-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-yarn-services-core in the patch failed. {color} 
|
| {color:red}-1{color} | 

[jira] [Resolved] (YARN-7381) Enable the configuration: yarn.nodemanager.log-container-debug-info.enabled by default in yarn-default.xml

2017-12-20 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved YARN-7381.
-
Resolution: Fixed

When NM_LOG_CONTAINER_DEBUG_INFO is enabled and there is a problem executing 
the container script, an ExitCodeException is thrown to notify the caller of 
the execution failure.  By default, I think this is correct: it makes sure 
failures are reported to the caller.  The past behavior swallowed the 
exception, which is not exactly correct.  Most people don't use 
DefaultContainerExecutor, and the container launcher code usually succeeds, 
which is why the swallowed exception was not noticeable.

The TestContainerLaunch unit test is not quite accurate because the script is 
instructed to execute a "hello" script, which does not exist.  Hence, throwing 
the exception is the proper behavior.  I am inclined to close this issue as 
fixed.  When the ExitCodeException triggers further exceptions elsewhere, it 
will help developers look more closely at the root causes of launcher failures.
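A simplified illustration of the behavioral difference described above (plain 
Java; not the actual container launch code):

{code}
// Old behavior (simplified): the failure is swallowed, so the caller never
// learns that the container script could not be executed.
void launchSwallowing(Runnable containerScript) {
  try {
    containerScript.run();
  } catch (RuntimeException e) {
    // ignored -- the failure stays invisible to the caller
  }
}

// New behavior (simplified): the failure propagates, so the caller sees the
// exit code and the root cause and can react to it.
void launchPropagating(Runnable containerScript) {
  containerScript.run();
}
{code}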

> Enable the configuration: yarn.nodemanager.log-container-debug-info.enabled 
> by default in yarn-default.xml
> --
>
> Key: YARN-7381
> URL: https://issues.apache.org/jira/browse/YARN-7381
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0, 3.1.0
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: 
> TEST-org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.xml,
>  YARN-7381.1.patch, 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch-output.txt,
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.txt
>
>
> Enable the configuration "yarn.nodemanager.log-container-debug-info.enabled", 
> so we can aggregate launch_container.sh and directory.info



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7676) Fix inconsistent priority ordering in Priority and SchedulerRequestKey

2017-12-20 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299014#comment-16299014
 ] 

Jason Lowe commented on YARN-7676:
--

I'm not sure we can fix this in a backwards-compatible way.  The Priority class 
is simply a priority number with no built-in semantics on the ordering of those 
numbers.  Two systems decided to implement them differently.  It's not 
inherently broken since these Priority objects are completely separate, but it 
can be confusing.
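To make the two conventions concrete, a simplified illustration (these classes 
are stand-ins for the two usages, not the actual {{Priority}} or 
{{SchedulerRequestKey}} implementations):

{code}
// Convention used for application priority: a larger value means a higher
// priority, and compareTo() follows the natural integer order.
final class AppPriority implements Comparable<AppPriority> {
  final int value;
  AppPriority(int value) { this.value = value; }
  public int compareTo(AppPriority other) {
    return Integer.compare(this.value, other.value);
  }
}

// Convention used for request priority: a smaller value means a higher
// priority, so compareTo() reverses the natural integer order.
final class RequestPriority implements Comparable<RequestPriority> {
  final int value;
  RequestPriority(int value) { this.value = value; }
  public int compareTo(RequestPriority other) {
    return Integer.compare(other.value, this.value);
  }
}
{code}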


> Fix inconsistent priority ordering in Priority and SchedulerRequestKey
> --
>
> Key: YARN-7676
> URL: https://issues.apache.org/jira/browse/YARN-7676
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-7676.v1.patch
>
>
> Today the priority ordering in _Priority.compareTo()_ and 
> _SchedulerRequestKey.compareTo()_ is inconsistent. Both _compareTo_ methods 
> try to reverse the order: 
> P0.compareTo(P1) > 0 means that priority-wise P0 < P1. However, 
> SK(P0).compareTo(SK(P1)) < 0 means that priority-wise SK(P0) > SK(P1). 
> This patch attempts to fix that by undoing both reversing logics, so that 
> priority-wise P0 > P1 and SK(P0) > SK(P1). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4167) NPE on RMActiveServices#serviceStop when store is null

2017-12-20 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated YARN-4167:
--
Fix Version/s: 2.7.6

> NPE on RMActiveServices#serviceStop when store is null
> --
>
> Key: YARN-4167
> URL: https://issues.apache.org/jira/browse/YARN-4167
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha1, 2.7.6
>
> Attachments: 0001-YARN-4167.patch, 0001-YARN-4167.patch, 
> 0002-YARN-4167.patch
>
>
> Configure 
> {{yarn.resourcemanager.container-tokens.master-key-rolling-interval-secs}} 
> so that it mismatches {{yarn.nm.liveness-monitor.expiry-interval-ms}}.
> On startup, an NPE is thrown in {{RMActiveServices#serviceStop}}:
> {noformat}
> 2015-09-16 12:23:29,504 INFO org.apache.hadoop.service.AbstractService: 
> Service RMActiveServices failed in state INITED; cause: 
> java.lang.IllegalArgumentException: 
> yarn.resourcemanager.container-tokens.master-key-rolling-interval-secs should 
> be more than 3 X yarn.nm.liveness-monitor.expiry-interval-ms
> java.lang.IllegalArgumentException: 
> yarn.resourcemanager.container-tokens.master-key-rolling-interval-secs should 
> be more than 3 X yarn.nm.liveness-monitor.expiry-interval-ms
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager.(RMContainerTokenSecretManager.java:82)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.RMSecretManagerService.createContainerTokenSecretManager(RMSecretManagerService.java:109)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.RMSecretManagerService.(RMSecretManagerService.java:57)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createRMSecretManagerService(ResourceManager.java:)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:423)
>  at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:963)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:256)
>  at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1193)
> 2015-09-16 12:23:29,507 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error closing 
> store.
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:608)
>  at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
>  at 
> org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:52)
>  at 
> org.apache.hadoop.service.ServiceOperations.stopQuietly(ServiceOperations.java:80)
>  at org.apache.hadoop.service.AbstractService.init(AbstractService.java:171)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:963)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:256)
>  at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1193
> {noformat}
> *Impact Area*: RM failover with wrong configuration



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6632) Backport YARN-3425 to branch 2.7

2017-12-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299013#comment-16299013
 ] 

Íñigo Goiri commented on YARN-6632:
---

Thanks [~shv].

> Backport YARN-3425 to branch 2.7
> 
>
> Key: YARN-6632
> URL: https://issues.apache.org/jira/browse/YARN-6632
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: 2.7.6
>
> Attachments: YARN-3425-branch-2.7.patch
>
>
> NPE from RMNodeLabelsManager.serviceStop when NodeLabelsManager.serviceInit 
> failed



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3425) NPE from RMNodeLabelsManager.serviceStop when NodeLabelsManager.serviceInit failed

2017-12-20 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated YARN-3425:
--
Fix Version/s: 2.7.6

> NPE from RMNodeLabelsManager.serviceStop when NodeLabelsManager.serviceInit 
> failed
> --
>
> Key: YARN-3425
> URL: https://issues.apache.org/jira/browse/YARN-3425
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
> Environment: 1 RM, 1 NM , 1 NN , I DN
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha1, 2.7.6
>
> Attachments: YARN-3425.001.patch
>
>
> Configure yarn.node-labels.enabled to true 
> and yarn.node-labels.fs-store.root-dir /node-labels
> Start resource manager without starting DN/NM
> {quote}
> 2015-03-31 16:44:13,782 WARN org.apache.hadoop.service.AbstractService: When 
> stopping the service 
> org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager : 
> java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.stopDispatcher(CommonNodeLabelsManager.java:261)
>   at 
> org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.serviceStop(CommonNodeLabelsManager.java:267)
>   at 
> org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
>   at 
> org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:52)
>   at 
> org.apache.hadoop.service.ServiceOperations.stopQuietly(ServiceOperations.java:80)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:171)
>   at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:556)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:984)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:251)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1207)
> {quote}
> {code}
>  protected void stopDispatcher() {
> AsyncDispatcher asyncDispatcher = (AsyncDispatcher) dispatcher;
>asyncDispatcher.stop(); 
>   }
> {code}
> Null check missing during stop
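A guarded variant along the lines of what the fix adds (simplified; the actual 
patch may differ):

{code}
protected void stopDispatcher() {
  AsyncDispatcher asyncDispatcher = (AsyncDispatcher) dispatcher;
  if (asyncDispatcher != null) {
    // The dispatcher is only created in serviceInit(); if init fails early,
    // serviceStop() can run before the dispatcher exists.
    asyncDispatcher.stop();
  }
}
{code}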



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7676) Fix inconsistent priority ordering in Priority and SchedulerRequestKey

2017-12-20 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299005#comment-16299005
 ] 

Botong Huang edited comment on YARN-7676 at 12/20/17 8:03 PM:
--

Yeah, {{TestApplicationPriority}} indeed fails with v1 patch reversing the 
{{Priority}} order. So basically Application priority is using {{Priority}} 
assuming larger value means higher priority, but {{ResourceRequest}} is using 
{{Priority}} assuming smaller value means higher priority...


was (Author: botong):
Yeah, {{TestApplicationPriority}} indeed fails with v1 patch reversing the 
Priority order. So basically Application priority is using Priority assuming 
larger value means higher priority, but {{ResourceRequest}} is using Priority 
assuming smaller value means higher priority...

> Fix inconsistent priority ordering in Priority and SchedulerRequestKey
> --
>
> Key: YARN-7676
> URL: https://issues.apache.org/jira/browse/YARN-7676
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-7676.v1.patch
>
>
> Today the priority ordering in _Priority.compareTo()_ and 
> _SchedulerRequestKey.compareTo()_ is inconsistent. Both _compareTo_ methods 
> try to reverse the order: 
> P0.compareTo(P1) > 0 means that priority-wise P0 < P1. However, 
> SK(P0).compareTo(SK(P1)) < 0 means that priority-wise SK(P0) > SK(P1). 
> This patch attempts to fix that by undoing both reversing logics, so that 
> priority-wise P0 > P1 and SK(P0) > SK(P1). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7676) Fix inconsistent priority ordering in Priority and SchedulerRequestKey

2017-12-20 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299005#comment-16299005
 ] 

Botong Huang commented on YARN-7676:


Yeah, {{TestApplicationPriority}} indeed fails with v1 patch reversing the 
Priority order. So basically Application priority is using Priority assuming 
larger value means higher priority, but {{ResourceRequest}} is using Priority 
assuming smaller value means higher priority...

> Fix inconsistent priority ordering in Priority and SchedulerRequestKey
> --
>
> Key: YARN-7676
> URL: https://issues.apache.org/jira/browse/YARN-7676
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-7676.v1.patch
>
>
> Today the priority ordering in _Priority.compareTo()_ and 
> _SchedulerRequestKey.compareTo()_ is inconsistent. Both _compareTo_ methods 
> try to reverse the order: 
> P0.compareTo(P1) > 0 means that priority-wise P0 < P1. However, 
> SK(P0).compareTo(SK(P1)) < 0 means that priority-wise SK(P0) > SK(P1). 
> This patch attempts to fix that by undoing both reversing logics, so that 
> priority-wise P0 > P1 and SK(P0) > SK(P1). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7590) Improve container-executor validation check

2017-12-20 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16298990#comment-16298990
 ] 

Miklos Szegedi commented on YARN-7590:
--

Thank you for the patch, [~eyang].
I see two more issues.
{{uid}} could just be a global variable, saving some code, but using locals is 
fine. However, we now have a caller uid, a yarn uid, and a run-as uid. Please 
rename the uid you created and pass along through the functions to caller_uid.
Also, the patch does not address the scenario in the initial description. 
Please do the check in {{create_log_dirs}} as well.

> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7590.001.patch, YARN-7590.002.patch
>
>
> There is minimal validation of the prefix path in container-executor.  If YARN 
> is compromised, an attacker can use container-executor to change the ownership 
> of system files:
> {code}
> /usr/local/hadoop/bin/container-executor spark yarn 0 etc /home/yarn/tokens 
> /home/spark / ls
> {code}
> This will change /etc to be owned by spark user:
> {code}
> # ls -ld /etc
> drwxr-s---. 110 spark hadoop 8192 Nov 21 20:00 /etc
> {code}
> The spark user can then rewrite files under /etc to gain more access.  We can 
> improve this with additional checks in container-executor:
> # Make sure the prefix path is same as the one in yarn-site.xml, and 
> yarn-site.xml is owned by root, 644, and marked as final in property.
> # Make sure the user path is not a symlink, usercache is not a symlink.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7676) Fix inconsistent priority ordering in Priority and SchedulerRequestKey

2017-12-20 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16298987#comment-16298987
 ] 

Arun Suresh commented on YARN-7676:
---

Thanks for raising this, Botong.
[~leftnoteasy] / [~sunilg], will this affect application priority (the ordering 
of the apps themselves)? Looking at {{TestApplicationPriority}}, I am guessing 
it would.

> Fix inconsistent priority ordering in Priority and SchedulerRequestKey
> --
>
> Key: YARN-7676
> URL: https://issues.apache.org/jira/browse/YARN-7676
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-7676.v1.patch
>
>
> Today the priority ordering in _Priority.compareTo()_ and 
> _SchedulerRequestKey.compareTo()_ is inconsistent. Both _compareTo_ methods 
> try to reverse the order: 
> P0.compareTo(P1) > 0 means that priority-wise P0 < P1. However, 
> SK(P0).compareTo(SK(P1)) < 0 means that priority-wise SK(P0) > SK(P1). 
> This patch attempts to fix that by undoing both reversing logics, so that 
> priority-wise P0 > P1 and SK(P0) > SK(P1). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7669) [API] Introduce interfaces for placement constraint processing

2017-12-20 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7669:
--
Attachment: YARN-7669-YARN-6592.005.patch

Updating patch based on suggestions:

bq. You mention in the comment of the first enum that it is not retry-able. I 
think it can be retry-able.
My opinion was: if it were indeed retryable, the algorithm can choose to 
re-place the scheduling request on another node in the same run, in which case 
nobody outside of the algorithm even has to know that it was retried. In any 
case, I agree with you - it could depend on the algorithm. I removed the 
comment.

bq. So, maybe what we mean by this error is "unsatisfiable user constraints"?
It might be due to an unsatisfiable constraint, but the end outcome was that 
the framework was not able to place the request on a node, which is what I 
wanted to capture.

bq. Most importantly, will we be using this enum to decide whether we are 
retrying placement or is it just for knowing what went wrong?
This enum is used to notify the AM that the SchedulingRequest was rejected 
and why. The AM can choose to retry if it wants, but internally it will not be 
used to signal a retry. It should be used to tell the AM that, even after 
retrying, we could not schedule this request.

bq. The Algorithm* classes' names seem too generic. I would prefer to add a 
prefix like Placement or Constraint to all of them.
Done - though IMO the names now seem a bit long, it doesn't bother me much.

bq. the RejectionReason in multiple occasions, probably we should give the two 
reasons a more specific name. Like "constraint_violation_on_node" etc.
Done.
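
For reference, the renamed reasons look roughly like the following. This is an 
illustrative sketch only; the authoritative enum is defined in the attached 
patch and its exact values may differ from what is shown here.

{code:java}
// Illustrative sketch only - the real RejectionReason enum lives in the
// attached patch; the exact value names may differ.
public enum RejectionReason {
  // The placement algorithm could not find any node satisfying the
  // placement constraints of the SchedulingRequest.
  COULD_NOT_PLACE_ON_NODE,
  // A node was chosen by the algorithm, but the scheduler could not commit
  // the allocation on it (for example, the constraints were violated by the
  // time of commit).
  COULD_NOT_SCHEDULE_ON_NODE
}
{code}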

> [API] Introduce interfaces for placement constraint processing
> --
>
> Key: YARN-7669
> URL: https://issues.apache.org/jira/browse/YARN-7669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7669-YARN-6592.001.patch, 
> YARN-7669-YARN-6592.002.patch, YARN-7669-YARN-6592.003.patch, 
> YARN-7669-YARN-6592.004.patch, YARN-7669-YARN-6592.005.patch
>
>
> As per discussions in YARN-7612. This JIRA will introduce the generic 
> interfaces which will be implemented in YARN-7612



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7676) Fix inconsistent priority ordering in Priority and SchedulerRequestKey

2017-12-20 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-7676:
---
Attachment: YARN-7676.v1.patch

> Fix inconsistent priority ordering in Priority and SchedulerRequestKey
> --
>
> Key: YARN-7676
> URL: https://issues.apache.org/jira/browse/YARN-7676
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-7676.v1.patch
>
>
> Today the priority ordering in _Priority.compareTo()_ and 
> _SchedulerRequestKey.compareTo()_ is inconsistent. Both _compareTo_ methods 
> try to reverse the order: 
> P0.compareTo(P1) > 0 means that, priority-wise, P0 < P1; however, 
> SK(P0).compareTo(SK(P1)) < 0 means that, priority-wise, SK(P0) > SK(P1). 
> This patch attempts to fix that by undoing both reversals, so that 
> priority-wise P0 > P1 and SK(P0) > SK(P1). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7677) HADOOP_CONF_DIR should not be automatically put in task environment

2017-12-20 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16298953#comment-16298953
 ] 

Eric Badger commented on YARN-7677:
---

My proposal would be to remove {{HADOOP_CONF_DIR}}, as well as potentially 
{{USER}}, {{LOGNAME}}, {{HOME}}, and {{PWD}}, from ContainerLaunch.java and 
require them to be in the environment whitelist if they are to be taken from 
the NodeManager environment. Arguably, all of these should be removed, but the 
strongest case can be made for {{HADOOP_CONF_DIR}}, since it is already in the 
default environment whitelist. So the only way this would break a use case is 
if someone were using their own whitelist and didn't include 
{{HADOOP_CONF_DIR}}.

While this change would be incompatible, I think it makes sense for the 
non-Docker case and is paramount for the Docker case.
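
To make the proposal concrete, here is a rough sketch of the intended 
whitelist behavior. The helper and variable names are hypothetical and do not 
match the real ContainerLaunch code; this is only meant to illustrate the 
behavior described above.

{code:java}
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the proposed whitelist behavior; names are
// illustrative and do not match the real ContainerLaunch implementation.
final class EnvWhitelistSketch {
  static void maybeInheritFromNM(Map<String, String> taskEnv,
      Set<String> whitelist, String var, String nmValue) {
    // Forward an NM variable only if the admin whitelisted it and the
    // user (or the Docker image) did not already provide a value.
    if (whitelist.contains(var) && !taskEnv.containsKey(var)
        && nmValue != null) {
      taskEnv.put(var, nmValue);
    }
    // With this change, HADOOP_CONF_DIR is no longer force-set elsewhere, so
    // a Docker image's own HADOOP_CONF_DIR (or the user's) is left untouched.
  }
}
{code}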

> HADOOP_CONF_DIR should not be automatically put in task environment
> ---
>
> Key: YARN-7677
> URL: https://issues.apache.org/jira/browse/YARN-7677
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>
> Currently, {{HADOOP_CONF_DIR}} is being put into the task environment whether 
> it's set by the user or not. It completely bypasses the whitelist and so 
> there is no way for a task to not have {{HADOOP_CONF_DIR}} set. This causes 
> problems in the Docker use case where Docker containers will set up their own 
> environment and have their own {{HADOOP_CONF_DIR}} preset in the image 
> itself. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7677) HADOOP_CONF_DIR should not be automatically put in task environment

2017-12-20 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16298946#comment-16298946
 ] 

Eric Badger commented on YARN-7677:
---

Linking YARN-3611 since this is related to Docker development. Not putting it 
as a subtask, however, because this JIRA has impacts outside of Docker.

> HADOOP_CONF_DIR should not be automatically put in task environment
> ---
>
> Key: YARN-7677
> URL: https://issues.apache.org/jira/browse/YARN-7677
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>
> Currently, {{HADOOP_CONF_DIR}} is being put into the task environment whether 
> it's set by the user or not. It completely bypasses the whitelist and so 
> there is no way for a task to not have {{HADOOP_CONF_DIR}} set. This causes 
> problems in the Docker use case where Docker containers will set up their own 
> environment and have their own {{HADOOP_CONF_DIR}} preset in the image 
> itself. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7677) HADOOP_CONF_DIR should not be automatically put in task environment

2017-12-20 Thread Eric Badger (JIRA)
Eric Badger created YARN-7677:
-

 Summary: HADOOP_CONF_DIR should not be automatically put in task 
environment
 Key: YARN-7677
 URL: https://issues.apache.org/jira/browse/YARN-7677
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Eric Badger
Assignee: Eric Badger


Currently, {{HADOOP_CONF_DIR}} is being put into the task environment whether 
it's set by the user or not. It completely bypasses the whitelist and so there 
is no way for a task to not have {{HADOOP_CONF_DIR}} set. This causes problems 
in the Docker use case where Docker containers will set up their own 
environment and have their own {{HADOOP_CONF_DIR}} preset in the image itself. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7676) Fix inconsistent priority ordering in Priority and SchedulerRequestKey

2017-12-20 Thread Botong Huang (JIRA)
Botong Huang created YARN-7676:
--

 Summary: Fix inconsistent priority ordering in Priority and 
SchedulerRequestKey
 Key: YARN-7676
 URL: https://issues.apache.org/jira/browse/YARN-7676
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Botong Huang
Assignee: Botong Huang
Priority: Minor


Today the priority ordering in _Priority.compareTo()_ and 
_SchedulerRequestKey.compareTo()_ is inconsistent. Both _compareTo_ methods 
try to reverse the order: 

P0.compareTo(P1) > 0 means that, priority-wise, P0 < P1; however, 
SK(P0).compareTo(SK(P1)) < 0 means that, priority-wise, SK(P0) > SK(P1). 

This patch attempts to fix that by undoing both reversals, so that 
priority-wise P0 > P1 and SK(P0) > SK(P1). 
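
For illustration, a minimal self-contained sketch of the kind of inconsistency 
described above, using hypothetical stand-in classes rather than the real 
Priority and SchedulerRequestKey:

{code:java}
// Hypothetical stand-ins for illustration only; the ordering conventions of
// the real Priority and SchedulerRequestKey classes are what the patch fixes.
final class P implements Comparable<P> {
  final int value;   // assumption for this sketch: smaller value = higher priority
  P(int value) { this.value = value; }
  @Override
  public int compareTo(P other) {
    // Positive result means this has the larger value, i.e. lower priority:
    // the "reversed" convention described above.
    return Integer.compare(this.value, other.value);
  }
}

final class SK implements Comparable<SK> {
  final P priority;
  SK(P priority) { this.priority = priority; }
  @Override
  public int compareTo(SK other) {
    // Reverses the P comparison again, so sorting by SK produces the opposite
    // order from sorting by P - the inconsistency the patch removes.
    return other.priority.compareTo(this.priority);
  }
}
{code}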



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7605) Implement doAs for Api Service REST API

2017-12-20 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7605:

Attachment: YARN-7605.007.patch

- Fix checkstyle issues.

> Implement doAs for Api Service REST API
> ---
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch
>
>
> In YARN-7540, all client entry points for API service is centralized to use 
> REST API instead of having direct file system and resource manager rpc calls. 
>  This change helped to centralize yarn metadata to be owned by yarn user 
> instead of crawling through every user's home directory to find metadata.  
> The next step is to make sure "doAs" calls work properly for API Service.  
> The metadata is stored by YARN user, but the actual workload still need to be 
> performed as end users, hence API service must authenticate end user kerberos 
> credential, and perform doAs call when requesting containers via 
> ServiceClient.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7675) The new UI won't load for pre 2.8 Hadoop versions because queueCapacitiesByPartition is missing from the scheduler API

2017-12-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16298863#comment-16298863
 ] 

genericqa commented on YARN-7675:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m  
5s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
23m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 43s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7675 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12903080/YARN-7675.001.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux 868a8106d6c0 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 13ad747 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 410 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18997/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> The new UI won't load for pre 2.8 Hadoop versions because 
> queueCapacitiesByPartition is missing from the scheduler API
> --
>
> Key: YARN-7675
> URL: https://issues.apache.org/jira/browse/YARN-7675
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Gergely Novák
>Assignee: Gergely Novák
> Attachments: YARN-7675.001.patch
>
>
> If we connect the new YARN UI to any Hadoop versions older than 2.8 it won't 
> load. The console shows this trace:
> {noformat}
> TypeError: Cannot read property 'queueCapacitiesByPartition' of undefined
> at Class.normalizeSingleResponse (yarn-ui.js:13903)
> at Class.superWrapper [as normalizeSingleResponse] (vendor.js:31811)
> at Class.handleQueue (yarn-ui.js:13928)
> at Class.normalizeArrayResponse (yarn-ui.js:13952)
> at Class.normalizeQueryResponse (vendor.js:101566)
> at Class.normalizeResponse (vendor.js:101468)
> at 
> ember$data$lib$system$store$serializer$response$$normalizeResponseHelper 
> (vendor.js:95345)
> at vendor.js:95672
> at Backburner.run (vendor.js:10426)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7669) [API] Introduce interfaces for placement constraint processing

2017-12-20 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16298858#comment-16298858
 ] 

Konstantinos Karanasos commented on YARN-7669:
--

Thanks [~asuresh] for updating the patch.
A couple of things:
* Following [~sunilg]'s comment on using the RejectionReason in multiple 
occasions, we should probably give the two reasons more specific names, like 
"constraint_violation_on_node" etc.?
* You mention in the comment of the first enum that it is not retry-able. I 
think it can be retry-able, for example if we use global cluster constraints 
and something changes between attempts. Also, things get more complicated with 
inter-application constraints. So, maybe what we mean by this error is 
"unsatisfiable user constraints"? Most importantly, will we be using this enum 
to decide whether we are retrying placement, or is it just for knowing what 
went wrong?
* The Algorithm* classes' names seem too generic. I would prefer to add a 
prefix like Placement or Constraint to all of them.

> [API] Introduce interfaces for placement constraint processing
> --
>
> Key: YARN-7669
> URL: https://issues.apache.org/jira/browse/YARN-7669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7669-YARN-6592.001.patch, 
> YARN-7669-YARN-6592.002.patch, YARN-7669-YARN-6592.003.patch, 
> YARN-7669-YARN-6592.004.patch
>
>
> As per discussions in YARN-7612. This JIRA will introduce the generic 
> interfaces which will be implemented in YARN-7612



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-20 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16298837#comment-16298837
 ] 

Eric Yang commented on YARN-7565:
-

More information revealed that there was a problem with a znode on my cluster. 
I am not sure how it reached that state. After removing the faulty znode for 
the DNS registry, the null pointer exception no longer happens.

> Yarn service pre-maturely releases the container after AM restart 
> --
>
> Key: YARN-7565
> URL: https://issues.apache.org/jira/browse/YARN-7565
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: 3.1.0
>
> Attachments: YARN-7565.001.patch, YARN-7565.002.patch, 
> YARN-7565.003.patch, YARN-7565.004.patch, YARN-7565.005.patch, 
> YARN-7565.addendum.001.patch
>
>
> With YARN-6168, recovered containers can be reported to AM in response to the 
> AM heartbeat. 
> Currently, the Service Master will release the containers, that are not 
> reported in the AM registration response, immediately.
> Instead, the master can wait for a configured amount of time for the 
> containers to be recovered by RM. These containers are sent to AM in the 
> heartbeat response. Once a container is not reported in the configured 
> interval, it can be released by the master.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7674) Update Timeline Reader web app address in UI2

2017-12-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16298792#comment-16298792
 ] 

Hudson commented on YARN-7674:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13410 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13410/])
YARN-7674. Update Timeline Reader web app address in UI2. Contributed by 
(rohithsharmaks: rev 13ad7479b0e35a2c2d398e28c676871d9e672dc3)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/initializers/loader.js


> Update Timeline Reader web app address in UI2
> -
>
> Key: YARN-7674
> URL: https://issues.apache.org/jira/browse/YARN-7674
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Sunil G
> Fix For: 3.1.0, 2.10.0, 3.0.1
>
> Attachments: YARN-7674.001.patch
>
>
> YARN-7662 introduces a new set of configurations. It is required to update in 
> UI2 as well. 
> cc :/ [~sunilg]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-20 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16298747#comment-16298747
 ] 

Eric Yang edited comment on YARN-7565 at 12/20/17 5:33 PM:
---

Thank you for pointing out that ServiceRecord.description maps to the 
container name (and not to the Service Spec description field). However, this 
appears to be a race condition for a newly created application: serviceStart 
invokes recoverComponent first, while the application hasn't registered with 
the Registry yet. This looks like the reason we get the null pointer exception.


was (Author: eyang):
Thank you for point out the record.description maps to container name, but it 
appears to be a race condition for newly created application.  serviceStart is 
invoked recoverComponent first.  Application hasn't registered with Registry 
yet.  This looks like the reason that we get null pointer exception.

> Yarn service pre-maturely releases the container after AM restart 
> --
>
> Key: YARN-7565
> URL: https://issues.apache.org/jira/browse/YARN-7565
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: 3.1.0
>
> Attachments: YARN-7565.001.patch, YARN-7565.002.patch, 
> YARN-7565.003.patch, YARN-7565.004.patch, YARN-7565.005.patch, 
> YARN-7565.addendum.001.patch
>
>
> With YARN-6168, recovered containers can be reported to AM in response to the 
> AM heartbeat. 
> Currently, the Service Master will release the containers, that are not 
> reported in the AM registration response, immediately.
> Instead, the master can wait for a configured amount of time for the 
> containers to be recovered by RM. These containers are sent to AM in the 
> heartbeat response. Once a container is not reported in the configured 
> interval, it can be released by the master.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7675) The new UI won't load for pre 2.8 Hadoop versions because queueCapacitiesByPartition is missing from the scheduler API

2017-12-20 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-7675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-7675:

Attachment: YARN-7675.001.patch

> The new UI won't load for pre 2.8 Hadoop versions because 
> queueCapacitiesByPartition is missing from the scheduler API
> --
>
> Key: YARN-7675
> URL: https://issues.apache.org/jira/browse/YARN-7675
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Gergely Novák
>Assignee: Gergely Novák
> Attachments: YARN-7675.001.patch
>
>
> If we connect the new YARN UI to any Hadoop versions older than 2.8 it won't 
> load. The console shows this trace:
> {noformat}
> TypeError: Cannot read property 'queueCapacitiesByPartition' of undefined
> at Class.normalizeSingleResponse (yarn-ui.js:13903)
> at Class.superWrapper [as normalizeSingleResponse] (vendor.js:31811)
> at Class.handleQueue (yarn-ui.js:13928)
> at Class.normalizeArrayResponse (yarn-ui.js:13952)
> at Class.normalizeQueryResponse (vendor.js:101566)
> at Class.normalizeResponse (vendor.js:101468)
> at 
> ember$data$lib$system$store$serializer$response$$normalizeResponseHelper 
> (vendor.js:95345)
> at vendor.js:95672
> at Backburner.run (vendor.js:10426)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7675) The new UI won't load for pre 2.8 Hadoop versions because queueCapacitiesByPartition is missing from the scheduler API

2017-12-20 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-7675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák reassigned YARN-7675:
---

Assignee: Gergely Novák

> The new UI won't load for pre 2.8 Hadoop versions because 
> queueCapacitiesByPartition is missing from the scheduler API
> --
>
> Key: YARN-7675
> URL: https://issues.apache.org/jira/browse/YARN-7675
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Gergely Novák
>Assignee: Gergely Novák
> Attachments: YARN-7675.001.patch
>
>
> If we connect the new YARN UI to any Hadoop versions older than 2.8 it won't 
> load. The console shows this trace:
> {noformat}
> TypeError: Cannot read property 'queueCapacitiesByPartition' of undefined
> at Class.normalizeSingleResponse (yarn-ui.js:13903)
> at Class.superWrapper [as normalizeSingleResponse] (vendor.js:31811)
> at Class.handleQueue (yarn-ui.js:13928)
> at Class.normalizeArrayResponse (yarn-ui.js:13952)
> at Class.normalizeQueryResponse (vendor.js:101566)
> at Class.normalizeResponse (vendor.js:101468)
> at 
> ember$data$lib$system$store$serializer$response$$normalizeResponseHelper 
> (vendor.js:95345)
> at vendor.js:95672
> at Backburner.run (vendor.js:10426)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-20 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16298747#comment-16298747
 ] 

Eric Yang commented on YARN-7565:
-

Thank you for pointing out that the record.description maps to the container 
name. However, this appears to be a race condition for a newly created 
application: serviceStart invokes recoverComponent first, while the application 
hasn't registered with the Registry yet. This looks like the reason we get the 
null pointer exception.

> Yarn service pre-maturely releases the container after AM restart 
> --
>
> Key: YARN-7565
> URL: https://issues.apache.org/jira/browse/YARN-7565
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: 3.1.0
>
> Attachments: YARN-7565.001.patch, YARN-7565.002.patch, 
> YARN-7565.003.patch, YARN-7565.004.patch, YARN-7565.005.patch, 
> YARN-7565.addendum.001.patch
>
>
> With YARN-6168, recovered containers can be reported to AM in response to the 
> AM heartbeat. 
> Currently, the Service Master will release the containers, that are not 
> reported in the AM registration response, immediately.
> Instead, the master can wait for a configured amount of time for the 
> containers to be recovered by RM. These containers are sent to AM in the 
> heartbeat response. Once a container is not reported in the configured 
> interval, it can be released by the master.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7669) [API] Introduce interfaces for placement constraint processing

2017-12-20 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7669:
--
Attachment: YARN-7669-YARN-6592.004.patch

Updating patch with some checkstyle fixes etc.

> [API] Introduce interfaces for placement constraint processing
> --
>
> Key: YARN-7669
> URL: https://issues.apache.org/jira/browse/YARN-7669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7669-YARN-6592.001.patch, 
> YARN-7669-YARN-6592.002.patch, YARN-7669-YARN-6592.003.patch, 
> YARN-7669-YARN-6592.004.patch
>
>
> As per discussions in YARN-7612. This JIRA will introduce the generic 
> interfaces which will be implemented in YARN-7612



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7672) hadoop-sls can not simulate huge scale of YARN

2017-12-20 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16298689#comment-16298689
 ] 

Wei Yan commented on YARN-7672:
---

bq. All of NM and AM simulators are all cpu type of task. So cpu.load will go 
up to 100+ (only 32 cores) And as we know, Scheduler will also use one process 
for allocating resources.

So in that case, even after separating into two hosts, the host running the 
NM/AM simulators still hits the CPU bottleneck, right? Although the Scheduler 
doesn't need to compete with the simulators.

Another interesting idea is to launch a large MapReduce job (like 5000 
containers) where each container runs an NM/AM simulator that issues requests 
to the real RM, similar to the idea behind HDFS [Dynamometer | 
https://lists.apache.org/thread.html/7223d22fbc26e055369695f83395e9a7767043f7245af25df385b535@%3Chdfs-dev.hadoop.apache.org%3E].
 But this involves a more complex setup.

> hadoop-sls can not simulate huge scale of YARN
> --
>
> Key: YARN-7672
> URL: https://issues.apache.org/jira/browse/YARN-7672
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: zhangshilong
>Assignee: zhangshilong
> Attachments: YARN-7672.patch
>
>
> Our YARN cluster scale to nearly 10 thousands nodes. We need to do scheduler 
> pressure test.
> Using SLS,we start  2000+ threads to simulate NM and AM. But  cpu.load very 
> high to 100+. I thought that will affect  performance evaluation of 
> scheduler. 
> So I thought to separate the scheduler from the simulator.
> I start a real RM. Then SLS will register nodes to RM,And submit apps to RM 
> using RM RPC.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7669) [API] Introduce interfaces for placement constraint processing

2017-12-20 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16298665#comment-16298665
 ] 

Arun Suresh commented on YARN-7669:
---

Thanks for the review [~sunilg]

bq. How ApplicationMasterServiceUtils#addToRejectedSchedulingRequests is 
invoked? I could not find a caller or is it a part of some dependent tickets?
I had used it in the original YARN-7612 patch (v005 and v005 patch), but when I 
split that, I kept this method here since it is more of a utility method, 
similar to the other existing methods in ApplicationMasterServiceUtils. The 
intention was to make the new YARN-7612 patch easier to review. Do take a look 
at the older patches in YARN-7612 to see how it is being used. I shall update 
YARN-7612 once this is ready.

bq. AllocateResponse#getRejectedSchedulingRequests gives rejected requests from 
previous allocate to current one. But will this info be retained at scheduler 
assuming RM restarted between AM heartbeats? Also one more doubt here,
Good point. I did spend some time thinking about that, and I was thinking we 
tackle it when we get there. Worst case, we just state for the time being that 
retries are reset in the event of a failover. Also, this is not public / user 
facing - it is only used internally in the framework / scheduler.

bq. Could RejectionReason also be used in cases like Node Constraints too? Its 
also possible that placement could fail may be node constraint also violated on 
a node.
It can and should be used for Node constraints as well. As you suggested above, 
we can add more enums as and when required. Only thing is, we should return 
RejectedSchedulingRequests only if the AM uses SchedulingRequests, not 
ResourceRequests.
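
As an illustration of the intended AM-side flow, a sketch based on the API 
names discussed in this patch (getRejectedSchedulingRequests, RejectionReason); 
treat the exact classes, packages, and method signatures as assumptions that 
may differ from what is eventually committed:

{code:java}
import java.util.List;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.RejectedSchedulingRequest;

// Sketch only: how an AM might consume the rejected SchedulingRequests
// returned in the allocate response.
final class RejectedRequestHandler {
  void onAllocateResponse(AllocateResponse response) {
    List<RejectedSchedulingRequest> rejected =
        response.getRejectedSchedulingRequests();
    for (RejectedSchedulingRequest r : rejected) {
      // The AM decides whether to resubmit, relax the constraint, or give up;
      // the framework does not retry on the AM's behalf beyond this point.
      System.out.println("Request " + r.getRequest().getAllocationRequestId()
          + " rejected: " + r.getReason());
    }
  }
}
{code}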



> [API] Introduce interfaces for placement constraint processing
> --
>
> Key: YARN-7669
> URL: https://issues.apache.org/jira/browse/YARN-7669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7669-YARN-6592.001.patch, 
> YARN-7669-YARN-6592.002.patch, YARN-7669-YARN-6592.003.patch
>
>
> As per discussions in YARN-7612. This JIRA will introduce the generic 
> interfaces which will be implemented in YARN-7612



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5366) Improve handling of the Docker container life cycle

2017-12-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16298621#comment-16298621
 ] 

genericqa commented on YARN-5366:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
14s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 10s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 595 unchanged - 0 fixed = 597 total (was 595) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
11s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
10s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| 

[jira] [Commented] (YARN-7669) [API] Introduce interfaces for placement constraint processing

2017-12-20 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16298560#comment-16298560
 ] 

Sunil G commented on YARN-7669:
---

Thanks [~asuresh] and [~kkaranasos]

Few doubts and comments in this patch.

# How is {{ApplicationMasterServiceUtils#addToRejectedSchedulingRequests}} 
invoked? I could not find a caller - is it part of some dependent ticket?
# {{AllocateResponse#getRejectedSchedulingRequests}} gives the requests 
rejected between the previous allocate call and the current one. But will this 
info be retained at the scheduler if the RM restarts between AM heartbeats? 
Also, one more doubt here:
# Could {{RejectionReason}} also be used in cases like node constraints? It is 
also possible that placement fails because a node constraint is violated on a 
node.
# In line with the above comment, I think the *RejectionReason* enum provides 
only a very minimal rejection reason. COULD_NOT_SCHEDULE_ON_NODE could happen 
for multiple reasons. On the other hand, a free-form diagnostic string would be 
too descriptive for AMs to decode. Could we add more enums as error codes 
rather than a single high-level code like COULD_NOT_SCHEDULE_ON_NODE?


> [API] Introduce interfaces for placement constraint processing
> --
>
> Key: YARN-7669
> URL: https://issues.apache.org/jira/browse/YARN-7669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7669-YARN-6592.001.patch, 
> YARN-7669-YARN-6592.002.patch, YARN-7669-YARN-6592.003.patch
>
>
> As per discussions in YARN-7612. This JIRA will introduce the generic 
> interfaces which will be implemented in YARN-7612



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5366) Improve handling of the Docker container life cycle

2017-12-20 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-5366:
--
Attachment: (was: YARN-5366.008.patch)

> Improve handling of the Docker container life cycle
> ---
>
> Key: YARN-5366
> URL: https://issues.apache.org/jira/browse/YARN-5366
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>  Labels: oct16-medium
> Attachments: YARN-5366.001.patch, YARN-5366.002.patch, 
> YARN-5366.003.patch, YARN-5366.004.patch, YARN-5366.005.patch, 
> YARN-5366.006.patch, YARN-5366.007.patch, YARN-5366.008.patch
>
>
> There are several paths that need to be improved with regard to the Docker 
> container lifecycle when running Docker containers on YARN.
> 1) Provide the ability to keep a container on the NodeManager for a set 
> period of time for debugging purposes.
> 2) Support sending signals to the process in the container to allow for 
> triggering stack traces, heap dumps, etc.
> 3) Support for Docker's live restore, which means moving away from the use of 
> {{docker wait}}. (YARN-5818)
> 4) Improve the resiliency of liveliness checks (kill -0) by adding retries.
> 5) Improve the resiliency of container removal by adding retries.
> 6) Only attempt to stop, kill, and remove containers if the current container 
> state allows for it.
> 7) Better handling of short lived containers when the container is stopped 
> before the PID can be retrieved. (YARN-6305)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5366) Improve handling of the Docker container life cycle

2017-12-20 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-5366:
--
Attachment: YARN-5366.008.patch

> Improve handling of the Docker container life cycle
> ---
>
> Key: YARN-5366
> URL: https://issues.apache.org/jira/browse/YARN-5366
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>  Labels: oct16-medium
> Attachments: YARN-5366.001.patch, YARN-5366.002.patch, 
> YARN-5366.003.patch, YARN-5366.004.patch, YARN-5366.005.patch, 
> YARN-5366.006.patch, YARN-5366.007.patch, YARN-5366.008.patch
>
>
> There are several paths that need to be improved with regard to the Docker 
> container lifecycle when running Docker containers on YARN.
> 1) Provide the ability to keep a container on the NodeManager for a set 
> period of time for debugging purposes.
> 2) Support sending signals to the process in the container to allow for 
> triggering stack traces, heap dumps, etc.
> 3) Support for Docker's live restore, which means moving away from the use of 
> {{docker wait}}. (YARN-5818)
> 4) Improve the resiliency of liveliness checks (kill -0) by adding retries.
> 5) Improve the resiliency of container removal by adding retries.
> 6) Only attempt to stop, kill, and remove containers if the current container 
> state allows for it.
> 7) Better handling of short lived containers when the container is stopped 
> before the PID can be retrieved. (YARN-6305)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4227) FairScheduler: RM quits processing expired container from a removed node

2017-12-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16298346#comment-16298346
 ] 

genericqa commented on YARN-4227:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 39s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-4227 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12903016/YARN-4227.5.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1ac53727bf56 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d62932c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18994/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18994/testReport/ |
| Max. process+thread count | 835 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Assigned] (YARN-7535) We should display origin value of demand in fair scheduler page

2017-12-20 Thread Wilfred Spiegelenburg (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg reassigned YARN-7535:
---

Assignee: Wilfred Spiegelenburg  (was: YunFan Zhou)

> We should display origin value of demand in fair scheduler page
> ---
>
> Key: YARN-7535
> URL: https://issues.apache.org/jira/browse/YARN-7535
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Reporter: YunFan Zhou
>Assignee: Wilfred Spiegelenburg
>
> The value of *demand* of leaf queue that we now view on the fair scheduler 
> page shows only the value of *maxResources* when the demand value is greater 
> than *maxResources*. It doesn't reflect the real situation. Most of the time, 
> when we expand the queue, we often rely on seeing the current demand real 
> value.
> {code:java}
> private void updateDemandForApp(FSAppAttempt sched, Resource maxRes) {
> sched.updateDemand();
> Resource toAdd = sched.getDemand();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Counting resource from " + sched.getName() + " " + toAdd
>   + "; Total resource consumption for " + getName() + " now "
>   + demand);
> }
> demand = Resources.add(demand, toAdd);
> demand = Resources.componentwiseMin(demand, maxRes);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7674) Update Timeline Reader web app address in UI2

2017-12-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16298221#comment-16298221
 ] 

genericqa commented on YARN-7674:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7674 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12903000/YARN-7674.001.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux 6436898ed3ca 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d62932c |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 441 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18993/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update Timeline Reader web app address in UI2
> -
>
> Key: YARN-7674
> URL: https://issues.apache.org/jira/browse/YARN-7674
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Sunil G
> Attachments: YARN-7674.001.patch
>
>
> YARN-7662 introduces a new set of configurations. It is required to update in 
> UI2 as well. 
> cc :/ [~sunilg]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4227) FairScheduler: RM quits processing expired container from a removed node

2017-12-20 Thread Wilfred Spiegelenburg (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg updated YARN-4227:

Attachment: YARN-4227.5.patch

I ran into this again and the current point of failure is still the same point 
in the code just a different code path to get there:
{code}
ERROR org.apache.hadoop.yarn.YarnUncaughtExceptionhandler: Thread 
Thread[Preemption Timer,5,main] threw an Exception.
java.lang.NullPointerException
  at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.completedContainer(FairScheduler.java:699)
  at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSPreemptionThread$PreemptContainersTask.run(FSPreemptionThread.java:230)
  at java.util.TimerThread.mainLoop(Timer.java:555)
  at java.util.TimerThread.run(Timer.java:505)
{code}

In the log we also had the entry for an unknown host:
{code}
ERROR 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.VisitedResourceRequestTracker:
 Found ResourceRequest for a non-existant node/rack named 
{code}
This shows that a node which used to be present is no longer there.

[~Steven Rand]: The ClusterNodeTracker is used for all schedulers. We cannot 
change what {{ClusterNodeTracker#getNode}} returns without impacting all 
schedulers and thus affecting a huge amount of code. Adding more checks to make 
sure the node is not null is not needed; this seems to be the last place in 
which we do not handle a removed node correctly.

Rebased the fix to trunk. 
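
For readers following along, the shape of the guard is roughly as follows. 
This is a simplified fragment with illustrative signatures and field names, 
not the attached patch:

{code:java}
// Simplified fragment, illustrative only: signatures and field names do not
// match the real FairScheduler. The point is the null check on the node that
// ClusterNodeTracker#getNode returns once a node has been removed.
private void completedContainer(RMContainer rmContainer) {
  FSSchedulerNode node = nodeTracker.getNode(rmContainer.getAllocatedNode());
  if (node == null) {
    LOG.info("Skipping completed container " + rmContainer.getContainerId()
        + ": its node has already been removed");
    return;
  }
  // ... existing completion handling that dereferences 'node' ...
}
{code}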

> FairScheduler: RM quits processing expired container from a removed node
> 
>
> Key: YARN-4227
> URL: https://issues.apache.org/jira/browse/YARN-4227
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.3.0, 2.5.0, 2.7.1
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Critical
> Attachments: YARN-4227.2.patch, YARN-4227.3.patch, YARN-4227.4.patch, 
> YARN-4227.5.patch, YARN-4227.patch
>
>
> Under some circumstances the node is removed before an expired container 
> event is processed causing the RM to exit:
> {code}
> 2015-10-04 21:14:01,063 INFO 
> org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: 
> Expired:container_1436927988321_1307950_01_12 Timed out after 600 secs
> 2015-10-04 21:14:01,063 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_1436927988321_1307950_01_12 Container Transitioned from 
> ACQUIRED to EXPIRED
> 2015-10-04 21:14:01,063 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerApp: 
> Completed container: container_1436927988321_1307950_01_12 in state: 
> EXPIRED event:EXPIRE
> 2015-10-04 21:14:01,063 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=system_op   
>OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS  
> APPID=application_1436927988321_1307950 
> CONTAINERID=container_1436927988321_1307950_01_12
> 2015-10-04 21:14:01,063 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type CONTAINER_EXPIRED to the scheduler
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.completedContainer(FairScheduler.java:849)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1273)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:122)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:585)
>   at java.lang.Thread.run(Thread.java:745)
> 2015-10-04 21:14:01,063 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {code}
> The stack trace is from 2.3.0 but the same issue has been observed in 2.5.0 
> and 2.6.0 by different customers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7672) hadoop-sls can not simulate huge scale of YARN

2017-12-20 Thread zhangshilong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16298192#comment-16298192
 ] 

zhangshilong edited comment on YARN-7672 at 12/20/17 9:55 AM:
--

[~cxcw] I use two daemons deployed on two different hosts.
I start 1000~5000 threads to simulate the NMs/AMs, because I need to simulate 1 
apps running with 1 NM nodes.
One task uses 1 vcore and 2304 MB, and one NM has 50 vcores and 50*2304 MB of 
resources.
The NM and AM simulators are all CPU-bound tasks, so cpu.load goes up to 100+ 
(with only 32 cores). And, as we know, the scheduler also uses one process for 
allocating resources.



was (Author: zsl2007):
[~cxcw] I use two daemons deployed on two different hosts.
I start 1000~5000 threads to simulate the NMs/AMs, because I need to simulate 1 
apps running with 1 NM nodes.
One task uses 1 vcore and 2304 MB, and one NM has 50 vcores and 50*2304 MB of 
resources.
The NM and AM simulators are all CPU-bound tasks, so cpu.load goes up to 100+ 
(with only 32 cores). And, as we know, the scheduler also uses one process for 
allocating resources.


> hadoop-sls can not simulate huge scale of YARN
> --
>
> Key: YARN-7672
> URL: https://issues.apache.org/jira/browse/YARN-7672
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: zhangshilong
>Assignee: zhangshilong
> Attachments: YARN-7672.patch
>
>
> Our YARN cluster has scaled to nearly 10 thousand nodes, so we need to do 
> scheduler pressure testing.
> Using SLS, we start 2000+ threads to simulate NMs and AMs, but cpu.load goes 
> very high, to 100+. I thought that would affect the performance evaluation of 
> the scheduler.
> So I thought to separate the scheduler from the simulator:
> I start a real RM, then SLS registers nodes to the RM and submits apps to the 
> RM using RM RPC.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7672) hadoop-sls can not simulate huge scale of YARN

2017-12-20 Thread zhangshilong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16298192#comment-16298192
 ] 

zhangshilong commented on YARN-7672:


[~cxcw] I use two daemons deployed on two different hosts.
I start 1000~5000 threads to simulate the NMs/AMs, because I need to simulate 1 
apps running with 1 NM nodes.
One task uses 1 vcore and 2304 MB, and one NM has 50 vcores and 50*2304 MB of 
resources.
The NM and AM simulators are all CPU-bound tasks, so cpu.load goes up to 100+ 
(with only 32 cores). And, as we know, the scheduler also uses one process for 
allocating resources.


> hadoop-sls can not simulate huge scale of YARN
> --
>
> Key: YARN-7672
> URL: https://issues.apache.org/jira/browse/YARN-7672
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: zhangshilong
>Assignee: zhangshilong
> Attachments: YARN-7672.patch
>
>
> Our YARN cluster has scaled to nearly 10 thousand nodes, so we need to do 
> scheduler pressure testing.
> Using SLS, we start 2000+ threads to simulate NMs and AMs, but cpu.load goes 
> very high, to 100+. I thought that would affect the performance evaluation of 
> the scheduler.
> So I thought to separate the scheduler from the simulator:
> I start a real RM, then SLS registers nodes to the RM and submits apps to the 
> RM using RM RPC.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7669) [API] Introduce interfaces for placement constraint processing

2017-12-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16298179#comment-16298179
 ] 

genericqa commented on YARN-7669:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
36s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
23s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
19s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-6592 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  2s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 14 new + 280 unchanged - 1 fixed = 294 total (was 281) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
15s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 10s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 8s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}154m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.webapp.TestRMWithCSRFFilter |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7669 |
| JIRA Patch URL | 

[jira] [Comment Edited] (YARN-6592) Rich placement constraints in YARN

2017-12-20 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297925#comment-16297925
 ] 

Weiwei Yang edited comment on YARN-6592 at 12/20/17 9:43 AM:
-

Thanks [~kkaranasos], does this umbrella depend on YARN-3409 (that one seems to 
be the umbrella for adding node attributes)? I roughly went through the 
discussion in YARN-3409 (what a long discussion :P) and it appears to be so; 
correct me if I am wrong. Thank you.


was (Author: cheersyang):
Thanks [~kkaranasos], does this umbrella depend on YARN-3409 (that one seems to 
be the umbrella for adding node attributes)? I am asking because I did not find 
any child tasks under this umbrella for managing node attributes and processing 
constraints with respect to those attributes.

One more thing: besides the simple operators {{IN}} and {{NOT_IN}}, I think some 
more should be supported, such as {{GT}} (greater than), {{GE}} (greater than 
or equal to), {{LT}} (less than), and {{LE}} (less than or equal to). For example,

{code}
{target: node-attribute:diskNum GT 5, scope host}
{code}

This allocates to a node whose diskNum is greater than 5, which is very useful 
for long-running services.
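
To make this concrete, here is a purely hypothetical sketch of how such a 
comparison might look in the Java constraint API. {{AttributeCmpOp}} and 
{{targetNodeAttributeCmp}} are invented for illustration and are not part of 
the existing {{PlacementConstraints}} helper class.
{code}
// Purely hypothetical sketch: AttributeCmpOp and targetNodeAttributeCmp do NOT
// exist in org.apache.hadoop.yarn.api.resource.PlacementConstraints; they only
// illustrate how the proposed GT/GE/LT/LE operators could surface in the API.
enum AttributeCmpOp { GT, GE, LT, LE }

PlacementConstraint buildDiskConstraint() {
  // Equivalent to {target: node-attribute:diskNum GT 5, scope: host}:
  // place the allocation only on hosts whose diskNum attribute is > 5.
  return PlacementConstraints
      .targetNodeAttributeCmp(
          PlacementConstraints.NODE,   // scope: host/node
          "diskNum",                   // node attribute name
          AttributeCmpOp.GT,           // proposed comparison operator
          "5")                         // threshold value
      .build();
}
{code}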

> Rich placement constraints in YARN
> --
>
> Key: YARN-6592
> URL: https://issues.apache.org/jira/browse/YARN-6592
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Konstantinos Karanasos
> Attachments: YARN-6592-Rich-Placement-Constraints-Design-V1.pdf
>
>
> This JIRA consolidates the efforts of YARN-5468 and YARN-4902.
> It adds support for rich placement constraints to YARN, such as affinity and 
> anti-affinity between allocations within the same or across applications.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7674) Update Timeline Reader web app address in UI2

2017-12-20 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7674:
--
Attachment: YARN-7674.001.patch

Uploading v1 of the patch.

> Update Timeline Reader web app address in UI2
> -
>
> Key: YARN-7674
> URL: https://issues.apache.org/jira/browse/YARN-7674
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Sunil G
> Attachments: YARN-7674.001.patch
>
>
> YARN-7662 introduces a new set of configurations. These need to be updated in 
> UI2 as well. 
> cc :/ [~sunilg]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org