[jira] [Commented] (YARN-7613) Implement Planning algorithms for rich placement

2017-12-26 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304182#comment-16304182
 ] 

Arun Suresh commented on YARN-7613:
---

Thanks for updating, [~pgaref].

Some comments:
* Let's change the name of {{attemptAllocationOnNode}} in the Algorithm to 
{{attemptPlacementOnNode}}, since we already have a similar method on the 
Scheduler.
* The {{cleanTempContainers}} can become inconsistent if the placement thread 
pool size is > 1 and an application sends multiple batches. But it might not 
be that big a deal, since we are going to request that apps batch all their 
SchedulingRequests together anyway.
* Add docs to the {{SchedulingRequestWrapper}}.
* The {{@Override}} annotation should be on the line above the method, as in 
the sketch below.
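For illustration, a minimal sketch of the last two points, with a hypothetical 
interface standing in for the actual Algorithm class (names are placeholders, 
not the real YARN-6592 types):

{code:java}
// Stub interface for illustration only; the real Algorithm interface differs.
interface ConstraintPlacementAlgorithm {
  boolean attemptPlacementOnNode(java.util.Set<String> sourceTags, String nodeId);
}

class SamplePlacementAlgorithm implements ConstraintPlacementAlgorithm {

  // Renamed from attemptAllocationOnNode, since the Scheduler already has a
  // similarly named method.
  @Override // the annotation sits on its own line, above the method
  public boolean attemptPlacementOnNode(java.util.Set<String> sourceTags,
      String nodeId) {
    // Placement-constraint checks against the node's tags would go here.
    return true;
  }
}
{code}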

> Implement Planning algorithms for rich placement
> 
>
> Key: YARN-7613
> URL: https://issues.apache.org/jira/browse/YARN-7613
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7613-YARN-6592.001.patch, YARN-7613.wip.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3409) Support Node Attribute functionality

2017-12-26 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304118#comment-16304118
 ] 

Naganarasimha G R commented on YARN-3409:
-

Thanks, [~cheersyang], for looking into this JIRA and the design doc.
bq. then there is a request asking for NUM_OF_DISKS > 3, there is only 1 node 
satisfying this requirement but its resources are already used up by other 
containers. What will happen here? Will you preempt containers on this node to 
make room for such a request?
Attribute-based scheduling is best effort and doesn't provide capacity 
guarantees across attribute labels, as a given node can have multiple 
attributes and scheduling might happen based on only a subset of them. Hence 
it would be impossible to preempt, as doing so might lead to cyclic 
preemptions. So if guaranteed resources based on a specific label are the 
requirement, then we need to use {{"Partition Labels"}}.
That said, the APIs coming in YARN-6592 will help specify how long a container 
should wait before trying other options, so hard and soft constraints can be 
controlled on the request side (though not on the allocation side, which you 
were expecting).
 

> Support Node Attribute functionality
> 
>
> Key: YARN-3409
> URL: https://issues.apache.org/jira/browse/YARN-3409
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api, client, RM
>Reporter: Wangda Tan
>Assignee: Naganarasimha G R
> Attachments: 3409-apiChanges_v2.pdf (4).pdf, 
> Constraint-Node-Labels-Requirements-Design-doc_v1.pdf, YARN-3409.WIP.001.patch
>
>
> Specifying only one label for each node (in other words, partitioning a 
> cluster) is a way to determine how the resources of a particular set of 
> nodes can be shared by a group of entities (like teams, departments, etc.). 
> Partitions of a cluster have the following characteristics:
> - The cluster is divided into several disjoint sub-clusters.
> - ACLs/priorities can apply to a partition (only the market team has 
> priority to use the partition).
> - Capacity percentages can apply to a partition (the market team has a 40% 
> minimum capacity and the dev team has a 60% minimum capacity of the 
> partition).
> Attributes are orthogonal to partitions; they describe features of a node's 
> hardware/software purely for affinity. Some examples of attributes:
> - glibc version
> - JDK version
> - Type of CPU (x86_64/i686)
> - Type of OS (windows, linux, etc.)
> With this, an application can ask for resources on nodes that satisfy 
> (glibc.version >= 2.20 && JDK.version >= 8u20 && x86_64).
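To make the closing example concrete, here is a toy evaluation of that 
attribute expression against a node's attribute map. This is editorial 
illustration only: the attribute names and the lexicographic version 
comparison are simplifications, not a proposed API.

{code:java}
import java.util.Map;

class AttributeExpressionExample {
  // Toy check of (glibc.version >= 2.20 && JDK.version >= 8u20 && x86_64).
  // Lexicographic comparison stands in for real version parsing.
  static boolean satisfies(Map<String, String> attrs) {
    return attrs.getOrDefault("glibc.version", "").compareTo("2.20") >= 0
        && attrs.getOrDefault("JDK.version", "").compareTo("8u20") >= 0
        && "x86_64".equals(attrs.get("cpu.arch"));
  }

  public static void main(String[] args) {
    System.out.println(satisfies(Map.of(
        "glibc.version", "2.23",
        "JDK.version", "8u40",
        "cpu.arch", "x86_64"))); // prints: true
  }
}
{code}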



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7613) Implement Planning algorithms for rich placement

2017-12-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304108#comment-16304108
 ] 

genericqa commented on YARN-7613:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 30m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
15s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
29s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
13s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-6592 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
53s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 53s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 22 new + 258 unchanged - 1 fixed = 280 total (was 259) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
33s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 34s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 24s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
20s{color} | {color:red} The patch generated 5 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7613 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12903741/YARN-7613-YARN-6592.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b9fb6f9f90c0 4.4.0-64-generic #8

[jira] [Commented] (YARN-6856) Support CLI for Node Attributes Mapping

2017-12-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304090#comment-16304090
 ] 

genericqa commented on YARN-6856:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} yarn-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
15s{color} | {color:red} root in yarn-3409 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
55s{color} | {color:green} yarn-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
18s{color} | {color:green} yarn-3409 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-common in yarn-3409 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-yarn in yarn-3409 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-yarn-client in yarn-3409 failed. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  0m 
52s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-common in yarn-3409 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-yarn-client in yarn-3409 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-common in yarn-3409 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-yarn in yarn-3409 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-yarn-client in yarn-3409 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 19s{color} | {color:orange} root: The patch generated 24 new + 26 unchanged 
- 0 fixed = 50 total (was 26) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
26s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
30s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} sha

[jira] [Commented] (YARN-6856) Support CLI for Node Attributes Mapping

2017-12-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304088#comment-16304088
 ] 

genericqa commented on YARN-6856:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} yarn-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
15s{color} | {color:red} root in yarn-3409 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
21s{color} | {color:green} yarn-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
21s{color} | {color:green} yarn-3409 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-common in yarn-3409 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-yarn in yarn-3409 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-yarn-client in yarn-3409 failed. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  0m 
58s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-common in yarn-3409 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-yarn-client in yarn-3409 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-common in yarn-3409 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-yarn in yarn-3409 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-yarn-client in yarn-3409 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 18s{color} | {color:orange} root: The patch generated 24 new + 26 unchanged 
- 0 fixed = 50 total (was 26) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
25s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
29s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} sha

[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-26 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304079#comment-16304079
 ] 

Arun Suresh commented on YARN-7682:
---

bq. Following up on the discussion: My only concern is that the 
PlacementConstraints util class is part of the API package and the TagManager 
is part of the RM package. ..
Ah, good point. In that case, feel free to create a 
{{PlacementConstraintsUtil}} class in the resourcemanager package; a rough 
sketch of what that could look like is below.
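A rough sketch of such a utility class, with every dependency passed in 
explicitly (all names and signatures here are assumptions, not the committed 
API; imports of the RM-internal types are abbreviated):

{code:java}
import java.util.Set;
import org.apache.hadoop.yarn.api.records.ApplicationId;
// RM-internal imports (SchedulerNode, AllocationTagsManager,
// PlacementConstraintManager) assumed from the YARN-6592 branch.

public final class PlacementConstraintsUtil {

  private PlacementConstraintsUtil() { } // static utility; no instances

  /**
   * Returns true if placing a container carrying the given source tags on
   * the node would not violate the application's placement constraints.
   * Dependencies are explicit parameters, so there is no hidden
   * initialization ordering between the PCM and the tags manager.
   */
  public static boolean canSatisfyConstraints(ApplicationId appId,
      Set<String> sourceTags, SchedulerNode node,
      PlacementConstraintManager pcm, AllocationTagsManager tagsManager) {
    // Look up the constraint registered for these tags and evaluate it
    // against the allocation tags on the node (and, for rack scope, on the
    // node's rack) via the tags manager.
    // ... evaluation elided ...
    return true; // placeholder
  }
}
{code}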

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-26 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304075#comment-16304075
 ] 

Panagiotis Garefalakis edited comment on YARN-7682 at 12/26/17 11:43 PM:
-

bq. I think we need to pass the whole SchedulerNode in, not just the nodeId, 
since we need to get the node's rack.
Correct.

[~asuresh] [~kkaranasos]
Following up on the discussion: my only concern is that the 
PlacementConstraints util class is part of the API package and the TagManager 
is part of the RM package. We would have to move one of them to avoid creating 
circular Maven dependencies.

Thoughts?



was (Author: pgaref):
bq. I think we need to pass the whole SchedulerNode in, not just the nodeId, 
since we need to get the node's rack.
Correct.

[~asuresh] [~kkaranasos]
Following the discussion: my only concern is that the PlacementConstraints 
util class is part of the API package and the TagManager is part of the RM 
package. We would have to move one of them to avoid creating circular Maven 
dependencies.

Thoughts?


> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-26 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304075#comment-16304075
 ] 

Panagiotis Garefalakis commented on YARN-7682:
--

bq. I think we need to pass the whole SchedulerNode in, not just the nodeId, 
since we need to get the node's rack.
Correct.

[~asuresh] [~kkaranasos]
Following the discussion: my only concern is that the PlacementConstraints 
util class is part of the API package and the TagManager is part of the RM 
package. We would have to move one of them to avoid creating circular Maven 
dependencies.

Thoughts?


> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7613) Implement Planning algorithms for rich placement

2017-12-26 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7613:
-
Attachment: YARN-7613-YARN-6592.001.patch

Attaching patch v001, taking into account the comments above:
* Extending the AllocationTagsManager to keep track of temporary container 
tags during the placement cycle.
* Removing the applicationId from the add/remove container methods, since it 
can be derived from the containerId.
* A BasicPlacementAlgorithm implementation with two simple iterators (Serial 
and PopularTags); a toy sketch of a serial iterator follows.
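A toy sketch of what a "serial" candidate-node iterator could look like 
(illustrative only; the actual iterator in the patch may differ):

{code:java}
import java.util.Iterator;
import java.util.List;

// Walk candidate nodes in a fixed order, wrapping around from a start offset
// so successive placement cycles do not always start at the same node.
class SerialNodeIterator<N> implements Iterator<N> {
  private final List<N> nodes;
  private final int start;
  private int served = 0;

  SerialNodeIterator(List<N> nodes, int startOffset) {
    this.nodes = nodes;
    this.start = startOffset;
  }

  @Override
  public boolean hasNext() {
    return served < nodes.size();
  }

  @Override
  public N next() {
    // Serve each node exactly once, in order, wrapping at the end.
    return nodes.get((start + served++) % nodes.size());
  }
}
{code}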

> Implement Planning algorithms for rich placement
> 
>
> Key: YARN-7613
> URL: https://issues.apache.org/jira/browse/YARN-7613
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7613-YARN-6592.001.patch, YARN-7613.wip.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6856) Support CLI for Node Attributes Mapping

2017-12-26 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304056#comment-16304056
 ] 

Naganarasimha G R commented on YARN-6856:
-

Hi [~sunilg], I have rebased the yarn-3409 branch and retriggered the build.

> Support CLI for Node Attributes Mapping
> ---
>
> Key: YARN-6856
> URL: https://issues.apache.org/jira/browse/YARN-6856
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6856-YARN-3409.001.patch, 
> YARN-6856-YARN-3409.002.patch, YARN-6856-yarn-3409.003.patch, 
> YARN-6856-yarn-3409.004.patch
>
>
> This focuses on the new CLI for the mapping of Node Attributes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-26 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304039#comment-16304039
 ] 

Arun Suresh commented on YARN-7682:
---

Also, I think we need to pass the whole SchedulerNode in, not just the nodeId, 
since we need to get the node's rack.

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-26 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304038#comment-16304038
 ] 

Konstantinos Karanasos commented on YARN-7682:
--

bq. Cool. What do you think about putting the method in the 
PlacementConstraints utility class that we already have for creating placement 
constraints?
I think this makes sense.

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-26 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304037#comment-16304037
 ] 

Arun Suresh commented on YARN-7682:
---

Cool. What do you think about putting the method in the 
{{PlacementConstraints}} utility class that we already have for creating 
placement constraints?

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-26 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304034#comment-16304034
 ] 

Konstantinos Karanasos commented on YARN-7682:
--

If we pass everything as a parameter, then to me it should be a separate 
class, because it becomes a utility method, like you say.
I was trying to fit it into the PCM :) and that's why I suggested this.

bq. since now we have an implicit initialization ordering
Agreed; that's why I suggested using the RMContext in the subsequent comment. 
But on the other hand, there is an implicit order in any case, in the sense 
that a tags manager has to exist before the isSatisfiable method can be 
called. And given there will be only one tags manager, I think we can get it 
from the context. Otherwise, I would vote for a separate class.

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-26 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304032#comment-16304032
 ] 

Arun Suresh edited comment on YARN-7682 at 12/26/17 9:46 PM:
-

To be honest, I prefer passing it as a parameter.
That way the API is clear and the dependencies are explicit: "to verify 
constraint violation, we need the following things: the TagsManager, the 
source tags, and the Node on which we intend to check".
I am not a big fan of init()-based injection, since we then have an implicit 
initialization ordering: when the RM starts up, we need to start the 
AllocationTagsManager first and THEN pass it to the PCM's init. This can lead 
to bugs later and is more difficult to test. Also, other than for this method, 
the PCM has no use for the TagsManager.

To be really honest, this should actually be a utility function, but having it 
in the PCM is still fine, because the PCM at the end of the day stores the 
constraints and might be the more appropriate place to transform them. So I am 
fine with it being in the PCM.


was (Author: asuresh):
To be honest, I prefer passing it as a parameter.
That way the API is clear: "to verify constraint violation, we need the 
following things: the TagsManager, the source tags, and the Node on which we 
intend to check".
I am not a big fan of init()-based injection, since we then have an implicit 
initialization ordering: when the RM starts up, we need to start the 
AllocationTagsManager first and THEN pass it to the PCM's init. This can lead 
to bugs later and is more difficult to test. Also, other than for this method, 
the PCM has no use for the TagsManager.

To be really honest, this should actually be a utility function, but having it 
in the PCM is still fine, because the PCM at the end of the day stores the 
constraints and might be the more appropriate place to transform them. So I am 
fine with it being in the PCM.

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-26 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304032#comment-16304032
 ] 

Arun Suresh edited comment on YARN-7682 at 12/26/17 9:45 PM:
-

To be honest, I prefer passing it as a parameter.
That way the API is clear: "to verify constraint violation, we need the 
following things: the TagsManager, the source tags, and the Node on which we 
intend to check".
I am not a big fan of init()-based injection, since we then have an implicit 
initialization ordering: when the RM starts up, we need to start the 
AllocationTagsManager first and THEN pass it to the PCM's init. This can lead 
to bugs later and is more difficult to test. Also, other than for this method, 
the PCM has no use for the TagsManager.

To be really honest, this should actually be a utility function, but having it 
in the PCM is still fine, because the PCM at the end of the day stores the 
constraints and might be the more appropriate place to transform them. So I am 
fine with it being in the PCM.


was (Author: asuresh):
To be honest, I prefer passing it as a parameter.
That way the API is clear: "to verify constraint violation, we need the 
following things: the TagsManager, the source tags, and the Node on which we 
intend to check".
I am not a big fan of init()-based injection, since we then have an implicit 
initialization ordering: when the RM starts up, we need to start the 
AllocationTagsManager first and THEN pass it to the PCM's init. This can lead 
to bugs later and is more difficult to test.

To be really honest, this should actually be a utility function, but having it 
in the PCM is still fine, because the PCM at the end of the day stores the 
constraints and might be the more appropriate place to transform them. So I am 
fine with it being in the PCM.

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-26 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304032#comment-16304032
 ] 

Arun Suresh commented on YARN-7682:
---

To be honest, I prefer passing it as a parameter.
That way the API is clear: "to verify constraint violation, we need the 
following things: the TagsManager, the source tags, and the Node on which we 
intend to check".
I am not a big fan of init()-based injection, since we then have an implicit 
initialization ordering: when the RM starts up, we need to start the 
AllocationTagsManager first and THEN pass it to the PCM's init. This can lead 
to bugs later and is more difficult to test.

To be really honest, this should actually be a utility function, but having it 
in the PCM is still fine, because the PCM at the end of the day stores the 
constraints and might be the more appropriate place to transform them. So I am 
fine with it being in the PCM.
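To make the trade-off concrete, a sketch of the two API shapes under 
discussion (all type and method names are placeholders, not the actual 
YARN-6592 interfaces):

{code:java}
import java.util.Set;

// Stub types for illustration.
class AllocationTagsManager { }
class SchedulerNode { }

// Option A: init()-based injection. Implicit ordering: the tags manager must
// be constructed and passed in before any canAssign call is valid.
interface PcmWithInit {
  void init(AllocationTagsManager tagsManager);
  boolean canAssign(Set<String> sourceTags, SchedulerNode node);
}

// Option B: explicit parameter. The dependency is visible at the call site,
// and the PCM itself never needs to hold a tags-manager reference.
interface PcmWithParam {
  boolean canAssign(Set<String> sourceTags, SchedulerNode node,
      AllocationTagsManager tagsManager);
}
{code}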

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5366) Improve handling of the Docker container life cycle

2017-12-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304030#comment-16304030
 ] 

genericqa commented on YARN-5366:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 30m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
12s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 593 unchanged - 2 fixed = 593 total (was 595) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
2s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m  
7s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-yarn-site in the patch passed. {c

[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-26 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304028#comment-16304028
 ] 

Konstantinos Karanasos commented on YARN-7682:
--

bq. If we want to make canAssign part of the PCM, I think we should make the 
tags manager a field of the PCM rather than passing it as a parameter (i.e., 
pass the tags manager during the PCM's initialization
bq. Makes sense to me

Thinking more about it: another way would be to get the tags manager from the 
RMContext. Either way works, but in any case let's not pass it as a parameter.

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-26 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304026#comment-16304026
 ] 

Panagiotis Garefalakis commented on YARN-7682:
--

Thanks for the comments, [~asuresh] and [~kkaranasos]!

bq. If we want to make canAssign part of the PCM, I think we should make the 
tags manager a field of the PCM rather than passing it as a parameter (i.e., 
pass the tags manager during the PCM's initialization
Makes sense to me.

bq. We can support composite constraints as a second step (including delayed 
or).
Sure, working on an updated version with simple constraints now.

bq. What about rack scope, given YARN-7653 is there
Agreed, changing the API accordingly for the new patch.



> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-26 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304021#comment-16304021
 ] 

Konstantinos Karanasos edited comment on YARN-7682 at 12/26/17 9:10 PM:


Thanks for the initial patch, [~pgaref]. Some comments:
* If we want to make canAssign part of the PCM, I think we should make the 
tags manager a field of the PCM rather than passing it as a parameter (i.e., 
pass the tags manager during the PCM's initialization). The way it is right 
now, it feels like a utility method that was pushed inside the PCM. If we make 
this change, the tags manager will be a fundamental part of the PCM, so it 
will look better.
* Transformations for the non-composite placement constraints are already 
there. What we can add later is transforming composite constraints to DNF, but 
that should not be needed for the initial versions. We can assume the 
constraint tree has depth=1 for now.
* That said, we should already support all simple constraints, like [~asuresh] 
says; that is, affinity, anti-affinity, and cardinality. If we just support 
the simple constraint and transform all constraints to it (with the existing 
transformations), we should be fine for most use cases. Supporting only 
anti-affinity seems too restrictive (and I don't see the complexity increasing 
by adding the other simple constraints).
* We can support composite constraints as a second step (including delayed 
or). Let's get a first version with all simple constraints though.
* I would call the method 
isSatisfiable/satisfyConstraints/canSatisfyConstraints or something similar, 
given that it checks for constraint satisfiability.
* What about rack scope, given YARN-7653 is there? The API should support it 
(i.e., support different node groups), even if we don't support rack in the 
first cut.


was (Author: kkaranasos):
Thanks for the initial patch, [~pgaref]. Some comments:
* If we want to make canAssign part of the PCM, I think we should make the 
tags manager a field of the PCM rather than passing it as a parameter (i.e., 
pass the tags manager during the PCM's initialization). The way it is right 
now, it feels like a utility method that was pushed inside the PCM. If we make 
this change, the tags manager will be a fundamental part of the PCM, so it 
will look better.
* Transformations for the non-composite placement constraints are already 
there. What we can add later is transforming composite constraints to DNF, but 
that should not be needed for the initial versions. We can assume the 
constraint tree has depth=1 for now.
* That said, we should already support all simple constraints, like [~asuresh] 
says; that is, affinity, anti-affinity, and cardinality. If we just support 
the simple constraint and transform all constraints to it (with the existing 
transformations), we should be fine for most use cases. Supporting only 
anti-affinity seems too restrictive (and I don't see the complexity increasing 
by adding the other simple constraints).
* We can support composite constraints as a second step (including delayed 
or). Let's get a first version with all simple constraints though.
* I would call the method 
isSatisfiable/satisfyConstraints/canSatisfyConstraints or something similar, 
given that it checks for constraint satisfiability.
* What about rack scope, given YARN-7653 is there? The API should support it 
(i.e., support different node groups), even if we don't support rack in the 
first cut.

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-26 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304021#comment-16304021
 ] 

Konstantinos Karanasos commented on YARN-7682:
--

Thanks for the initial patch, [~pgaref]. Some comments:
* If we want to make canAssign part of the PCM, I think we should make the 
tags manager a field of the PCM rather than passing it as a parameter (i.e., 
pass the tags manager during the PCM's initialization). The way it is right 
now, it feels like a utility method that was pushed inside the PCM. If we make 
this change, the tags manager will be a fundamental part of the PCM, so it 
will look better.
* Transformations for the non-composite placement constraints are already 
there. What we can add later is transforming composite constraints to DNF, but 
that should not be needed for the initial versions. We can assume the 
constraint tree has depth=1 for now.
* That said, we should already support all simple constraints, like [~asuresh] 
says; that is, affinity, anti-affinity, and cardinality. If we just support 
the simple constraint and transform all constraints to it (with the existing 
transformations), we should be fine for most use cases. Supporting only 
anti-affinity seems too restrictive (and I don't see the complexity increasing 
by adding the other simple constraints; see the sketch after this list).
* We can support composite constraints as a second step (including delayed 
or). Let's get a first version with all simple constraints though.
* I would call the method 
isSatisfiable/satisfyConstraints/canSatisfyConstraints or something similar, 
given that it checks for constraint satisfiability.
* What about rack scope, given YARN-7653 is there? The API should support it 
(i.e., support different node groups), even if we don't support rack in the 
first cut.
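For reference, the three simple constraint kinds expressed with the 
{{PlacementConstraints}} builders from the YARN-6592 branch; the exact method 
names reflect that branch at the time and should be treated as assumptions:

{code:java}
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.RACK;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.cardinality;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetIn;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetNotIn;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;

import org.apache.hadoop.yarn.api.resource.PlacementConstraint;

class SimpleConstraintsExample {
  public static void main(String[] args) {
    // Affinity: run on a node that already hosts an "hbase-master" container.
    PlacementConstraint affinity =
        targetIn(NODE, allocationTag("hbase-master")).build();

    // Anti-affinity: avoid racks that host "zookeeper" containers.
    PlacementConstraint antiAffinity =
        targetNotIn(RACK, allocationTag("zookeeper")).build();

    // Cardinality: between 0 and 3 "spark" containers per node.
    PlacementConstraint atMostThree =
        cardinality(NODE, 0, 3, "spark").build();
  }
}
{code}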

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7613) Implement Planning algorithms for rich placement

2017-12-26 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304015#comment-16304015
 ] 

Konstantinos Karanasos commented on YARN-7613:
--

Hi guys, I think we can make the canAssign method part of the 
PlacementConstraintManager with some small changes. Let's continue the 
discussion on YARN-7682.

> Implement Planning algorithms for rich placement
> 
>
> Key: YARN-7613
> URL: https://issues.apache.org/jira/browse/YARN-7613
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7613.wip.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5366) Improve handling of the Docker container life cycle

2017-12-26 Thread Zhongyue Nah (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303979#comment-16303979
 ] 

Zhongyue Nah commented on YARN-5366:


I am OOO 3/17-21; please expect a delay in email responses.

Regards,
Zhongyue



> Improve handling of the Docker container life cycle
> ---
>
> Key: YARN-5366
> URL: https://issues.apache.org/jira/browse/YARN-5366
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>  Labels: oct16-medium
> Attachments: YARN-5366.001.patch, YARN-5366.002.patch, 
> YARN-5366.003.patch, YARN-5366.004.patch, YARN-5366.005.patch, 
> YARN-5366.006.patch, YARN-5366.007.patch, YARN-5366.008.patch, 
> YARN-5366.009.patch, YARN-5366.010.patch
>
>
> There are several paths that need to be improved with regard to the Docker 
> container lifecycle when running Docker containers on YARN.
> 1) Provide the ability to keep a container on the NodeManager for a set 
> period of time for debugging purposes.
> 2) Support sending signals to the process in the container to allow for 
> triggering stack traces, heap dumps, etc.
> 3) Support for Docker's live restore, which means moving away from the use of 
> {{docker wait}}. (YARN-5818)
> 4) Improve the resiliency of liveness checks (kill -0) by adding retries.
> 5) Improve the resiliency of container removal by adding retries.
> 6) Only attempt to stop, kill, and remove containers if the current container 
> state allows for it.
> 7) Better handling of short lived containers when the container is stopped 
> before the PID can be retrieved. (YARN-6305)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5366) Improve handling of the Docker container life cycle

2017-12-26 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-5366:
--
Attachment: YARN-5366.010.patch

While working on YARN-6305, I found that the changes were minimal, and 
implementing them in a separate patch would just lead to merge conflicts until 
this one is committed. As a result, I've added that handling to this latest 
patch.

> Improve handling of the Docker container life cycle
> ---
>
> Key: YARN-5366
> URL: https://issues.apache.org/jira/browse/YARN-5366
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>  Labels: oct16-medium
> Attachments: YARN-5366.001.patch, YARN-5366.002.patch, 
> YARN-5366.003.patch, YARN-5366.004.patch, YARN-5366.005.patch, 
> YARN-5366.006.patch, YARN-5366.007.patch, YARN-5366.008.patch, 
> YARN-5366.009.patch, YARN-5366.010.patch
>
>
> There are several paths that need to be improved with regard to the Docker 
> container lifecycle when running Docker containers on YARN.
> 1) Provide the ability to keep a container on the NodeManager for a set 
> period of time for debugging purposes.
> 2) Support sending signals to the process in the container to allow for 
> triggering stack traces, heap dumps, etc.
> 3) Support for Docker's live restore, which means moving away from the use of 
> {{docker wait}}. (YARN-5818)
> 4) Improve the resiliency of liveness checks (kill -0) by adding retries (see 
> the sketch below).
> 5) Improve the resiliency of container removal by adding retries.
> 6) Only attempt to stop, kill, and remove containers if the current container 
> state allows for it.
> 7) Better handling of short lived containers when the container is stopped 
> before the PID can be retrieved. (YARN-6305)
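
As an aside on item 4 in the description above, a standalone sketch of what a 
retried kill -0 liveness probe could look like (the retry count and sleep are 
invented for illustration; this is not how the patch itself is structured):

{code:java}
import java.io.IOException;

public class LivenessProbe {
  // Returns true as soon as "kill -0 <pid>" succeeds; transient failures are
  // retried instead of declaring the container dead on the first error.
  public static boolean isAlive(String pid, int retries, long sleepMs)
      throws InterruptedException {
    for (int attempt = 0; attempt < retries; attempt++) {
      try {
        // kill -0 delivers no signal; exit code 0 means the process exists.
        Process p = new ProcessBuilder("kill", "-0", pid).start();
        if (p.waitFor() == 0) {
          return true;
        }
      } catch (IOException e) {
        // failed to exec the probe at all; treat as transient and retry
      }
      Thread.sleep(sleepMs);
    }
    return false;
  }
}
{code}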



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303925#comment-16303925
 ] 

genericqa commented on YARN-7682:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} YARN-7682 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7682 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12903724/YARN-7682.wip.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19032/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682.wip.patch
>
>
> As per the discussion in YARN-7613, let's expose a {{canAssign}} method in the 
> PlacementConstraintManager that takes sourceTags, an applicationId, a 
> SchedulerNode and an AllocationTagsManager, and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations; we want this API to be called for single allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-26 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303920#comment-16303920
 ] 

Arun Suresh commented on YARN-7682:
---

Thanks for the patch [~pgaref].

bq. We also have the delayedOr special case that should be taken into account 
at this level I believe
I think we should ignore delayedOr for the time being. My initial thought was 
that we should pass the placementAttempt and the timestamp of the last 
placement attempt into canAssign. This way, canAssign can choose which 
constraint to apply from within the list of delayedOr PlacementConstraints. 
But let's tackle that later, once we get the end-to-end flow for 
SingleConstraints working correctly.

You have a TODO there stating that it currently works only for anti-affinity. 
I think we can do affinity, anti-affinity and cardinality, right? For the 
time being, let's just transform everything into a SingleConstraint using the 
SingleConstraintTransformer - since we can express targetIn, targetNotIn and 
cardinality with SingleConstraint - and then check whether the constraint is 
violated; a rough sketch follows. [~kkaranasos], thoughts?
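
Roughly what I mean, using the constraint API from the YARN-6592 branch (from 
memory, so treat the exact imports and signatures as approximate):

{code:java}
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.build;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetNotIn;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;

import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import org.apache.hadoop.yarn.api.resource.PlacementConstraintTransformations.SingleConstraintTransformer;

public class SingleConstraintCheck {
  public static void main(String[] args) {
    // Anti-affinity shown here; affinity (targetIn) and cardinality can be
    // built the same way and normalized to a SingleConstraint too.
    PlacementConstraint antiAffinity =
        build(targetNotIn(NODE, allocationTag("hbase-rs")));

    // Normalize whatever simple constraint we received.
    PlacementConstraint single =
        new SingleConstraintTransformer(antiAffinity).transform();

    // canAssign would then evaluate 'single' uniformly against the node's
    // allocation tags to decide whether the placement violates it.
    System.out.println(single);
  }
}
{code}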






> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682.wip.patch
>
>
> As per the discussion in YARN-7613, let's expose a {{canAssign}} method in the 
> PlacementConstraintManager that takes sourceTags, an applicationId, a 
> SchedulerNode and an AllocationTagsManager, and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations; we want this API to be called for single allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7619) Max AM Resource value in CS UI is different for every user

2017-12-26 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303895#comment-16303895
 ] 

Sunil G commented on YARN-7619:
---

+1 for the latest patch. Thanks, [~eepayne].
I can commit tomorrow if there are no objections.

> Max AM Resource value in CS UI is different for every user
> --
>
> Key: YARN-7619
> URL: https://issues.apache.org/jira/browse/YARN-7619
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, yarn
>Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2, 3.1.0
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: Max AM Resources is Different for Each User.png, 
> YARN-7619.001.patch, YARN-7619.002.patch, YARN-7619.003.patch, 
> YARN-7619.004.branch-2.8.patch, YARN-7619.004.branch-3.0.patch, 
> YARN-7619.004.patch, YARN-7619.005.branch-2.8.patch, 
> YARN-7619.005.branch-3.0.patch, YARN-7619.005.patch
>
>
> YARN-7245 addressed the problem that the {{Max AM Resource}} in the capacity 
> scheduler UI used to contain the queue-level AM limit instead of the 
> user-level AM limit. It fixed this by using the user-specific AM limit that 
> is calculated in {{LeafQueue#activateApplications}}, stored in each user's 
> {{LeafQueue#User}} object, and retrieved via 
> {{UserInfo#getResourceUsageInfo}}.
> The problem is that this user-specific AM limit depends on the activity of 
> other users and other applications in a queue, and it is only calculated and 
> updated when a user's application is activated. So, when 
> {{CapacitySchedulerPage}} retrieves the user-specific AM limit, it is a stale 
> value unless an application was recently activated for a particular user.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-26 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7682:
-
Attachment: YARN-7682.wip.patch

Attaching a proof-of-concept patch after our discussion with [~asuresh].
The canAssign method is now part of the PlacementConstraintManager and is 
responsible for single constrained allocations.
[~kkaranasos], please take a look - the main missing part is proper expression 
transformations, which I guess should be treated differently depending on the 
type (Composite, Target, Single)?
I believe we also have the delayedOr special case that should be taken into 
account at this level.

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682.wip.patch
>
>
> As per the discussion in YARN-7613, let's expose a {{canAssign}} method in the 
> PlacementConstraintManager that takes sourceTags, an applicationId, a 
> SchedulerNode and an AllocationTagsManager, and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations; we want this API to be called for single allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7683) what's the meaning of the existence of method sendKillEvent in containerImpl?

2017-12-26 Thread ChengHuang (JIRA)
ChengHuang created YARN-7683:


 Summary: what's the meaning of the existence of  method 
sendKillEvent in containerImpl?
 Key: YARN-7683
 URL: https://issues.apache.org/jira/browse/YARN-7683
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: ChengHuang


I've found that in the NM, a container is sometimes killed by sending a 
ContainerKillEvent directly, and sometimes via the sendKillEvent method of 
ContainerImpl. The only difference is that sendKillEvent sets the container's 
member variable isMarkeForKilling to true.
This variable is only used in ContainerLaunch when invoking the 
launchContainer method. Besides, by the time ContainerLaunch's call() runs, we 
have already checked the container's state.
So, what is the purpose of sendKillEvent and isMarkeForKilling?
Also, isMarkeForKilling has a spelling error :(
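
For anyone else reading along, a toy model of the two paths being compared - 
greatly simplified from the real ContainerImpl/ContainerLaunch code, and my 
understanding of the flag's purpose (in the comments) may be incomplete:

{code:java}
class ContainerKillEvent {
  final int exitStatus;
  final String diagnostic;
  ContainerKillEvent(int exitStatus, String diagnostic) {
    this.exitStatus = exitStatus;
    this.diagnostic = diagnostic;
  }
}

class ToyContainer {
  // The (misspelled) flag in question.
  private volatile boolean isMarkeForKilling = false;

  // Path 1: some callers dispatch a ContainerKillEvent directly, in which
  // case the flag stays false.
  void handle(ContainerKillEvent event) {
    // ...state-machine transition towards KILLING...
  }

  // Path 2: sendKillEvent sets the flag first, then dispatches the same event.
  void sendKillEvent(int exitStatus, String description) {
    this.isMarkeForKilling = true;
    handle(new ContainerKillEvent(exitStatus, description));
  }

  // Seemingly the only reader is ContainerLaunch#launchContainer, which can
  // use the flag to avoid launching a process for a container that was
  // killed before its launch actually started.
  boolean isMarkedForKilling() {
    return isMarkeForKilling;
  }
}
{code}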



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7242) Support specify values of different resource types in DistributedShell for easier testing

2017-12-26 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303736#comment-16303736
 ] 

Sunil G commented on YARN-7242:
---

Thanks [~GergelyNovak]
Could you please add a few more tests:
# Add a case where a non-existent resource is requested from DS but is not 
defined in the RM.
# Provide tests where 8024 or 8Gi is used for memory (a value bigger than 1GB) 
and verify that it is applied correctly, for both the container resources and 
the AM; a sketch of such a test follows this list.
# Add some more basic UTs where --container-resource is provided but with no 
value, etc.
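
For point 2, something along these lines perhaps - a sketch only: the 
--container-resource flag name is taken from this discussion, 
--master-resource is a hypothetical AM counterpart, and APPMASTER_JAR is a 
placeholder for however the test harness locates the AM jar:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.applications.distributedshell.Client;
import org.junit.Test;

public class TestDSResourceValues {
  // Placeholder for the AM jar path used by the test harness.
  private static final String APPMASTER_JAR = "path/to/appmaster.jar";

  @Test
  public void testMemoryLargerThanOneGb() throws Exception {
    String[] args = {
        "--jar", APPMASTER_JAR,
        "--num_containers", "1",
        "--shell_command", "ls",
        // values larger than 1GB, in two different notations
        "--container-resource", "memory-mb=8024,vcores=2",
        // hypothetical AM counterpart of the flag above
        "--master-resource", "memory-mb=8Gi,vcores=1"
    };
    Client client = new Client(new Configuration());
    client.init(args);
    // then assert that the parsed container and AM resources match the
    // requested values
  }
}
{code}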

> Support specify values of different resource types in DistributedShell for 
> easier testing
> -
>
> Key: YARN-7242
> URL: https://issues.apache.org/jira/browse/YARN-7242
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Gergely Novák
>Priority: Critical
>  Labels: newbie
> Attachments: YARN-7242.001.patch, YARN-7242.002.patch, 
> YARN-7242.003.patch
>
>
> Currently, DS supports specify resource profile, it's better to allow user to 
> directly specify resource keys/values from command line.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7601) Incorrect container states recovered as LevelDB uses alphabetical order

2017-12-26 Thread Sampada Dehankar (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303705#comment-16303705
 ] 

Sampada Dehankar edited comment on YARN-7601 at 12/26/17 9:47 AM:
--

[~asuresh]: Can you please review this patch? It looks like this particular 
test [TestContainerSchedulerQueuing: testKillOnlyRequiredOpportunisticContainers] 
is flaky - it passes intermittently. After submitting the same patch again, it 
passed.


was (Author: sampada15):
[~asuresh]: Can you please review this patch?

> Incorrect container states recovered as LevelDB uses alphabetical order
> ---
>
> Key: YARN-7601
> URL: https://issues.apache.org/jira/browse/YARN-7601
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sampada Dehankar
>Assignee: Sampada Dehankar
> Attachments: YARN-7601.001.patch, YARN-7601.002.patch
>
>
> LevelDB stores key-value pairs in alphabetical order. The container id 
> concatenated with its state is used as the key. So, no matter which states a 
> container goes through in its life cycle, the states retrieved from LevelDB 
> are always returned in the following order:
> LAUNCHED
> PAUSED
> QUEUED
> For example, if a container is LAUNCHED, then PAUSED, and LAUNCHED again, the 
> recovered container state is currently PAUSED instead of LAUNCHED.
> We propose to store the timestamp as the value when calling
>   
>   storeContainerLaunched
>   storeContainerPaused
>   storeContainerQueued
>   
> so that the correct container state is recovered based on timestamps.
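
A minimal sketch of my reading of the proposal above (not the actual patch): 
keep a timestamp per container/state record, and on recovery pick the state 
with the newest timestamp rather than the alphabetically last key:

{code:java}
import java.util.Map;

enum ContainerState { LAUNCHED, PAUSED, QUEUED }

class ContainerStateRecovery {
  // stateTimestamps: the per-state timestamps read back from the state store.
  static ContainerState recover(Map<ContainerState, Long> stateTimestamps) {
    ContainerState latest = null;
    long latestTs = Long.MIN_VALUE;
    for (Map.Entry<ContainerState, Long> e : stateTimestamps.entrySet()) {
      if (e.getValue() > latestTs) {  // newest write wins
        latestTs = e.getValue();
        latest = e.getKey();
      }
    }
    // e.g. LAUNCHED -> PAUSED -> LAUNCHED now correctly recovers LAUNCHED.
    return latest;
  }
}
{code}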



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7601) Incorrect container states recovered as LevelDB uses alphabetical order

2017-12-26 Thread Sampada Dehankar (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303705#comment-16303705
 ] 

Sampada Dehankar commented on YARN-7601:


[~asuresh]: Can you please review this patch?

> Incorrect container states recovered as LevelDB uses alphabetical order
> ---
>
> Key: YARN-7601
> URL: https://issues.apache.org/jira/browse/YARN-7601
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sampada Dehankar
>Assignee: Sampada Dehankar
> Attachments: YARN-7601.001.patch, YARN-7601.002.patch
>
>
> LevelDB stores key-value pairs in alphabetical order. The container id 
> concatenated with its state is used as the key. So, no matter which states a 
> container goes through in its life cycle, the states retrieved from LevelDB 
> are always returned in the following order:
> LAUNCHED
> PAUSED
> QUEUED
> For example, if a container is LAUNCHED, then PAUSED, and LAUNCHED again, the 
> recovered container state is currently PAUSED instead of LAUNCHED.
> We propose to store the timestamp as the value when calling
>   
>   storeContainerLaunched
>   storeContainerPaused
>   storeContainerQueued
>   
> so that the correct container state is recovered based on timestamps.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7601) Incorrect container states recovered as LevelDB uses alphabetical order

2017-12-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303681#comment-16303681
 ] 

genericqa commented on YARN-7601:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
34s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7601 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12903697/YARN-7601.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 92e34a5a0be1 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 52babbb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19031/testReport/ |
| Max. process+thread count | 440 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19031/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Incorrect container states recovered as LevelDB 

[jira] [Commented] (YARN-6894) RM Apps API returns only active apps when query parameter queue used

2017-12-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303665#comment-16303665
 ] 

genericqa commented on YARN-6894:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
28m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6894 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884053/YARN-6894.002.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 90f64df5c131 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 52babbb |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 302 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19029/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RM Apps API returns only active apps when query parameter queue used
> 
>
> Key: YARN-6894
> URL: https://issues.apache.org/jira/browse/YARN-6894
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, restapi
>Reporter: Grant Sohn
>Assignee: Gergely Novák
>Priority: Minor
> Attachments: YARN-6894.001.patch, YARN-6894.002.patch
>
>
> If you run RM's Cluster Applications API with no query parameters, you get a 
> list of apps.
> If you run RM's Cluster Applications API with any query parameters other than 
> "queue", you get the list of apps with the parameter filters applied.
> However, when you use the "queue" query parameter, you only see the 
> applications that are active in the cluster (NEW, NEW_SAVING, SUBMITTED, 
> ACCEPTED, RUNNING). This behavior is inconsistent with the rest of the API. 
> If there is a sound reason behind this, it should be documented - and there 
> may well be one, as the mapred queue CLI behaves similarly.
> http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Applications_API



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5151) [YARN-3368] Support kill application from new YARN UI

2017-12-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303639#comment-16303639
 ] 

genericqa commented on YARN-5151:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} YARN-5151 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-5151 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855623/YARN-5151.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19030/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Support kill application from new YARN UI
> -
>
> Key: YARN-5151
> URL: https://issues.apache.org/jira/browse/YARN-5151
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Gergely Novák
> Attachments: YARN-5151.001.patch, YARN-5151.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6894) RM Apps API returns only active apps when query parameter queue used

2017-12-26 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303636#comment-16303636
 ] 

Sunil G commented on YARN-6894:
---

Could we say something like the below and append it to the existing 
explanation?

{noformat}
All query parameters for this API filter across all applications. However, the 
`queue` query parameter implicitly filters only on unfinished applications 
that are currently in the given queue.
{noformat}
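
For illustration, a small client against the documented Cluster Applications 
API endpoint (localhost:8088 is the default RM web address; adjust to your 
cluster):

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class QueueFilterDemo {
  static String get(String url) throws IOException {
    HttpURLConnection conn =
        (HttpURLConnection) new URL(url).openConnection();
    try (InputStream in = conn.getInputStream()) {
      return new String(in.readAllBytes());
    }
  }

  public static void main(String[] args) throws IOException {
    // No filter: applications in all states are returned.
    System.out.println(get("http://localhost:8088/ws/v1/cluster/apps"));
    // queue filter: implicitly restricted to unfinished apps in that queue.
    System.out.println(
        get("http://localhost:8088/ws/v1/cluster/apps?queue=default"));
  }
}
{code}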

> RM Apps API returns only active apps when query parameter queue used
> 
>
> Key: YARN-6894
> URL: https://issues.apache.org/jira/browse/YARN-6894
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, restapi
>Reporter: Grant Sohn
>Assignee: Gergely Novák
>Priority: Minor
> Attachments: YARN-6894.001.patch, YARN-6894.002.patch
>
>
> If you run RM's Cluster Applications API with no query parameters, you get a 
> list of apps.
> If you run RM's Cluster Applications API with any query parameters other than 
> "queue", you get the list of apps with the parameter filters applied.
> However, when you use the "queue" query parameter, you only see the 
> applications that are active in the cluster (NEW, NEW_SAVING, SUBMITTED, 
> ACCEPTED, RUNNING). This behavior is inconsistent with the rest of the API. 
> If there is a sound reason behind this, it should be documented - and there 
> may well be one, as the mapred queue CLI behaves similarly.
> http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Applications_API



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7601) Incorrect container states recovered as LevelDB uses alphabetical order

2017-12-26 Thread Sampada Dehankar (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sampada Dehankar updated YARN-7601:
---
Attachment: YARN-7601.002.patch

> Incorrect container states recovered as LevelDB uses alphabetical order
> ---
>
> Key: YARN-7601
> URL: https://issues.apache.org/jira/browse/YARN-7601
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sampada Dehankar
>Assignee: Sampada Dehankar
> Attachments: YARN-7601.001.patch, YARN-7601.002.patch
>
>
> LevelDB stores key-value pairs in alphabetical order. The container id 
> concatenated with its state is used as the key. So, no matter which states a 
> container goes through in its life cycle, the states retrieved from LevelDB 
> are always returned in the following order:
> LAUNCHED
> PAUSED
> QUEUED
> For example, if a container is LAUNCHED, then PAUSED, and LAUNCHED again, the 
> recovered container state is currently PAUSED instead of LAUNCHED.
> We propose to store the timestamp as the value when calling
>   
>   storeContainerLaunched
>   storeContainerPaused
>   storeContainerQueued
>   
> so that the correct container state is recovered based on timestamps.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5151) [YARN-3368] Support kill application from new YARN UI

2017-12-26 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303631#comment-16303631
 ] 

Sunil G commented on YARN-5151:
---

Let's revive this - it's a regression compared to the old UI. [~GergelyNovak], 
could you please help rebase this against the current UI theme?

> [YARN-3368] Support kill application from new YARN UI
> -
>
> Key: YARN-5151
> URL: https://issues.apache.org/jira/browse/YARN-5151
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Gergely Novák
> Attachments: YARN-5151.001.patch, YARN-5151.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org