[jira] [Updated] (YARN-7612) Add Processor Framework for Rich Placement Constraints

2017-12-22 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7612:
--
Summary: Add Processor Framework for Rich Placement Constraints  (was: Add 
Placement Processor Framework)

> Add Processor Framework for Rich Placement Constraints
> --
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, 
> YARN-7612-YARN-6592.008.patch, YARN-7612-YARN-6592.009.patch, 
> YARN-7612-YARN-6592.010.patch, YARN-7612-YARN-6592.011.patch, 
> YARN-7612-YARN-6592.012.patch, YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and place 
> them on nodes.
> The actual planning algorithm(s) will be handled in YARN-7613.
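As a rough, hypothetical illustration of the split the description implies 
(interface and method names below are assumptions, not the actual API, which 
is defined by the patches here and in YARN-7613):
{code}
import java.util.Collection;

/** The pluggable planning piece, to be supplied by YARN-7613. */
interface ConstraintPlacementAlgorithm<Req, Node> {
  // Decide a target node for each request in the batch, reporting
  // placements and rejections through the collector.
  void place(Collection<Req> requests, AlgorithmCollector<Req, Node> collector);
}

/** Callback used by the framework to gather algorithm output. */
interface AlgorithmCollector<Req, Node> {
  void placed(Req request, Node node);  // constraints satisfied on this node
  void rejected(Req request);           // no node satisfies the constraints
}
{code}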






[jira] [Commented] (YARN-7612) Add Placement Processor Framework

2017-12-22 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16302223#comment-16302223
 ] 

Arun Suresh commented on YARN-7612:
---

Thanks for the reviews, [~kkaranasos], [~sunilg] and [~cheersyang].
Committing this shortly (will fix the checkstyle issues as I commit).

> Add Placement Processor Framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, 
> YARN-7612-YARN-6592.008.patch, YARN-7612-YARN-6592.009.patch, 
> YARN-7612-YARN-6592.010.patch, YARN-7612-YARN-6592.011.patch, 
> YARN-7612-YARN-6592.012.patch, YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and place 
> them on nodes.
> The actual planning algorithm(s) will be handled in YARN-7613.






[jira] [Commented] (YARN-7612) Add Placement Processor Framework

2017-12-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16302218#comment-16302218
 ] 

genericqa commented on YARN-7612:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
15s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
26s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
30s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
15s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-6592 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  6s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 7 new + 399 unchanged - 0 fixed = 406 total (was 399) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
8s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 11s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7612 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12903478/YARN-7612-YARN-6592.

[jira] [Commented] (YARN-2185) Use pipes when localizing archives

2017-12-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16302207#comment-16302207
 ] 

genericqa commented on YARN-2185:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
20s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
13s{color} | {color:green} root: The patch generated 0 new + 368 unchanged - 6 
fixed = 368 total (was 374) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
32s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 59s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
6s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
|  |  Switch statement found in 
org.apache.hadoop.yarn.util.FSDownload.unpack(Path, Path, FileSystem, 
FileSystem) where one case falls through to the next case.  At 
FSDownload.java |

[jira] [Comment Edited] (YARN-7612) Add Placement Processor Framework

2017-12-22 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16302197#comment-16302197
 ] 

Konstantinos Karanasos edited comment on YARN-7612 at 12/23/17 4:24 AM:


Thanks [~asuresh], +1 to latest patch.


was (Author: kkaranasos):
Thanks @arun suresh, +1 to latest patch.

> Add Placement Processor Framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, 
> YARN-7612-YARN-6592.008.patch, YARN-7612-YARN-6592.009.patch, 
> YARN-7612-YARN-6592.010.patch, YARN-7612-YARN-6592.011.patch, 
> YARN-7612-YARN-6592.012.patch, YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and place 
> them on nodes.
> The actual planning algorithm(s) will be handled in YARN-7613.






[jira] [Commented] (YARN-7612) Add Placement Processor Framework

2017-12-22 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16302197#comment-16302197
 ] 

Konstantinos Karanasos commented on YARN-7612:
--

Thanks @arun suresh, +1 to latest patch.

> Add Placement Processor Framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, 
> YARN-7612-YARN-6592.008.patch, YARN-7612-YARN-6592.009.patch, 
> YARN-7612-YARN-6592.010.patch, YARN-7612-YARN-6592.011.patch, 
> YARN-7612-YARN-6592.012.patch, YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and place 
> them on nodes.
> The actual planning algorithm(s) will be handled in YARN-7613.






[jira] [Updated] (YARN-7677) HADOOP_CONF_DIR should not be automatically put in task environment

2017-12-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-7677:
---
Hadoop Flags: Incompatible change

> HADOOP_CONF_DIR should not be automatically put in task environment
> ---
>
> Key: YARN-7677
> URL: https://issues.apache.org/jira/browse/YARN-7677
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>
> Currently, {{HADOOP_CONF_DIR}} is being put into the task environment whether 
> it's set by the user or not. It completely bypasses the whitelist and so 
> there is no way for a task to not have {{HADOOP_CONF_DIR}} set. This causes 
> problems in the Docker use case where Docker containers will set up their own 
> environment and have their own {{HADOOP_CONF_DIR}} preset in the image 
> itself. 






[jira] [Commented] (YARN-7590) Improve container-executor validation check

2017-12-22 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16302137#comment-16302137
 ] 

Miklos Szegedi commented on YARN-7590:
--

Thank you for the patch [~eyang]. I have a few style comments:
configuration.c has a new line added by the patch that is not needed.
{code}
fprintf(LOGFILE, "Error checking file stats for %s.\n", nm_root);
{code}
It would be helpful to print the actual error code here for debugging.
{code}
fprintf(LOGFILE, "Permission mismatch for %s for uid: %d.\n", nm_root, 
caller_uid);
{code}
How about printing {{info.st_uid}} as well?
{code}
 if (check != 0 || strstr(container_log_dir, "/../") != 0) {
{code}
It is safer to check for "..", and this check should be in a separate if 
statement with a proper log message to help debugging.



> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7590.001.patch, YARN-7590.002.patch, 
> YARN-7590.003.patch, YARN-7590.004.patch, YARN-7590.005.patch
>
>
> There is minimal checking of the prefix path for container-executor.  If 
> YARN is compromised, an attacker can use container-executor to change the 
> ownership of system files:
> {code}
> /usr/local/hadoop/bin/container-executor spark yarn 0 etc /home/yarn/tokens 
> /home/spark / ls
> {code}
> This will change /etc to be owned by the spark user:
> {code}
> # ls -ld /etc
> drwxr-s---. 110 spark hadoop 8192 Nov 21 20:00 /etc
> {code}
> The spark user can then rewrite files under /etc to gain more access.  We 
> can improve this with additional checks in container-executor:
> # Make sure the prefix path is owned by the same user as the caller of 
> container-executor.
> # Make sure the log directory prefix is owned by the same user as the caller.






[jira] [Comment Edited] (YARN-2185) Use pipes when localizing archives

2017-12-22 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16302123#comment-16302123
 ] 

Miklos Szegedi edited comment on YARN-2185 at 12/23/17 12:10 AM:
-

Attaching my suggestion for how to solve this. The code streams the HDFS data 
as standard input to the tar and gzip commands. It handles Windows as well. In 
addition, I create the temporary directory with permissions 700 instead of 
755. I do not create any additional temporary directories for extraction; one 
is enough. One difference is that I use the jar command for zips as well, so 
that they are handled properly on Windows. I also added a switch to disable 
the modification time check by specifying -1 as the timestamp, and I do a 
parallel copy for directory localization to leverage the distributed storage 
in HDFS.


was (Author: miklos.szeg...@cloudera.com):
Attaching my suggestion for how to solve this. The code streams the HDFS data 
as standard input to the tar and gzip commands. It handles Windows as well. In 
addition, I create temporary files with permissions 700 instead of 755. I do 
not create any additional temporary directories for extraction; one is enough. 
One difference is that I use the jar command for zips as well, so that they 
are handled properly on Windows. I also added a switch to disable the 
modification time check by specifying -1 as the timestamp, and I do a parallel 
copy for directory localization to leverage the distributed storage in HDFS.

> Use pipes when localizing archives
> --
>
> Key: YARN-2185
> URL: https://issues.apache.org/jira/browse/YARN-2185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
> Attachments: YARN-2185.000.patch
>
>
> Currently the nodemanager downloads an archive to a local file, unpacks it, 
> and then removes it.  It would be more efficient to stream the data as it's 
> being unpacked to avoid both the extra disk space requirements and the 
> additional disk activity from storing the archive.






[jira] [Comment Edited] (YARN-2185) Use pipes when localizing archives

2017-12-22 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16302123#comment-16302123
 ] 

Miklos Szegedi edited comment on YARN-2185 at 12/23/17 12:08 AM:
-

Attaching my suggestion for how to solve this. The code streams the HDFS data 
as standard input to the tar and gzip commands. It handles Windows as well. In 
addition, I create temporary files with permissions 700 instead of 755. I do 
not create any additional temporary directories for extraction; one is enough. 
One difference is that I use the jar command for zips as well, so that they 
are handled properly on Windows. I also added a switch to disable the 
modification time check by specifying -1 as the timestamp, and I do a parallel 
copy for directory localization to leverage the distributed storage in HDFS.


was (Author: miklos.szeg...@cloudera.com):
Attaching my suggestion for how to solve this. The code streams the HDFS data 
as standard input to the tar and gzip commands. It handles Windows as well. In 
addition, I create temporary files with permissions 700 instead of 755. I do 
not create any additional temporary directories for extraction; one is enough. 
One difference is that I use the jar command for zips as well, so that they 
are handled properly on Windows. I also added a switch to disable the 
modification time check by specifying -1 as the timestamp.

> Use pipes when localizing archives
> --
>
> Key: YARN-2185
> URL: https://issues.apache.org/jira/browse/YARN-2185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
> Attachments: YARN-2185.000.patch
>
>
> Currently the nodemanager downloads an archive to a local file, unpacks it, 
> and then removes it.  It would be more efficient to stream the data as it's 
> being unpacked to avoid both the extra disk space requirements and the 
> additional disk activity from storing the archive.






[jira] [Updated] (YARN-2185) Use pipes when localizing archives

2017-12-22 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-2185:
-
Attachment: YARN-2185.000.patch

Attaching my suggestion for how to solve this. The code streams the HDFS data 
as standard input to the tar and gzip commands. It handles Windows as well. In 
addition, I create temporary files with permissions 700 instead of 755. I do 
not create any additional temporary directories for extraction; one is enough. 
One difference is that I use the jar command for zips as well, so that they 
are handled properly on Windows. I also added a switch to disable the 
modification time check by specifying -1 as the timestamp.
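To illustrate the approach, here is a minimal sketch (not the actual patch; 
the class name and argument handling are illustrative) of streaming an 
archive from HDFS straight into tar instead of staging it on local disk first:
{code}
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class PipedUnpackSketch {
  public static void main(String[] args) throws Exception {
    Path archive = new Path(args[0]);   // e.g. a .tar.gz resource on HDFS
    Configuration conf = new Configuration();
    FileSystem fs = archive.getFileSystem(conf);

    // "-xz" makes tar do the gunzip step; "-C" selects the target directory.
    Process tar = new ProcessBuilder("tar", "-xz", "-C", args[1])
        .redirectErrorStream(true)
        .start();

    // Stream the HDFS bytes directly into tar's stdin; no local copy of
    // the archive is ever written to disk.
    try (FSDataInputStream in = fs.open(archive);
         OutputStream toTar = tar.getOutputStream()) {
      IOUtils.copyBytes(in, toTar, 4096, false);
    }
    int rc = tar.waitFor();
    if (rc != 0) {
      throw new RuntimeException("tar exited with code " + rc);
    }
  }
}
{code}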

> Use pipes when localizing archives
> --
>
> Key: YARN-2185
> URL: https://issues.apache.org/jira/browse/YARN-2185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
> Attachments: YARN-2185.000.patch
>
>
> Currently the nodemanager downloads an archive to a local file, unpacks it, 
> and then removes it.  It would be more efficient to stream the data as it's 
> being unpacked to avoid both the extra disk space requirements and the 
> additional disk activity from storing the archive.






[jira] [Updated] (YARN-7612) Add Placement Processor Framework

2017-12-22 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7612:
--
Attachment: YARN-7612-YARN-6592.012.patch

Updating patch based on [~kkaranasos]'s suggestions.

> Add Placement Processor Framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, 
> YARN-7612-YARN-6592.008.patch, YARN-7612-YARN-6592.009.patch, 
> YARN-7612-YARN-6592.010.patch, YARN-7612-YARN-6592.011.patch, 
> YARN-7612-YARN-6592.012.patch, YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and place 
> them on nodes.
> The actual planning algorithm(s) will be handled in YARN-7613.






[jira] [Commented] (YARN-7612) Add Placement Processor Framework

2017-12-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16302115#comment-16302115
 ] 

genericqa commented on YARN-7612:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
20s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
23s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
14s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-6592 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  4s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 399 unchanged - 0 fixed = 402 total (was 399) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
4s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 31s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.constraint.TestPlacementProcessor |
|   | hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy 
|
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce S

[jira] [Commented] (YARN-1709) Admission Control: Reservation subsystem

2017-12-22 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16302037#comment-16302037
 ] 

Subru Krishnan commented on YARN-1709:
--

[~xingbao], thanks for your interest. I have responded to you in YARN-1051 
[here|https://issues.apache.org/jira/browse/YARN-1051?focusedCommentId=16302033].

> Admission Control: Reservation subsystem
> 
>
> Key: YARN-1709
> URL: https://issues.apache.org/jira/browse/YARN-1709
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Carlo Curino
>Assignee: Subru Krishnan
> Fix For: 2.6.0
>
> Attachments: YARN-1709.patch, YARN-1709.patch, YARN-1709.patch, 
> YARN-1709.patch, YARN-1709.patch, YARN-1709.patch, YARN-1709.patch
>
>
> This JIRA is about the key data structure used to track resources over time 
> to enable YARN-1051. The Reservation subsystem is conceptually a "plan" of 
> how the scheduler will allocate resources over time.






[jira] [Comment Edited] (YARN-1051) YARN Admission Control/Planner: enhancing the resource allocation model with time.

2017-12-22 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16302033#comment-16302033
 ] 

Subru Krishnan edited comment on YARN-1051 at 12/22/17 10:42 PM:
-

[~xingbao], the behavior depends on whether any job is using more than its 
guaranteed resources on the specific node and on whether preemption is 
enabled in the cluster.

If no job is using excess resources on the specific node, then either:
* relax locality to rack
* wait for one of the running job AMs to release container(s)

If there is at least one job that is using excess resources on the specific 
node, then:
* if preemption is enabled (refer [here | 
http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html#Capacity_Scheduler_container_preemption]
 for how to enable it), the over-allocated container(s) will be preempted
* wait for one of the running job AMs to release container(s)


was (Author: subru):
[~xingbao], the behavior depends on whether any job is using more than its 
guaranteed resources on the specific node and on whether preemption is 
enabled in the cluster.

If no job is using excess resources on the specific node, then either:
* relax locality to rack
* wait for one of the running job AMs to release container(s)

If there is at least one job that is using excess resources on the specific 
node, then:
* if preemption is enabled (refer 
[http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html#Capacity_Scheduler_container_preemption|here]
 for how to enable it), the over-allocated container(s) will be preempted
* wait for one of the running job AMs to release container(s)

> YARN Admission Control/Planner: enhancing the resource allocation model with 
> time.
> --
>
> Key: YARN-1051
> URL: https://issues.apache.org/jira/browse/YARN-1051
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler, resourcemanager, scheduler
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Fix For: 2.6.0
>
> Attachments: YARN-1051-design.pdf, YARN-1051.1.patch, 
> YARN-1051.patch, curino_MSR-TR-2013-108.pdf, socc14-paper15.pdf, 
> techreport.pdf
>
>
> In this umbrella JIRA we propose to extend the YARN RM to handle time 
> explicitly, allowing users to "reserve" capacity over time. This is an 
> important step towards SLAs, long-running services, workflows, and helps 
> with gang scheduling.






[jira] [Commented] (YARN-1051) YARN Admission Control/Planner: enhancing the resource allocation model with time.

2017-12-22 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16302033#comment-16302033
 ] 

Subru Krishnan commented on YARN-1051:
--

[~xingbao], the behavior depends on whether any job is using more than its 
guaranteed resources on the specific node and on whether preemption is 
enabled in the cluster.

If no job is using excess resources on the specific node, then either:
* relax locality to rack
* wait for one of the running job AMs to release container(s)

If there is at least one job that is using excess resources on the specific 
node, then:
* if preemption is enabled (refer 
[http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html#Capacity_Scheduler_container_preemption|here]
 for how to enable it), the over-allocated container(s) will be preempted
* wait for one of the running job AMs to release container(s)
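For reference, CapacityScheduler preemption is typically enabled with 
yarn-site.xml settings along these lines (a minimal sketch; treat the linked 
documentation above as authoritative for the property names and values):
{code}
<!-- Enable the scheduler monitor that drives preemption -->
<property>
  <name>yarn.resourcemanager.scheduler.monitor.enable</name>
  <value>true</value>
</property>
<!-- Use the proportional capacity preemption policy -->
<property>
  <name>yarn.resourcemanager.scheduler.monitor.policies</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy</value>
</property>
{code}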

> YARN Admission Control/Planner: enhancing the resource allocation model with 
> time.
> --
>
> Key: YARN-1051
> URL: https://issues.apache.org/jira/browse/YARN-1051
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler, resourcemanager, scheduler
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Fix For: 2.6.0
>
> Attachments: YARN-1051-design.pdf, YARN-1051.1.patch, 
> YARN-1051.patch, curino_MSR-TR-2013-108.pdf, socc14-paper15.pdf, 
> techreport.pdf
>
>
> In this umbrella JIRA we propose to extend the YARN RM to handle time 
> explicitly, allowing users to "reserve" capacity over time. This is an 
> important step towards SLAs, long-running services, workflows, and helps 
> with gang scheduling.






[jira] [Commented] (YARN-7590) Improve container-executor validation check

2017-12-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16302020#comment-16302020
 ] 

genericqa commented on YARN-7590:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
30m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
58s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7590 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12903462/YARN-7590.005.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux a5d83844ddc3 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 52babbb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19019/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19019/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7590.001.patch, YARN-7590.002.patch, 
> YARN-7590.003.patch, YARN-7590.004.patch, YARN-7590.005.patch
>
>
> There is minimal checking of the prefix path for container-executor.  If 
> YARN is compromised, an attacker can use container-executor to change the 
> ownership of system files:
> {code}

[jira] [Commented] (YARN-7612) Add Placement Processor Framework

2017-12-22 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16302013#comment-16302013
 ] 

Konstantinos Karanasos commented on YARN-7612:
--

Thanks for the patch [~asuresh]. Some comments:
* We might not need the volatile in the RMContainerImpl.
* Is the CapacityScheduler the only place we need to set the allocation tags? 
Probably, but just checking.
* Do we need the NodeCandidateSelector?
* {{PlacementDispatcher}}:
** What is the difference between dispatch and addRequests? Do we need both?
** It seems that in pullPlacedRequests() and pullRejectedRequests() you need 
to clean up the entry for that appID (see the sketch after this list). Also, 
you can avoid the lambdas (an addAll will do), but you can keep them if you 
prefer.
** algorithm.place(requests, this); maybe you can make the algorithm return 
what is needed instead of passing {{this}}?
** We need some comments on the class and its methods.
* PlacementProcessor:
** I think we should unify dispatchRequestsForPlacement and 
reDispatchRetryableRequests. This is related to the comment above that we 
should not have both dispatch and addRequests in the PlacementDispatcher.
** Why do we add to the blacklist immediately after a retry? Maybe we should 
blacklist only after N attempts?
** LOG.info("Constraints added for application [ {}] against tags [ {}]"): 
this will print all constraints and mappings, not just the tags.
* [ typo ] ApplicationMasterService: "Ensure only single instance of 
PlacementProcessor in included" should read "is included".
* [ typo ] BatchedRequests: are -> as
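A hypothetical sketch of the pull-side cleanup suggested above (type and 
field names are assumptions, not the actual patch): remove the application's 
entry while pulling, so placed requests are not handed out twice.
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class PlacementDispatcherSketch<AppId, Req> {
  private final ConcurrentMap<AppId, List<Req>> placedRequests =
      new ConcurrentHashMap<>();

  List<Req> pullPlacedRequests(AppId appId) {
    // remove() fetches the pending list and cleans up the map entry in
    // one atomic step, so a second pull for the same app returns empty.
    List<Req> placed = placedRequests.remove(appId);
    return placed != null ? placed : new ArrayList<>();
  }
}
{code}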


> Add Placement Processor Framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, 
> YARN-7612-YARN-6592.008.patch, YARN-7612-YARN-6592.009.patch, 
> YARN-7612-YARN-6592.010.patch, YARN-7612-YARN-6592.011.patch, 
> YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and places 
> them on nodes.
> The actual planning algorithm(s) will be handled in a YARN-7613.






[jira] [Commented] (YARN-6596) Introduce Placement Constraint Manager module

2017-12-22 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301984#comment-16301984
 ] 

Konstantinos Karanasos commented on YARN-6596:
--

Thanks [~asuresh], as well as [~sunilg] and [~cheersyang] for the feedback.

> Introduce Placement Constraint Manager module
> -
>
> Key: YARN-6596
> URL: https://issues.apache.org/jira/browse/YARN-6596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 3.1.0
>
> Attachments: YARN-6596-YARN-6592.001.patch, 
> YARN-6596-YARN-6592.002.patch, YARN-6596-YARN-6592.003.patch
>
>
> This RM module will be responsible for storing placement constraints, 
> allocation tags, and node attributes.
> It will be used when determining the placement of SchedulingRequests with 
> constraints.






[jira] [Commented] (YARN-6596) Introduce Placement Constraint Manager module

2017-12-22 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301973#comment-16301973
 ] 

Arun Suresh commented on YARN-6596:
---

Committed to branch YARN-6592

> Introduce Placement Constraint Manager module
> -
>
> Key: YARN-6596
> URL: https://issues.apache.org/jira/browse/YARN-6596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 3.1.0
>
> Attachments: YARN-6596-YARN-6592.001.patch, 
> YARN-6596-YARN-6592.002.patch, YARN-6596-YARN-6592.003.patch
>
>
> This RM module will be responsible for storing placement constraints, 
> allocation tags, and node attributes.
> It will be used when determining the placement of SchedulingRequests with 
> constraints.






[jira] [Commented] (YARN-7590) Improve container-executor validation check

2017-12-22 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301970#comment-16301970
 ] 

Eric Yang commented on YARN-7590:
-

[~miklos.szeg...@cloudera.com] Thank you for the feedback; I revised the patch 
accordingly. I am going to take time off next week. If there are any further 
improvements to be made, let's sync up after the New Year. Merry Christmas, 
and Happy New Year.

> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7590.001.patch, YARN-7590.002.patch, 
> YARN-7590.003.patch, YARN-7590.004.patch, YARN-7590.005.patch
>
>
> There is minimal checking of the prefix path for container-executor.  If 
> YARN is compromised, an attacker can use container-executor to change the 
> ownership of system files:
> {code}
> /usr/local/hadoop/bin/container-executor spark yarn 0 etc /home/yarn/tokens 
> /home/spark / ls
> {code}
> This will change /etc to be owned by the spark user:
> {code}
> # ls -ld /etc
> drwxr-s---. 110 spark hadoop 8192 Nov 21 20:00 /etc
> {code}
> The spark user can then rewrite files under /etc to gain more access.  We 
> can improve this with additional checks in container-executor:
> # Make sure the prefix path is owned by the same user as the caller of 
> container-executor.
> # Make sure the log directory prefix is owned by the same user as the caller.






[jira] [Updated] (YARN-7590) Improve container-executor validation check

2017-12-22 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7590:

Attachment: YARN-7590.005.patch

- Added a directory permission check before creating directories.
- Reused nm_uid instead of introducing a new variable that does the same thing.
- Added detection of attempts to create a directory above the parent prefix.

> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7590.001.patch, YARN-7590.002.patch, 
> YARN-7590.003.patch, YARN-7590.004.patch, YARN-7590.005.patch
>
>
> There is minimal checking of the prefix path for container-executor.  If 
> YARN is compromised, an attacker can use container-executor to change the 
> ownership of system files:
> {code}
> /usr/local/hadoop/bin/container-executor spark yarn 0 etc /home/yarn/tokens 
> /home/spark / ls
> {code}
> This will change /etc to be owned by the spark user:
> {code}
> # ls -ld /etc
> drwxr-s---. 110 spark hadoop 8192 Nov 21 20:00 /etc
> {code}
> The spark user can then rewrite files under /etc to gain more access.  We 
> can improve this with additional checks in container-executor:
> # Make sure the prefix path is owned by the same user as the caller of 
> container-executor.
> # Make sure the log directory prefix is owned by the same user as the caller.






[jira] [Comment Edited] (YARN-6596) Introduce Placement Constraint Manager module

2017-12-22 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301966#comment-16301966
 ] 

Arun Suresh edited comment on YARN-6596 at 12/22/17 9:17 PM:
-

Thanks for the update, [~kkaranasos].
+1. Committing this shortly. (Will fix the remaining checkstyle issues as I 
commit.)


was (Author: asuresh):
Thanks for the update, [~kkaranasos].
+1. Committing this shortly.

> Introduce Placement Constraint Manager module
> -
>
> Key: YARN-6596
> URL: https://issues.apache.org/jira/browse/YARN-6596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6596-YARN-6592.001.patch, 
> YARN-6596-YARN-6592.002.patch, YARN-6596-YARN-6592.003.patch
>
>
> This RM module will be responsible for storing placement constraints, 
> allocation tags, and node attributes.
> It will be used when determining the placement of SchedulingRequests with 
> constraints.






[jira] [Commented] (YARN-6596) Introduce Placement Constraint Manager module

2017-12-22 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301966#comment-16301966
 ] 

Arun Suresh commented on YARN-6596:
---

Thanks for the update, [~kkaranasos].
+1. Committing this shortly.

> Introduce Placement Constraint Manager module
> -
>
> Key: YARN-6596
> URL: https://issues.apache.org/jira/browse/YARN-6596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6596-YARN-6592.001.patch, 
> YARN-6596-YARN-6592.002.patch, YARN-6596-YARN-6592.003.patch
>
>
> This RM module will be responsible for storing placement constraints, 
> allocation tags, and node attributes.
> It will be used when determining the placement of SchedulingRequests with 
> constraints.






[jira] [Commented] (YARN-6596) Introduce Placement Constraint Manager module

2017-12-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301950#comment-16301950
 ] 

genericqa commented on YARN-6596:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
33s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 3 new + 55 unchanged - 0 fixed = 58 total (was 55) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 28s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6596 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12903447/YARN-6596-YARN-6592.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9e3d1269f710 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-6592 / 5b0ea0f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19018/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourceman

[jira] [Commented] (YARN-7605) Implement doAs for Api Service REST API

2017-12-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301946#comment-16301946
 ] 

genericqa commented on YARN-7605:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  1s{color} | {color:orange} root: The patch generated 2 new + 158 unchanged 
- 2 fixed = 160 total (was 160) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 10s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 17s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
12s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 64m 
51s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
3s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}179m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem |
|   | hadoop.ipc.TestProtoBufRpc |
|   | hadoop.ipc.TestCallQueueManager |
|   | hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce

[jira] [Commented] (YARN-5366) Improve handling of the Docker container life cycle

2017-12-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301864#comment-16301864
 ] 

genericqa commented on YARN-5366:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
50s{color} | {color:red} root in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-yarn in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
6s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  8m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  8m 34s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 80 new + 0 unchanged - 
0 fixed = 80 total (was 0) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 32 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 384 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
58s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
47s{color} | {color:

[jira] [Assigned] (YARN-7451) Resources Types should be visible in the Cluster Apps API "resourceRequests" section

2017-12-22 Thread Szilard Nemeth (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth reassigned YARN-7451:


Assignee: Yufei Gu  (was: Szilard Nemeth)

> Resources Types should be visible in the Cluster Apps API "resourceRequests" 
> section
> 
>
> Key: YARN-7451
> URL: https://issues.apache.org/jira/browse/YARN-7451
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, restapi
>Affects Versions: 3.0.0
>Reporter: Grant Sohn
>Assignee: Yufei Gu
>
> When running jobs that request resource types the RM Cluster Apps API should 
> include this in the "resourceRequests" object.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7451) Resources Types should be visible in the Cluster Apps API "resourceRequests" section

2017-12-22 Thread Szilard Nemeth (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth reassigned YARN-7451:


Assignee: Szilard Nemeth  (was: Yufei Gu)

> Resources Types should be visible in the Cluster Apps API "resourceRequests" 
> section
> 
>
> Key: YARN-7451
> URL: https://issues.apache.org/jira/browse/YARN-7451
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, restapi
>Affects Versions: 3.0.0
>Reporter: Grant Sohn
>Assignee: Szilard Nemeth
>
> When running jobs that request resource types the RM Cluster Apps API should 
> include this in the "resourceRequests" object.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6596) Introduce Placement Constraint Manager module

2017-12-22 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301804#comment-16301804
 ] 

Konstantinos Karanasos commented on YARN-6596:
--

Thanks [~sunilg] for the comments.
bq. When global constraints are added, is it possible that they contradict 
some app-specific constraints? In that case, how will the app owner know 
about it? I was suggesting an interface API for app owners to fetch the 
global constraints and place new requests based on them. Please correct me 
if I have misunderstood this case.
Good point, yes, conflicts can happen. The way I have it in mind is that when 
you are about to place an application, for a given tag you will also get the 
global constraints. As a follow-up I will create some transformations that 
merge the app-specific and the global constraints. In case of conflicts, I 
think we should just deny the placement (or introduce other 
conflict-resolution strategies, like "the global constraint always wins"). 
But given the current API, the application could instead request the global 
constraints for this tag and act accordingly (e.g., relaxing its own 
constraints) to avoid conflicts. Does that make sense?
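Purely as an illustration of that follow-up, the merge could be shaped like 
this (a minimal sketch; the conflicts() check is a hypothetical placeholder, 
not part of this patch):
{code}
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import org.apache.hadoop.yarn.exceptions.YarnException;

final class ConstraintMergeSketch {
  /** Hypothetical conflict test; a real one would compare constraint trees. */
  static boolean conflicts(PlacementConstraint app, PlacementConstraint global) {
    return false; // placeholder only
  }

  /** Merge the app-specific and global constraints for one tag. */
  static PlacementConstraint merge(PlacementConstraint app,
      PlacementConstraint global) throws YarnException {
    if (global == null) {
      return app;
    }
    if (app == null) {
      return global;
    }
    if (conflicts(app, global)) {
      // Default strategy: deny the placement. A "global always wins"
      // strategy would return global here instead.
      throw new YarnException("App constraint conflicts with a global one");
    }
    return app; // no conflict: the app-specific constraint can be applied
  }
}
{code}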

bq. {{Map<Set<String>, PlacementConstraint> getConstraints(ApplicationId 
appId);}} looks like a complicated return value. It is a set of allocation 
tags corresponding to one constraint, correct?
A map is actually needed here, because you need a list of pairs from tags to 
constraints. For example, for an HBase app the first pair could be 
"hbase-master" -> "node anti-affinity with hbase-sec", and the second 
"hbase-rs" -> "rack affinity with hbase-rs".

bq. Multiple source tags mapping to one constraint are not supported in this 
patch, correct?
That is right. Initially I was thinking of adding support for sets of tags 
(with a trie-like data structure), but I think it is better to have a first 
end-to-end version without many complications. Once we have that, I will be 
happy to extend this. Even in the current version, you could bypass the 
single-tag limitation by concatenating tags, but I think single tags will be 
sufficient in almost all cases. I kept the API as a set of tags, though, so 
that we can easily extend it.

> Introduce Placement Constraint Manager module
> -
>
> Key: YARN-6596
> URL: https://issues.apache.org/jira/browse/YARN-6596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6596-YARN-6592.001.patch, 
> YARN-6596-YARN-6592.002.patch, YARN-6596-YARN-6592.003.patch
>
>
> This RM module will be responsible for storing placement constraints, 
> allocation tags, and node attributes.
> It will be used when determining the placement of SchedulingRequests with 
> constraints.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6596) Introduce Placement Constraint Manager module

2017-12-22 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-6596:
-
Attachment: YARN-6596-YARN-6592.003.patch

New version of the patch to fix the checkstyle issue and remove the 
TestMemoryPlacementConstraintManager class, given that we have only one 
implementation at the moment (this also fixes the commented-out code issue 
that [~sunilg] mentioned).

> Introduce Placement Constraint Manager module
> -
>
> Key: YARN-6596
> URL: https://issues.apache.org/jira/browse/YARN-6596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6596-YARN-6592.001.patch, 
> YARN-6596-YARN-6592.002.patch, YARN-6596-YARN-6592.003.patch
>
>
> This RM module will be responsible for storing placement constraints, 
> allocation tags, and node attributes.
> It will be used when determining the placement of SchedulingRequests with 
> constraints.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7612) Add Placement Processor Framework

2017-12-22 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7612:
--
Attachment: YARN-7612-YARN-6592.011.patch

Updating patch (v011):

* Addressed some of [~sunilg]'s comments
* Fixed some checkstyle issues and test-case failures.

I decided not to move SampleAlgorithm to test, since I would like to have a 
default Algorithm (it does simple anti-affinity, after all). In any case, 
YARN-7613 will have a proper implementation that will replace this.

I will submit the patch once YARN-6596 is committed.

> Add Placement Processor Framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, 
> YARN-7612-YARN-6592.008.patch, YARN-7612-YARN-6592.009.patch, 
> YARN-7612-YARN-6592.010.patch, YARN-7612-YARN-6592.011.patch, 
> YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and places 
> them on nodes.
> The actual planning algorithm(s) will be handled in a YARN-7613.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5366) Improve handling of the Docker container life cycle

2017-12-22 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-5366:
--
Attachment: (was: YARN-5366.009.patch)

> Improve handling of the Docker container life cycle
> ---
>
> Key: YARN-5366
> URL: https://issues.apache.org/jira/browse/YARN-5366
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>  Labels: oct16-medium
> Attachments: YARN-5366.001.patch, YARN-5366.002.patch, 
> YARN-5366.003.patch, YARN-5366.004.patch, YARN-5366.005.patch, 
> YARN-5366.006.patch, YARN-5366.007.patch, YARN-5366.008.patch, 
> YARN-5366.009.patch
>
>
> There are several paths that need to be improved with regard to the Docker 
> container lifecycle when running Docker containers on YARN.
> 1) Provide the ability to keep a container on the NodeManager for a set 
> period of time for debugging purposes.
> 2) Support sending signals to the process in the container to allow for 
> triggering stack traces, heap dumps, etc.
> 3) Support for Docker's live restore, which means moving away from the use of 
> {{docker wait}}. (YARN-5818)
> 4) Improve the resiliency of liveliness checks (kill -0) by adding retries.
> 5) Improve the resiliency of container removal by adding retries.
> 6) Only attempt to stop, kill, and remove containers if the current container 
> state allows for it.
> 7) Better handling of short lived containers when the container is stopped 
> before the PID can be retrieved. (YARN-6305)
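For items 4 and 5 above, the shape of the retry logic could be roughly as 
follows (an illustrative sketch; the helper name and parameters are not from 
any attached patch):
{code}
import java.io.IOException;

final class LivenessCheckSketch {
  /** Retry a liveness probe (kill -0 semantics) a few times before giving
   *  up, so a transient failure is not reported as a dead container. */
  static boolean isAlive(String pid, int retries, long backoffMs)
      throws IOException, InterruptedException {
    for (int attempt = 0; attempt < retries; attempt++) {
      // "kill -0" sends no signal; exit code 0 means the process exists
      Process probe = new ProcessBuilder("kill", "-0", pid).start();
      if (probe.waitFor() == 0) {
        return true;
      }
      Thread.sleep(backoffMs);
    }
    return false;
  }
}
{code}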



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5366) Improve handling of the Docker container life cycle

2017-12-22 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-5366:
--
Attachment: YARN-5366.009.patch

> Improve handling of the Docker container life cycle
> ---
>
> Key: YARN-5366
> URL: https://issues.apache.org/jira/browse/YARN-5366
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>  Labels: oct16-medium
> Attachments: YARN-5366.001.patch, YARN-5366.002.patch, 
> YARN-5366.003.patch, YARN-5366.004.patch, YARN-5366.005.patch, 
> YARN-5366.006.patch, YARN-5366.007.patch, YARN-5366.008.patch, 
> YARN-5366.009.patch
>
>
> There are several paths that need to be improved with regard to the Docker 
> container lifecycle when running Docker containers on YARN.
> 1) Provide the ability to keep a container on the NodeManager for a set 
> period of time for debugging purposes.
> 2) Support sending signals to the process in the container to allow for 
> triggering stack traces, heap dumps, etc.
> 3) Support for Docker's live restore, which means moving away from the use of 
> {{docker wait}}. (YARN-5818)
> 4) Improve the resiliency of liveliness checks (kill -0) by adding retries.
> 5) Improve the resiliency of container removal by adding retries.
> 6) Only attempt to stop, kill, and remove containers if the current container 
> state allows for it.
> 7) Better handling of short lived containers when the container is stopped 
> before the PID can be retrieved. (YARN-6305)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7605) Implement doAs for Api Service REST API

2017-12-22 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7605:

Attachment: YARN-7605.009.patch

- Fixed a bug where pseudo security uses doAs=username to allow 
impersonation. This allows the YARN framework to impersonate other Unix 
users if the Linux container executor is in use.
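The underlying mechanism is the standard Hadoop proxy-user pattern; roughly 
(a sketch only, the exact wiring through ServiceClient in the patch may 
differ):
{code}
import java.io.IOException;
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.service.api.records.Service;
import org.apache.hadoop.yarn.service.client.ServiceClient;

final class DoAsSketch {
  /** Submit the service as the authenticated end user, not as "yarn". */
  static ApplicationId createAs(String remoteUser, ServiceClient client,
      Service service) throws IOException, InterruptedException {
    // The daemon (real) user impersonates the end user from the REST request.
    UserGroupInformation proxy = UserGroupInformation.createProxyUser(
        remoteUser, UserGroupInformation.getLoginUser());
    return proxy.doAs((PrivilegedExceptionAction<ApplicationId>)
        () -> client.actionCreate(service));
  }
}
{code}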

> Implement doAs for Api Service REST API
> ---
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch, 
> YARN-7605.008.patch, YARN-7605.009.patch
>
>
> In YARN-7540, all client entry points for the API service were centralized 
> to use the REST API instead of direct file system and resource manager RPC 
> calls.  This change helped centralize YARN metadata under the yarn user 
> instead of crawling through every user's home directory to find it.  The 
> next step is to make sure "doAs" calls work properly for the API service.  
> The metadata is stored by the yarn user, but the actual workload still 
> needs to be performed as the end user, hence the API service must 
> authenticate the end user's Kerberos credential and perform a doAs call 
> when requesting containers via ServiceClient.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7612) Add Placement Processor Framework

2017-12-22 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301708#comment-16301708
 ] 

Sunil G commented on YARN-7612:
---

Yes, gotcha. It is possible to have some entries in that list in that case. 
Thanks [~asuresh]

> Add Placement Processor Framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, 
> YARN-7612-YARN-6592.008.patch, YARN-7612-YARN-6592.009.patch, 
> YARN-7612-YARN-6592.010.patch, YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and places 
> them on nodes.
> The actual planning algorithm(s) will be handled in a YARN-7613.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7612) Add Placement Processor Framework

2017-12-22 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301699#comment-16301699
 ] 

Arun Suresh commented on YARN-7612:
---

bq. Then schedulePlacedRequests in turn calls addToRetryList, which again 
picks up the same requests in one call flow? My question was to remove the 
second check in the addToRetryList method.
Ah, got your point. But if you notice, addToRetryList is called from within 
handleSchedulingResponse, which is actually invoked asynchronously (calls to 
the scheduler are asynchronous and sent as tasks to the schedulingThreadPool, 
lines 216 - 228 in the v010 patch). This means we can't be sure at that time 
that the retryList is clear, so we need both checks.
Hope that makes sense.
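To illustrate, a stripped-down sketch (not the actual patch code; the names 
mirror the methods discussed above):
{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
import org.apache.hadoop.yarn.api.records.SchedulingRequest;

final class RetryListSketch {
  private final Map<ApplicationAttemptId, List<SchedulingRequest>>
      requestsToRetry = new ConcurrentHashMap<>();

  /** Called synchronously at the start of allocate(): drain the retry list. */
  List<SchedulingRequest> reDispatchRetryableRequests(
      ApplicationAttemptId appAttempt) {
    List<SchedulingRequest> drained = requestsToRetry.remove(appAttempt);
    return drained != null ? drained : Collections.emptyList();
  }

  /** Called from the async scheduling-response handler: responses can arrive
   *  at any time, so this cannot assume the list is still clear and must
   *  (re-)create the bucket on demand; hence the second check. */
  void addToRetryList(ApplicationAttemptId appAttempt, SchedulingRequest req) {
    requestsToRetry.computeIfAbsent(appAttempt,
        k -> Collections.synchronizedList(new ArrayList<>())).add(req);
  }
}
{code}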

> Add Placement Processor Framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, 
> YARN-7612-YARN-6592.008.patch, YARN-7612-YARN-6592.009.patch, 
> YARN-7612-YARN-6592.010.patch, YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and places 
> them on nodes.
> The actual planning algorithm(s) will be handled in a YARN-7613.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7612) Add Placement Processor Framework

2017-12-22 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301676#comment-16301676
 ] 

Sunil G edited comment on YARN-7612 at 12/22/17 5:14 PM:
-

Thanks [~asuresh]
bq. That should be OK, right? If it gets cleared up, it will be picked up in 
the next allocate call.
So the code is something like this:
{code}
public void allocate(ApplicationAttemptId appAttemptId,
    AllocateRequest request, AllocateResponse response) throws YarnException {
  List<SchedulingRequest> schedulingRequests =
      request.getSchedulingRequests();
  dispatchRequestsForPlacement(appAttemptId, schedulingRequests);
  reDispatchRetryableRequests(appAttemptId);
  schedulePlacedRequests(appAttemptId);
{code}
Here reDispatchRetryableRequests clears the contents of requestsToRetry. Then 
*schedulePlacedRequests* in turn calls addToRetryList, which again picks up 
the same requests in one call flow? My question was to remove the second 
check in the addToRetryList method.

bq. I was hoping to handle all these efficiency improvements in a separate 
JIRA (as I am sure more will pop up once we start doing scalability tests)
Yes, makes sense to me.


was (Author: sunilg):
Thanks [~asuresh]
bq. That should be OK, right? If it gets cleared up, it will be picked up in 
the next allocate call.
So the code is something like this:
{code}
public void allocate(ApplicationAttemptId appAttemptId,
    AllocateRequest request, AllocateResponse response) throws YarnException {
  List<SchedulingRequest> schedulingRequests =
      request.getSchedulingRequests();
  dispatchRequestsForPlacement(appAttemptId, schedulingRequests);
  reDispatchRetryableRequests(appAttemptId);
  schedulePlacedRequests(appAttemptId);
{code}
Here reDispatchRetryableRequests clears the contents of requestsToRetry. Then 
*schedulePlacedRequests* in turn calls addToRetryList, which again picks up 
the same requests in one call flow? My question was to remove the second 
check in the addToRetryList method.

> Add Placement Processor Framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, 
> YARN-7612-YARN-6592.008.patch, YARN-7612-YARN-6592.009.patch, 
> YARN-7612-YARN-6592.010.patch, YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and places 
> them on nodes.
> The actual planning algorithm(s) will be handled in a YARN-7613.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7612) Add Placement Processor Framework

2017-12-22 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301676#comment-16301676
 ] 

Sunil G commented on YARN-7612:
---

Thanks [~asuresh]
bq. That should be OK, right? If it gets cleared up, it will be picked up in 
the next allocate call.
So the code is something like this:
{code}
public void allocate(ApplicationAttemptId appAttemptId,
    AllocateRequest request, AllocateResponse response) throws YarnException {
  List<SchedulingRequest> schedulingRequests =
      request.getSchedulingRequests();
  dispatchRequestsForPlacement(appAttemptId, schedulingRequests);
  reDispatchRetryableRequests(appAttemptId);
  schedulePlacedRequests(appAttemptId);
{code}
Here reDispatchRetryableRequests clears the contents of requestsToRetry. Then 
*schedulePlacedRequests* in turn calls addToRetryList, which again picks up 
the same requests in one call flow? My question was to remove the second 
check in the addToRetryList method.

> Add Placement Processor Framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, 
> YARN-7612-YARN-6592.008.patch, YARN-7612-YARN-6592.009.patch, 
> YARN-7612-YARN-6592.010.patch, YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and places 
> them on nodes.
> The actual planning algorithm(s) will be handled in a YARN-7613.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7612) Add Placement Processor Framework

2017-12-22 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301671#comment-16301671
 ] 

Arun Suresh commented on YARN-7612:
---

Thanks for the review, [~sunilg].

bq. I had one smaller concern here. I think it's better to handle a duplicate 
check, given that yarn.resourcemanager.application-master-service.processors 
could also be configured with the new PlacementProcessor
Ah, good point. Will add a check in the next patch to ensure the user cannot 
add it.

bq. So all rejectedRequests are going in one batch. This seems fine. But in 
cases where we have too many placement failures, do you see a corner case 
where the number of requests in a batch gets too big?
Yup, that could happen if, say, the app sends a SchedulingRequest with a very 
large numAllocations (with anti-affinity but far fewer nodes in the cluster). 
I was planning on bunching SchedulingRequests with the same properties 
(priority, allocationReqId, ExecType, Resources, etc.) by doing a 
numAllocations increment and retaining the same SchedulingRequest object. I 
was hoping to handle all these efficiency improvements in a separate JIRA (as 
I am sure more will pop up once we start doing scalability tests).
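A rough sketch of that bunching idea (illustrative only, not from the patch; 
it assumes the record types' equals/hashCode make them usable as a grouping 
key):
{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.yarn.api.records.ResourceSizing;
import org.apache.hadoop.yarn.api.records.SchedulingRequest;

final class RequestBuncher {
  /** Collapse rejected requests that share the same properties into a single
   *  SchedulingRequest by summing their numAllocations. */
  static List<SchedulingRequest> bunch(List<SchedulingRequest> rejected) {
    Map<List<Object>, SchedulingRequest> byKey = new LinkedHashMap<>();
    for (SchedulingRequest req : rejected) {
      List<Object> key = Arrays.<Object>asList(req.getPriority(),
          req.getAllocationRequestId(), req.getExecutionType(),
          req.getAllocationTags(), req.getResourceSizing().getResources());
      SchedulingRequest existing = byKey.putIfAbsent(key, req);
      if (existing != null) {
        ResourceSizing sizing = existing.getResourceSizing();
        sizing.setNumAllocations(sizing.getNumAllocations()
            + req.getResourceSizing().getNumAllocations());
      }
    }
    return new ArrayList<>(byKey.values());
  }
}
{code}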

bq. In PlacementProcessor, schedulePlacedRequests is invoked after 
reDispatchRetryableRequests. So *requestsToRetry* will be mostly cleaned up, 
correct? In that case, does addToRetryList need to handle it again?
That should be OK, right? If it gets cleared up, it will be picked up in the 
next allocate call.

Regarding the code snippet in the SampleAlgorithm: yeah, it looks confusing. 
But like I mentioned, the sample algorithm is just there to verify things 
end-to-end, not to be used in production, and it makes certain assumptions 
about the PlacementConstraint. I've added a comment there. Maybe I will just 
move it to the test package and use it for the tests.



> Add Placement Processor Framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, 
> YARN-7612-YARN-6592.008.patch, YARN-7612-YARN-6592.009.patch, 
> YARN-7612-YARN-6592.010.patch, YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and places 
> them on nodes.
> The actual planning algorithm(s) will be handled in a YARN-7613.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7612) Add Placement Processor Framework

2017-12-22 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301638#comment-16301638
 ] 

Sunil G commented on YARN-7612:
---

Thanks [~asuresh]

bq. But currently, and to keep things less complicated, I just assumed the 
app can control this with the presence or absence of the PlacementConstraint 
mappings in the register call.
I had one smaller concern here. I think it's better to handle a duplicate 
check, given that {{yarn.resourcemanager.application-master-service.processors}} 
could also be configured with the new {{PlacementProcessor}}.

# So all rejectedRequests are going in one batch. This seems fine. But in 
cases where we have too many placement failures, do you see a corner case 
where the number of requests in a batch gets too big?
# In PlacementProcessor, schedulePlacedRequests is invoked after 
*reDispatchRetryableRequests*. So *requestsToRetry* will be mostly cleaned 
up, correct? In that case, does addToRetryList need to handle it again?
# In the code below from SamplePlacementAlgorithm#place, does it create a bit 
of confusion that we always take from the head? (See the sketch after this 
list.)
{code}
String targetTag =
    targetConstraint.getTargetExpressions().iterator().next()
        .getTargetValues().iterator().next();
{code}
Also, in the same method, there are some TODO comments on exception handling. 
Are those to be handled in this patch itself or later?
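For comparison, a version that does not depend on taking the head could look 
like this (a sketch only):
{code}
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.yarn.api.resource.PlacementConstraint.SingleConstraint;
import org.apache.hadoop.yarn.api.resource.PlacementConstraint.TargetExpression;

final class TargetTagSketch {
  /** Collect every target value instead of only the first one. */
  static Set<String> allTargetTags(SingleConstraint targetConstraint) {
    Set<String> tags = new HashSet<>();
    for (TargetExpression expr : targetConstraint.getTargetExpressions()) {
      tags.addAll(expr.getTargetValues());
    }
    return tags;
  }
}
{code}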

> Add Placement Processor Framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, 
> YARN-7612-YARN-6592.008.patch, YARN-7612-YARN-6592.009.patch, 
> YARN-7612-YARN-6592.010.patch, YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and places 
> them on nodes.
> The actual planning algorithm(s) will be handled in a YARN-7613.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7679) Application blacklist to drop misbehave AM requests

2017-12-22 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301611#comment-16301611
 ] 

Arun Suresh commented on YARN-7679:
---

Thanks for starting the discussion. We actually introduced YARN-6355 to solve 
a similar problem. Essentially, we can use the pre-processing framework to 
add a processor that limits requests from misbehaving apps. This can be done 
in the RM.
In production, we actually use an AMRMProxy plugin, introduced in YARN-2884 
and installed on the NM, to intercept AM calls to the RM and perform the 
request limiting, since we are already using the AMRMProxy for federation.
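A minimal sketch of such a plugin (the cap and the trimming policy here are 
made up; only the interceptor hooks come from YARN-2884):
{code}
import java.io.IOException;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterRequest;
import org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterResponse;
import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterRequest;
import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse;
import org.apache.hadoop.yarn.exceptions.YarnException;
import org.apache.hadoop.yarn.server.nodemanager.amrmproxy.AbstractRequestInterceptor;

/** Caps the number of asks a misbehaving AM can forward to the RM. */
public class RequestLimitInterceptor extends AbstractRequestInterceptor {
  private static final int MAX_ASKS_PER_HEARTBEAT = 500; // illustrative limit

  @Override
  public AllocateResponse allocate(AllocateRequest request)
      throws YarnException, IOException {
    if (request.getAskList() != null
        && request.getAskList().size() > MAX_ASKS_PER_HEARTBEAT) {
      // Trim the excess asks instead of forwarding them to the RM.
      request.setAskList(
          request.getAskList().subList(0, MAX_ASKS_PER_HEARTBEAT));
    }
    return getNextInterceptor().allocate(request);
  }

  @Override
  public RegisterApplicationMasterResponse registerApplicationMaster(
      RegisterApplicationMasterRequest request)
      throws YarnException, IOException {
    return getNextInterceptor().registerApplicationMaster(request);
  }

  @Override
  public FinishApplicationMasterResponse finishApplicationMaster(
      FinishApplicationMasterRequest request)
      throws YarnException, IOException {
    return getNextInterceptor().finishApplicationMaster(request);
  }
}
{code}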

> Application blacklist to drop misbehave AM requests
> ---
>
> Key: YARN-7679
> URL: https://issues.apache.org/jira/browse/YARN-7679
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: client, RM
>Reporter: Weiwei Yang
>  Labels: admin, command-line
>
> Sometimes there might be malicious or misbehaving AMs that keep sending 
> invalid resource requests to the RM, which wastes resources. If such an 
> application provides online services, we cannot simply KILL it. Instead, it 
> would be extremely useful if the RM could blacklist such applications and 
> drop their requests, so admins have time to check and then decide how to 
> proceed. Propose to add an admin command to blacklist applications.
> Opening this JIRA for discussion; will upload a design doc shortly.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6596) Introduce Placement Constraint Manager module

2017-12-22 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301583#comment-16301583
 ] 

Sunil G commented on YARN-6596:
---

Thanks [~kkaranasos]

bq. I think we can use the same or similar API that we will use for adding 
global constraints for the node attributes. Then, we can for instance say 
that HBase containers should be on machines with java-8. Does this clarify 
things?
Yes, makes sense. I have some more questions here. When global constraints 
are added, is it possible that they contradict some app-specific constraints? 
In that case, how will the app owner know about it? I was suggesting an 
interface API for app owners to fetch the global constraints and place new 
requests based on them. Please correct me if I have misunderstood this case.

bq. But maybe we could factor out some code for the file or zk 
implementations? Did you have something like this in mind?
You are correct; the in-memory store has a bunch of differences. I think it 
would be better to raise a ZK-based ticket and unify things at a later stage. 
Since the placement-related store is not on ZK now, I guess it is fine for 
now, and we can take care of all the managers in a common ticket (including 
the federation store too). In my mind, I was thinking of creating different 
tree nodes for each store and defining a common key-value style of storage.

Some more comments:
# {{Map<Set<String>, PlacementConstraint> getConstraints(ApplicationId 
appId);}} looks like a complicated return value. It is a set of allocation 
tags corresponding to one constraint, correct? In that case, do we need this 
to be in a map? Or would a simple class with a {{Set}} element and a 
PlacementConstraint be enough?
# There is some commented-out code in TestMemoryPlacementConstraintManager.
# Multiple source tags mapping to one constraint are not supported in this 
patch, correct?

> Introduce Placement Constraint Manager module
> -
>
> Key: YARN-6596
> URL: https://issues.apache.org/jira/browse/YARN-6596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6596-YARN-6592.001.patch, 
> YARN-6596-YARN-6592.002.patch
>
>
> This RM module will be responsible for storing placement constraints, 
> allocation tags, and node attributes.
> It will be used when determining the placement of SchedulingRequests with 
> constraints.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6597) Store and update allocation tags in the Placement Constraint Manager

2017-12-22 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301574#comment-16301574
 ] 

Panagiotis Garefalakis edited comment on YARN-6597 at 12/22/17 3:45 PM:


[~cheersyang]
YARN-7522 introduced the AllocationTagsManager component, which stores simple 
node to application-container mappings.
YARN-7653 added support for node-group/rack to application-container mappings.

I would like to keep this JIRA open in order to efficiently manage container 
tags under all possible Container state transitions (EXPIRED, RELEASED, 
KILLED, etc.). Currently we support only the container allocation and 
completion states, just as a proof of concept.
Does it make sense?



was (Author: pgaref):
[~cheersyang]
YARN-7522 introduced the AllocationTagsManager component, which stores simple 
node to application-container mappings.
YARN-7653 added support for node-group/rack to application-container mappings.

In this JIRA I would like to efficiently manage container tags under all 
possible Container state transitions (EXPIRED, RELEASED, KILLED, etc.), as we 
currently support only the container allocation and completion states as a 
proof of concept.
Does it make sense?


> Store and update allocation tags in the Placement Constraint Manager
> 
>
> Key: YARN-6597
> URL: https://issues.apache.org/jira/browse/YARN-6597
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>
> Each allocation can have a set of allocation tags associated to it.
> For example, an allocation can be marked as hbase, hbase-master, spark, etc.
> These allocation-tags are active in the cluster only while that container is 
> active (from the moment it gets allocated until the moment it finishes its 
> execution).
> This JIRA is responsible for storing and updating in the 
> {{PlacementConstraintManager}} the active allocation tags in the cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6597) Store and update allocation tags in the Placement Constraint Manager

2017-12-22 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301574#comment-16301574
 ] 

Panagiotis Garefalakis edited comment on YARN-6597 at 12/22/17 3:42 PM:


[~cheersyang]
(YARN-7522) introduced the AllocationTagsManager component, which stores 
simple node to application-container mappings.
(YARN-7653) added support for node-group/rack to application-container 
mappings.

In this JIRA I would like to efficiently manage container tags under all 
possible Container state transitions (EXPIRED, RELEASED, KILLED, etc.), as we 
currently support only the container allocation and completion states as a 
proof of concept.
Does it make sense?



was (Author: pgaref):
[~cheersyang]
(YARN-7522)[https://issues.apache.org/jira/browse/YARN-7522] introduced the 
AllocationTagsManager component, which stores simple node to 
application-container mappings.
(YARN-7653)[https://issues.apache.org/jira/browse/YARN-7653] added support 
for node-group/rack to application-container mappings.

In this JIRA I would like to efficiently manage container tags under all 
possible Container state transitions (EXPIRED, RELEASED, KILLED, etc.), as we 
currently support only the container allocation and completion states as a 
proof of concept.
Does it make sense?


> Store and update allocation tags in the Placement Constraint Manager
> 
>
> Key: YARN-6597
> URL: https://issues.apache.org/jira/browse/YARN-6597
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>
> Each allocation can have a set of allocation tags associated to it.
> For example, an allocation can be marked as hbase, hbase-master, spark, etc.
> These allocation-tags are active in the cluster only while that container is 
> active (from the moment it gets allocated until the moment it finishes its 
> execution).
> This JIRA is responsible for storing and updating in the 
> {{PlacementConstraintManager}} the active allocation tags in the cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6597) Store and update allocation tags in the Placement Constraint Manager

2017-12-22 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301574#comment-16301574
 ] 

Panagiotis Garefalakis commented on YARN-6597:
--

[~cheersyang]
(YARN-7522)[https://issues.apache.org/jira/browse/YARN-7522] introduced the 
AllocationTagsManager component, which stores simple node to 
application-container mappings.
(YARN-7653)[https://issues.apache.org/jira/browse/YARN-7653] added support 
for node-group/rack to application-container mappings.

In this JIRA I would like to efficiently manage container tags under all 
possible Container state transitions (EXPIRED, RELEASED, KILLED, etc.), as we 
currently support only the container allocation and completion states as a 
proof of concept.
Does it make sense?
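Roughly what I mean, as a sketch (the TagStore interface is a stand-in for 
the tags manager; the real method signatures may differ):
{code}
import java.util.Set;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.NodeId;

final class TagLifecycleSketch {
  /** Terminal transitions after which a container's tags must disappear. */
  enum Terminal { EXPIRED, RELEASED, KILLED, COMPLETED }

  /** Stand-in for the tags manager's remove call. */
  interface TagStore {
    void removeContainer(NodeId node, ContainerId container, Set<String> tags);
  }

  /** Whichever terminal transition fires, the tags stop being active in the
   *  cluster and must be removed exactly once. */
  static void onTerminal(Terminal state, TagStore store, NodeId node,
      ContainerId container, Set<String> tags) {
    store.removeContainer(node, container, tags);
  }
}
{code}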


> Store and update allocation tags in the Placement Constraint Manager
> 
>
> Key: YARN-6597
> URL: https://issues.apache.org/jira/browse/YARN-6597
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>
> Each allocation can have a set of allocation tags associated to it.
> For example, an allocation can be marked as hbase, hbase-master, spark, etc.
> These allocation-tags are active in the cluster only while that container is 
> active (from the moment it gets allocated until the moment it finishes its 
> execution).
> This JIRA is responsible for storing and updating in the 
> {{PlacementConstraintManager}} the active allocation tags in the cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6597) Store and update allocation tags in the Placement Constraint Manager

2017-12-22 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301574#comment-16301574
 ] 

Panagiotis Garefalakis edited comment on YARN-6597 at 12/22/17 3:42 PM:


[~cheersyang]
YARN-7522 introduced the AllocationTagsManager component, which stores simple 
node to application-container mappings.
YARN-7653 added support for node-group/rack to application-container mappings.

In this JIRA I would like to efficiently manage container tags under all 
possible Container state transitions (EXPIRED, RELEASED, KILLED, etc.), as we 
currently support only the container allocation and completion states as a 
proof of concept.
Does it make sense?



was (Author: pgaref):
[~cheersyang]
(YARN-7522) introduced the AllocationTagsManager component, which stores 
simple node to application-container mappings.
(YARN-7653) added support for node-group/rack to application-container 
mappings.

In this JIRA I would like to efficiently manage container tags under all 
possible Container state transitions (EXPIRED, RELEASED, KILLED, etc.), as we 
currently support only the container allocation and completion states as a 
proof of concept.
Does it make sense?


> Store and update allocation tags in the Placement Constraint Manager
> 
>
> Key: YARN-6597
> URL: https://issues.apache.org/jira/browse/YARN-6597
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>
> Each allocation can have a set of allocation tags associated to it.
> For example, an allocation can be marked as hbase, hbase-master, spark, etc.
> These allocation-tags are active in the cluster only while that container is 
> active (from the moment it gets allocated until the moment it finishes its 
> execution).
> This JIRA is responsible for storing and updating in the 
> {{PlacementConstraintManager}} the active allocation tags in the cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7653) Node group support for AllocationTagsManager

2017-12-22 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301557#comment-16301557
 ] 

Arun Suresh commented on YARN-7653:
---

Thanks [~pgaref] +1

> Node group support for AllocationTagsManager
> 
>
> Key: YARN-7653
> URL: https://issues.apache.org/jira/browse/YARN-7653
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7653-YARN-6592.001.patch, 
> YARN-7653-YARN-6592.002.patch, YARN-7653-YARN-6592.003.patch
>
>
> AllocationTagsManager currently supports node and cluster-wide tag 
> cardinality retrieval.
> If we want to support arbitrary node-groups/scopes for our placement 
> constraints TagsManager should be extended to provide such functionality.
> As a first step we need to support RACK scope cardinality retrieval (as 
> defined in our API).
> i.e. how many "spark" containers are currently running on "RACK-1"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5366) Improve handling of the Docker container life cycle

2017-12-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301374#comment-16301374
 ] 

genericqa commented on YARN-5366:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 6m 17s{color} | {color:red} Docker failed to build yetus/hadoop:5b98639. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-5366 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12903401/YARN-5366.009.patch |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/19015/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve handling of the Docker container life cycle
> ---
>
> Key: YARN-5366
> URL: https://issues.apache.org/jira/browse/YARN-5366
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>  Labels: oct16-medium
> Attachments: YARN-5366.001.patch, YARN-5366.002.patch, 
> YARN-5366.003.patch, YARN-5366.004.patch, YARN-5366.005.patch, 
> YARN-5366.006.patch, YARN-5366.007.patch, YARN-5366.008.patch, 
> YARN-5366.009.patch
>
>
> There are several paths that need to be improved with regard to the Docker 
> container lifecycle when running Docker containers on YARN.
> 1) Provide the ability to keep a container on the NodeManager for a set 
> period of time for debugging purposes.
> 2) Support sending signals to the process in the container to allow for 
> triggering stack traces, heap dumps, etc.
> 3) Support for Docker's live restore, which means moving away from the use of 
> {{docker wait}}. (YARN-5818)
> 4) Improve the resiliency of liveliness checks (kill -0) by adding retries 
> (see the sketch after this list).
> 5) Improve the resiliency of container removal by adding retries.
> 6) Only attempt to stop, kill, and remove containers if the current container 
> state allows for it.
> 7) Better handling of short-lived containers when the container is stopped 
> before the PID can be retrieved. (YARN-6305)
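Item 4 is easy to picture; the following is a sketch only, with the class 
name, retry count, and interval all assumed (this is not the patch), and it 
relies on a Unix {{kill}} being on the PATH:

{code:java}
import java.io.IOException;

/** Hypothetical sketch of item 4: a retried "kill -0" liveliness check. */
public class LivelinessCheck {
  private static final int MAX_RETRIES = 3;
  private static final long RETRY_INTERVAL_MS = 1000;

  public static boolean isAlive(String pid) throws InterruptedException {
    for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
      try {
        // "kill -0" sends no signal; exit code 0 means the process exists.
        Process p = new ProcessBuilder("kill", "-0", pid).start();
        if (p.waitFor() == 0) {
          return true;
        }
      } catch (IOException e) {
        // Failure to exec the check is inconclusive; fall through and retry.
      }
      Thread.sleep(RETRY_INTERVAL_MS);
    }
    // Only after repeated failures do we treat the container as dead.
    return false;
  }
}
{code}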






[jira] [Updated] (YARN-5366) Improve handling of the Docker container life cycle

2017-12-22 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-5366:
--
Attachment: YARN-5366.009.patch

> Improve handling of the Docker container life cycle
> ---
>
> Key: YARN-5366
> URL: https://issues.apache.org/jira/browse/YARN-5366
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>  Labels: oct16-medium
> Attachments: YARN-5366.001.patch, YARN-5366.002.patch, 
> YARN-5366.003.patch, YARN-5366.004.patch, YARN-5366.005.patch, 
> YARN-5366.006.patch, YARN-5366.007.patch, YARN-5366.008.patch, 
> YARN-5366.009.patch
>
>
> There are several paths that need to be improved with regard to the Docker 
> container lifecycle when running Docker containers on YARN.
> 1) Provide the ability to keep a container on the NodeManager for a set 
> period of time for debugging purposes.
> 2) Support sending signals to the process in the container to allow for 
> triggering stack traces, heap dumps, etc.
> 3) Support for Docker's live restore, which means moving away from the use of 
> {{docker wait}}. (YARN-5818)
> 4) Improve the resiliency of liveliness checks (kill -0) by adding retries.
> 5) Improve the resiliency of container removal by adding retries.
> 6) Only attempt to stop, kill, and remove containers if the current container 
> state allows for it.
> 7) Better handling of short-lived containers when the container is stopped 
> before the PID can be retrieved. (YARN-6305)






[jira] [Commented] (YARN-1709) Admission Control: Reservation subsystem

2017-12-22 Thread yangzhangyang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301313#comment-16301313
 ] 

yangzhangyang commented on YARN-1709:
-

Hi, I want to ask you a question about reservations. Regarding "Step 7: The 
Scheduler will then provide containers from a special queue created to ensure 
the resource reservation is respected. Within the limits of the reservation, 
the user has guaranteed access to the resources; above that, resource sharing 
proceeds with standard Capacity/Fairness sharing." - when a job that needs a 
reservation arrives with a reservationId but no resources are actually 
available on the NodeManagers, what happens: does it wait for other AMs to 
return resources, or does it directly preempt resources from other running 
jobs? Thanks.

> Admission Control: Reservation subsystem
> 
>
> Key: YARN-1709
> URL: https://issues.apache.org/jira/browse/YARN-1709
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Carlo Curino
>Assignee: Subru Krishnan
> Fix For: 2.6.0
>
> Attachments: YARN-1709.patch, YARN-1709.patch, YARN-1709.patch, 
> YARN-1709.patch, YARN-1709.patch, YARN-1709.patch, YARN-1709.patch
>
>
> This JIRA is about the key data structure used to track resources over time 
> to enable YARN-1051. The Reservation subsystem is conceptually a "plan" of 
> how the scheduler will allocate resources over time.
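Purely as an illustration of such a "plan" (a sketch with assumed names, not 
the actual Reservation subsystem classes): reserved capacity over time can be 
kept as a step function keyed by timestamp, which makes admission checks a 
range query.

{code:java}
import java.util.Map;
import java.util.TreeMap;

/**
 * Hypothetical sketch: total reserved capacity over time as a step function.
 * Each key is an instant at which the reserved amount changes.
 */
public class ReservationPlanSketch {
  private final TreeMap<Long, Integer> reservedAt = new TreeMap<>();

  /** Reserve {@code capacity} units for the half-open interval [start, end). */
  public void addReservation(long start, long end, int capacity) {
    // Materialize the interval boundaries so the step function stays exact.
    reservedAt.putIfAbsent(start, capacityAt(start));
    reservedAt.putIfAbsent(end, capacityAt(end));
    // Raise every step inside the interval by the reserved amount.
    reservedAt.subMap(start, true, end, false)
        .replaceAll((t, c) -> c + capacity);
  }

  /** Total capacity reserved at time t (0 before any reservation). */
  public int capacityAt(long t) {
    Map.Entry<Long, Integer> e = reservedAt.floorEntry(t);
    return e == null ? 0 : e.getValue();
  }
}
{code}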






[jira] [Updated] (YARN-7679) Application blacklist to drop misbehaving AM requests

2017-12-22 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7679:
--
Labels: admin command-line  (was: admin)

> Application blacklist to drop misbehaving AM requests
> ---
>
> Key: YARN-7679
> URL: https://issues.apache.org/jira/browse/YARN-7679
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: client, RM
>Reporter: Weiwei Yang
>  Labels: admin, command-line
>
> Sometimes there are malicious or misbehaving AMs that keep sending invalid 
> resource requests to the RM, which wastes resources. If such an application 
> provides online services, we cannot simply KILL it. Instead, it would be 
> extremely useful if the RM could blacklist such applications and drop their 
> requests, so that an admin has time to investigate and decide how to proceed. 
> We propose to add an admin command to blacklist applications.
> Opening this JIRA for discussion; a design doc will be uploaded shortly.
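As a minimal sketch of the proposed behavior, with every name hypothetical 
since neither the admin command nor the RM-side API exists yet: the state is a 
blacklist set consulted before an application's requests are handed to the 
scheduler, so the application keeps running but its requests are ignored.

{code:java}
import java.util.Collections;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical sketch: drop resource requests from blacklisted apps. */
public class AppRequestBlacklist {
  private final Set<String> blacklisted = ConcurrentHashMap.newKeySet();

  /** Would be invoked by the proposed admin command. */
  public void blacklist(String appId) {
    blacklisted.add(appId);
  }

  /** Would be invoked once the admin has resolved the issue. */
  public void unblacklist(String appId) {
    blacklisted.remove(appId);
  }

  /** Returns the requests to schedule; empty for blacklisted applications. */
  public <R> List<R> filter(String appId, List<R> resourceRequests) {
    return blacklisted.contains(appId)
        ? Collections.<R>emptyList()
        : resourceRequests;
  }
}
{code}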






[jira] [Commented] (YARN-1051) YARN Admission Control/Planner: enhancing the resource allocation model with time.

2017-12-22 Thread yangzhangyang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301283#comment-16301283
 ] 

yangzhangyang commented on YARN-1051:
-

[~curino] I want to ask you a question about "Step 7: The Scheduler will then 
provide containers from a special queue created to ensure the resource 
reservation is respected. Within the limits of the reservation, the user has 
guaranteed access to the resources; above that, resource sharing proceeds with 
standard Capacity/Fairness sharing." - when a job that needs a reservation 
arrives with a reservationId but no resources are actually available on the 
NodeManagers, what happens: does it wait for other AMs to return resources, or 
does it directly preempt resources from other running jobs? Thanks.

> YARN Admission Control/Planner: enhancing the resource allocation model with 
> time.
> --
>
> Key: YARN-1051
> URL: https://issues.apache.org/jira/browse/YARN-1051
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler, resourcemanager, scheduler
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Fix For: 2.6.0
>
> Attachments: YARN-1051-design.pdf, YARN-1051.1.patch, 
> YARN-1051.patch, curino_MSR-TR-2013-108.pdf, socc14-paper15.pdf, 
> techreport.pdf
>
>
> In this umbrella JIRA we propose to extend the YARN RM to handle time 
> explicitly, allowing users to "reserve" capacity over time. This is an 
> important step towards SLAs, long-running services, and workflows, and it 
> helps with gang scheduling.






[jira] [Commented] (YARN-7653) Node group support for AllocationTagsManager

2017-12-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301175#comment-16301175
 ] 

genericqa commented on YARN-7653:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 18s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 37s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 24s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 39s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 34s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7653 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12903366/YARN-7653-YARN-6592.003.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 30e08c5c7167 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-6592 / 185e3bd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/19014/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/19014/testReport/ |
| Max. process+thread count | 828 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-ser

[jira] [Comment Edited] (YARN-7653) Node group support for AllocationTagsManager

2017-12-22 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301091#comment-16301091
 ] 

Panagiotis Garefalakis edited comment on YARN-7653 at 12/22/17 8:15 AM:


Thanks for the comments [~asuresh]!
Regarding the NPE - the issue is addressed in the latest versions of the patch.
bq. What if a node goes down ?
In that case, I believe we have to follow the lifecycle of the affected 
containers and purge their tags as they become unavailable, perhaps keyed on 
the relevant RMContainer states: COMPLETED, EXPIRED, RELEASED, KILLED.
bq. Would we ever need a tag -> nodes mapping ?
It is a valid point - the main reason for the extra mapping is to avoid 
iterating through all the application IDs (as keys) to return the aggregated 
counts. Even in the algorithm implementation we would iterate through nodes, 
not application IDs - so without the extra mapping we would need extra 
iterations to retrieve, e.g., a global count of the tag "mapreduce" across all 
applications.

I agree that periodic cleaning could be part of another JIRA.


was (Author: pgaref):
Thanks for the comments [~asuresh]!
Regarding the NPE - the issue is addressed in the latest versions of the patch.
bq. What if a node goes down ?
In that case, I believe we have to follow the lifecycle of the affected 
containers and purge their tags as they become unavailable, perhaps keyed on 
the relevant RMContainer states: COMPLETED, EXPIRED, RELEASED, KILLED.
bq. Would we ever need a tag -> nodes mapping ?
It is a valid point - the main reason for the extra mapping is to avoid 
iterating through all the application IDs (as keys) to return the aggregated 
counts. Even in the algorithm implementation we would iterate through nodes, 
not application IDs - so without the extra mapping we would need extra 
iterations to retrieve a global count of the tag "mapreduce" across all 
applications, for example.

I agree that periodic cleaning could be part of another JIRA.

> Node group support for AllocationTagsManager
> 
>
> Key: YARN-7653
> URL: https://issues.apache.org/jira/browse/YARN-7653
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7653-YARN-6592.001.patch, 
> YARN-7653-YARN-6592.002.patch, YARN-7653-YARN-6592.003.patch
>
>
> AllocationTagsManager currently supports node and cluster-wide tag 
> cardinality retrieval.
> If we want to support arbitrary node-groups/scopes for our placement 
> constraints, the TagsManager should be extended to provide such functionality.
> As a first step we need to support RACK scope cardinality retrieval (as 
> defined in our API), e.g. how many "spark" containers are currently running 
> on "RACK-1".
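To illustrate the trade-off discussed in the comment above (a sketch under 
assumed names, not the patch itself): maintaining a global tag -> count map 
next to the per-application one turns a cluster-wide cardinality query into a 
single lookup instead of an iteration over all application IDs.

{code:java}
import java.util.HashMap;
import java.util.Map;

/** Hypothetical sketch: per-application tag counts plus a global index. */
public class TagCardinalityIndex {
  // appId -> tag -> active container count
  private final Map<String, Map<String, Long>> perApp = new HashMap<>();
  // tag -> active container count across all applications
  private final Map<String, Long> global = new HashMap<>();

  public void add(String appId, String tag) {
    perApp.computeIfAbsent(appId, a -> new HashMap<>())
        .merge(tag, 1L, Long::sum);
    global.merge(tag, 1L, Long::sum);
  }

  public void remove(String appId, String tag) {
    Map<String, Long> appTags = perApp.get(appId);
    if (appTags != null) {
      // Returning null from the remapping function removes the entry.
      appTags.computeIfPresent(tag, (t, c) -> c > 1 ? c - 1 : null);
    }
    global.computeIfPresent(tag, (t, c) -> c > 1 ? c - 1 : null);
  }

  /** e.g. globalCardinality("mapreduce"), without scanning every appId. */
  public long globalCardinality(String tag) {
    return global.getOrDefault(tag, 0L);
  }
}
{code}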






[jira] [Commented] (YARN-7653) Node group support for AllocationTagsManager

2017-12-22 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301091#comment-16301091
 ] 

Panagiotis Garefalakis commented on YARN-7653:
--

Thanks for the comments [~asuresh]!
Regarding the NPE - the issue is addressed in the latest versions of the patch.
bq. What if a node goes down ?
In that case, I believe we have to follow the lifecycle of the affected 
containers and purge their tags as they become unavailable, perhaps keyed on 
the relevant RMContainer states: COMPLETED, EXPIRED, RELEASED, KILLED.
bq. Would we ever need a tag -> nodes mapping ?
It is a valid point - the main reason for the extra mapping is to avoid 
iterating through all the application IDs (as keys) to return the aggregated 
counts. Even in the algorithm implementation we would iterate through nodes, 
not application IDs - so without the extra mapping we would need extra 
iterations to retrieve a global count of the tag "mapreduce" across all 
applications, for example.

I agree that periodic cleaning could be part of another JIRA.

> Node group support for AllocationTagsManager
> 
>
> Key: YARN-7653
> URL: https://issues.apache.org/jira/browse/YARN-7653
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7653-YARN-6592.001.patch, 
> YARN-7653-YARN-6592.002.patch, YARN-7653-YARN-6592.003.patch
>
>
> AllocationTagsManager currently supports node and cluster-wide tag 
> cardinality retrieval.
> If we want to support arbitrary node-groups/scopes for our placement 
> constraints, the TagsManager should be extended to provide such functionality.
> As a first step we need to support RACK scope cardinality retrieval (as 
> defined in our API), e.g. how many "spark" containers are currently running 
> on "RACK-1".


