[jira] [Commented] (YARN-7735) Fix typo in YARN documentation

2018-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321800#comment-16321800
 ] 

Hudson commented on YARN-7735:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13480 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13480/])
YARN-7735. Fix typo in YARN documentation. Contributed by Takanobu (aajisaka: 
rev fbbbf59c82658e18dad7e0e256613187b5b75d0f)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YARN.md


> Fix typo in YARN documentation
> --
>
> Key: YARN-7735
> URL: https://issues.apache.org/jira/browse/YARN-7735
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HADOOP-15165.1.patch
>
>
> The link to "YARN Federation" is wrong.






[jira] [Commented] (YARN-7735) Fix typo in YARN documentation

2018-01-10 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321793#comment-16321793
 ] 

Takanobu Asanuma commented on YARN-7735:


Thanks for reviewing and committing it, [~ajisakaa]!

> Fix typo in YARN documentation
> --
>
> Key: YARN-7735
> URL: https://issues.apache.org/jira/browse/YARN-7735
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HADOOP-15165.1.patch
>
>
> The link to "YARN Federation" is wrong.






[jira] [Commented] (YARN-7735) Fix typo in YARN documentation

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321783#comment-16321783
 ] 

genericqa commented on YARN-7735:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-7735 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7735 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905613/HADOOP-15165.1.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19195/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix typo in YARN documentation
> --
>
> Key: YARN-7735
> URL: https://issues.apache.org/jira/browse/YARN-7735
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HADOOP-15165.1.patch
>
>
> The link to "YARN Federation" is wrong.






[jira] [Moved] (YARN-7735) Fix typo in YARN documentation

2018-01-10 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka moved HADOOP-15165 to YARN-7735:
--

Target Version/s:   (was: 3.1.0)
 Component/s: (was: documentation)
  documentation
 Key: YARN-7735  (was: HADOOP-15165)
 Project: Hadoop YARN  (was: Hadoop Common)

> Fix typo in YARN documentation
> --
>
> Key: YARN-7735
> URL: https://issues.apache.org/jira/browse/YARN-7735
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HADOOP-15165.1.patch
>
>
> The link to "YARN Federation" is wrong.






[jira] [Commented] (YARN-7715) Update CPU and Memory cgroups params on container update as well.

2018-01-10 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321730#comment-16321730
 ] 

Miklos Szegedi commented on YARN-7715:
--

I would separate {{preStart}} into two functions, {{preStart}} and {{apply}}. 
Both would take the container as a parameter. {{preStart}} would create the 
cgroup and call {{apply}}. {{apply}} would set the cgroup parameters based on 
the current state of the container.
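
A minimal sketch of that split, with hypothetical types standing in for the 
real ResourceHandler and Container classes (only the call structure is the 
point here):
{code}
// Hypothetical sketch: preStart() creates the cgroup once and delegates the
// configurable part to apply(), which can be re-run on container update.
interface Container {
  String getId();
  boolean isGuaranteed(); // current execution type
}

class CpuResourceHandler {
  private final CGroups cgroups = new CGroups();

  /** Called once, before launch: create the cgroup, then configure it. */
  void preStart(Container container) {
    cgroups.create("cpu", container.getId());
    apply(container);
  }

  /** Called from preStart() and again on every container update. */
  void apply(Container container) {
    int shares = container.isGuaranteed() ? 1024 : 2;
    cgroups.setParam("cpu", container.getId(), "cpu.shares",
        String.valueOf(shares));
  }
}

class CGroups {
  void create(String controller, String id) { /* mkdir under cgroupfs */ }
  void setParam(String controller, String id, String param, String value) {
    /* write the value to the parameter file of the cgroup */
  }
}
{code}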

> Update CPU and Memory cgroups params on container update as well.
> -
>
> Key: YARN-7715
> URL: https://issues.apache.org/jira/browse/YARN-7715
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>
> In YARN-6673 and YARN-6674, the cgroups resource handlers update the cgroups 
> params for the containers, based on opportunistic or guaranteed, in the 
> *preStart* method.
> Now that YARN-5085 is in, Container executionType (as well as the cpu, memory 
> and any other resources) can be updated after the container has started. This 
> means we need the ability to change cgroups params after container start.






[jira] [Commented] (YARN-7712) Add ability to ignore timestamps in localized files

2018-01-10 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321724#comment-16321724
 ] 

Miklos Szegedi commented on YARN-7712:
--

Thank you for the reply, [~chris.douglas]. The scenario is mainly for testing 
and demonstrating the REST API behavior for future users.
Here are the current steps when launching an AM from the REST API:
1. The client has to upload a dependency to localize to HDFS
2. The client has to grab the timestamp from HDFS
3. The client runs a job through the REST API, specifying the localized file 
with the timestamp
With the suggested change, the client can run a job faster and with less 
effort:
1. The client has to upload a jar to HDFS
2. The client runs a job through the REST API, specifying the localized file 
with the timestamp ignored
In my opinion, the timestamp specification requirement has multiple issues:
1. It does not improve security. The client gets the failing timestamp in the 
error message
2. It is an annoyance in basic clusters and testing scenarios, especially for 
REST API users
3. The user can restrict the directory they upload to in order to protect 
consistency
4. The additional hop adds latency that is not necessary in cases 2. and 3.
5. If I had to design a timestamp-based consistency check, I would
  a) make sure time is trusted in the cluster and the modification timestamp 
is trusted in HDFS
  b) grab a launch timestamp {{tl}} (or desired minimum timestamp) when the 
client starts and place it in ContainerLaunchContext, just like it is now
  c) verify at localization time that the file modification time is less than 
the launch (or any other specified) timestamp, {{tm < tl}} (see the sketch 
after this list).
  This would ensure the same level of consistency without adding latency for 
REST users, for example through Python.
6. The PathHandle that you suggested is a better option, I admit.
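
A hedged sketch of the check in 5.c), assuming both timestamps are 
milliseconds since the epoch as reported by HDFS:
{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;

public class TimestampCheck {
  /**
   * Fails localization if the resource was modified at or after the
   * client-supplied launch (or minimum) timestamp, i.e. enforces tm < tl.
   */
  static void verifyResource(FileStatus status, long launchTimestampMs)
      throws IOException {
    long tm = status.getModificationTime();
    if (tm >= launchTimestampMs) {
      throw new IOException("Resource " + status.getPath()
          + " was modified at " + tm + ", not before " + launchTimestampMs);
    }
  }
}
{code}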





> Add ability to ignore timestamps in localized files
> ---
>
> Key: YARN-7712
> URL: https://issues.apache.org/jira/browse/YARN-7712
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>
> YARN currently requires and checks the timestamp of localized files, and 
> fails if the file on HDFS does not match the one requested. This jira adds 
> the ability to ignore the timestamp, based on the request of the client.






[jira] [Commented] (YARN-7717) Add configuration consistency for module.enabled and docker.privileged-containers.enabled

2018-01-10 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321708#comment-16321708
 ] 

Miklos Szegedi commented on YARN-7717:
--

Thank you for the patch, [~ebadger].
{code}
5   feature.tc.enabled=true
{code}
The default for tc should still be false.
{code}
492   for (itr = file_cmd_vec.begin(); itr != file_cmd_vec.end(); ++itr) {
493     memset(buff, 0, buff_len);
494     write_command_file(itr->first);
495     ret = read_config(docker_command_file.c_str(), &cmd_cfg);
496     if (ret != 0) {
497       FAIL();
498     }
499     ret = set_privileged(&cmd_cfg, &container_cfg, buff, buff_len);
500     ASSERT_EQ(0, ret);
501       ASSERT_STREQ(itr->second.c_str(), buff);
502   }
{code}
There is an indentation issue on line 501.


> Add configuration consistency for module.enabled and 
> docker.privileged-containers.enabled
> -
>
> Key: YARN-7717
> URL: https://issues.apache.org/jira/browse/YARN-7717
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yesha Vora
>Assignee: Eric Badger
> Attachments: YARN-7717.001.patch, YARN-7717.002.patch
>
>
> container-executor.cfg has two properties related to dockerization:
> 1) module.enabled = true/false
> 2) docker.privileged-containers.enabled = 1/0
> Here, the two properties take different kinds of values to enable or disable 
> a feature: module.enabled takes a true/false string, while 
> docker.privileged-containers.enabled takes a 1/0 integer value.
> The behavior of these properties should be consistent. Both should take a 
> true or false string as the value to enable or disable the feature.






[jira] [Commented] (YARN-7730) Add memory management configs to yarn-default

2018-01-10 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321685#comment-16321685
 ] 

Miklos Szegedi commented on YARN-7730:
--

[~yeshavora], thank you for reporting this. Please review YARN-7064: the docs 
in question are being updated there, together with additional configuration 
being added, so you might need to merge with that jira. I like the additional 
information that you provided in the description. Please note that I think 
swappiness is not precisely the amount of memory that can be swapped out, but 
rather the aggressiveness of the swapping algorithm.
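
For illustration, a hedged sketch of reading the proposed keys from a 
client's point of view (key names and default values are taken from the 
description below; the canonical defaults belong in yarn-default.xml once 
added):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

Configuration conf = new YarnConfiguration();
boolean cgroupsMemEnabled =
    conf.getBoolean("yarn.nodemanager.resource.memory.enabled", false);
float softLimitPct = conf.getFloat(
    "yarn.nodemanager.resource.memory.cgroups.soft-limit-percentage", 90.0f);
int swappiness =
    conf.getInt("yarn.nodemanager.resource.memory.cgroups.swappiness", 0);
{code}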

> Add memory management configs to yarn-default
> -
>
> Key: YARN-7730
> URL: https://issues.apache.org/jira/browse/YARN-7730
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Priority: Minor
>
> Add the configurations below, with descriptions, to yarn-default.xml:
> {code}
> "yarn.nodemanager.resource.memory.enabled"
> // The default value is false; we need to set it to true to enable 
> cgroups-based memory monitoring.
> "yarn.nodemanager.resource.memory.cgroups.soft-limit-percentage"
> // The default value is 90.0f, which means that under memory congestion the 
> container can still keep/reserve 90% of its claimed value. It cannot be set 
> above 100 or to a negative value.
> "yarn.nodemanager.resource.memory.cgroups.swappiness"
> // The percentage of memory that can be swapped. The default value is 0, 
> which means container memory cannot be swapped out. If not set, the Linux 
> cgroup default is 60, which means 60% of memory can potentially be swapped 
> out when system memory is not enough.
> {code}






[jira] [Commented] (YARN-7064) Use cgroup to get container resource utilization

2018-01-10 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321676#comment-16321676
 ] 

Miklos Szegedi commented on YARN-7064:
--

The failing unit test is YARN-7734.

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-7064
> URL: https://issues.apache.org/jira/browse/YARN-7064
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7064.000.patch, YARN-7064.001.patch, 
> YARN-7064.002.patch, YARN-7064.003.patch, YARN-7064.004.patch, 
> YARN-7064.005.patch, YARN-7064.007.patch, YARN-7064.008.patch, 
> YARN-7064.009.patch, YARN-7064.010.patch
>
>
> This is an addendum to YARN-6668. What happens is that that jira always wants 
> to rebase patches against YARN-1011 instead of trunk.






[jira] [Created] (YARN-7734) YARN-5418 breaks TestContainerLogsPage.testContainerLogPageAccess

2018-01-10 Thread Miklos Szegedi (JIRA)
Miklos Szegedi created YARN-7734:


 Summary: YARN-5418 breaks 
TestContainerLogsPage.testContainerLogPageAccess
 Key: YARN-7734
 URL: https://issues.apache.org/jira/browse/YARN-7734
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Miklos Szegedi
Assignee: Xuan Gong


It adds a call to LogAggregationFileControllerFactory, but in the unit test 
the mocked context is not filled in with the configuration.
{code}
[ERROR] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.492 s 
<<< FAILURE! - in 
org.apache.hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage
[ERROR] 
testContainerLogPageAccess(org.apache.hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage)
  Time elapsed: 0.208 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileControllerFactory.(LogAggregationFileControllerFactory.java:68)
at 
org.apache.hadoop.yarn.server.nodemanager.webapp.ContainerLogsPage$ContainersLogsBlock.(ContainerLogsPage.java:100)
at 
org.apache.hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage.testContainerLogPageAccess(TestContainerLogsPage.java:268)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}
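
A hedged sketch of the kind of stubbing the mock probably needs; the 
{{getConf()}} accessor is assumed here for illustration, not taken from the 
actual Context interface:
{code}
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.nodemanager.Context;

// Give the mocked NM context a real Configuration so
// LogAggregationFileControllerFactory has something to read.
Context context = mock(Context.class);
when(context.getConf()).thenReturn(new YarnConfiguration());
{code}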







[jira] [Commented] (YARN-7590) Improve container-executor validation check

2018-01-10 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321664#comment-16321664
 ] 

Miklos Szegedi commented on YARN-7590:
--

[~eyang], the code I suggested above,
{code}
fprintf(LOGFILE, "Error checking file stats for %s %d %s.\n", nm_root, err, strerror(err));
{code}
should be the following:
{code}
fprintf(LOGFILE, "Error checking file stats for %s %d %s.\n", nm_root, err, strerror(errno));
{code}
This is my mistake; I apologize. Please update the patch. Also, I am inclined 
to wait until YARN-7705 gets checked in and then update this patch to call 
your new function there as well. What do you think?


> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7590.001.patch, YARN-7590.002.patch, 
> YARN-7590.003.patch, YARN-7590.004.patch, YARN-7590.005.patch, 
> YARN-7590.006.patch, YARN-7590.007.patch, YARN-7590.008.patch, 
> YARN-7590.009.patch
>
>
> There is minimal checking of the prefix path for container-executor. If 
> YARN is compromised, an attacker can use container-executor to change system 
> file ownership:
> {code}
> /usr/local/hadoop/bin/container-executor spark yarn 0 etc /home/yarn/tokens 
> /home/spark / ls
> {code}
> This will change /etc to be owned by the spark user:
> {code}
> # ls -ld /etc
> drwxr-s---. 110 spark hadoop 8192 Nov 21 20:00 /etc
> {code}
> The spark user can then rewrite /etc files to gain more access. We can 
> improve this with additional checks in container-executor:
> # Make sure the prefix path is owned by the same user as the caller to 
> container-executor.
> # Make sure the log directory prefix is owned by the same user as the caller.






[jira] [Commented] (YARN-7064) Use cgroup to get container resource utilization

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321634#comment-16321634
 ] 

genericqa commented on YARN-7064:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
18s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
10s{color} | {color:green} root: The patch generated 0 new + 266 unchanged - 3 
fixed = 266 total (was 269) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 54s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
10s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
4s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m  9s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7064 |
| JIRA Patch URL | 
https://issues.apache.

[jira] [Commented] (YARN-7724) yarn application status should support application name

2018-01-10 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321615#comment-16321615
 ] 

Jian He commented on YARN-7724:
---

bq. If RM is rebooted, the application status shows errors:
I think you didn't enable RM recovery. If it is not enabled, this is expected. 
This patch aims to make "yarn app -status" also support the app name.
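
For reference, a minimal sketch of enabling RM recovery (standard keys from 
yarn-default.xml; ZKRMStateStore is one possible store):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

Configuration conf = new YarnConfiguration();
conf.setBoolean("yarn.resourcemanager.recovery.enabled", true);
conf.set("yarn.resourcemanager.store.class",
    "org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore");
{code}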

> yarn application status should support application name
> ---
>
> Key: YARN-7724
> URL: https://issues.apache.org/jira/browse/YARN-7724
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
> Attachments: YARN-7724.01.patch, YARN-7724.02.patch
>
>
> YARN Service applications are tied to the app name. Thus, yarn application 
> -status should be able to take the YARN service name as an argument, such as
> yarn application -status 






[jira] [Commented] (YARN-7724) yarn application status should support application name

2018-01-10 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321600#comment-16321600
 ] 

Eric Yang commented on YARN-7724:
-

If RM is rebooted, the application status shows errors:

{code}
./bin/yarn app -status abc
2018-01-11 02:24:58,862 INFO client.RMProxy: Connecting to ResourceManager at 
eyang-1.openstacklocal/172.26.111.17:8050
2018-01-11 02:25:00,086 INFO client.RMProxy: Connecting to ResourceManager at 
eyang-1.openstacklocal/172.26.111.17:8050
2018-01-11 02:25:00,209 INFO utils.ServiceApiUtil: Loading service definition 
from hdfs://eyang-1.openstacklocal:9000/user/hbase/.yarn/services/abc/abc.json
Exception in thread "main" 
org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application 
with id 'application_1515637197124_0001' doesn't exist in RM. Please check that 
the job submission was successful.
at 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:378)
at 
org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:234)
at 
org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:561)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
at 
org.apache.hadoop.yarn.ipc.RPCUtil.instantiateYarnException(RPCUtil.java:75)
at 
org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:116)
at 
org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getApplicationReport(ApplicationClientProtocolPBClientImpl.java:247)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy8.getApplicationReport(Unknown Source)
at 
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getApplicationReport(YarnClientImpl.java:518)
at 
org.apache.hadoop.yarn.service.client.ServiceClient.getStatus(ServiceClient.java:936)
at 
org.apache.hadoop.yarn.service.client.ServiceClient.getStatusString(ServiceClient.java:910)
at 
org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:310)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at 
org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:111)
Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException):
 Application with id 'application_1515637197124_0001' doesn't exist in RM. 
Please check that the job submission was successful.
at 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:378)
at 
org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:234)
at 
org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2

[jira] [Commented] (YARN-7590) Improve container-executor validation check

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321590#comment-16321590
 ] 

genericqa commented on YARN-7590:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m  
0s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7590 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905587/YARN-7590.009.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 786ebadcdefc 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 12d0645 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19194/testReport/ |
| Max. process+thread count | 302 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19194/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7590.001.patch, YARN-7590.002.patch, 
> YARN-7590.003.patch, YARN-7590.004.patch, YARN-7590.005.patch, 
> YARN-7590.006.patch, YARN-7590.007.patch, YARN-7590.008.patch, 
> YARN-7590.009.patch
>
>
> There is minimum check for prefix path for container-executor.  If YARN is 
> compro

[jira] [Commented] (YARN-7479) TestContainerManagerSecurity.testContainerManager[Simple] flaky in trunk

2018-01-10 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321542#comment-16321542
 ] 

Akira Ajisaka commented on YARN-7479:
-

Hi [~rkanter] and [~templedf], would you review this?

> TestContainerManagerSecurity.testContainerManager[Simple] flaky in trunk
> 
>
> Key: YARN-7479
> URL: https://issues.apache.org/jira/browse/YARN-7479
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Botong Huang
>Assignee: Akira Ajisaka
> Attachments: YARN-7479.001.patch, YARN-7479.002.patch
>
>
> Was waiting for container_1_0001_01_00 to get to state COMPLETE but was 
> in state RUNNING after the timeout
> java.lang.AssertionError: Was waiting for container_1_0001_01_00 to get 
> to state COMPLETE but was in state RUNNING after the timeout
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.waitForContainerToFinishOnNM(TestContainerManagerSecurity.java:431)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:360)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:171)
> Pasting some exception message during test run here: 
> org.apache.hadoop.security.AccessControlException: SIMPLE authentication is 
> not enabled.  Available:[TOKEN]
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateIOException(RPCUtil.java:80)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:119)
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  Given NMToken for application : appattempt_1_0001_01 seems to have been 
> generated illegally.
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1491)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1437)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  Given NMToken for application : appattempt_1_0001_01 is not valid for 
> current node manager.expected : localhost:46649 found : InvalidHost:1234
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1491)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1437)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)






[jira] [Commented] (YARN-7724) yarn application status should support application name

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321537#comment-16321537
 ] 

genericqa commented on YARN-7724:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 24m 
50s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
58s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7724 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905572/YARN-7724.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3fa34f92da3c 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 12d0645 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://b

[jira] [Commented] (YARN-7705) Create the container log directory with correct sticky bit in C code

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321524#comment-16321524
 ] 

genericqa commented on YARN-7705:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 47s{color} | 
{color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 17s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.TestLinuxContainerExecutorWithMocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7705 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905573/YARN-7705.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 09ab27a479c2 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 12d0645 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| cc | 
https://builds.apache.org/job/PreCommit-YARN-Build/19192/artifact/out/diff-compile-cc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreComm

[jira] [Updated] (YARN-7590) Improve container-executor validation check

2018-01-10 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7590:

Attachment: YARN-7590.009.patch

[~miklos.szeg...@cloudera.com] Thank you for the review.  Good catch on 
checking directory existence.  I have added mkdir accordingly to support 
standalone mode.

> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7590.001.patch, YARN-7590.002.patch, 
> YARN-7590.003.patch, YARN-7590.004.patch, YARN-7590.005.patch, 
> YARN-7590.006.patch, YARN-7590.007.patch, YARN-7590.008.patch, 
> YARN-7590.009.patch
>
>
> There is minimal checking of the prefix path for container-executor. If 
> YARN is compromised, an attacker can use container-executor to change system 
> file ownership:
> {code}
> /usr/local/hadoop/bin/container-executor spark yarn 0 etc /home/yarn/tokens 
> /home/spark / ls
> {code}
> This will change /etc to be owned by the spark user:
> {code}
> # ls -ld /etc
> drwxr-s---. 110 spark hadoop 8192 Nov 21 20:00 /etc
> {code}
> The spark user can then rewrite /etc files to gain more access. We can 
> improve this with additional checks in container-executor:
> # Make sure the prefix path is owned by the same user as the caller to 
> container-executor.
> # Make sure the log directory prefix is owned by the same user as the caller.






[jira] [Commented] (YARN-7064) Use cgroup to get container resource utilization

2018-01-10 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321513#comment-16321513
 ] 

Miklos Szegedi commented on YARN-7064:
--

Thank you for the review, [~haibochen].
bq. 1) Why are we skipping NM_MEMORY_RESOURCE_PREFIX configurations in 
TestYarnConfigurationFields now?
Because we actually add the documentation that triggered the test failure, 
which required these ignores.
bq. 3) In ProcfsBasedProcessTree.java, do we also want to print out the clock 
time as well besides the total jiffies? If so, it appears to me that 
CpuTimeTracker.updateElapsedJiffies() is a more appropriate place to log.
Doesn't log4j have the option to print out the time? CpuTimeTracker does not 
have any logging. I would defer this to another jira if someone actually 
needs it.
bq. 4) TestCompareResourceCalculators is more of a functional test. Can we add 
in the class javadoc what its purpose is and why it is ignored by default in 
the code?
It was already documented later in the code, but I added the same note to the 
class javadoc as well, as requested.
5)
I like the name CombinedResourceCalculator. It is indeed slower, but I am not 
convinced that it is actually less accurate than ProcfsBasedProcessTree.
6)
In CGroupsResourceCalculator, processFile() cannot be static, since it refers 
to the pid of the actual object, which helps debugging.
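
(For reference, a standard log4j pattern layout already prefixes each message 
with the wall-clock time, e.g.:)
{code}
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
{code}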


> Use cgroup to get container resource utilization
> 
>
> Key: YARN-7064
> URL: https://issues.apache.org/jira/browse/YARN-7064
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7064.000.patch, YARN-7064.001.patch, 
> YARN-7064.002.patch, YARN-7064.003.patch, YARN-7064.004.patch, 
> YARN-7064.005.patch, YARN-7064.007.patch, YARN-7064.008.patch, 
> YARN-7064.009.patch, YARN-7064.010.patch
>
>
> This is an addendum to YARN-6668. What happens is that that jira always wants 
> to rebase patches against YARN-1011 instead of trunk.






[jira] [Updated] (YARN-7064) Use cgroup to get container resource utilization

2018-01-10 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7064:
-
Attachment: YARN-7064.010.patch

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-7064
> URL: https://issues.apache.org/jira/browse/YARN-7064
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7064.000.patch, YARN-7064.001.patch, 
> YARN-7064.002.patch, YARN-7064.003.patch, YARN-7064.004.patch, 
> YARN-7064.005.patch, YARN-7064.007.patch, YARN-7064.008.patch, 
> YARN-7064.009.patch, YARN-7064.010.patch
>
>
> This is an addendum to YARN-6668. What happens is that that jira always wants 
> to rebase patches against YARN-1011 instead of trunk.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7731) RegistryDNS should handle upstream DNS returning CNAME

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321500#comment-16321500
 ] 

genericqa commented on YARN-7731:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 45s{color} 
| {color:red} hadoop-yarn-registry in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.registry.server.dns.TestSecureRegistryDNS |
|   | hadoop.registry.server.dns.TestRegistryDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7731 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905575/YARN-7731.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8d3c6e96df33 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 12d0645 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/19191/artifact/out/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19191/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19191/testReport/ |
| Max. process+thread cou

[jira] [Created] (YARN-7733) Clean up YARN services znode after application is destroyed

2018-01-10 Thread Eric Yang (JIRA)
Eric Yang created YARN-7733:
---

 Summary: Clean up YARN services znode after application is 
destroyed
 Key: YARN-7733
 URL: https://issues.apache.org/jira/browse/YARN-7733
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Eric Yang
Assignee: Eric Yang


Yarn services register znodes in ZooKeeper so that Registry DNS can find the 
location of the Docker containers. When an application is removed, its znode 
does not get destroyed and is left behind in ZooKeeper.
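For illustration, a minimal sketch of the kind of cleanup being proposed, 
assuming a Curator client and a hypothetical znode path; the real path layout 
and hook point are up to the patch:

{code}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class RegistryZnodeCleanup {
  public static void main(String[] args) throws Exception {
    CuratorFramework zk = CuratorFrameworkFactory.newClient(
        "zk1:2181", new ExponentialBackoffRetry(1000, 3));
    zk.start();
    // Hypothetical registry path for a destroyed service.
    String path = "/registry/users/hbase/services/yarn-service/sleeper";
    if (zk.checkExists().forPath(path) != null) {
      // Remove the stale service entry and any child component znodes.
      zk.delete().deletingChildrenIfNeeded().forPath(path);
    }
    zk.close();
  }
}
{code}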



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7468) Provide means for container network policy control

2018-01-10 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321489#comment-16321489
 ] 

Wangda Tan commented on YARN-7468:
--

[~xgong] thanks for updating the patch. There are still "parser"s left in the 
patch, could you update? You can find them at 
https://issues.apache.org/jira/secure/attachment/12905564/YARN-7468.trunk.4.patch#file-4

Also, the javadoc warnings are related.

> Provide means for container network policy control
> --
>
> Key: YARN-7468
> URL: https://issues.apache.org/jira/browse/YARN-7468
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: nodemanager
>Reporter: Clay B.
>Assignee: Xuan Gong
> Attachments: YARN-7468.trunk.1.patch, YARN-7468.trunk.1.patch, 
> YARN-7468.trunk.2.patch, YARN-7468.trunk.2.patch, YARN-7468.trunk.3.patch, 
> YARN-7468.trunk.4.patch, [YARN-7468] [Design] Provide means for container 
> network policy control.pdf
>
>
> To prevent data exfiltration from a YARN cluster, it would be very helpful to 
> have "firewall" rules that can map to a user/queue's containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7724) yarn application status should support application name

2018-01-10 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321470#comment-16321470
 ] 

Gour Saha commented on YARN-7724:
-

Patch 02 looks good to me. +1.

> yarn application status should support application name
> ---
>
> Key: YARN-7724
> URL: https://issues.apache.org/jira/browse/YARN-7724
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
> Attachments: YARN-7724.01.patch, YARN-7724.02.patch
>
>
> Yarn Service applications are tied to an app name. Thus, yarn application 
> -status should be able to take a yarn service name as an argument, such as
> yarn application -status 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7468) Provide means for container network policy control

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321462#comment-16321462
 ] 

genericqa commented on YARN-7468:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
14s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
3s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 17s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 221 unchanged - 0 fixed = 224 total (was 221) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
26s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 3 new + 9 unchanged - 0 fixed = 12 total (was 9) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
33s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7468 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905564/YARN-7468.trunk.4.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 23d11d4cc214 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provide

[jira] [Commented] (YARN-7696) Add container tags to ContainerTokenIdentifier, api.Container and NMContainerStatus to handle all recovery cases

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321457#comment-16321457
 ] 

genericqa commented on YARN-7696:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 18 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
16s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
34s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
28s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
13s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
16s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-6592 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
42s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 21s{color} | {color:orange} root: The patch generated 9 new + 956 unchanged 
- 8 fixed = 965 total (was 964) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
18s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
13s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 21s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 19s{color} 

[jira] [Updated] (YARN-7732) Support Pluggable AM Simulator

2018-01-10 Thread Young Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Young Chen updated YARN-7732:
-
Issue Type: Sub-task  (was: Improvement)
Parent: YARN-5065

> Support Pluggable AM Simulator
> --
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
>
> Extract the MapReduce-specific set-up in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7732) Support Pluggable AM Simulator

2018-01-10 Thread Young Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Young Chen updated YARN-7732:
-
Summary: Support Pluggable AM Simulator  (was: Support Pluggable AM 
Simulator types)

> Support Pluggable AM Simulator
> --
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
>
> Extract the MapReduce-specific set-up in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7732) Support Pluggable AM Simulator types

2018-01-10 Thread Young Chen (JIRA)
Young Chen created YARN-7732:


 Summary: Support Pluggable AM Simulator types
 Key: YARN-7732
 URL: https://issues.apache.org/jira/browse/YARN-7732
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler-load-simulator
Reporter: Young Chen
Assignee: Young Chen
Priority: Minor


Extract the MapReduce-specific set-up in the SLSRunner into the MRAMSimulator, 
and enable support for pluggable AMSimulators.
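As a rough sketch of what pluggable could look like (the config key here is 
hypothetical; AMSimulator and MRAMSimulator are the existing SLS classes), the 
simulator type could be resolved via reflection in the usual Hadoop way:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;
import org.apache.hadoop.yarn.sls.appmaster.AMSimulator;
import org.apache.hadoop.yarn.sls.appmaster.MRAMSimulator;

public class AMSimulatorFactory {
  // "yarn.sls.am.simulator.class" is a made-up key for this sketch.
  public static AMSimulator create(Configuration conf) {
    Class<? extends AMSimulator> clazz = conf.getClass(
        "yarn.sls.am.simulator.class", MRAMSimulator.class, AMSimulator.class);
    return ReflectionUtils.newInstance(clazz, conf);
  }
}
{code}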



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-3895) Support ACLs in ATSv2

2018-01-10 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321429#comment-16321429
 ] 

Vrushali C edited comment on YARN-3895 at 1/11/18 12:22 AM:


Hello [~rohithsharma] [~varun_saxena] [~haibo.chen]

I was thinking a little bit about ACLs and read-side authorization, and I have 
some thoughts I wanted to share. Not everything is fully hashed out yet, but I 
think this might work. 

When the data is written, at that time, we can use hbase cell tags to store the 
allowed users as well as groups. Just like we are storing things right now for 
flow run, we will do the same for entities and applications & subapps. 

While querying, we can pass in the querying user/group info via “Attributes” in 
the Get/Scan. This can be accessed in the coprocessor via “getAttributes” of 
the Get/Scan. Then the coprocessor checks whether the current querying user 
equals the allowed user, or whether the current group is part of the allowed 
groups list in the cell tags.

We can default to read allowed for all if no tags are present. Also, we could 
indicate that the user who is querying is a yarn_admin user, so allow all 
reads.  

This should work for all our regular tables like entity, application as well as 
sub-application. 

For the sub app table, we store the AM user as well as the do-as user (and 
their groups) in the cell tags. So at query time, we can see whether the 
querying user is either the AM user or the doAs user. That way we protect the 
data from other users even if they run with the same AM user. 

For the flow run table, we can perhaps do a union or something across all 
entries. I am still thinking over it. 

Here is an old thread in the hbase-users mailing list in which James Taylor 
from Phoenix has also mentioned that Phoenix is (or at least was) doing the 
same thing
http://grokbase.com/t/hbase/user/132pkd5fvb/attributes-basic-question

We can later check with the HBase folks if this much extra data in the cell 
tags could be a concern but my gut feeling is that it’s not. Cell tags are used 
by hbase security as well as Phoenix for passing around information and making 
decisions at server side.




was (Author: vrushalic):
Hello [~rohithsharma] [~varun_saxena] [~haibo.chen]

I was thinking a little bit about ACLs and read-side authorization, and I have 
some thoughts I wanted to share. Not everything is fully hashed out yet, but I 
think this might work. 

When the data is written, at that time, we can use hbase cell tags to store the 
allowed users as well as groups. Just like we are storing things right now for 
flow run, we will do the same for entities and applications & subapps. 

While querying, we can pass in the querying user/group info via “Attributes” in 
the Get/Scan. This can be accessed in the coprocessor via “getAttributes” of 
the Get/Scan. Then the coprocessor checks whether the current querying user 
equals the allowed user, or whether the current group is part of the allowed 
groups list in the cell tags.

We can default to read allowed for all if no tags are present. Also, we could 
indicate that the user who is querying is a yarn_admin user, so allow all 
reads.  

This should work for all our regular tables like entity, application as well as 
sub-application. 

For the sub app table, we store the AM user as well as the do-as user (and 
their groups) in the cell tags. So at query time, we can see whether the 
querying user is either the AM user or the doAs user. That way we protect the 
data from other users even if they run with the same AM user. 

For the flow run table, we can perhaps do a union or something across all 
entries. I am still thinking over it. 

Here is an old thread in the hbase-users mailing list in which James Taylor 
from Phoenix has also mentioned that Phoenix is (or at least was) doing the 
same thing 
https://mail-archives.apache.org/mod_mbox/hbase-user/201302.mbox/browser

We can later check with the HBase folks if this much extra data in the cell 
tags could be a concern but my gut feeling is that it’s not. Cell tags are used 
by hbase security as well as Phoenix for passing around information and making 
decisions at server side.



> Support ACLs in ATSv2
> -
>
> Key: YARN-3895
> URL: https://issues.apache.org/jira/browse/YARN-3895
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355
>
> This JIRA is to keep track of authorization support design discussions for 
> both readers and collectors. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org

[jira] [Commented] (YARN-7724) yarn application status should support application name

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321437#comment-16321437
 ] 

genericqa commented on YARN-7724:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 24m 
26s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
3s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7724 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905561/YARN-7724.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 30b384d0727a 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 12d0645 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://b

[jira] [Commented] (YARN-7696) Add container tags to ContainerTokenIdentifier, api.Container and NMContainerStatus to handle all recovery cases

2018-01-10 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321434#comment-16321434
 ] 

Arun Suresh commented on YARN-7696:
---

[~leftnoteasy], thanks for the review.
About the constructors: technically, no external application should even be 
allowed to create a ContainerTokenIdentifier - it would be a security breach. 
In the api.Container object that is returned to an external application, we 
send only an opaque Token object - only the RM and NM can decode it into a 
ContainerTokenIdentifier. The reason I moved all the constructors out is that 
they are used only by tests. Let's take this opportunity to clean up the code 
as well.

bq. I'm doubt if application needs to read any fields from token identifier as 
we already have information in Container
You are right that you can get it from the Container. But this is needed for 
the NM / RM recovery cases: the NMContainerStatus is populated using the 
ContainerTokenId (which is stored in the state store).


> Add container tags to ContainerTokenIdentifier, api.Container and 
> NMContainerStatus to handle all recovery cases
> 
>
> Key: YARN-7696
> URL: https://issues.apache.org/jira/browse/YARN-7696
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7696-YARN-6592.001.patch, 
> YARN-7696-YARN-6592.002.patch, YARN-7696-YARN-6592.003.patch
>
>
> The NM needs to persist the container tags so that, on RM recovery, they are 
> sent back to the RM via the NMContainerStatus. The RM would then recover the 
> AllocationTagsManager using this information.
> The api.Container also requires the allocationTags since, after AM recovery, 
> we need to provide the AM with the previously allocated containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3895) Support ACLs in ATSv2

2018-01-10 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321429#comment-16321429
 ] 

Vrushali C commented on YARN-3895:
--

Hello [~rohithsharma] [~varun_saxena] [~haibo.chen]

I was thinking a little bit about ACLs and read-side authorization, and I have 
some thoughts I wanted to share. Not everything is fully hashed out yet, but I 
think this might work. 

When the data is written, at that time, we can use hbase cell tags to store the 
allowed users as well as groups. Just like we are storing things right now for 
flow run, we will do the same for entities and applications & subapps. 

While querying, we can pass in the querying user/group info via “Attributes” in 
the Get/Scan. This can be accessed in the coprocessor via “getAttributes” of 
the Get/Scan. Then the coprocessor checks whether the current querying user 
equals the allowed user, or whether the current group is part of the allowed 
groups list in the cell tags.

We can default to read allowed for all if no tags are present. Also, we could 
indicate that the user who is querying is a yarn_admin user, so allow all 
reads.  

This should work for all our regular tables like entity, application as well as 
sub-application. 

For the sub app table, we store the AM user as well as the do-as user (and 
their groups) in the cell tags. So at query time, we can see whether the 
querying user is either the AM user or the doAs user. That way we protect the 
data from other users even if they run with the same AM user. 

For the flow run table, we can perhaps do a union or something across all 
entries. I am still thinking over it. 

Here is an old thread in the hbase-users mailing list in which James Taylor 
from Phoenix has also mentioned that Phoenix is (or at least was) doing the 
same thing 
https://mail-archives.apache.org/mod_mbox/hbase-user/201302.mbox/browser

We can later check with the HBase folks if this much extra data in the cell 
tags could be a concern but my gut feeling is that it’s not. Cell tags are used 
by hbase security as well as Phoenix for passing around information and making 
decisions at server side.
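A minimal sketch of the attribute-passing half of this idea, assuming the 
HBase 1.x client API; the attribute key is made up and the cell-tag lookup is 
elided:

{code}
import java.util.Set;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public final class ReaderAuthSketch {
  static final String CALLER_ATTR = "timeline.caller"; // hypothetical key

  // Reader side: tag the Scan with the querying user before sending it.
  static Scan taggedScan(String queryingUser) {
    Scan scan = new Scan();
    scan.setAttribute(CALLER_ATTR, Bytes.toBytes(queryingUser));
    return scan;
  }

  // Coprocessor side: recover the attribute and compare it against the
  // allowed users that were stored in the cell tags at write time.
  static boolean isAllowed(Scan scan, Set<String> allowedUsers) {
    byte[] caller = scan.getAttribute(CALLER_ATTR);
    return caller != null && allowedUsers.contains(Bytes.toString(caller));
  }
}
{code}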



> Support ACLs in ATSv2
> -
>
> Key: YARN-3895
> URL: https://issues.apache.org/jira/browse/YARN-3895
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355
>
> This JIRA is to keep track of authorization support design discussions for 
> both readers and collectors. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7731) RegistryDNS should handle upstream DNS returning CNAME

2018-01-10 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7731:

Attachment: YARN-7731.001.patch

Added a recursive lookup for CNAME records.

> RegistryDNS should handle upstream DNS returning CNAME
> --
>
> Key: YARN-7731
> URL: https://issues.apache.org/jira/browse/YARN-7731
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Billie Rinaldi
>Assignee: Eric Yang
> Attachments: YARN-7731.001.patch
>
>
> When RegistryDNS performs a lookup in an upstream DNS server and a CNAME 
> record is retrieved, it returns a response with only the CNAME record (there 
> is no A record, meaning no IP address is resolved). RegistryDNS should 
> perform a lookup on the new name from the CNAME record in an attempt to find 
> an A record, which would provide an IP address.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7705) Create the container log directory with correct sticky bit in C code

2018-01-10 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7705:
---
Attachment: YARN-7705.003.patch

[~miklos.szeg...@cloudera.com], thanks for the review. Uploaded patch v3 to 
address your comments.

> Create the container log directory with correct sticky bit in C code
> 
>
> Key: YARN-7705
> URL: https://issues.apache.org/jira/browse/YARN-7705
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7705.001.patch, YARN-7705.002.patch, 
> YARN-7705.003.patch
>
>
> YARN-7363 created the container log directory in Java, which isn't able to 
> set the correct sticky bit because of a Java language limitation. A wrong 
> sticky bit on the log directory causes failures reading log files inside the 
> directory. To solve that, we need to do it in C code. 
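The Java limitation mentioned above is that java.nio's POSIX support only 
models the nine rwx bits; a small sketch of the dead end (paths are arbitrary):

{code}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class StickyBitLimitation {
  public static void main(String[] args) throws Exception {
    Path dir = Files.createTempDirectory("container-log-dir");
    // Only owner/group/other rwx can be expressed; the PosixFilePermission
    // enum has no constant for setuid, setgid, or the sticky bit.
    Set<PosixFilePermission> perms =
        PosixFilePermissions.fromString("rwxr-x---");
    Files.setPosixFilePermissions(dir, perms);
    // Getting e.g. mode 02750 on the directory therefore needs native code
    // calling chmod(2) directly, which is what moving this into C enables.
  }
}
{code}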



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7724) yarn application status should support application name

2018-01-10 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He reassigned YARN-7724:
-

Assignee: Jian He

> yarn application status should support application name
> ---
>
> Key: YARN-7724
> URL: https://issues.apache.org/jira/browse/YARN-7724
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
> Attachments: YARN-7724.01.patch, YARN-7724.02.patch
>
>
> Yarn Service applications are tied to an app name. Thus, yarn application 
> -status should be able to take a yarn service name as an argument, such as
> yarn application -status 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7724) yarn application status should support application name

2018-01-10 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7724:
--
Attachment: YARN-7724.02.patch

Yeah, makes sense, updated accordingly 

> yarn application status should support application name
> ---
>
> Key: YARN-7724
> URL: https://issues.apache.org/jira/browse/YARN-7724
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
> Attachments: YARN-7724.01.patch, YARN-7724.02.patch
>
>
> Yarn Service applications are tied to an app name. Thus, yarn application 
> -status should be able to take a yarn service name as an argument, such as
> yarn application -status 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7724) yarn application status should support application name

2018-01-10 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321360#comment-16321360
 ] 

Gour Saha commented on YARN-7724:
-

Suggestion:
I think we should leave the output of "yarn app -status <app id>" as the yarn 
generic status only. This will keep backward compatibility and avoid breaking 
existing client-side tools that parse the output. If the <app id> is that of a 
service, clients can call "yarn app -status <app name>" again to get the 
service json status.

Note, for a service, if the app name is not known but the app id is known, 
clients can call the first cli and get the <app name>, which is printed in the 
yarn generic status, and subsequently call the second cli. Also, for service 
status, we should print the json only. There is no need to print the string 
header "Detailed Application Status :" and make it invalid json.
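Under that suggestion, a session could look like this (the app id and service 
name are made up):

{code}
# Generic YARN status only; safe for existing parsers.
yarn app -status application_1515542439175_0001

# Service json status, keyed by the service name printed above.
yarn app -status my-sleeper-service
{code}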

> yarn application status should support application name
> ---
>
> Key: YARN-7724
> URL: https://issues.apache.org/jira/browse/YARN-7724
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Yesha Vora
> Attachments: YARN-7724.01.patch
>
>
> Yarn Service applications are tied to an app name. Thus, yarn application 
> -status should be able to take a yarn service name as an argument, such as
> yarn application -status 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7717) Add configuration consistency for module.enabled and docker.privileged-containers.enabled

2018-01-10 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321331#comment-16321331
 ] 

Eric Badger commented on YARN-7717:
---

Ran the failing tests locally and they all passed for me. Plus, they're in the 
RM code, which this patch doesn't touch. So I'd say they're unrelated.

> Add configuration consistency for module.enabled and 
> docker.privileged-containers.enabled
> -
>
> Key: YARN-7717
> URL: https://issues.apache.org/jira/browse/YARN-7717
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yesha Vora
>Assignee: Eric Badger
> Attachments: YARN-7717.001.patch, YARN-7717.002.patch
>
>
> container-executor.cfg has two properties related to dockerization:
> 1) module.enabled = true/false
> 2) docker.privileged-containers.enabled = 1/0
> Here, the two properties take different kinds of values to enable / disable 
> the feature: module.enabled takes a true/false string, while 
> docker.privileged-containers.enabled takes a 1/0 integer value.
> The behavior of these properties should be consistent. Both should take a 
> true or false string as the value to enable or disable the feature.
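For reference, the inconsistency looks roughly like this in 
container-executor.cfg (illustrative values):

{code}
[docker]
  module.enabled=true
  docker.privileged-containers.enabled=1
{code}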



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7705) Create the container log directory with correct sticky bit in C code

2018-01-10 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321317#comment-16321317
 ] 

Miklos Szegedi commented on YARN-7705:
--

Thank you for the patch [~yufeigu]!
{code}
1068 if (container_log_dir == NULL) {
{code}
I would log here that the concatenation failed and what the parameters were. 
It helps with supporting the feature.
{code}
1064  char *any_one_container_log_dir = NULL;
{code}
Just the fact that a log dir was found is what this variable represents. I 
would replace it with a boolean (int). This will help eliminate the extra 
logic on free().
{code}
1072 if (create_directory_for_user(container_log_dir) != 0) {
1073  free(container_log_dir);
1074  return -1;
{code}
Please add some extra logging here as well.
{code}
370 cmd_input.container_id = argv[optind++];
{code}
This requires a bump in the argument number check above to avoid buffer 
overflows.
{code}
822   if (access(container_dir, R_OK) != 0) {
{code}
I would add a check that it is non-NULL.

> Create the container log directory with correct sticky bit in C code
> 
>
> Key: YARN-7705
> URL: https://issues.apache.org/jira/browse/YARN-7705
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7705.001.patch, YARN-7705.002.patch
>
>
> YARN-7363 created the container log directory in Java, which isn't able to 
> set the correct sticky bit because of a Java language limitation. A wrong 
> sticky bit on the log directory causes failures reading log files inside the 
> directory. To solve that, we need to do it in C code. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7468) Provide means for container network policy control

2018-01-10 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-7468:

Attachment: YARN-7468.trunk.4.patch

> Provide means for container network policy control
> --
>
> Key: YARN-7468
> URL: https://issues.apache.org/jira/browse/YARN-7468
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: nodemanager
>Reporter: Clay B.
>Assignee: Xuan Gong
> Attachments: YARN-7468.trunk.1.patch, YARN-7468.trunk.1.patch, 
> YARN-7468.trunk.2.patch, YARN-7468.trunk.2.patch, YARN-7468.trunk.3.patch, 
> YARN-7468.trunk.4.patch, [YARN-7468] [Design] Provide means for container 
> network policy control.pdf
>
>
> To prevent data exfiltration from a YARN cluster, it would be very helpful to 
> have "firewall" rules that can map to a user/queue's containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7622) Allow fair-scheduler configuration on HDFS

2018-01-10 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321308#comment-16321308
 ] 

Robert Kanter commented on YARN-7622:
-

+1 on the branch-2 patch

> Allow fair-scheduler configuration on HDFS
> --
>
> Key: YARN-7622
> URL: https://issues.apache.org/jira/browse/YARN-7622
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, resourcemanager
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
> Attachments: YARN-7622-branch-2.006.patch, YARN-7622.001.patch, 
> YARN-7622.002.patch, YARN-7622.003.patch, YARN-7622.004.patch, 
> YARN-7622.005.patch, YARN-7622.006.patch
>
>
> The FairScheduler requires the allocation file to be hosted on the local 
> filesystem on the RM node(s). Allowing HDFS to store the allocation file will 
> provide improved redundancy, more options for scheduler updates, and RM 
> failover consistency in HA.
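With the patch, the existing allocation-file property could then point at 
HDFS, e.g. (URI is illustrative):

{code}
<property>
  <name>yarn.scheduler.fair.allocation.file</name>
  <value>hdfs://mycluster/yarn/fair-scheduler.xml</value>
</property>
{code}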



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2185) Use pipes when localizing archives

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321307#comment-16321307
 ] 

genericqa commented on YARN-2185:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
21s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
15s{color} | {color:green} root: The patch generated 0 new + 367 unchanged - 7 
fixed = 367 total (was 374) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
49s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
14s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-2185 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905536/YARN-2185.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 02804c4b94f3 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20

[jira] [Commented] (YARN-7717) Add configuration consistency for module.enabled and docker.privileged-containers.enabled

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321304#comment-16321304
 ] 

genericqa commented on YARN-7717:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
57s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
38s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  5m 
27s{color} | {color:red} hadoop-yarn in trunk failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
38m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  8m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}111m 38s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 
58s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}218m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.TestRMAdminService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7717 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905522/YARN-7717.002.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 06f2cd55f500 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 12d0645 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/19185/artifact/out/branch-compile-hadoop-yarn-project_hadoop-yarn.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19185/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19185/testReport/ |
| Max. process+thread count | 888 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/ha

[jira] [Commented] (YARN-7724) yarn application status should support application name

2018-01-10 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321293#comment-16321293
 ] 

Jian He commented on YARN-7724:
---

yarn app -status <app id> will print both the yarn generic status and the app 
specific status.
yarn app -status <app name> will print the app specific status only.

> yarn application status should support application name
> ---
>
> Key: YARN-7724
> URL: https://issues.apache.org/jira/browse/YARN-7724
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Yesha Vora
> Attachments: YARN-7724.01.patch
>
>
> Yarn Service applications are tied to an app name. Thus, yarn application 
> -status should be able to take a yarn service name as an argument, such as
> yarn application -status 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7731) RegistryDNS should handle upstream DNS returning CNAME

2018-01-10 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-7731:
-
Target Version/s: 3.1.0

> RegistryDNS should handle upstream DNS returning CNAME
> --
>
> Key: YARN-7731
> URL: https://issues.apache.org/jira/browse/YARN-7731
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Billie Rinaldi
>Assignee: Eric Yang
>
> When RegistryDNS performs a lookup in an upstream DNS server and a CNAME 
> record is retrieved, it returns a response with only the CNAME record (there 
> is no A record, meaning no IP address is resolved). RegistryDNS should 
> perform a lookup on the new name from the CNAME record in an attempt to find 
> an A record, which would provide an IP address.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7731) RegistryDNS should handle upstream DNS returning CNAME

2018-01-10 Thread Billie Rinaldi (JIRA)
Billie Rinaldi created YARN-7731:


 Summary: RegistryDNS should handle upstream DNS returning CNAME
 Key: YARN-7731
 URL: https://issues.apache.org/jira/browse/YARN-7731
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Billie Rinaldi
Assignee: Eric Yang


When RegistryDNS performs a lookup in an upstream DNS server and a CNAME record 
is retrieved, it returns a response with only the CNAME record (there is no A 
record, meaning no IP address is resolved). RegistryDNS should perform a lookup 
on the new name from the CNAME record in an attempt to find an A record, which 
would provide an IP address.
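
A minimal sketch of the idea, assuming dnsjava (which RegistryDNS builds on);
the class name, method name, and hop limit are illustrative, not the actual
implementation:
{code}
import java.io.IOException;

import org.xbill.DNS.ARecord;
import org.xbill.DNS.CNAMERecord;
import org.xbill.DNS.DClass;
import org.xbill.DNS.Message;
import org.xbill.DNS.Name;
import org.xbill.DNS.Record;
import org.xbill.DNS.Resolver;
import org.xbill.DNS.Section;
import org.xbill.DNS.Type;

public class CnameChasingSketch {
  // Query the upstream resolver for an A record, following at most maxHops
  // CNAME indirections so that an alias chain still resolves to an address.
  static ARecord lookupA(Resolver upstream, Name name, int maxHops)
      throws IOException {
    Name current = name;
    for (int i = 0; i <= maxHops; i++) {
      Message response = upstream.send(
          Message.newQuery(Record.newRecord(current, Type.A, DClass.IN)));
      Name next = null;
      for (Record r : response.getSectionArray(Section.ANSWER)) {
        if (r instanceof ARecord) {
          return (ARecord) r;                    // found an IP address
        } else if (r instanceof CNAMERecord) {
          next = ((CNAMERecord) r).getTarget();  // alias: chase the target
        }
      }
      if (next == null) {
        return null;                             // neither A nor CNAME
      }
      current = next;
    }
    return null;                                 // CNAME chain too long
  }
}
{code}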



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7724) yarn application status should support application name

2018-01-10 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7724:
--
Attachment: YARN-7724.01.patch

> yarn application status should support application name
> ---
>
> Key: YARN-7724
> URL: https://issues.apache.org/jira/browse/YARN-7724
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Yesha Vora
> Attachments: YARN-7724.01.patch
>
>
> YARN Service applications are tied to an app name. Thus, yarn application 
> -status should be able to take a YARN service name as an argument, such as:
> yarn application -status <application name>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7064) Use cgroup to get container resource utilization

2018-01-10 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321276#comment-16321276
 ] 

Haibo Chen commented on YARN-7064:
--

Looks like YARN-7730 is meant to expose the cgroup memory-related 
configurations in yarn-default.xml.

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-7064
> URL: https://issues.apache.org/jira/browse/YARN-7064
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7064.000.patch, YARN-7064.001.patch, 
> YARN-7064.002.patch, YARN-7064.003.patch, YARN-7064.004.patch, 
> YARN-7064.005.patch, YARN-7064.007.patch, YARN-7064.008.patch, 
> YARN-7064.009.patch
>
>
> This is an addendum to YARN-6668. The problem is that that jira always wants 
> to rebase patches against YARN-1011 instead of trunk.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7590) Improve container-executor validation check

2018-01-10 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321274#comment-16321274
 ] 

Miklos Szegedi commented on YARN-7590:
--

[~eyang], I figured it out.
{code}
  char *local_path = "target";
{code}
This path is incomplete. We should use {{TEST_ROOT "target"}} to follow the 
standard (see the function above this line), and we should do an mkdirs() to 
make sure the directory exists so the test can be run from any directory. The 
missing directory is what caused the failure on my test machine.

> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7590.001.patch, YARN-7590.002.patch, 
> YARN-7590.003.patch, YARN-7590.004.patch, YARN-7590.005.patch, 
> YARN-7590.006.patch, YARN-7590.007.patch, YARN-7590.008.patch
>
>
> There is minimal validation of the prefix path in container-executor.  If 
> YARN is compromised, an attacker can use container-executor to change the 
> ownership of system files:
> {code}
> /usr/local/hadoop/bin/container-executor spark yarn 0 etc /home/yarn/tokens 
> /home/spark / ls
> {code}
> This will change /etc to be owned by spark user:
> {code}
> # ls -ld /etc
> drwxr-s---. 110 spark hadoop 8192 Nov 21 20:00 /etc
> {code}
> Spark user can rewrite /etc files to gain more access.  We can improve this 
> with additional checks in container-executor:
> # Make sure the prefix path is owned by the same user as the caller to 
> container-executor.
> # Make sure the log directory prefix is owned by the same user as the caller.
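
For illustration only, a rough Java sketch of the proposed ownership rule; the
real check would live in the C container-executor, and all names here are
assumptions:
{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.UserPrincipal;

public class PrefixOwnershipCheck {
  // Reject the operation unless the prefix path is owned by the calling user.
  static void checkOwnedByCaller(String prefix, String caller)
      throws IOException {
    Path path = Paths.get(prefix);
    UserPrincipal owner = Files.getOwner(path);
    if (!owner.getName().equals(caller)) {
      throw new SecurityException("prefix " + prefix + " is owned by "
          + owner.getName() + ", not by caller " + caller);
    }
  }
}
{code}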



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7696) Add container tags to ContainerTokenIdentifier, api.Container and NMContainerStatus to handle all recovery cases

2018-01-10 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321253#comment-16321253
 ] 

Wangda Tan commented on YARN-7696:
--

Thanks [~asuresh], just took a look at the patch.

1) ContainerTokenIdentifier: this is public/evolving. I understand that by the 
rules changing it is allowed; however, in practice downstream projects 
sometimes treat Evolving the same as Stable (we saw examples in Hive/Spark). 
To avoid pain for downstream projects, I suggest to:
- Keep the original constructor, and mark it deprecated.
- Mark the new getAllocationTags as Unstable. (I doubt an application needs to 
read any fields from the token identifier, as we already have the information 
in Container.) A minimal sketch of this treatment follows below.

2) Unused imports in:
- RMContainerTokenSecretManager
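
A minimal sketch of the suggested treatment from point 1, using a simplified
stand-in class rather than the real ContainerTokenIdentifier:
{code}
import java.util.Collections;
import java.util.Set;

import org.apache.hadoop.classification.InterfaceStability;

public class TokenIdentifierSketch {
  private final Set<String> allocationTags;

  /** Original constructor signature retained for compatibility. */
  @Deprecated
  public TokenIdentifierSketch() {
    this(Collections.<String>emptySet());
  }

  public TokenIdentifierSketch(Set<String> allocationTags) {
    this.allocationTags = allocationTags;
  }

  /** New accessor, marked Unstable so downstream does not freeze on it. */
  @InterfaceStability.Unstable
  public Set<String> getAllocationTags() {
    return allocationTags;
  }
}
{code}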

> Add container tags to ContainerTokenIdentifier, api.Container and 
> NMContainerStatus to handle all recovery cases
> 
>
> Key: YARN-7696
> URL: https://issues.apache.org/jira/browse/YARN-7696
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7696-YARN-6592.001.patch, 
> YARN-7696-YARN-6592.002.patch, YARN-7696-YARN-6592.003.patch
>
>
> The NM needs to persist the Container tags so that on RM recovery, they are 
> sent back to the RM via the NMContainerStatus. The RM would then recover the 
> AllocationTagsManager using this information.
> The api.Container also requires the allocationTags since after AM recovery, 
> we need to provide the AM with previously allocated containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7730) Add memory management configs to yarn-default

2018-01-10 Thread Yesha Vora (JIRA)
Yesha Vora created YARN-7730:


 Summary: Add memory management configs to yarn-default
 Key: YARN-7730
 URL: https://issues.apache.org/jira/browse/YARN-7730
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Yesha Vora
Priority: Minor


Add the configuration properties and descriptions below to yarn-default.xml:
{code}
"yarn.nodemanager.resource.memory.enabled"
// Default is false; set to true to enable cgroups-based memory monitoring.

"yarn.nodemanager.resource.memory.cgroups.soft-limit-percentage"
// Default is 90.0f: under memory congestion, the container can still
// keep/reserve 90% of its claimed value. It cannot be set above 100 or to a
// negative value.

"yarn.nodemanager.resource.memory.cgroups.swappiness"
// The percentage of container memory that may be swapped. Default is 0, which
// means container memory cannot be swapped out. If not set, the Linux cgroup
// default of 60 applies, meaning 60% of memory can potentially be swapped out
// when system memory is not enough.{code}
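
For reference, a short sketch of what setting these properties programmatically
might look like; the property names and defaults are taken from the list above:
{code}
import org.apache.hadoop.conf.Configuration;

public class CgroupsMemorySettings {
  static Configuration enableCgroupsMemory() {
    Configuration conf = new Configuration();
    // Turn on cgroups-based memory monitoring (default: false).
    conf.setBoolean("yarn.nodemanager.resource.memory.enabled", true);
    // Keep/reserve 90% of the claimed value under congestion (default: 90.0f).
    conf.setFloat(
        "yarn.nodemanager.resource.memory.cgroups.soft-limit-percentage",
        90.0f);
    // Disallow swapping out container memory (default: 0).
    conf.setInt("yarn.nodemanager.resource.memory.cgroups.swappiness", 0);
    return conf;
  }
}
{code}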



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7722) Rename variables in MockNM, MockRM for better clarity

2018-01-10 Thread lovekesh bansal (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321192#comment-16321192
 ] 

lovekesh bansal commented on YARN-7722:
---

Thanks [~sunilg] for committing. You are right, it is because of YARN-7237, 
which is merged in trunk but not in 3.0.

> Rename variables in MockNM, MockRM for better clarity
> -
>
> Key: YARN-7722
> URL: https://issues.apache.org/jira/browse/YARN-7722
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: lovekesh bansal
>Assignee: lovekesh bansal
>Priority: Trivial
> Fix For: 3.1.0
>
> Attachments: YARN-7722_trunk.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7696) Add container tags to ContainerTokenIdentifier, api.Container and NMContainerStatus to handle all recovery cases

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321178#comment-16321178
 ] 

genericqa commented on YARN-7696:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 17 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
51s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
47s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
59s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
51s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
6s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-6592 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
52s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  8s{color} | {color:orange} root: The patch generated 8 new + 957 unchanged 
- 7 fixed = 965 total (was 964) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 23s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
12s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
7s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 41s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 65m 
52s{

[jira] [Updated] (YARN-2185) Use pipes when localizing archives

2018-01-10 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-2185:
-
Attachment: YARN-2185.004.patch

Fixing findbugs and checkstyle

> Use pipes when localizing archives
> --
>
> Key: YARN-2185
> URL: https://issues.apache.org/jira/browse/YARN-2185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
> Attachments: YARN-2185.000.patch, YARN-2185.001.patch, 
> YARN-2185.002.patch, YARN-2185.003.patch, YARN-2185.004.patch
>
>
> Currently the nodemanager downloads an archive to a local file, unpacks it, 
> and then removes it.  It would be more efficient to stream the data as it's 
> being unpacked to avoid both the extra disk space requirements and the 
> additional disk activity from storing the archive.
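
A minimal sketch of the streaming idea, assuming a zip archive and the
java.util.zip API; the real localizer handles more formats, permissions, and
error cases:
{code}
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class StreamingUnpack {
  // Unpack a zip directly from an InputStream instead of spooling the whole
  // archive to disk first, avoiding the extra space and disk activity.
  static void unzipStream(InputStream in, Path destDir) throws IOException {
    Path root = destDir.normalize();
    try (ZipInputStream zin = new ZipInputStream(in)) {
      for (ZipEntry e; (e = zin.getNextEntry()) != null; ) {
        Path target = root.resolve(e.getName()).normalize();
        if (!target.startsWith(root)) {
          throw new IOException("Blocked path traversal: " + e.getName());
        }
        if (e.isDirectory()) {
          Files.createDirectories(target);
        } else {
          Files.createDirectories(target.getParent());
          Files.copy(zin, target, StandardCopyOption.REPLACE_EXISTING);
        }
      }
    }
  }
}
{code}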



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7468) Provide means for container network policy control

2018-01-10 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321108#comment-16321108
 ] 

Wangda Tan commented on YARN-7468:
--

Thanks [~xgong], more comments beyond javadocs/findbugs warnings and UT 
failures.

1) Inside ResourceHandlerModule: 
To me, the following changes are incompatible: 
{code}
String handler = conf.get(YarnConfiguration.NM_NETWORK_RESOURCE_HANDLER,
YarnConfiguration.DEFAULT_NM_NETWORK_RESOURCE_HANDLER);
if (handler.equals(TrafficControlBandwidthHandlerImpl.class.getName())) {
  return getOutboundBandwidthResourceHandler(conf);
} else if (handler.equals(
NetworkPacketTaggingHandlerImpl.class.getName())) {
  return getNetworkTaggingHandler(conf);
} else {
  throw new YarnRuntimeException(
  "Unsupported handler specified in the configuraiton:"
  + YarnConfiguration.NM_NETWORK_RESOURCE_HANDLER
  + ". The supported handler could be either "
  + NetworkPacketTaggingHandlerImpl.class.getName() + " or "
  + TrafficControlBandwidthHandlerImpl.class.getName() + ".");
}
{code}
Users would have to configure NM_NETWORK_RESOURCE_HANDLER in order to keep 
using TrafficControlBandwidthHandlerImpl. We should not touch the existing 
logic that initializes TrafficControlBandwidthHandlerImpl, and instead add a 
new config like NM_NETWORK_TAG_PREFIX + ".enabled" to control the tagging 
implementation. Since the two classes cannot be used at the same time, an 
additional check needs to be added to ResourceHandlerModule to prevent that, 
as sketched below.
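
A hypothetical sketch of that config-driven selection; the property names and
stub handlers are assumptions, not the final YARN-7468 API:
{code}
import org.apache.hadoop.conf.Configuration;

public class NetworkHandlerSelection {
  // Assumed property names, standing in for the suggested new config keys.
  static final String TAGGING_ENABLED =
      "yarn.nodemanager.network-tagging.enabled";
  static final String BANDWIDTH_ENABLED =
      "yarn.nodemanager.outbound-bandwidth.enabled";

  static Object selectHandler(Configuration conf) {
    boolean tagging = conf.getBoolean(TAGGING_ENABLED, false);
    boolean bandwidth = conf.getBoolean(BANDWIDTH_ENABLED, false);
    if (tagging && bandwidth) {
      // The two handlers are mutually exclusive; fail fast at init time.
      throw new IllegalStateException(
          "tagging and bandwidth handlers cannot be enabled together");
    }
    if (tagging) {
      return newTaggingHandler(conf);
    }
    return bandwidth ? newBandwidthHandler(conf) : null;
  }

  // Stubs standing in for NetworkPacketTaggingHandlerImpl and
  // TrafficControlBandwidthHandlerImpl initialization.
  static Object newTaggingHandler(Configuration conf) { return new Object(); }
  static Object newBandwidthHandler(Configuration conf) { return new Object(); }
}
{code}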

2) A couple of renames:
- NM_NETWORK_TAG_MAPPING_PARSER to NM_NETWORK_TAG_MAPPING_MANAGER/CONVERTER (or 
any better name you prefer). This could be more than a parser of a text file. 
We need to rename the related configs/factories, etc.
- Since cgroup cannot accept an arbitrary string as a network tag, suggest 
renaming getNetworkTagID to getNetworkTagHexID.

3) Other minor comments:
- createNetworkTagMappingParser could be private.
- getBytesSentPerContainer should be removed.
- There are a couple of javadocs inside NetworkPacketTaggingHandlerImpl that 
mention "bandwidth", which should be removed/updated.

> Provide means for container network policy control
> --
>
> Key: YARN-7468
> URL: https://issues.apache.org/jira/browse/YARN-7468
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: nodemanager
>Reporter: Clay B.
>Assignee: Xuan Gong
> Attachments: YARN-7468.trunk.1.patch, YARN-7468.trunk.1.patch, 
> YARN-7468.trunk.2.patch, YARN-7468.trunk.2.patch, YARN-7468.trunk.3.patch, 
> [YARN-7468] [Design] Provide means for container network policy control.pdf
>
>
> To prevent data exfiltration from a YARN cluster, it would be very helpful to 
> have "firewall" rules able to map to a user/queue's containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5366) Improve handling of the Docker container life cycle

2018-01-10 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321093#comment-16321093
 ] 

Eric Yang commented on YARN-5366:
-

Docker rm [container-id] already sends a TERM signal to the spawned process.  I 
don't think fetching the PID and manually sending signals improves reliability. 
I did not find a retry/sleep mechanism to repeat the signal as suggested in 5). 
Docker rm -f [container-id] can send a kill signal to the container as cleanup. 
I think the docker commands are more robust than computing PIDs and sending 
signals manually in asynchronous calls.
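
For comparison, a minimal sketch of the retry idea from item 5 of the issue
description, assuming a plain ProcessBuilder invocation; the real NM invokes
docker through container-executor instead:
{code}
import java.io.IOException;

public class DockerRemoveWithRetry {
  // Try "docker rm -f" a few times with a short backoff between attempts.
  static boolean removeContainer(String containerId, int attempts)
      throws IOException, InterruptedException {
    for (int i = 0; i < attempts; i++) {
      Process p = new ProcessBuilder("docker", "rm", "-f", containerId)
          .inheritIO().start();
      if (p.waitFor() == 0) {
        return true;          // removal succeeded
      }
      Thread.sleep(1000L);    // back off before retrying
    }
    return false;             // still failing after all attempts
  }
}
{code}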

> Improve handling of the Docker container life cycle
> ---
>
> Key: YARN-5366
> URL: https://issues.apache.org/jira/browse/YARN-5366
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>  Labels: oct16-medium
> Attachments: YARN-5366.001.patch, YARN-5366.002.patch, 
> YARN-5366.003.patch, YARN-5366.004.patch, YARN-5366.005.patch, 
> YARN-5366.006.patch, YARN-5366.007.patch, YARN-5366.008.patch, 
> YARN-5366.009.patch, YARN-5366.010.patch
>
>
> There are several paths that need to be improved with regard to the Docker 
> container lifecycle when running Docker containers on YARN.
> 1) Provide the ability to keep a container on the NodeManager for a set 
> period of time for debugging purposes.
> 2) Support sending signals to the process in the container to allow for 
> triggering stack traces, heap dumps, etc.
> 3) Support for Docker's live restore, which means moving away from the use of 
> {{docker wait}}. (YARN-5818)
> 4) Improve the resiliency of liveliness checks (kill -0) by adding retries.
> 5) Improve the resiliency of container removal by adding retries.
> 6) Only attempt to stop, kill, and remove containers if the current container 
> state allows for it.
> 7) Better handling of short lived containers when the container is stopped 
> before the PID can be retrieved. (YARN-6305)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7696) Add container tags to ContainerTokenIdentifier, api.Container and NMContainerStatus to handle all recovery cases

2018-01-10 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7696:
--
Attachment: YARN-7696-YARN-6592.003.patch

Moved ContainerTokenIdBuilder to test folder

> Add container tags to ContainerTokenIdentifier, api.Container and 
> NMContainerStatus to handle all recovery cases
> 
>
> Key: YARN-7696
> URL: https://issues.apache.org/jira/browse/YARN-7696
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7696-YARN-6592.001.patch, 
> YARN-7696-YARN-6592.002.patch, YARN-7696-YARN-6592.003.patch
>
>
> The NM needs to persist the Container tags so that on RM recovery, they are 
> sent back to the RM via the NMContainerStatus. The RM would then recover the 
> AllocationTagsManager using this information.
> The api.Container also requires the allocationTags since after AM recovery, 
> we need to provide the AM with previously allocated containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7723) Avoid using docker volume --format option to compatible to older docker releases

2018-01-10 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321070#comment-16321070
 ] 

Wangda Tan commented on YARN-7723:
--

Thanks [~ebadger], could you help to check the patch?

> Avoid using docker volume --format option to compatible to older docker 
> releases
> 
>
> Key: YARN-7723
> URL: https://issues.apache.org/jira/browse/YARN-7723
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7723.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7723) Avoid using docker volume --format option to compatible to older docker releases

2018-01-10 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321057#comment-16321057
 ] 

Eric Badger commented on YARN-7723:
---

bq. What I'm trying to use is docker volume ls --format instead of docker 
inspect.
Ahh, sorry, that's my fault. I misread the command. I thought it was referring 
to docker volume inspect. Certainly if this isn't in 1.12.6, we should avoid it 
in hadoop, since I believe that 1.12.6 is the most recent version published by 
RedHat. 

> Avoid using docker volume --format option to compatible to older docker 
> releases
> 
>
> Key: YARN-7723
> URL: https://issues.apache.org/jira/browse/YARN-7723
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7723.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7451) Resources Types should be visible in the Cluster Apps API "resourceRequests" section

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321047#comment-16321047
 ] 

genericqa commented on YARN-7451:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 33 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client-modules/hadoop-client-minicluster 
hadoop-client-modules/hadoop-client-check-test-invariants {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 51s{color} | {color:orange} root: The patch generated 154 new + 166 
unchanged - 12 fixed = 320 total (was 178) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 49s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client-modules/hadoop-client-minicluster 
hadoop-client-modules/hadoop-client-check-test-invariants {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
13s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
27s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 65m 
19s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-client-min

[jira] [Commented] (YARN-7622) Allow fair-scheduler configuration on HDFS

2018-01-10 Thread Greg Phillips (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321002#comment-16321002
 ] 

Greg Phillips commented on YARN-7622:
-

[~rkanter] The branch-2 patch remedies the import and lambda issues; let me 
know if there are any additional changes required. 

> Allow fair-scheduler configuration on HDFS
> --
>
> Key: YARN-7622
> URL: https://issues.apache.org/jira/browse/YARN-7622
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, resourcemanager
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
> Attachments: YARN-7622-branch-2.006.patch, YARN-7622.001.patch, 
> YARN-7622.002.patch, YARN-7622.003.patch, YARN-7622.004.patch, 
> YARN-7622.005.patch, YARN-7622.006.patch
>
>
> The FairScheduler requires the allocation file to be hosted on the local 
> filesystem on the RM node(s). Allowing HDFS to store the allocation file will 
> provide improved redundancy, more options for scheduler updates, and RM 
> failover consistency in HA.
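
A minimal sketch of the loading idea, assuming the allocation file URI may now
point at HDFS; this is not the actual FairScheduler reload code:
{code}
import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AllocationFileLoaderSketch {
  // Open the allocation file through the FileSystem API so that local paths
  // and HDFS URIs (e.g. hdfs://nn/fair-scheduler.xml) are handled uniformly.
  static InputStream openAllocationFile(Configuration conf, String uri)
      throws IOException {
    Path path = new Path(uri);
    FileSystem fs = path.getFileSystem(conf);  // picks the right implementation
    return fs.open(path);
  }
}
{code}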



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2018-01-10 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320979#comment-16320979
 ] 

Arun Suresh commented on YARN-6599:
---

Another comment:
In YARN-7669, we added support for a {{RejectedSchedulingRequest}}. Maybe you 
can use that to notify the AM when you get a 
{{SchedulerInvalidResoureRequestException}}.

> Support rich placement constraints in scheduler
> ---
>
> Key: YARN-6599
> URL: https://issues.apache.org/jira/browse/YARN-6599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6599-YARN-6592.003.patch, 
> YARN-6599-YARN-6592.004.patch, YARN-6599-YARN-6592.005.patch, 
> YARN-6599-YARN-6592.006.patch, YARN-6599-YARN-6592.007.patch, 
> YARN-6599-YARN-6592.008.patch, YARN-6599-YARN-6592.wip.002.patch, 
> YARN-6599.poc.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2018-01-10 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320977#comment-16320977
 ] 

Wangda Tan commented on YARN-6599:
--

[~asuresh], thanks, sure will make changes today.

> Support rich placement constraints in scheduler
> ---
>
> Key: YARN-6599
> URL: https://issues.apache.org/jira/browse/YARN-6599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6599-YARN-6592.003.patch, 
> YARN-6599-YARN-6592.004.patch, YARN-6599-YARN-6592.005.patch, 
> YARN-6599-YARN-6592.006.patch, YARN-6599-YARN-6592.007.patch, 
> YARN-6599-YARN-6592.008.patch, YARN-6599-YARN-6592.wip.002.patch, 
> YARN-6599.poc.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7723) Avoid using docker volume --format option to compatible to older docker releases

2018-01-10 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320975#comment-16320975
 ] 

Wangda Tan commented on YARN-7723:
--

[~ebadger], thanks for commenting. I'm not sure that is correct. What I'm 
trying to use is {{docker volume ls --format}}, not {{docker inspect}}. (We can 
check whether we should use {{docker inspect}} instead.)

The docker version where I hit the issue is docker 1.12; you can see 
{{--format}} is missing from {{docker volume ls}}:

{code}
[root@host ~]# docker version
Client:
 Version: 1.12.6
 API version: 1.24
 Package version: docker-1.12.6-68.gitec8512b.el7.centos.x86_64
 Go version:  go1.8.3
 Git commit:  ec8512b/1.12.6
 Built:   Mon Dec 11 16:08:42 2017
 OS/Arch: linux/amd64
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
[root@host ~]# docker volume ls --help

Usage:  docker volume ls [OPTIONS]

List volumes

Aliases:
  ls, list

Options:
  -f, --filter value   Provide filter values (i.e. 'dangling=true') (default [])
  --help   Print usage
  -q, --quiet  Only display volume names
{code}

Is there any way to check when {{--format}} was added to {{docker volume ls}}? 
I couldn't find the information in the docker API docs.
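
A hypothetical sketch of the compatibility approach: skip {{--format}} entirely
and parse the default {{docker volume ls}} table output; the column handling
here is an assumption:
{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class VolumeLsCompat {
  // List volume names without "--format", which is absent in docker 1.12.
  static List<String> listVolumeNames()
      throws IOException, InterruptedException {
    Process p = new ProcessBuilder("docker", "volume", "ls").start();
    List<String> names = new ArrayList<>();
    try (BufferedReader r = new BufferedReader(
        new InputStreamReader(p.getInputStream()))) {
      String line = r.readLine();            // skip "DRIVER  VOLUME NAME" header
      while ((line = r.readLine()) != null) {
        String[] cols = line.trim().split("\\s+");
        if (cols.length >= 2) {
          names.add(cols[cols.length - 1]);  // last column is the volume name
        }
      }
    }
    p.waitFor();
    return names;
  }
}
{code}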

> Avoid using docker volume --format option to compatible to older docker 
> releases
> 
>
> Key: YARN-7723
> URL: https://issues.apache.org/jira/browse/YARN-7723
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7723.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2018-01-10 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320959#comment-16320959
 ] 

Arun Suresh commented on YARN-6599:
---

{noformat}
Since the whole scheduling request is a new feature, we will include two 
configs:
a. Enable placement processor (YARN-7612 already includes configs)
b. Enable scheduling request handled by app placement allocator.

Both a/b are disabled by default and cannot be enabled at the same time.
{noformat}
Also, as stated above, will you be adding the new config param and the check 
in this patch?

> Support rich placement constraints in scheduler
> ---
>
> Key: YARN-6599
> URL: https://issues.apache.org/jira/browse/YARN-6599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6599-YARN-6592.003.patch, 
> YARN-6599-YARN-6592.004.patch, YARN-6599-YARN-6592.005.patch, 
> YARN-6599-YARN-6592.006.patch, YARN-6599-YARN-6592.007.patch, 
> YARN-6599-YARN-6592.008.patch, YARN-6599-YARN-6592.wip.002.patch, 
> YARN-6599.poc.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7729) Add support for setting the PID namespace mode

2018-01-10 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-7729:
--
Description: 
Docker has support for allowing containers to share the PID namespace with the 
host or other containers via the {{docker run --pid}} flag.

There are a number of use cases where this is desirable:
* Monitoring tools running in containers that need access to the host level 
PIDs.
* Debug containers that can attach to another container to run strace, gdb, etc.
* Testing Docker on YARN in a container, where the docker socket is bind 
mounted.

Enabling this feature should be considered privileged as it exposes host 
details inside the container.

  was:
Docker has support for allowing containers to share the PID namespace with the 
host or other containers via the {{docker run --pid}} flag.

There are a number of use cases where this is desirable:
* Monitoring tools running in containers that use process IDs.
* Debug containers that can attach to another container to run strace, gdb, etc.
* Testing Docker on YARN in a container, where the docker socket is bind 
mounted.

Enabling this feature should be considered privileged as it exposes host 
details inside the container.


> Add support for setting the PID namespace mode
> --
>
> Key: YARN-7729
> URL: https://issues.apache.org/jira/browse/YARN-7729
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Shane Kumpf
>
> Docker has support for allowing containers to share the PID namespace with 
> the host or other containers via the {{docker run --pid}} flag.
> There are a number of use cases where this is desirable:
> * Monitoring tools running in containers that need access to the host level 
> PIDs.
> * Debug containers that can attach to another container to run strace, gdb, 
> etc.
> * Testing Docker on YARN in a container, where the docker socket is bind 
> mounted.
> Enabling this feature should be considered privileged as it exposes host 
> details inside the container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7729) Add support for setting the PID namespace mode

2018-01-10 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-7729:
--
Description: 
Docker has support for allowing containers to share the PID namespace with the 
host or other containers via the {{docker run --pid}} flag.

There are a number of use cases where this is desirable:
* Monitoring tools running in containers that use process IDs.
* Debug containers that can attach to another container to run strace, gdb, etc.
* Testing Docker on YARN in a container, where the docker socket is bind 
mounted.

Enabling this feature should be considered privileged as it exposes host 
details inside the container.

  was:
Docker has support for allowing containers to share the PID namespace with the 
host or other containers via the {{--pid}} {{docker run}} flag.

There are a number of use cases where this is desirable:
* Monitoring tools running in containers that use process IDs.
* Debug containers that can attach to another container to run strace, gdb, etc.
* Testing Docker on YARN in a container, where the docker socket is bind 
mounted.

Enabling this feature should be considered privileged as it exposes host 
details inside the container.


> Add support for setting the PID namespace mode
> --
>
> Key: YARN-7729
> URL: https://issues.apache.org/jira/browse/YARN-7729
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Shane Kumpf
>
> Docker has support for allowing containers to share the PID namespace with 
> the host or other containers via the {{docker run --pid}} flag.
> There are a number of use cases where this is desirable:
> * Monitoring tools running in containers that use process IDs.
> * Debug containers that can attach to another container to run strace, gdb, 
> etc.
> * Testing Docker on YARN in a container, where the docker socket is bind 
> mounted.
> Enabling this feature should be considered privileged as it exposes host 
> details inside the container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7729) Add support for setting the PID namespace mode

2018-01-10 Thread Shane Kumpf (JIRA)
Shane Kumpf created YARN-7729:
-

 Summary: Add support for setting the PID namespace mode
 Key: YARN-7729
 URL: https://issues.apache.org/jira/browse/YARN-7729
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Reporter: Shane Kumpf


Docker has support for allowing containers to share the PID namespace with the 
host or other containers via the {{--pid}} {{docker run}} flag.

There are a number of use cases where this is desirable:
* Monitoring tools running in containers that use process IDs.
* Debug containers that can attach to another container to run strace, gdb, etc.
* Testing Docker on YARN in a container, where the docker socket is bind 
mounted.

Enabling this feature should be considered privileged as it exposes host 
details inside the container.
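
A minimal sketch of how a launch command might carry the PID namespace mode
through to {{docker run --pid}}; the method and its plumbing are assumptions,
since designing this support is exactly what the JIRA proposes:
{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DockerRunPidMode {
  // pidMode: null for the default isolated namespace, "host" to share the
  // host's namespace, or "container:<id>" to share another container's.
  static List<String> buildRunCommand(String image, String pidMode) {
    List<String> cmd = new ArrayList<>(Arrays.asList("docker", "run"));
    if (pidMode != null) {
      cmd.add("--pid=" + pidMode);
    }
    cmd.add(image);
    return cmd;
  }
}
{code}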



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7724) yarn application status should support application name

2018-01-10 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7724:
--
Issue Type: Sub-task  (was: Improvement)
Parent: YARN-7054

> yarn application status should support application name
> ---
>
> Key: YARN-7724
> URL: https://issues.apache.org/jira/browse/YARN-7724
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Yesha Vora
>
> YARN Service applications are tied to an app name. Thus, yarn application 
> -status should be able to take a YARN service name as an argument, such as:
> yarn application -status <application name>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6599) Support rich placement constraints in scheduler

2018-01-10 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320928#comment-16320928
 ] 

Arun Suresh edited comment on YARN-6599 at 1/10/18 7:29 PM:


[~leftnoteasy], sure - given our discussion on adding the 
{{APPLICATION_LABEL/}} targetKey, can you remove the applicationId tag prefix 
and related code from the latest patch ?

Let's restrict this JIRA to intra-app ({{APPLICATION_LABEL/SELF}} placement) for 
the time being.


was (Author: asuresh):
[~leftnoteasy], sure - given our discussion on adding the 
{{APPLICATION_LABEL/}} targetKey, can you remove the applicationId tag prefix 
and related code from the latest patch ?

> Support rich placement constraints in scheduler
> ---
>
> Key: YARN-6599
> URL: https://issues.apache.org/jira/browse/YARN-6599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6599-YARN-6592.003.patch, 
> YARN-6599-YARN-6592.004.patch, YARN-6599-YARN-6592.005.patch, 
> YARN-6599-YARN-6592.006.patch, YARN-6599-YARN-6592.007.patch, 
> YARN-6599-YARN-6592.008.patch, YARN-6599-YARN-6592.wip.002.patch, 
> YARN-6599.poc.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2018-01-10 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320928#comment-16320928
 ] 

Arun Suresh commented on YARN-6599:
---

[~leftnoteasy], sure - given our discussion on adding the 
{{APPLICATION_LABEL/}} targetKey, can you remove the applicationId tag prefix 
and related code from the latest patch ?

> Support rich placement constraints in scheduler
> ---
>
> Key: YARN-6599
> URL: https://issues.apache.org/jira/browse/YARN-6599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6599-YARN-6592.003.patch, 
> YARN-6599-YARN-6592.004.patch, YARN-6599-YARN-6592.005.patch, 
> YARN-6599-YARN-6592.006.patch, YARN-6599-YARN-6592.007.patch, 
> YARN-6599-YARN-6592.008.patch, YARN-6599-YARN-6592.wip.002.patch, 
> YARN-6599.poc.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7717) Add configuration consistency for module.enabled and docker.privileged-containers.enabled

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320922#comment-16320922
 ] 

genericqa commented on YARN-7717:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
20s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 58m 
46s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 62 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
2s{color} | {color:red} The patch has 384 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  1m 
14s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 34s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 34s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
44s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7717 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905503/YARN-7717.001.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 9b4505d9ddab 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1a09da7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/19182/artifact/out/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/19182/artifact/out/whitespace-tabs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19182/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19182/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19182/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/19182/artifact/out/patch-asflicense-pro

[jira] [Updated] (YARN-7723) Avoid using docker volume --format option to compatible to older docker releases

2018-01-10 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-7723:
--
Issue Type: Sub-task  (was: Bug)
Parent: YARN-3611

> Avoid using docker volume --format option to compatible to older docker 
> releases
> 
>
> Key: YARN-7723
> URL: https://issues.apache.org/jira/browse/YARN-7723
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7723.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7717) Add configuration consistency for module.enabled and docker.privileged-containers.enabled

2018-01-10 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-7717:
--
Attachment: YARN-7717.002.patch

Uploading new patch to address the above feedback.

> Add configuration consistency for module.enabled and 
> docker.privileged-containers.enabled
> -
>
> Key: YARN-7717
> URL: https://issues.apache.org/jira/browse/YARN-7717
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yesha Vora
>Assignee: Eric Badger
> Attachments: YARN-7717.001.patch, YARN-7717.002.patch
>
>
> container-executor.cfg has two properties related to dockerization:
> 1) module.enabled = true/false
> 2) docker.privileged-containers.enabled = 1/0
> Here, the two properties take different kinds of values to enable or disable 
> their features: module.enabled takes a true/false string, while 
> docker.privileged-containers.enabled takes a 1/0 integer value.
> The behavior of these properties should be consistent: both should take a 
> true or false string as the value to enable or disable the feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-7540) Convert yarn app cli to call yarn api services

2018-01-10 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reopened YARN-7540:
-

The code has been reverted.  Keeping this issue reopened and the patch 
available to track committing YARN-7540 and YARN-7605 together.

> Convert yarn app cli to call yarn api services
> --
>
> Key: YARN-7540
> URL: https://issues.apache.org/jira/browse/YARN-7540
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7540.001.patch, YARN-7540.002.patch, 
> YARN-7540.003.patch, YARN-7540.004.patch, YARN-7540.005.patch, 
> YARN-7540.006.patch
>
>
> Launching a YARN docker application through the CLI works differently from 
> launching it through the REST API.  All applications launched through the 
> REST API are currently stored in the yarn user's HDFS home directory, while 
> applications managed through the CLI are stored in the individual user's 
> HDFS home directory.  For consistency, we want the yarn app cli to interact 
> with the API service to manage applications.  For performance reasons, it is 
> easier to list all applications from one user's home directory than to crawl 
> all users' home directories.  For security reasons, it is safer to access 
> only one user's home directory instead of all users'.  Given the reasons 
> above, the proposal is to change how {{yarn app -launch}}, {{yarn app 
> -list}} and {{yarn app -destroy}} work.  Instead of calling the HDFS API and 
> RM API to launch containers, the CLI will be converted to call the API 
> service REST API that resides in the RM.  The RM performs the persistence 
> and the operations to launch the actual application.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7717) Add configuration consistency for module.enabled and docker.privileged-containers.enabled

2018-01-10 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320826#comment-16320826
 ] 

Eric Badger commented on YARN-7717:
---

[~shaneku...@gmail.com], [~eyang], thanks for the comments!

bq. I'm curious if using strcasecmp might give us more flexibility in how users 
define these
Yep, that seems like a good change to make. I'll fix that up.

bq. Also, the documentation only refers to 0/1, I think that should be updated 
as well.
Will update that as well.

> Add configuration consistency for module.enabled and 
> docker.privileged-containers.enabled
> -
>
> Key: YARN-7717
> URL: https://issues.apache.org/jira/browse/YARN-7717
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yesha Vora
>Assignee: Eric Badger
> Attachments: YARN-7717.001.patch
>
>
> container-executor.cfg has two properties related to dockerization:
> 1) module.enabled = true/false
> 2) docker.privileged-containers.enabled = 1/0
> The two properties take different kinds of values to enable or disable their 
> feature: module.enabled takes a true/false string, while 
> docker.privileged-containers.enabled takes a 1/0 integer value.
> The behavior of these properties should be consistent: both should take a true 
> or false string as the value to enable or disable the feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7696) Add container tags to ContainerTokenIdentifier, api.Container and NMContainerStatus to handle all recovery cases

2018-01-10 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320825#comment-16320825
 ] 

Wangda Tan commented on YARN-7696:
--

[~asuresh], sure, will take a look by today.

> Add container tags to ContainerTokenIdentifier, api.Container and 
> NMContainerStatus to handle all recovery cases
> 
>
> Key: YARN-7696
> URL: https://issues.apache.org/jira/browse/YARN-7696
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7696-YARN-6592.001.patch, 
> YARN-7696-YARN-6592.002.patch
>
>
> The NM needs to persist the container tags so that, on RM recovery, they are 
> sent back to the RM via the NMContainerStatus. The RM would then recover the 
> AllocationTagsManager using this information.
> The api.Container also requires the allocationTags since, after AM recovery, 
> we need to provide the AM with previously allocated containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7717) Add configuration consistency for module.enabled and docker.privileged-containers.enabled

2018-01-10 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320820#comment-16320820
 ] 

Eric Yang commented on YARN-7717:
-

Can we use

{code}
#include <strings.h>

int strcasecmp(const char *, const char *);
{code}

to capture a case-insensitive match?  Not sure if there is a restriction on 
using this.
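(As a minimal sketch of what such a check could look like; the helper name and 
the legacy 1/0 fallback are assumptions for illustration, not the actual 
container-executor code:)

{code}
#include <string.h>   /* strcmp */
#include <strings.h>  /* strcasecmp, POSIX */

/* Hypothetical helper: treats "true" in any case, or the legacy "1",
 * as enabled; anything else, including NULL, counts as disabled. */
static int is_enabled_value(const char *value) {
  if (value == NULL) {
    return 0;
  }
  return strcasecmp(value, "true") == 0 || strcmp(value, "1") == 0;
}
{code}

Note that strcasecmp is POSIX rather than ISO C, which is presumably the 
portability question raised above; that should not matter for a Linux-only 
binary like container-executor.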

> Add configuration consistency for module.enabled and 
> docker.privileged-containers.enabled
> -
>
> Key: YARN-7717
> URL: https://issues.apache.org/jira/browse/YARN-7717
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yesha Vora
>Assignee: Eric Badger
> Attachments: YARN-7717.001.patch
>
>
> container-executor.cfg has two properties related to dockerization:
> 1) module.enabled = true/false
> 2) docker.privileged-containers.enabled = 1/0
> The two properties take different kinds of values to enable or disable their 
> feature: module.enabled takes a true/false string, while 
> docker.privileged-containers.enabled takes a 1/0 integer value.
> The behavior of these properties should be consistent: both should take a true 
> or false string as the value to enable or disable the feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2018-01-10 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320821#comment-16320821
 ] 

Wangda Tan commented on YARN-6599:
--

[~asuresh]/[~kkaranasos], could you help to check the latest patch? I want to 
get this done by next Monday.

> Support rich placement constraints in scheduler
> ---
>
> Key: YARN-6599
> URL: https://issues.apache.org/jira/browse/YARN-6599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6599-YARN-6592.003.patch, 
> YARN-6599-YARN-6592.004.patch, YARN-6599-YARN-6592.005.patch, 
> YARN-6599-YARN-6592.006.patch, YARN-6599-YARN-6592.007.patch, 
> YARN-6599-YARN-6592.008.patch, YARN-6599-YARN-6592.wip.002.patch, 
> YARN-6599.poc.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7696) Add container tags to ContainerTokenIdentifier, api.Container and NMContainerStatus to handle all recovery cases

2018-01-10 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320815#comment-16320815
 ] 

Arun Suresh edited comment on YARN-7696 at 1/10/18 6:41 PM:


Updating patch - fixing tests.
[~leftnoteasy], [~kkaranasos], [~cheersyang], do give this a look


was (Author: asuresh):
Updating patch - fixing tests.
[~leftnoteasy], [~kkaranasos], do give this a look

> Add container tags to ContainerTokenIdentifier, api.Container and 
> NMContainerStatus to handle all recovery cases
> 
>
> Key: YARN-7696
> URL: https://issues.apache.org/jira/browse/YARN-7696
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7696-YARN-6592.001.patch, 
> YARN-7696-YARN-6592.002.patch
>
>
> The NM needs to persist the container tags so that, on RM recovery, they are 
> sent back to the RM via the NMContainerStatus. The RM would then recover the 
> AllocationTagsManager using this information.
> The api.Container also requires the allocationTags since, after AM recovery, 
> we need to provide the AM with previously allocated containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7696) Add container tags to ContainerTokenIdentifier, api.Container and NMContainerStatus to handle all recovery cases

2018-01-10 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7696:
--
Attachment: YARN-7696-YARN-6592.002.patch

Updating patch - fixing tests.
[~leftnoteasy], [~kkaranasos], do give this a look

> Add container tags to ContainerTokenIdentifier, api.Container and 
> NMContainerStatus to handle all recovery cases
> 
>
> Key: YARN-7696
> URL: https://issues.apache.org/jira/browse/YARN-7696
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7696-YARN-6592.001.patch, 
> YARN-7696-YARN-6592.002.patch
>
>
> The NM needs to persist the container tags so that, on RM recovery, they are 
> sent back to the RM via the NMContainerStatus. The RM would then recover the 
> AllocationTagsManager using this information.
> The api.Container also requires the allocationTags since, after AM recovery, 
> we need to provide the AM with previously allocated containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7064) Use cgroup to get container resource utilization

2018-01-10 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320814#comment-16320814
 ] 

Haibo Chen commented on YARN-7064:
--

Thanks [~miklos.szeg...@cloudera.com] for updating the patch. The patch looks 
good to me overall. I have a few minor comments/questions.

1) Why are we skipping  NM_MEMORY_RESOURCE_PREFIX configurations in 
TestYarnConfigurationFields now?

2) I am not sure why the existing NM_MEMORY_RESOURCE* configuration properties 
(and all the other cgroup-related ones) are marked as @Private and not included 
in yarn-site.xml. But I think we need to be consistent with other properties: 
either we remove the @Private annotation or we do not include them in 
yarn-site.xml.

3) In ProcfsBasedProcessTree.java, do we also want to print out the clock time 
besides the total jiffies? If so, it appears to me that 
CpuTimeTracker.updateElapsedJiffies() is a more appropriate place to log.

4) TestCompareResourceCalculators is more of a functional test. Can we add in 
the class javadoc what its purpose is and why it is ignored by default in the 
code?

5) CombinedResourceCalculator could probably be renamed to 
HybridResourceCalculator? Do we want to delegate to procfs.getProcessTreeDump() 
in getProcessTreeDump()? The most inefficient part of ProcfsBasedProcessTree, 
IIUC, is updateProcessTree(), in which we read the `stat` file of all processes 
in the system. I believe the intent of CombinedResourceCalculator is to get the 
speed of CgroupsResourceCalculator and the accuracy of ProcfsBasedProcessTree 
in its virtual memory measurement, but given that procfs.updateProcessTree() is 
always called in CombinedResourceCalculator.updateProcessTree(), will 
CombinedResourceCalculator be faster than ProcfsBasedProcessTree in that case?

6) In CGroupsResourceCalculator, processFile() should be static, I think. Given 
that the exception in readTotalProcessJiffies() is propagated up the caller 
chain, do you think the 'Failed to parse' warning message needs to be logged 
there? In the constructor `CGroupsResourceCalculator(String pid, String 
procfsDir, CGroupsHandler cGroupsHandler, Clock clock)`, 
`clock==SystemClock.getInstance()` is used to tell whether it is called from a 
unit test, which is not a reliable indicator. I think we can create three 
constructors instead: a base `CGroupsResourceCalculator(String, String, 
CGroupsHandler, Clock, long jiffyLength)`, then one for production that passes 
in `SysInfoLinux.JIFFY_LENGTH_IN_MILLIS`, and one that passes in 10 for unit 
testing.

7) ContainersMonitorImpl is missing an import and thus does not compile for me. 
The new code in getResourceCalculatorProcessTree() for 
CgroupsResourceCalculator and CombinedResourceCalculator is very similar. How 
about we simplify it this way: given that the two new resource calculators must 
be configured explicitly by users, we can consolidate the new code with 
ResourceCalculatorProcessTree.getResourceCalculatorProcessTree(), just as for 
any other custom resource calculator. We can add a new method `initialize()` to 
ResourceCalculatorProcessTree with an empty body, and then override it in 
CGroupsResourceCalculator and CombinedResourceCalculator so that 
`CGroupsResourceCalculator.isAvailable()` is checked and setCGroupFilePaths() 
is called.  Another problem with the existing approach is that, if the user 
sets up CGroupsResourceCalculator and cgroups is not enabled, `pt` will be null 
and we continue to call 
ResourceCalculatorProcessTree.getResourceCalculatorProcessTree(), which returns 
a new instance of CGroupsResourceCalculator. CGroups is not enabled, and yet we 
will run the NodeManager with CGroupsResourceCalculator.

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-7064
> URL: https://issues.apache.org/jira/browse/YARN-7064
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7064.000.patch, YARN-7064.001.patch, 
> YARN-7064.002.patch, YARN-7064.003.patch, YARN-7064.004.patch, 
> YARN-7064.005.patch, YARN-7064.007.patch, YARN-7064.008.patch, 
> YARN-7064.009.patch
>
>
> This is an addendum to YARN-6668. What happens is that that jira always wants 
> to rebase patches against YARN-1011 instead of trunk.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7717) Add configuration consistency for module.enabled and docker.privileged-containers.enabled

2018-01-10 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320810#comment-16320810
 ] 

Shane Kumpf commented on YARN-7717:
---

Thanks for the patch [~ebadger]! Overall it looks good. A couple of comments: 
I'm curious whether using strcasecmp might give us more flexibility in how 
users define these? Does that introduce portability concerns, or is the desire 
only to support true/True? Also, the documentation only refers to 0/1; I think 
that should be updated as well.

> Add configuration consistency for module.enabled and 
> docker.privileged-containers.enabled
> -
>
> Key: YARN-7717
> URL: https://issues.apache.org/jira/browse/YARN-7717
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yesha Vora
>Assignee: Eric Badger
> Attachments: YARN-7717.001.patch
>
>
> container-executor.cfg has two properties related to dockerization:
> 1) module.enabled = true/false
> 2) docker.privileged-containers.enabled = 1/0
> The two properties take different kinds of values to enable or disable their 
> feature: module.enabled takes a true/false string, while 
> docker.privileged-containers.enabled takes a 1/0 integer value.
> The behavior of these properties should be consistent: both should take a true 
> or false string as the value to enable or disable the feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5366) Improve handling of the Docker container life cycle

2018-01-10 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320775#comment-16320775
 ] 

Eric Yang commented on YARN-5366:
-

[~shaneku...@gmail.com] The way docker commands are handled and the environment 
variable setup play a critical role in ensuring smooth integration between YARN 
and docker.  If I am reading this correctly, the current launcher operation 
performs as:

{code}
NodeManager
  container-executor
    docker run ... launch_container.sh
      user application unix command
{code}

User-defined environment variables and a lot of internal wiring are done in 
{{launch_container.sh}}.  Would it be possible to change the environment 
variable construction for the docker run command to use -e k=v instructions?  
This would reduce the effort of rewriting code to support ENTRYPOINT for 
docker.  In the ideal case, the pipeline of the execution is supposed to be:

{code}
NodeManager
  container-executor
    docker run -e k=v [launcher_command]
{code}

This reduces the reliance on mounting launch_container.sh and running it inside 
the container.  It would let the docker container behave as a standalone unit, 
without relying on a YARN-generated script to run, and would support docker 
ENTRYPOINT.  Is it possible to improve the launcher bootstrap this way?
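(Concretely, the -e form would look something like the following; the variable 
names and image are hypothetical, for illustration only:)

{code}
docker run -e HADOOP_USER_NAME=appuser -e APP_LOG_DIR=/var/log/app \
  my-registry/my-image:latest
{code}

With this shape, the image's own ENTRYPOINT starts with the YARN-provided 
environment, instead of the container being bootstrapped through a mounted 
launch_container.sh.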

> Improve handling of the Docker container life cycle
> ---
>
> Key: YARN-5366
> URL: https://issues.apache.org/jira/browse/YARN-5366
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>  Labels: oct16-medium
> Attachments: YARN-5366.001.patch, YARN-5366.002.patch, 
> YARN-5366.003.patch, YARN-5366.004.patch, YARN-5366.005.patch, 
> YARN-5366.006.patch, YARN-5366.007.patch, YARN-5366.008.patch, 
> YARN-5366.009.patch, YARN-5366.010.patch
>
>
> There are several paths that need to be improved with regard to the Docker 
> container lifecycle when running Docker containers on YARN.
> 1) Provide the ability to keep a container on the NodeManager for a set 
> period of time for debugging purposes.
> 2) Support sending signals to the process in the container to allow for 
> triggering stack traces, heap dumps, etc.
> 3) Support for Docker's live restore, which means moving away from the use of 
> {{docker wait}}. (YARN-5818)
> 4) Improve the resiliency of liveliness checks (kill -0) by adding retries.
> 5) Improve the resiliency of container removal by adding retries.
> 6) Only attempt to stop, kill, and remove containers if the current container 
> state allows for it.
> 7) Better handling of short lived containers when the container is stopped 
> before the PID can be retrieved. (YARN-6305)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7451) Resources Types should be visible in the Cluster Apps API "resourceRequests" section

2018-01-10 Thread Szilard Nemeth (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-7451:
-
Attachment: YARN-7451.003.patch

Fixed license issues and one findbugs issue.

> Resources Types should be visible in the Cluster Apps API "resourceRequests" 
> section
> 
>
> Key: YARN-7451
> URL: https://issues.apache.org/jira/browse/YARN-7451
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, restapi
>Affects Versions: 3.0.0
>Reporter: Grant Sohn
>Assignee: Szilard Nemeth
> Attachments: YARN-7451.001.patch, YARN-7451.002.patch, 
> YARN-7451.003.patch, 
> YARN-7451__Expose_custom_resource_types_on_RM_scheduler_API_as_flattened_map01_02.patch
>
>
> When running jobs that request resource types, the RM Cluster Apps API should 
> include them in the "resourceRequests" object.
> Additionally, when calling the RM scheduler API it returns:
> {noformat}
>  "childQueues": {
> "queue": [
> {
> "allocatedContainers": 101,
> "amMaxResources": {
> "memory": 320390,
> "vCores": 192
> },
> "amUsedResources": {
> "memory": 1024,
> "vCores": 1
> },
> "clusterResources": {
> "memory": 640779,
> "vCores": 384
> },
> "demandResources": {
> "memory": 103424,
> "vCores": 101
> },
> "fairResources": {
> "memory": 640779,
> "vCores": 384
> },
> "maxApps": 2147483647,
> "maxResources": {
> "memory": 640779,
> "vCores": 384
> },
> "minResources": {
> "memory": 0,
> "vCores": 0
> },
> "numActiveApps": 1,
> "numPendingApps": 0,
> "preemptable": true,
> "queueName": "root.users.systest",
> "reservedContainers": 0,
> "reservedResources": {
> "memory": 0,
> "vCores": 0
> },
> "schedulingPolicy": "fair",
> "steadyFairResources": {
> "memory": 320390,
> "vCores": 192
> },
> "type": "fairSchedulerLeafQueueInfo",
> "usedResources": {
> "memory": 103424,
> "vCores": 101
> }
> }
> ]
> {noformat}
> However, the web UI shows resource types usage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7717) Add configuration consistency for module.enabled and docker.privileged-containers.enabled

2018-01-10 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-7717:
--
Attachment: YARN-7717.001.patch

Attaching a patch that allows for "true" or "True" for 
docker.privileged-containers.enabled and feature.tc.enabled. It also keeps the 
old 1/0 behavior. I tried to keep the code intact as much as possible, so it's 
not beautiful, but I could definitely optimize things more if you all think 
it's necessary. 

> Add configuration consistency for module.enabled and 
> docker.privileged-containers.enabled
> -
>
> Key: YARN-7717
> URL: https://issues.apache.org/jira/browse/YARN-7717
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yesha Vora
>Assignee: Eric Badger
> Attachments: YARN-7717.001.patch
>
>
> container-executor.cfg has two properties related to dockerization:
> 1) module.enabled = true/false
> 2) docker.privileged-containers.enabled = 1/0
> The two properties take different kinds of values to enable or disable their 
> feature: module.enabled takes a true/false string, while 
> docker.privileged-containers.enabled takes a 1/0 integer value.
> The behavior of these properties should be consistent: both should take a true 
> or false string as the value to enable or disable the feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-7424) Capacity Scheduler Intra-queue preemption: add property to only preempt up to configured MULP

2018-01-10 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne resolved YARN-7424.
--
Resolution: Invalid

bq. In order to create the "desired" behavior, we would have to fundamentally 
change the way the capacity scheduler works,
Closing.

> Capacity Scheduler Intra-queue preemption: add property to only preempt up to 
> configured MULP
> -
>
> Key: YARN-7424
> URL: https://issues.apache.org/jira/browse/YARN-7424
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, scheduler preemption
>Affects Versions: 3.0.0-beta1, 2.8.2
>Reporter: Eric Payne
>Assignee: Eric Payne
>
> If the queue's configured minimum user limit percent (MULP) is something 
> small like 1%, all users will max out well over their MULP until 100 users 
> have apps in the queue. Since the intra-queue preemption monitor tries to 
> balance the resource among the users, most of the time in this use case it 
> will be preempting containers on behalf of users that are already over their 
> MULP guarantee.
> This JIRA proposes that a property should be provided so that a queue can be 
> configured to only preempt on behalf of a user until that user has reached 
> its MULP.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7728) Expose and expand container preemptions in Capacity Scheduler queue metrics

2018-01-10 Thread Eric Payne (JIRA)
Eric Payne created YARN-7728:


 Summary: Expose and expand container preemptions in Capacity 
Scheduler queue metrics
 Key: YARN-7728
 URL: https://issues.apache.org/jira/browse/YARN-7728
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.8.3, 2.9.0
Reporter: Eric Payne
Assignee: Eric Payne


YARN-1047 exposed queue metrics for the number of preempted containers to the 
fair scheduler. I would like to also expose these to the capacity scheduler and 
add metrics for the amount of lost memory seconds and vcore seconds.
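(For clarity, a sketch of how such metrics would presumably accumulate per 
preempted container; the metric names here are illustrative, not final:)

{code}
preemptedMemoryMBSeconds += containerMemoryMB * secondsContainerRanBeforePreemption
preemptedVcoreSeconds    += containerVcores   * secondsContainerRanBeforePreemption
{code}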



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7681) Scheduler should double-check placement constraint before actual allocation is made

2018-01-10 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320644#comment-16320644
 ] 

Arun Suresh commented on YARN-7681:
---

Thanks for the update [~cheersyang] and for the review [~pgaref]
+1
Committing this shortly.

> Scheduler should double-check placement constraint before actual allocation 
> is made
> ---
>
> Key: YARN-7681
> URL: https://issues.apache.org/jira/browse/YARN-7681
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: RM, scheduler
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7681-YARN-6592.001.patch, 
> YARN-7681-YARN-6592.002.patch, YARN-7681-YARN-6592.003.patch
>
>
> This JIRA is created based on the discussions under YARN-7612, see comments 
> after [this 
> comment|https://issues.apache.org/jira/browse/YARN-7612?focusedCommentId=16303051&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16303051].
>  AllocationTagsManager maintains tag info that helps make placement decisions 
> at the placement phase; however, tags change along with a container's 
> lifecycle, so it is possible that a placement made at the placement phase 
> violates the constraints by the time of scheduling. We propose to add an extra 
> check in the scheduler to make sure constraints are still satisfied when the 
> actual allocation is made.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7681) Double-check placement constraints in scheduling phase before actual allocation is made

2018-01-10 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7681:
--
Summary: Double-check placement constraints in scheduling phase before 
actual allocation is made  (was: Scheduler should double-check placement 
constraint before actual allocation is made)

> Double-check placement constraints in scheduling phase before actual 
> allocation is made
> ---
>
> Key: YARN-7681
> URL: https://issues.apache.org/jira/browse/YARN-7681
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: RM, scheduler
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7681-YARN-6592.001.patch, 
> YARN-7681-YARN-6592.002.patch, YARN-7681-YARN-6592.003.patch
>
>
> This JIRA is created based on the discussions under YARN-7612, see comments 
> after [this 
> comment|https://issues.apache.org/jira/browse/YARN-7612?focusedCommentId=16303051&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16303051].
>  AllocationTagsManager maintains tag info that helps make placement decisions 
> at the placement phase; however, tags change along with a container's 
> lifecycle, so it is possible that a placement made at the placement phase 
> violates the constraints by the time of scheduling. We propose to add an extra 
> check in the scheduler to make sure constraints are still satisfied when the 
> actual allocation is made.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7622) Allow fair-scheduler configuration on HDFS

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320474#comment-16320474
 ] 

genericqa commented on YARN-7622:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m  
3s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 39 unchanged - 3 fixed = 39 total (was 42) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 61m 
46s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:17213a0 |
| JIRA Issue | YARN-7622 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905452/YARN-7622-branch-2.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2a51062ee666 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 66d58d2 |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_151 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19181/testReport/ |
| Max. process+thread count | 829 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19181/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Allow fair-scheduler configuration on HDFS
> --
>
> Key: YARN-7622
> URL: https://issues.apache.

[jira] [Commented] (YARN-7722) Rename variables in MockNM, MockRM for better clarity

2018-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320458#comment-16320458
 ] 

Hudson commented on YARN-7722:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13477 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13477/])
YARN-7722. Rename variables in MockNM, MockRM for better clarity. (sunilg: rev 
afd8caba2730262cb8c5d7c4a5d2d1081b671f1d)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNM.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java


> Rename variables in MockNM, MockRM for better clarity
> -
>
> Key: YARN-7722
> URL: https://issues.apache.org/jira/browse/YARN-7722
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: lovekesh bansal
>Assignee: lovekesh bansal
>Priority: Trivial
> Fix For: 3.1.0
>
> Attachments: YARN-7722_trunk.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7722) Rename variables in MockNM, MockRM for better clarity

2018-01-10 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7722:
--
Target Version/s: 3.1.0  (was: 3.0.0)

> Rename variables in MockNM, MockRM for better clarity
> -
>
> Key: YARN-7722
> URL: https://issues.apache.org/jira/browse/YARN-7722
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: lovekesh bansal
>Assignee: lovekesh bansal
>Priority: Trivial
> Attachments: YARN-7722_trunk.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7722) Rename variables in MockNM, MockRM for better clarity

2018-01-10 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7722:
--
Summary: Rename variables in MockNM, MockRM for better clarity  (was: 
Correcting the capability in MockNM, MockRM test classes)

> Rename variables in MockNM, MockRM for better clarity
> -
>
> Key: YARN-7722
> URL: https://issues.apache.org/jira/browse/YARN-7722
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: lovekesh bansal
>Assignee: lovekesh bansal
>Priority: Trivial
> Attachments: YARN-7722_trunk.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7722) Correcting the capability in MockNM, MockRM test classes

2018-01-10 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7722:
--
Priority: Trivial  (was: Minor)

> Correcting the capability in MockNM, MockRM test classes
> 
>
> Key: YARN-7722
> URL: https://issues.apache.org/jira/browse/YARN-7722
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: lovekesh bansal
>Assignee: lovekesh bansal
>Priority: Trivial
> Attachments: YARN-7722_trunk.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7622) Allow fair-scheduler configuration on HDFS

2018-01-10 Thread Greg Phillips (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Phillips updated YARN-7622:

Attachment: (was: YARN-7622-branch-2.006.patch)

> Allow fair-scheduler configuration on HDFS
> --
>
> Key: YARN-7622
> URL: https://issues.apache.org/jira/browse/YARN-7622
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, resourcemanager
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
> Attachments: YARN-7622-branch-2.006.patch, YARN-7622.001.patch, 
> YARN-7622.002.patch, YARN-7622.003.patch, YARN-7622.004.patch, 
> YARN-7622.005.patch, YARN-7622.006.patch
>
>
> The FairScheduler requires the allocation file to be hosted on the local 
> filesystem on the RM node(s). Allowing HDFS to store the allocation file will 
> provide improved redundancy, more options for scheduler updates, and RM 
> failover consistency in HA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7622) Allow fair-scheduler configuration on HDFS

2018-01-10 Thread Greg Phillips (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Phillips updated YARN-7622:

Attachment: YARN-7622-branch-2.006.patch

> Allow fair-scheduler configuration on HDFS
> --
>
> Key: YARN-7622
> URL: https://issues.apache.org/jira/browse/YARN-7622
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, resourcemanager
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
> Attachments: YARN-7622-branch-2.006.patch, YARN-7622.001.patch, 
> YARN-7622.002.patch, YARN-7622.003.patch, YARN-7622.004.patch, 
> YARN-7622.005.patch, YARN-7622.006.patch
>
>
> The FairScheduler requires the allocation file to be hosted on the local 
> filesystem on the RM node(s). Allowing HDFS to store the allocation file will 
> provide improved redundancy, more options for scheduler updates, and RM 
> failover consistency in HA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7451) Resources Types should be visible in the Cluster Apps API "resourceRequests" section

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320195#comment-16320195
 ] 

genericqa commented on YARN-7451:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 33 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client-modules/hadoop-client-minicluster 
hadoop-client-modules/hadoop-client-check-test-invariants {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  4s{color} | {color:orange} root: The patch generated 153 new + 167 
unchanged - 11 fixed = 320 total (was 178) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client-modules/hadoop-client-minicluster 
hadoop-client-modules/hadoop-client-check-test-invariants {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
23s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
32s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 18s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-client-minicluster 
