[jira] [Commented] (YARN-5754) Null check missing for earliest in FifoPolicy

2016-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604491#comment-15604491
 ] 

Hudson commented on YARN-5754:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10671 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10671/])
YARN-5754. Null check missing for earliest in FifoPolicy. (Yufei Gu via kasha) 
(kasha: rev a71fc81655cd5382d354674dd06570ba49753688)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FifoPolicy.java
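
A minimal sketch of the kind of guard the summary describes, loosely following 
FifoPolicy#computeShares; this is illustrative only, not the committed diff:

{code}
// Sketch: pick the earliest-started schedulable, but guard against an empty
// collection before dereferencing it (the reported NPE scenario).
Schedulable earliest = null;
for (Schedulable sched : schedulables) {
  if (earliest == null || sched.getStartTime() < earliest.getStartTime()) {
    earliest = sched;
  }
}
// The missing null check: with no schedulables, earliest stays null and an
// unconditional call below would throw a NullPointerException.
if (earliest != null) {
  earliest.setFairShare(Resources.clone(totalResources));
}
{code}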


> Null check missing for earliest in FifoPolicy
> -
>
> Key: YARN-5754
> URL: https://issues.apache.org/jira/browse/YARN-5754
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 2.9.0
>
> Attachments: YARN-5754.001.patch
>
>







[jira] [Commented] (YARN-5777) TestLogsCLI#testFetchApplictionLogsAsAnotherUser fails

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604501#comment-15604501
 ] 

Hadoop QA commented on YARN-5777:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 35s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 20s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835072/YARN-5777.01.patch |
| JIRA Issue | YARN-5777 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bf2fe1b69240 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dbd2057 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13496/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13496/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> TestLogsCLI#testFetchApplictionLogsAsAnotherUser fails
> --
>
> Key: YARN-5777
> URL: https://issues.apache.org/jira/browse/YARN-5777
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-5777.01.patch
>
>
> {noformat}
> Running org.apache.hadoop.yarn.cli

[jira] [Commented] (YARN-5559) Analyse 2.8.0/3.0.0 jdiff reports and fix any issues

2016-10-25 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604551#comment-15604551
 ] 

Akira Ajisaka commented on YARN-5559:
-

Mostly looks good to me. I checked which issues broke the compatibility.

bq. 1) Incompatible changes in GetClusterNodeLabelResponse.java (signature 
changed)
YARN-3413 (fixed in 2.8.0)

bq. 3) One of ContainerTokenIdentifier newInstance method removed
YARN-2581 (fixed in 2.6.0)

I'm thinking we should separate the patch into (3) and the rest. (3) should be 
fixed in branch-2.6 and above.

> Analyse 2.8.0/3.0.0 jdiff reports and fix any issues
> 
>
> Key: YARN-5559
> URL: https://issues.apache.org/jira/browse/YARN-5559
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-5559.1.patch
>
>







[jira] [Updated] (YARN-5774) MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set yarn.scheduler.minimum-allocation-mb to 0.

2016-10-25 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5774:
---
Attachment: YARN-5774.001.patch

> MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set 
> yarn.scheduler.minimum-allocation-mb to 0.
> 
>
> Key: YARN-5774
> URL: https://issues.apache.org/jira/browse/YARN-5774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5774.001.patch
>
>
> The MR job gets stuck in the ACCEPTED state without any progress in the Fair 
> Scheduler because there is no resource request for the AM. This happens when 
> you configure {{yarn.scheduler.minimum-allocation-mb}} to zero.
> The problem is in code shared by the Capacity Scheduler and the Fair 
> Scheduler. {{scheduler.increment-allocation-mb}} is a concept in FS, but not in 
> CS, so the common code in class RMAppManager passes 
> {{yarn.scheduler.minimum-allocation-mb}} as the increment (there is no 
> increment concept for CS) when it normalizes the resource requests.
> {code}
>  SchedulerUtils.normalizeRequest(amReq, scheduler.getResourceCalculator(),
>   scheduler.getClusterResource(),
>   scheduler.getMinimumResourceCapability(),
>   scheduler.getMaximumResourceCapability(),
>   scheduler.getMinimumResourceCapability());  --> incrementResource 
> should be passed here.
> {code}
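
To make the failure mode concrete, here is a small hypothetical sketch of the 
round-up-to-a-multiple-of-the-increment idea behind normalization; it is not the 
actual SchedulerUtils code, and it assumes that rounding by a zero step yields 0, 
which is the behaviour the description implies:

{code}
// Illustrative only: normalize an ask to a multiple of the increment,
// clamped to [min, max]. When minimum-allocation-mb = 0 is also used as the
// increment, the AM's request collapses to 0 MB and the job never leaves
// ACCEPTED.
static long normalizeMb(long askMb, long minMb, long maxMb, long incMb) {
  long rounded = (incMb == 0)
      ? 0                                                   // zero step -> 0, the bug
      : ((Math.max(askMb, minMb) + incMb - 1) / incMb) * incMb;
  return Math.min(rounded, maxMb);
}

// normalizeMb(1536, 0, 8192, 0)   -> 0    (increment taken from the minimum)
// normalizeMb(1536, 0, 8192, 512) -> 1536 (a real FS-style increment)
{code}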






[jira] [Commented] (YARN-5559) Analyse 2.8.0/3.0.0 jdiff reports and fix any issues

2016-10-25 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604561#comment-15604561
 ] 

Akira Ajisaka commented on YARN-5559:
-

Additional comment: Would you add javadoc to document the replacements when 
deprecating APIs?
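
For illustration, a generic example of the kind of javadoc being asked for; the 
class, method, and constant names here are made up, not from the YARN API:

{code}
/**
 * @deprecated Use {@link #newInstance(ApplicationId, String)} instead;
 *             this overload is kept only for compatibility and may be
 *             removed in a later release.
 */
@Deprecated
public static Foo newInstance(ApplicationId appId) {
  // Delegate to the replacement so behaviour stays identical.
  return newInstance(appId, DEFAULT_NAME);
}
{code}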

> Analyse 2.8.0/3.0.0 jdiff reports and fix any issues
> 
>
> Key: YARN-5559
> URL: https://issues.apache.org/jira/browse/YARN-5559
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-5559.1.patch
>
>







[jira] [Updated] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-10-25 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5545:
---
Attachment: YARN-5545.004.patch

> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5545.0001.patch, YARN-5545.0002.patch, 
> YARN-5545.0003.patch, YARN-5545.004.patch, capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:286)
>   at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:296)
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:301)
>   ... 25 more
> {noformat}
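
As a rough aside on why the submission is rejected: assuming the usual 
CapacityScheduler behaviour, where a leaf queue's maximum application count 
defaults to the system-wide maximum scaled by the queue's absolute capacity, a 
zero-capacity default queue ends up with a limit of zero. A hypothetical 
back-of-the-envelope sketch:

{code}
// Hypothetical numbers: with root.default.capacity = 0 the derived limit is 0,
// which matches the "already has 0 applications" rejection above.
int maxSystemApps = 10000;        // yarn.scheduler.capacity.maximum-applications
float absoluteCapacity = 0.0f;    // root.default.capacity = 0
int maxAppsForQueue = (int) (maxSystemApps * absoluteCapacity);   // = 0
{code}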






[jira] [Commented] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-10-25 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604684#comment-15604684
 ] 

Bibin A Chundatt commented on YARN-5545:


Thank you [~sunilg]/[~naganarasimha...@apache.org]/[~jlowe] for the discussion and 
comments.

Attaching a patch based on the discussion:
# Added a new configuration property, 
{{yarn.scheduler.capacity.global-queue-max-application}}
# Added a test case for application submission covering the user limit and the 
application limit for the queue

> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5545.0001.patch, YARN-5545.0002.patch, 
> YARN-5545.0003.patch, YARN-5545.004.patch, capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:286)
>   at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:296)
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:301)
>   ... 25 more
> {noformat}




[jira] [Commented] (YARN-5774) MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set yarn.scheduler.minimum-allocation-mb to 0.

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604700#comment-15604700
 ] 

Hadoop QA commented on YARN-5774:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 295 unchanged - 9 fixed = 296 total (was 304) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 925 unchanged - 13 fixed = 925 total (was 938) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 8s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 42s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835082/YARN-5774.001.patch |
| JIRA Issue | YARN-5774 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0d31235e371b 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dbd2057 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13497/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13497/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13497/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> MR Job stuck in ACCEPTED status wit

[jira] [Comment Edited] (YARN-5773) RM recovery too slow due to LeafQueue#activateApplication()

2016-10-25 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604710#comment-15604710
 ] 

Bibin A Chundatt edited comment on YARN-5773 at 10/25/16 8:57 AM:
--

Thank you [~sunilg], Wangda, and Varun.

The recovery scenario will be handled based on 
[comment|https://issues.apache.org/jira/browse/YARN-5773?focusedCommentId=15603947&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15603947], 
along with the logLevel change.
Optimizing activateApplication() can be handled in a new JIRA.
Thoughts?


was (Author: bibinchundatt):
Thank you [~sunilg], Wangda, and Varun.

The recovery scenario will be handled based on 
[comment|https://issues.apache.org/jira/browse/YARN-5773?focusedCommentId=15603947&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15603947], 
along with the logLevel change.
Optimizing activateApplication() can be handled in a new JIRA.


> RM recovery too slow due to LeafQueue#activateApplication()
> ---
>
> Key: YARN-5773
> URL: https://issues.apache.org/jira/browse/YARN-5773
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5773.0001.patch, YARN-5773.0002.patch
>
>
> # Submit 10K applications to the default queue.
> # All applications are in the ACCEPTED state.
> # Now restart the ResourceManager.
> For each recovered application, {{LeafQueue#activateApplications()}} is 
> invoked, resulting in the AM limit check being done even before NodeManagers 
> have registered.
> The total iteration count for N applications is about {{N(N+1)/2}}; for {{10K}} 
> applications that is roughly 50 million iterations, causing the RM to take 
> more than 10 minutes to become active.
> Since NM resources are not yet added during recovery, we should skip 
> {{activateApplications()}}.
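
For a sense of scale, a quick arithmetic check of the quoted cost (the 10-minute 
startup time is from the report above, not derived here):

{code}
// Each of the N recovered applications triggers activateApplications(),
// which walks the pending-application list, so the total work is roughly
// 1 + 2 + ... + N = N * (N + 1) / 2 iterations.
long n = 10_000;
long totalIterations = n * (n + 1) / 2;   // 50,005,000 for 10K applications
{code}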






[jira] [Commented] (YARN-5773) RM recovery too slow due to LeafQueue#activateApplication()

2016-10-25 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604710#comment-15604710
 ] 

Bibin A Chundatt commented on YARN-5773:


Thank you [~sunilg], Wangda, and Varun.

The recovery scenario will be handled based on 
[comment|https://issues.apache.org/jira/browse/YARN-5773?focusedCommentId=15603947&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15603947], 
along with the logLevel change.
Optimizing activateApplication() can be handled in a new JIRA.


> RM recovery too slow due to LeafQueue#activateApplication()
> ---
>
> Key: YARN-5773
> URL: https://issues.apache.org/jira/browse/YARN-5773
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5773.0001.patch, YARN-5773.0002.patch
>
>
> # Submit 10K applications to the default queue.
> # All applications are in the ACCEPTED state.
> # Now restart the ResourceManager.
> For each recovered application, {{LeafQueue#activateApplications()}} is 
> invoked, resulting in the AM limit check being done even before NodeManagers 
> have registered.
> The total iteration count for N applications is about {{N(N+1)/2}}; for {{10K}} 
> applications that is roughly 50 million iterations, causing the RM to take 
> more than 10 minutes to become active.
> Since NM resources are not yet added during recovery, we should skip 
> {{activateApplications()}}.






[jira] [Commented] (YARN-5773) RM recovery too slow due to LeafQueue#activateApplication()

2016-10-25 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604718#comment-15604718
 ] 

Varun Saxena commented on YARN-5773:


Is there any need to activate applications on recovery? Cluster resources will 
be 0 on recovery anyway, as the resource tracker service has not yet started.
We could, however, check the cluster resources or user limit right at the 
beginning of activating applications and return early if the applicable 
resources are 0. That would have the same effect on recovery.

Overall, i.e. in the normal flow, Wangda's suggestion for optimizing 
activateApplications() sounds good. But the ordering policy will have to be 
maintained as well, right?
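
A minimal sketch of the early-return described here; the method shape and field 
names loosely follow LeafQueue#activateApplications() but are simplified and are 
not taken from any attached patch:

{code}
private synchronized void activateApplications() {
  // During recovery no NodeManager has registered yet, so the cluster
  // resource is still zero; bail out before walking the (potentially huge)
  // list of pending applications and doing AM-limit math for each one.
  if (lastClusterResource.equals(Resources.none())) {
    return;
  }
  // ... existing per-application AM-limit checks follow ...
}
{code}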

> RM recovery too slow due to LeafQueue#activateApplication()
> ---
>
> Key: YARN-5773
> URL: https://issues.apache.org/jira/browse/YARN-5773
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5773.0001.patch, YARN-5773.0002.patch
>
>
> # Submit 10K applications to the default queue.
> # All applications are in the ACCEPTED state.
> # Now restart the ResourceManager.
> For each recovered application, {{LeafQueue#activateApplications()}} is 
> invoked, resulting in the AM limit check being done even before NodeManagers 
> have registered.
> The total iteration count for N applications is about {{N(N+1)/2}}; for {{10K}} 
> applications that is roughly 50 million iterations, causing the RM to take 
> more than 10 minutes to become active.
> Since NM resources are not yet added during recovery, we should skip 
> {{activateApplications()}}.






[jira] [Commented] (YARN-5773) RM recovery too slow due to LeafQueue#activateApplication()

2016-10-25 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604720#comment-15604720
 ] 

Varun Saxena commented on YARN-5773:


bq. Optimizing activateApplication() can be handled in a new JIRA. Thoughts?
Agree. I think it should be handled separately.

> RM recovery too slow due to LeafQueue#activateApplication()
> ---
>
> Key: YARN-5773
> URL: https://issues.apache.org/jira/browse/YARN-5773
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5773.0001.patch, YARN-5773.0002.patch
>
>
> # Submit 10K applications to the default queue.
> # All applications are in the ACCEPTED state.
> # Now restart the ResourceManager.
> For each recovered application, {{LeafQueue#activateApplications()}} is 
> invoked, resulting in the AM limit check being done even before NodeManagers 
> have registered.
> The total iteration count for N applications is about {{N(N+1)/2}}; for {{10K}} 
> applications that is roughly 50 million iterations, causing the RM to take 
> more than 10 minutes to become active.
> Since NM resources are not yet added during recovery, we should skip 
> {{activateApplications()}}.






[jira] [Comment Edited] (YARN-5773) RM recovery too slow due to LeafQueue#activateApplication()

2016-10-25 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604718#comment-15604718
 ] 

Varun Saxena edited comment on YARN-5773 at 10/25/16 9:09 AM:
--

Is there any need to activate applications on recovery? Cluster resources will 
be 0 on recovery anyway, as the resource tracker service has not yet started. 
Maybe pass it in the event so that the scheduler knows that recovery is 
happening while adding the attempt.
We could, however, check the cluster resources or user limit right at the 
beginning of activating applications and return early if the applicable 
resources are 0. That would have the same effect on recovery.

Overall, i.e. in the normal flow, Wangda's suggestion for optimizing 
activateApplications() sounds good. But the ordering policy will have to be 
maintained as well, right?


was (Author: varun_saxena):
Is there any need to activate applications on recovery? Cluster resources will 
be 0 on recovery anyway, as the resource tracker service has not yet started.
We could, however, check the cluster resources or user limit right at the 
beginning of activating applications and return early if the applicable 
resources are 0. That would have the same effect on recovery.

Overall, i.e. in the normal flow, Wangda's suggestion for optimizing 
activateApplications() sounds good. But the ordering policy will have to be 
maintained as well, right?

> RM recovery too slow due to LeafQueue#activateApplication()
> ---
>
> Key: YARN-5773
> URL: https://issues.apache.org/jira/browse/YARN-5773
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5773.0001.patch, YARN-5773.0002.patch
>
>
> # Submit 10K applications to the default queue.
> # All applications are in the ACCEPTED state.
> # Now restart the ResourceManager.
> For each recovered application, {{LeafQueue#activateApplications()}} is 
> invoked, resulting in the AM limit check being done even before NodeManagers 
> have registered.
> The total iteration count for N applications is about {{N(N+1)/2}}; for {{10K}} 
> applications that is roughly 50 million iterations, causing the RM to take 
> more than 10 minutes to become active.
> Since NM resources are not yet added during recovery, we should skip 
> {{activateApplications()}}.






[jira] [Commented] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604793#comment-15604793
 ] 

Hadoop QA commented on YARN-5545:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 26s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 43s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835090/YARN-5545.004.patch |
| JIRA Issue | YARN-5545 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6de9ebf193ce 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dbd2057 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13498/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/13498/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13498/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13498/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> App submit fai

[jira] [Updated] (YARN-5773) RM recovery too slow due to LeafQueue#activateApplication()

2016-10-25 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5773:
---
Attachment: YARN-5773.003.patch

Attaching a patch to handle only the recovery scenario.

> RM recovery too slow due to LeafQueue#activateApplication()
> ---
>
> Key: YARN-5773
> URL: https://issues.apache.org/jira/browse/YARN-5773
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5773.0001.patch, YARN-5773.0002.patch, 
> YARN-5773.003.patch
>
>
> # Submit 10K applications to the default queue.
> # All applications are in the ACCEPTED state.
> # Now restart the ResourceManager.
> For each recovered application, {{LeafQueue#activateApplications()}} is 
> invoked, resulting in the AM limit check being done even before NodeManagers 
> have registered.
> The total iteration count for N applications is about {{N(N+1)/2}}; for {{10K}} 
> applications that is roughly 50 million iterations, causing the RM to take 
> more than 10 minutes to become active.
> Since NM resources are not yet added during recovery, we should skip 
> {{activateApplications()}}.






[jira] [Updated] (YARN-4743) ResourceManager crash because TimSort

2016-10-25 Thread Zephyr Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zephyr Guo updated YARN-4743:
-
Attachment: YARN-4743-v3.patch

> ResourceManager crash because TimSort
> -
>
> Key: YARN-4743
> URL: https://issues.apache.org/jira/browse/YARN-4743
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha1
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
> Attachments: YARN-4743-v1.patch, YARN-4743-v2.patch, 
> YARN-4743-v3.patch, timsort.log
>
>
> {code}
> 2016-02-26 14:08:50,821 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type NODE_UPDATE to the scheduler
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>  at java.util.TimSort.mergeHi(TimSort.java:868)
>  at java.util.TimSort.mergeAt(TimSort.java:485)
>  at java.util.TimSort.mergeCollapse(TimSort.java:410)
>  at java.util.TimSort.sort(TimSort.java:214)
>  at java.util.TimSort.sort(TimSort.java:173)
>  at java.util.Arrays.sort(Arrays.java:659)
>  at java.util.Collections.sort(Collections.java:217)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.assignContainer(FSLeafQueue.java:316)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.assignContainer(FSParentQueue.java:240)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.attemptScheduling(FairScheduler.java:1091)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:989)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1185)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:112)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:684)
>  at java.lang.Thread.run(Thread.java:745)
> 2016-02-26 14:08:50,822 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {code}
> Actually, this bug was found in 2.6.0-cdh. {{FairShareComparator}} is not 
> transitive.
> We get NaN when memorySize=0 and weight=0.
> {code:title=FairSharePolicy.java}
> useToWeightRatio1 = s1.getResourceUsage().getMemorySize() /
>   s1.getWeights().getWeight(ResourceType.MEMORY)
> {code}
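
To see why the quoted ratio breaks TimSort, here is a small self-contained 
illustration (simplified; not the actual FairShareComparator code) of how a 0/0 
usage-to-weight ratio yields NaN and makes a less-than based comparison 
non-transitive:

{code}
static double ratio(long memorySize, double weight) {
  return memorySize / weight;            // 0 / 0.0 -> NaN
}

static int compare(double r1, double r2) {
  if (r1 < r2) { return -1; }
  if (r1 > r2) { return 1; }
  return 0;                              // NaN falls through to "equal"
}

// compare(1.0, ratio(0, 0)) == 0 and compare(ratio(0, 0), 2.0) == 0,
// but compare(1.0, 2.0) == -1: "equal" is not transitive here, which is
// exactly the contract violation TimSort reports.
{code}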






[jira] [Updated] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-10-25 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-2009:
--
Attachment: YARN-2009.0015.patch

Thank you [~leftnoteasy]. You are correct, we didn't have that metric yet. I am 
now calculating it from the running apps, so it will be fine for preemption. 
Kindly help to check.

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, 
> YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, 
> YARN-2009.0006.patch, YARN-2009.0007.patch, YARN-2009.0008.patch, 
> YARN-2009.0009.patch, YARN-2009.0010.patch, YARN-2009.0011.patch, 
> YARN-2009.0012.patch, YARN-2009.0013.patch, YARN-2009.0014.patch, 
> YARN-2009.0015.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.
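
A tiny, self-contained sketch of that ordering idea; the candidate type and 
priority values below are hypothetical and are not taken from the actual 
ProportionalCapacityPreemptionPolicy code:

{code}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PreemptionOrderSketch {
  // Hypothetical candidate type; in the real policy the priority would be
  // read from the container's owning application.
  static final class Candidate {
    final String containerId;
    final int appPriority;     // higher value = higher-priority application
    Candidate(String containerId, int appPriority) {
      this.containerId = containerId;
      this.appPriority = appPriority;
    }
  }

  public static void main(String[] args) {
    List<Candidate> candidates = new ArrayList<>();
    candidates.add(new Candidate("container_2", 10)); // high-priority app
    candidates.add(new Candidate("container_1", 0));  // low-priority app

    // Sort ascending by application priority so containers of low-priority
    // applications come first and are preempted before higher-priority ones.
    candidates.sort(Comparator.comparingInt(c -> c.appPriority));
    candidates.forEach(c -> System.out.println(c.containerId));
  }
}
{code}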






[jira] [Updated] (YARN-4743) ResourceManager crash because TimSort

2016-10-25 Thread Zephyr Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zephyr Guo updated YARN-4743:
-
Affects Version/s: (was: 3.0.0-alpha1)

> ResourceManager crash because TimSort
> -
>
> Key: YARN-4743
> URL: https://issues.apache.org/jira/browse/YARN-4743
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-4743-v1.patch, YARN-4743-v2.patch, 
> YARN-4743-v3.patch, timsort.log
>
>
> {code}
> 2016-02-26 14:08:50,821 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type NODE_UPDATE to the scheduler
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>  at java.util.TimSort.mergeHi(TimSort.java:868)
>  at java.util.TimSort.mergeAt(TimSort.java:485)
>  at java.util.TimSort.mergeCollapse(TimSort.java:410)
>  at java.util.TimSort.sort(TimSort.java:214)
>  at java.util.TimSort.sort(TimSort.java:173)
>  at java.util.Arrays.sort(Arrays.java:659)
>  at java.util.Collections.sort(Collections.java:217)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.assignContainer(FSLeafQueue.java:316)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.assignContainer(FSParentQueue.java:240)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.attemptScheduling(FairScheduler.java:1091)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:989)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1185)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:112)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:684)
>  at java.lang.Thread.run(Thread.java:745)
> 2016-02-26 14:08:50,822 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {code}
> Actually, this bug was found in 2.6.0-cdh5.4.7. {{FairShareComparator}} is not 
> transitive.
> We get NaN when memorySize=0 and weight=0.
> {code:title=FairSharePolicy.java}
> useToWeightRatio1 = s1.getResourceUsage().getMemorySize() /
>   s1.getWeights().getWeight(ResourceType.MEMORY)
> {code}






[jira] [Updated] (YARN-4743) ResourceManager crash because TimSort

2016-10-25 Thread Zephyr Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zephyr Guo updated YARN-4743:
-
Description: 
{code}
2016-02-26 14:08:50,821 FATAL 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
handling event type NODE_UPDATE to the scheduler
java.lang.IllegalArgumentException: Comparison method violates its general 
contract!
 at java.util.TimSort.mergeHi(TimSort.java:868)
 at java.util.TimSort.mergeAt(TimSort.java:485)
 at java.util.TimSort.mergeCollapse(TimSort.java:410)
 at java.util.TimSort.sort(TimSort.java:214)
 at java.util.TimSort.sort(TimSort.java:173)
 at java.util.Arrays.sort(Arrays.java:659)
 at java.util.Collections.sort(Collections.java:217)
 at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.assignContainer(FSLeafQueue.java:316)
 at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.assignContainer(FSParentQueue.java:240)
 at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.attemptScheduling(FairScheduler.java:1091)
 at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:989)
 at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1185)
 at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:112)
 at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:684)
 at java.lang.Thread.run(Thread.java:745)
2016-02-26 14:08:50,822 INFO 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
{code}

Actually, this bug was found in 2.6.0-cdh5.4.7. {{FairShareComparator}} is not 
transitive.

We get NaN when memorySize=0 and weight=0.
{code:title=FairSharePolicy.java}
useToWeightRatio1 = s1.getResourceUsage().getMemorySize() /
  s1.getWeights().getWeight(ResourceType.MEMORY)
{code}


  was:
{code}
2016-02-26 14:08:50,821 FATAL 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
handling event type NODE_UPDATE to the scheduler
java.lang.IllegalArgumentException: Comparison method violates its general 
contract!
 at java.util.TimSort.mergeHi(TimSort.java:868)
 at java.util.TimSort.mergeAt(TimSort.java:485)
 at java.util.TimSort.mergeCollapse(TimSort.java:410)
 at java.util.TimSort.sort(TimSort.java:214)
 at java.util.TimSort.sort(TimSort.java:173)
 at java.util.Arrays.sort(Arrays.java:659)
 at java.util.Collections.sort(Collections.java:217)
 at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.assignContainer(FSLeafQueue.java:316)
 at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.assignContainer(FSParentQueue.java:240)
 at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.attemptScheduling(FairScheduler.java:1091)
 at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:989)
 at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1185)
 at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:112)
 at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:684)
 at java.lang.Thread.run(Thread.java:745)
2016-02-26 14:08:50,822 INFO 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
{code}

Actually, this bug was found in 2.6.0-cdh. {{FairShareComparator}} is not 
transitive.

We get NaN when memorySize=0 and weight=0.
{code:title=FairSharePolicy.java}
useToWeightRatio1 = s1.getResourceUsage().getMemorySize() /
  s1.getWeights().getWeight(ResourceType.MEMORY)
{code}



> ResourceManager crash because TimSort
> -
>
> Key: YARN-4743
> URL: https://issues.apache.org/jira/browse/YARN-4743
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-4743-v1.patch, YARN-4743-v2.patch, 
> YARN-4743-v3.patch, timsort.log
>
>
> {code}
> 2016-02-26 14:08:50,821 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type NODE_UPDATE to the scheduler
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>  at java.util.TimSort.mergeHi(TimSor

[jira] [Updated] (YARN-4743) ResourceManager crash because TimSort

2016-10-25 Thread Zephyr Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zephyr Guo updated YARN-4743:
-
Affects Version/s: 3.0.0-alpha2

> ResourceManager crash because TimSort
> -
>
> Key: YARN-4743
> URL: https://issues.apache.org/jira/browse/YARN-4743
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-4743-v1.patch, YARN-4743-v2.patch, 
> YARN-4743-v3.patch, timsort.log
>
>
> {code}
> 2016-02-26 14:08:50,821 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type NODE_UPDATE to the scheduler
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>  at java.util.TimSort.mergeHi(TimSort.java:868)
>  at java.util.TimSort.mergeAt(TimSort.java:485)
>  at java.util.TimSort.mergeCollapse(TimSort.java:410)
>  at java.util.TimSort.sort(TimSort.java:214)
>  at java.util.TimSort.sort(TimSort.java:173)
>  at java.util.Arrays.sort(Arrays.java:659)
>  at java.util.Collections.sort(Collections.java:217)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.assignContainer(FSLeafQueue.java:316)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.assignContainer(FSParentQueue.java:240)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.attemptScheduling(FairScheduler.java:1091)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:989)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1185)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:112)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:684)
>  at java.lang.Thread.run(Thread.java:745)
> 2016-02-26 14:08:50,822 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {code}
> Actually, this bug was found in 2.6.0-cdh5.4.7. {{FairShareComparator}} is not 
> transitive.
> We get NaN when memorySize=0 and weight=0.
> {code:title=FairSharePolicy.java}
> useToWeightRatio1 = s1.getResourceUsage().getMemorySize() /
>   s1.getWeights().getWeight(ResourceType.MEMORY)
> {code}






[jira] [Updated] (YARN-4743) ResourceManager crash because TimSort

2016-10-25 Thread Zephyr Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zephyr Guo updated YARN-4743:
-
Fix Version/s: 3.0.0-alpha2

> ResourceManager crash because TimSort
> -
>
> Key: YARN-4743
> URL: https://issues.apache.org/jira/browse/YARN-4743
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-4743-v1.patch, YARN-4743-v2.patch, 
> YARN-4743-v3.patch, timsort.log
>
>
> {code}
> 2016-02-26 14:08:50,821 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type NODE_UPDATE to the scheduler
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>  at java.util.TimSort.mergeHi(TimSort.java:868)
>  at java.util.TimSort.mergeAt(TimSort.java:485)
>  at java.util.TimSort.mergeCollapse(TimSort.java:410)
>  at java.util.TimSort.sort(TimSort.java:214)
>  at java.util.TimSort.sort(TimSort.java:173)
>  at java.util.Arrays.sort(Arrays.java:659)
>  at java.util.Collections.sort(Collections.java:217)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.assignContainer(FSLeafQueue.java:316)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.assignContainer(FSParentQueue.java:240)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.attemptScheduling(FairScheduler.java:1091)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:989)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1185)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:112)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:684)
>  at java.lang.Thread.run(Thread.java:745)
> 2016-02-26 14:08:50,822 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {code}
> Actually, this bug was found in 2.6.0-cdh5.4.7. {{FairShareComparator}} is not 
> transitive.
> We get NaN when memorySize=0 and weight=0.
> {code:title=FairSharePolicy.java}
> useToWeightRatio1 = s1.getResourceUsage().getMemorySize() /
>   s1.getWeights().getWeight(ResourceType.MEMORY)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4743) ResourceManager crash because TimSort

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605233#comment-15605233
 ] 

Hadoop QA commented on YARN-4743:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 12 unchanged - 0 fixed = 13 total (was 12) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 50s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 3s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835106/YARN-4743-v3.patch |
| JIRA Issue | YARN-4743 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux be86ad9498f5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dbd2057 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13500/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13500/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/13500/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13500/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/had

[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605241#comment-15605241
 ] 

Hadoop QA commented on YARN-2009:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 26 new + 175 unchanged - 33 fixed = 201 total (was 208) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 937 unchanged - 1 fixed = 937 total (was 938) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 19s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 52s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835107/YARN-2009.0015.patch |
| JIRA Issue | YARN-2009 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux eeb2c9457c32 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dbd2057 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13499/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13499/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13499/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Priority support for preemption 

[jira] [Updated] (YARN-5375) invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures

2016-10-25 Thread sandflee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sandflee updated YARN-5375:
---
Attachment: YARN-5375.08.patch

> invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures
> --
>
> Key: YARN-5375
> URL: https://issues.apache.org/jira/browse/YARN-5375
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: sandflee
>Assignee: sandflee
> Attachments: YARN-5375.01.patch, YARN-5375.03.patch, 
> YARN-5375.04.patch, YARN-5375.05.patch, YARN-5375.06.patch, 
> YARN-5375.07-drain-statestore.patch, YARN-5375.07-sync-statestore.patch, 
> YARN-5375.08.patch
>
>
> We have seen many test failures where an RMApp/RMAppAttempt reaches some state but 
> some events are not yet processed in the RM event queue or the scheduler event 
> queue, causing the test to fail. It seems we could implicitly invoke drainEvents 
> (which should also drain scheduler events) in MockRM methods like waitForState.
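
A hypothetical, self-contained sketch of the idea (class and method names are illustrative, not the actual MockRM code): drain the pending RM and scheduler events before every state check, so the assertion does not race with events still sitting in the dispatcher queues.

{code:title=DrainingWaiter.java}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Supplier;

public class DrainingWaiter {
  // Stand-ins for the RM dispatcher and scheduler event queues.
  private final BlockingQueue<Runnable> rmEvents = new LinkedBlockingQueue<>();
  private final BlockingQueue<Runnable> schedulerEvents = new LinkedBlockingQueue<>();

  /** Process everything currently queued on both dispatchers. */
  public void drainEvents() {
    Runnable event;
    while ((event = rmEvents.poll()) != null) { event.run(); }
    while ((event = schedulerEvents.poll()) != null) { event.run(); }
  }

  /** waitForState-style helper that drains implicitly before every poll. */
  public boolean waitForState(Supplier<String> currentState, String expected,
      long timeoutMs) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      drainEvents(); // the implicit drain this JIRA proposes
      if (expected.equals(currentState.get())) {
        return true;
      }
      Thread.sleep(50);
    }
    return false;
  }
}
{code}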



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5575) Many classes use bare yarn. properties instead of the defined constants

2016-10-25 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605281#comment-15605281
 ] 

Daniel Templeton commented on YARN-5575:


Thanks, [~ajisakaa].  I'd rather not address those two issues as I'd be making 
awkward line splits to break up 81-character lines.  I think in those two cases 
the cure is worse than the disease.

> Many classes use bare yarn. properties instead of the defined constants
> ---
>
> Key: YARN-5575
> URL: https://issues.apache.org/jira/browse/YARN-5575
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-5575.001.patch, YARN-5575.002.patch, 
> YARN-5575.003.patch
>
>
> MAPREDUCE-5870 introduced the following line:
> {code}
>   conf.setInt("yarn.cluster.max-application-priority", 10);
> {code}
> It should instead be:
> {code}
>   conf.setInt(YarnConfiguration.MAX_CLUSTER_LEVEL_APPLICATION_PRIORITY, 
> 10);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5375) invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures

2016-10-25 Thread sandflee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605288#comment-15605288
 ] 

sandflee commented on YARN-5375:


Updated YARN-5375.08.patch to address [~rohithsharma]'s comments:
1. added MockRMNullStateStore, since NullStateStore is the most commonly used store
2. a simple fix for the TestFairScheduler deadlock

> invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures
> --
>
> Key: YARN-5375
> URL: https://issues.apache.org/jira/browse/YARN-5375
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: sandflee
>Assignee: sandflee
> Attachments: YARN-5375.01.patch, YARN-5375.03.patch, 
> YARN-5375.04.patch, YARN-5375.05.patch, YARN-5375.06.patch, 
> YARN-5375.07-drain-statestore.patch, YARN-5375.07-sync-statestore.patch, 
> YARN-5375.08.patch
>
>
> We have seen many test failures where an RMApp/RMAppAttempt reaches some state but 
> some events are not yet processed in the RM event queue or the scheduler event 
> queue, causing the test to fail. It seems we could implicitly invoke drainEvents 
> (which should also drain scheduler events) in MockRM methods like waitForState.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-10-25 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605290#comment-15605290
 ] 

Bibin A Chundatt commented on YARN-5545:


The test case failure is not related to the attached patch; YARN-5548 is already 
available to track it. 
[~varun_saxena], could you have a look at YARN-5548 too?

> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5545.0001.patch, YARN-5545.0002.patch, 
> YARN-5545.0003.patch, YARN-5545.004.patch, capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:286)
>   at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:296)
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:301)
>   ... 25 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5556) Support for deleting queues without requiring a RM restart

2016-10-25 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605299#comment-15605299
 ] 

Daniel Templeton commented on YARN-5556:


That'll work for me, except for one tiny typo:

{code}  + " cannot be deleted as atleast one of its children 
has"{code}

atleast/at least


> Support for deleting queues without requiring a RM restart
> --
>
> Key: YARN-5556
> URL: https://issues.apache.org/jira/browse/YARN-5556
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Xuan Gong
>Assignee: Naganarasimha G R
> Attachments: YARN-5556.v1.001.patch, YARN-5556.v1.002.patch, 
> YARN-5556.v1.003.patch
>
>
> Today, we could add or modify queues without restarting the RM, via a CS 
> refresh. But for deleting queue, we have to restart the ResourceManager. We 
> could support for deleting queues without requiring a RM restart



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5765) LinuxContainerExecutor creates appcache and its subdirectories with wrong group owner.

2016-10-25 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605347#comment-15605347
 ] 

Jason Lowe commented on YARN-5765:
--

Linking the two tickets.  YARN-5287 went into 2.8, but this is marked as only 
affecting 3.0.0-alpha1.  Should this be a blocker for 2.8 as well?

> LinuxContainerExecutor creates appcache and its subdirectories with wrong 
> group owner.
> --
>
> Key: YARN-5765
> URL: https://issues.apache.org/jira/browse/YARN-5765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
>
> LinuxContainerExecutor creates usercache/\{userId\}/appcache/\{appId\} with 
> wrong group owner, causing Log aggregation and ShuffleHandler to fail because 
> node manager process does not have permission to read the files under the 
> directory.
> This can be easily reproduced by enabling LCE and submitting a MR example 
> job. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5765) LinuxContainerExecutor creates appcache and its subdirectories with wrong group owner.

2016-10-25 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605372#comment-15605372
 ] 

Varun Vasudev commented on YARN-5765:
-

Definitely a blocker for 2.8 as well. cc [~naganarasimha...@apache.org] and 
[~Ying Zhang] who worked on YARN-5287.

> LinuxContainerExecutor creates appcache and its subdirectories with wrong 
> group owner.
> --
>
> Key: YARN-5765
> URL: https://issues.apache.org/jira/browse/YARN-5765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
>
> LinuxContainerExecutor creates usercache/\{userId\}/appcache/\{appId\} with 
> wrong group owner, causing Log aggregation and ShuffleHandler to fail because 
> node manager process does not have permission to read the files under the 
> directory.
> This can be easily reproduced by enabling LCE and submitting a MR example 
> job. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5765) LinuxContainerExecutor creates appcache and its subdirectories with wrong group owner.

2016-10-25 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-5765:
-
Affects Version/s: 2.8.0
 Target Version/s: 2.8.0  (was: 3.0.0-alpha2)

> LinuxContainerExecutor creates appcache and its subdirectories with wrong 
> group owner.
> --
>
> Key: YARN-5765
> URL: https://issues.apache.org/jira/browse/YARN-5765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
>
> LinuxContainerExecutor creates usercache/\{userId\}/appcache/\{appId\} with 
> wrong group owner, causing Log aggregation and ShuffleHandler to fail because 
> node manager process does not have permission to read the files under the 
> directory.
> This can be easily reproduced by enabling LCE and submitting a MR example 
> job. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5723) Proxy Server unresponsive after printing 'accessing unchecked' multiple times

2016-10-25 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605380#comment-15605380
 ] 

Daniel Templeton commented on YARN-5723:


Could it be related to YARN-4767?

> Proxy Server unresponsive after printing 'accessing unchecked' multiple times
> -
>
> Key: YARN-5723
> URL: https://issues.apache.org/jira/browse/YARN-5723
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
> Environment: Latest alpha2-SNAPSHOT running in Ubuntu 16.04 Docker 
> containers.
>Reporter: Zoltán Zvara
>
> After the first request to the Proxy Server against Spark 2.0.0 web UI, the 
> following logs appear:
> {{2016-10-12 14:43:54,998 INFO 
> org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is 
> accessing unchecked http://10.1.1.240:4040 which is the app master GUI of 
> application_1476283324979_0002 owned by Ehnalis
> 2016-10-12 14:43:55,007 INFO 
> org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is 
> accessing unchecked http://10.1.1.240:4040 which is the app master GUI of 
> application_1476283324979_0002 owned by Ehnalis
> 2016-10-12 14:43:55,015 INFO 
> org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is 
> accessing unchecked http://10.1.1.240:4040 which is the app master GUI of 
> application_1476283324979_0002 owned by Ehnalis
> 2016-10-12 14:43:55,024 INFO 
> org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is 
> accessing unchecked http://10.1.1.240:4040 which is the app master GUI of 
> application_1476283324979_0002 owned by Ehnalis
> 2016-10-12 14:43:55,033 INFO 
> org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is 
> accessing unchecked http://10.1.1.240:4040 which is the app master GUI of 
> application_1476283324979_0002 owned by Ehnalis
> 2016-10-12 14:43:55,042 INFO 
> org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is 
> accessing unchecked http://10.1.1.240:4040 which is the app master GUI of 
> application_1476283324979_0002 owned by Ehnalis
> 2016-10-12 14:43:55,051 INFO 
> org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is 
> accessing unchecked http://10.1.1.240:4040 which is the app master GUI of 
> application_1476283324979_0002 owned by Ehnalis
> 2016-10-12 14:43:55,059 INFO 
> org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is 
> accessing unchecked http://10.1.1.240:4040 which is the app master GUI of 
> application_1476283324979_0002 owned by Ehnalis}}
> The Proxy Server is going to be unresponsive after these messages. If the 
> Proxy Server is not on a standalone port, but instead with the RM, the 
> default YARN web UI goes unresponsive as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5765) LinuxContainerExecutor creates appcache and its subdirectories with wrong group owner.

2016-10-25 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605458#comment-15605458
 ] 

Naganarasimha G R commented on YARN-5765:
-

My bad, I did not mark the release versions after committing the patch to 2.8 and 
2.9.

> LinuxContainerExecutor creates appcache and its subdirectories with wrong 
> group owner.
> --
>
> Key: YARN-5765
> URL: https://issues.apache.org/jira/browse/YARN-5765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
>
> LinuxContainerExecutor creates usercache/\{userId\}/appcache/\{appId\} with 
> wrong group owner, causing Log aggregation and ShuffleHandler to fail because 
> node manager process does not have permission to read the files under the 
> directory.
> This can be easily reproduced by enabling LCE and submitting a MR example 
> job. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5587) Add support for resource profiles

2016-10-25 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605509#comment-15605509
 ] 

Varun Vasudev commented on YARN-5587:
-

bq. In Resources, you moved the Suppress deprecation warning from the 
setMemorySize(long) method to the setMemory(int). Was that intentional ?

Yes - setMemory is deprecated, not setMemorySize. Findbugs complains if I don't 
suppress the warning.

{quote}
AMRMClient::ContainerRequest : Wondering if we need to allow a Container 
request to specify both a profile name and a Resource (capability). If they do 
specify both, what does that mean ?
Similarly, in the RemoteRequestTable, the RR should be keyed using the 
Resource (capability) derived from the profileName.
{quote}

Good question - my opinion is that profile + capability means take the profile 
and override it with the capability. So for example if I say I want profile 
'large' with capability <4096M, 2 cores> - that means take all the values from 
profile 'large' but use 4096M for memory and 2 cores for CPU. This leads to a 
follow-up question - what if you ask for profile 'large' with capability 
<4096M, 0 cores> (this will happen because the Resource object doesn't have 
'null' values for memory and vcores)? In this case my proposal is to only use 
non-zero overrides, sketched below.
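
A hypothetical sketch of the non-zero override idea (an assumed helper, not part of any attached patch): start from the profile's Resource and replace only the fields the request sets to a non-zero value.

{code:title=ProfileOverrides.java}
import org.apache.hadoop.yarn.api.records.Resource;

public final class ProfileOverrides {
  private ProfileOverrides() {
  }

  // Merge a requested capability into a profile: non-zero fields win.
  public static Resource merge(Resource profile, Resource capability) {
    long memory = capability.getMemorySize() != 0
        ? capability.getMemorySize() : profile.getMemorySize();
    int vcores = capability.getVirtualCores() != 0
        ? capability.getVirtualCores() : profile.getVirtualCores();
    return Resource.newInstance(memory, vcores);
  }
}
{code}

With profile 'large' = <8192M, 8 cores> and capability <4096M, 0 cores>, merge() would return <4096M, 8 cores>: memory is overridden while vcores falls back to the profile value.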

The reason I went with this approach is that most users are used to running 
spark and MR jobs with memory and cores specified. They'll continue to run 
these applications the same way. This approach allows an easy way for admins to 
turn on resource profiles without affecting users. However I also accept that 
the 'only consider non-zero' values for overrides might seem hackish - I'm 
absolutely open to alternatives.

> Add support for resource profiles
> -
>
> Key: YARN-5587
> URL: https://issues.apache.org/jira/browse/YARN-5587
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5587-YARN-3926.001.patch, 
> YARN-5587-YARN-3926.002.patch, YARN-5587-YARN-3926.003.patch, 
> YARN-5587-YARN-3926.004.patch, YARN-5587-YARN-3926.005.patch
>
>
> Add support for resource profiles on the RM side to allow users to use 
> shorthands to specify resource requirements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5375) invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605453#comment-15605453
 ] 

Hadoop QA commented on YARN-5375:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 13s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 40s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 4 
new + 414 unchanged - 3 fixed = 418 total (was 417) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 37s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 17s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 39s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835111/YARN-5375.08.patch |
| JIRA Issue | YARN-5375 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 827e9d9d63c5 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dbd2057 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13501/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13501/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.o

[jira] [Updated] (YARN-5148) [YARN-3368] Add page to new YARN UI to view server side configurations/logs/JVM-metrics

2016-10-25 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated YARN-5148:
-
Attachment: YARN-5148-YARN-3368.05.patch

> [YARN-3368] Add page to new YARN UI to view server side 
> configurations/logs/JVM-metrics
> ---
>
> Key: YARN-5148
> URL: https://issues.apache.org/jira/browse/YARN-5148
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Kai Sasaki
> Attachments: Screen Shot 2016-09-11 at 23.28.31.png, Screen Shot 
> 2016-09-13 at 22.27.00.png, YARN-5148-YARN-3368.01.patch, 
> YARN-5148-YARN-3368.02.patch, YARN-5148-YARN-3368.03.patch, 
> YARN-5148-YARN-3368.04.patch, YARN-5148-YARN-3368.05.patch, yarn-conf.png, 
> yarn-tools.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4463) Container launch failure when yarn.nodemanager.log-dirs directory path contains space

2016-10-25 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605532#comment-15605532
 ] 

Bibin A Chundatt commented on YARN-4463:


[~sunilg]/[~rohithsharma]
Another option could be to skip all those paths that contain spaces, for both 
nm-local-dir and nm-log-dir, as in the sketch below.
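
A hypothetical sketch of that option (illustrative only, not from any attached patch): filter out configured directories whose paths contain a space and warn about each one that is skipped.

{code:title=DirListSanitizer.java}
import java.util.ArrayList;
import java.util.List;

public final class DirListSanitizer {
  private DirListSanitizer() {
  }

  // Drop configured NM directories whose paths contain spaces.
  public static List<String> skipPathsWithSpaces(List<String> configuredDirs) {
    List<String> usable = new ArrayList<>();
    for (String dir : configuredDirs) {
      if (dir.contains(" ")) {
        System.err.println("Skipping dir with a space in its path: " + dir);
      } else {
        usable.add(dir);
      }
    }
    return usable;
  }
}
{code}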

> Container launch failure when yarn.nodemanager.log-dirs directory path 
> contains space
> -
>
> Key: YARN-4463
> URL: https://issues.apache.org/jira/browse/YARN-4463
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>
> If the container log directory path contains a space, container launch fails.
> Even with DEBUG logs enabled, the only log we are able to get is:
> {noformat}
> Container id: container_e32_1450233925719_0009_01_22
> Exit code: 1
> Stack trace: ExitCodeException exitCode=1:
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:912)
> at org.apache.hadoop.util.Shell.run(Shell.java:823)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1102)
> at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:225)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:304)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:84)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> We should make container launch support NM log directory paths that contain spaces.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5148) [YARN-3368] Add page to new YARN UI to view server side configurations/logs/JVM-metrics

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605535#comment-15605535
 ] 

Hadoop QA commented on YARN-5148:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch 91 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 52s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5a4801a |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835141/YARN-5148-YARN-3368.05.patch
 |
| JIRA Issue | YARN-5148 |
| Optional Tests |  asflicense  |
| uname | Linux 726ce6f07f61 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3368 / 9690f29 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/13502/artifact/patchprocess/whitespace-tabs.txt
 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13502/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Add page to new YARN UI to view server side 
> configurations/logs/JVM-metrics
> ---
>
> Key: YARN-5148
> URL: https://issues.apache.org/jira/browse/YARN-5148
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Kai Sasaki
> Attachments: Screen Shot 2016-09-11 at 23.28.31.png, Screen Shot 
> 2016-09-13 at 22.27.00.png, YARN-5148-YARN-3368.01.patch, 
> YARN-5148-YARN-3368.02.patch, YARN-5148-YARN-3368.03.patch, 
> YARN-5148-YARN-3368.04.patch, YARN-5148-YARN-3368.05.patch, yarn-conf.png, 
> yarn-tools.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5765) LinuxContainerExecutor creates appcache and its subdirectories with wrong group owner.

2016-10-25 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605561#comment-15605561
 ] 

Naganarasimha G R commented on YARN-5765:
-

Thanks [~haibochen] for the insights on the chmod command; I was not aware of it. 
At first glance, as initially suggested, it would be better to set {{umask(0027);}} 
in {{create_validate_dir}} before making the *mkdir* system call, rather than using 
chmod. This ensures that even if the cluster has been configured with a restrictive 
umask (e.g. umask 077), the directories are still created with the proper 
permissions. Another approach I can think of is the following, but I am not sure 
whether it works:
{code}
} else {
  // Explicitly set permission after creating the directory in case
  // umask has been set to a restrictive value, i.e., 0077.
  perm = perm | S_ISGID;
  if (chmod(npath, perm) != 0) {
    int permInt = perm & (S_IRWXU | S_IRWXG | S_IRWXO);
    fprintf(LOGFILE, "Can't chmod %s to the required permission %o - %s\n",
            npath, permInt, strerror(errno));
    return -1;
  }
}
{code}
Thoughts ? 

> LinuxContainerExecutor creates appcache and its subdirectories with wrong 
> group owner.
> --
>
> Key: YARN-5765
> URL: https://issues.apache.org/jira/browse/YARN-5765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
>
> LinuxContainerExecutor creates usercache/\{userId\}/appcache/\{appId\} with 
> wrong group owner, causing Log aggregation and ShuffleHandler to fail because 
> node manager process does not have permission to read the files under the 
> directory.
> This can be easily reproduced by enabling LCE and submitting a MR example 
> job. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5716) Add global scheduler interface definition and update CapacityScheduler to use it.

2016-10-25 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605683#comment-15605683
 ] 

Sunil G commented on YARN-5716:
---

Thanks [~leftnoteasy] for the update.

Few more general comments:

1. {{computeUserLimitAndSetHeadroom}} is invoked from the *accept* call 
chain and from *assignContainers*. I think that is fine provided both give 
similar computed data. With the user-limit preemption improvement, I think that if 
the computation is done outside these scheduler call flows, such user-limit 
computation costs can be brought down.
2. In {{LeafQueue.accept}}, 
{code}
Resources.subtractFrom(usedResource,
request.getTotalReleasedResource());
{code}
totalReleasedResource also includes reserved resources, correct? Do we need 
to decrement those as well? I am not very clear on this point.

3. In {{CS#allocateContainerOnSingleNode}}, *node.getReservedContainer()* is 
checked two times (second time to handle error case). Could we handle in first 
block itself.
4. In {{FiCaSchedulerApp.accept}},
  - a reference for {{resourceRequests}} is kept and under failure updated back 
to schedulingInfo object. I do not feel its a clean implementation. I remember 
this change, but I feel we need not have to take a reference in app level, may 
be we can keep it in schedulingInfo and can be used the same during recovery.
  - Method is very lengthy. We could take out logic like increase request etc. 
So it ll be more easier.
  - {{readLock}} is mostly needed for increased request and for 
appSchedulingInfo. I think some optimizations here could be done separately in 
another ticket as improvement.
  - preferring *equals* instead of {{if (fromReservedContainer != 
reservedContainerOnNode)}}

Also I have checked reservation continue scheduling and lazy-preemption in 
particular in this round. Generally both looks fine for me.

> Add global scheduler interface definition and update CapacityScheduler to use 
> it.
> -
>
> Key: YARN-5716
> URL: https://issues.apache.org/jira/browse/YARN-5716
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5716.001.patch, YARN-5716.002.patch, 
> YARN-5716.003.patch, YARN-5716.004.patch, YARN-5716.005.patch, 
> YARN-5716.006.patch, YARN-5716.007.patch, YARN-5716.008.patch
>
>
> Target of this JIRA:
> - Definition of interfaces / objects which will be used by global scheduling, 
> this will be shared by different schedulers.
> - Modify CapacityScheduler to use it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4743) ResourceManager crash because TimSort

2016-10-25 Thread Zephyr Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zephyr Guo updated YARN-4743:
-
Attachment: YARN-4743-v4.patch

> ResourceManager crash because TimSort
> -
>
> Key: YARN-4743
> URL: https://issues.apache.org/jira/browse/YARN-4743
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-4743-v1.patch, YARN-4743-v2.patch, 
> YARN-4743-v3.patch, YARN-4743-v4.patch, timsort.log
>
>
> {code}
> 2016-02-26 14:08:50,821 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type NODE_UPDATE to the scheduler
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>  at java.util.TimSort.mergeHi(TimSort.java:868)
>  at java.util.TimSort.mergeAt(TimSort.java:485)
>  at java.util.TimSort.mergeCollapse(TimSort.java:410)
>  at java.util.TimSort.sort(TimSort.java:214)
>  at java.util.TimSort.sort(TimSort.java:173)
>  at java.util.Arrays.sort(Arrays.java:659)
>  at java.util.Collections.sort(Collections.java:217)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.assignContainer(FSLeafQueue.java:316)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.assignContainer(FSParentQueue.java:240)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.attemptScheduling(FairScheduler.java:1091)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:989)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1185)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:112)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:684)
>  at java.lang.Thread.run(Thread.java:745)
> 2016-02-26 14:08:50,822 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {code}
> Actually, this bug was found in 2.6.0-cdh5.4.7. {{FairShareComparator}} is not 
> transitive.
> We get NaN when memorySize=0 and weight=0.
> {code:title=FairSharePolicy.java}
> useToWeightRatio1 = s1.getResourceUsage().getMemorySize() /
>   s1.getWeights().getWeight(ResourceType.MEMORY)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5548) Random test failure TestRMRestart#testFinishedAppRemovalAfterRMRestart

2016-10-25 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605769#comment-15605769
 ] 

Sunil G commented on YARN-5548:
---

YARN-5375 now has a state-store based solution. Hence we can ensure that any 
events fired from the RM to the state store, and the subsequent events back from 
the state store to the RM, are handled in a linear way. So we could now try to 
avoid a few waitForStates. Given that YARN-5375 is going in as it currently 
stands, do we still need this new waitForState?

> Random test failure TestRMRestart#testFinishedAppRemovalAfterRMRestart
> --
>
> Key: YARN-5548
> URL: https://issues.apache.org/jira/browse/YARN-5548
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5548.0001.patch, YARN-5548.0002.patch, 
> YARN-5548.0003.patch
>
>
> https://builds.apache.org/job/PreCommit-YARN-Build/12850/testReport/org.apache.hadoop.yarn.server.resourcemanager/TestRMRestart/testFinishedAppRemovalAfterRMRestart/
> {noformat}
> Error Message
> Stacktrace
> java.lang.AssertionError: expected null, but was: application_submission_context { application_id { id: 1 cluster_timestamp: 
> 1471885197388 } application_name: "" queue: "default" priority { priority: 0 
> } am_container_spec { } cancel_tokens_when_complete: true maxAppAttempts: 2 
> resource { memory: 1024 virtual_cores: 1 } applicationType: "YARN" 
> keep_containers_across_application_attempts: false 
> attempt_failures_validity_interval: 0 am_container_resource_request { 
> priority { priority: 0 } resource_name: "*" capability { memory: 1024 
> virtual_cores: 1 } num_containers: 0 relax_locality: true 
> node_label_expression: "" execution_type_request { execution_type: GUARANTEED 
> enforce_execution_type: false } } } user: "jenkins" start_time: 1471885197417 
> application_state: RMAPP_FINISHED finish_time: 1471885197478>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testFinishedAppRemovalAfterRMRestart(TestRMRestart.java:1656)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5778) Add .keep file for yarn native services AM web app

2016-10-25 Thread Billie Rinaldi (JIRA)
Billie Rinaldi created YARN-5778:


 Summary: Add .keep file for yarn native services AM web app
 Key: YARN-5778
 URL: https://issues.apache.org/jira/browse/YARN-5778
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Billie Rinaldi
Assignee: Billie Rinaldi


The empty .keep file is needed to preserve the directory structure for the yarn 
native services AM web app. To add this file, run "git add -f 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/main/resources/webapps/slideram/.keep"
 before doing the git commit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5778) Add .keep file for yarn native services AM web app

2016-10-25 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-5778:
-
Attachment: YARN-5778-yarn-native-services.001.patch

> Add .keep file for yarn native services AM web app
> --
>
> Key: YARN-5778
> URL: https://issues.apache.org/jira/browse/YARN-5778
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Attachments: YARN-5778-yarn-native-services.001.patch
>
>
> The empty .keep file is needed to preserve the directory structure for the 
> yarn native services AM web app. To add this file, run "git add -f 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/main/resources/webapps/slideram/.keep"
>  before doing the git commit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5778) Add .keep file for yarn native services AM web app

2016-10-25 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605868#comment-15605868
 ] 

Gour Saha commented on YARN-5778:
-

Looks good. +1 for the patch.

> Add .keep file for yarn native services AM web app
> --
>
> Key: YARN-5778
> URL: https://issues.apache.org/jira/browse/YARN-5778
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Attachments: YARN-5778-yarn-native-services.001.patch
>
>
> The empty .keep file is needed to preserve the directory structure for the 
> yarn native services AM web app. To add this file, run "git add -f 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/main/resources/webapps/slideram/.keep"
>  before doing the git commit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4734) Merge branch:YARN-3368 to trunk

2016-10-25 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605875#comment-15605875
 ] 

Allen Wittenauer commented on YARN-4734:


No, I saw it. I just opted to ignore it. We had already started reworking the 
YARN web initialization since a) its code differed from the rest of Hadoop and 
b) the rest of Hadoop actually works. But then we got distracted with other 
stuff and put it on the shelf. The problem itself should be easy for you folks 
to reproduce, though: you just need to point hadoop.http.authentication.type at 
a class that implements AltKerberos.
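
A minimal sketch of such a handler, assuming the standard 
AltKerberosAuthenticationHandler extension point; the package and class names 
below are made up for illustration:

{code}
package example;  // hypothetical package/class, for illustration only

import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.hadoop.security.authentication.client.AuthenticationException;
import org.apache.hadoop.security.authentication.server.AltKerberosAuthenticationHandler;
import org.apache.hadoop.security.authentication.server.AuthenticationToken;

// Non-browser requests still fall back to Kerberos (handled by the parent
// class); browser requests get a canned token so the web UI code path runs.
public class DummyAltKerberosHandler extends AltKerberosAuthenticationHandler {
  @Override
  public AuthenticationToken alternateAuthenticate(HttpServletRequest request,
      HttpServletResponse response) throws IOException, AuthenticationException {
    // A real handler would validate a cookie or SSO header here.
    return new AuthenticationToken("test-user", "test-user", getType());
  }
}
{code}

Then set hadoop.http.authentication.type to example.DummyAltKerberosHandler in 
the daemon configuration to exercise the AltKerberos path.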

> Merge branch:YARN-3368 to trunk
> ---
>
> Key: YARN-4734
> URL: https://issues.apache.org/jira/browse/YARN-4734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4734.1.patch, YARN-4734.10-NOT_READY.patch, 
> YARN-4734.11-NOT_READY.patch, YARN-4734.12-NOT_READY.patch, 
> YARN-4734.13.patch, YARN-4734.14.patch, YARN-4734.15.patch, 
> YARN-4734.2.patch, YARN-4734.3.patch, YARN-4734.4.patch, YARN-4734.5.patch, 
> YARN-4734.6.patch, YARN-4734.7.patch, YARN-4734.8.patch, 
> YARN-4734.9-NOT_READY.patch
>
>
> The YARN-2928 branch is planned to be merged back to trunk shortly; it depends 
> on changes from YARN-3368. This JIRA is to track the merging task.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5775) Bug fixes in swagger definition

2016-10-25 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605886#comment-15605886
 ] 

Billie Rinaldi commented on YARN-5775:
--

+1, straightforward patch.

> Bug fixes in swagger definition
> ---
>
> Key: YARN-5775
> URL: https://issues.apache.org/jira/browse/YARN-5775
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-5775-yarn-native-services.001.patch
>
>
> All enums have been listed in lowercase. They need to be converted to 
> uppercase.
> For example, ContainerState:
> {noformat}
> enum:
>   - init
>   - ready
> {noformat}
> needs to be changed to -
> {noformat}
> enum:
>   - INIT
>   - READY
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5778) Add .keep file for yarn native services AM web app

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605887#comment-15605887
 ] 

Hadoop QA commented on YARN-5778:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
32s {color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 29s 
{color} | {color:red} hadoop-yarn-slider-core in yarn-native-services failed. 
{color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 29s 
{color} | {color:red} hadoop-yarn-slider-core in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 30s {color} 
| {color:red} hadoop-yarn-slider-core in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 17s 
{color} | {color:red} The patch generated 11 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 47s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
slider.core.registry.docstore.TestPublishedConfigurationOutputter |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835161/YARN-5778-yarn-native-services.001.patch
 |
| JIRA Issue | YARN-5778 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  |
| uname | Linux e878c4f22871 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 023be93 |
| Default Java | 1.8.0_101 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/13504/artifact/patchprocess/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/13504/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13504/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/13504/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13504/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/13504/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 U: 
hadoop-yarn

[jira] [Commented] (YARN-5770) Performance improvement of native-services REST API service

2016-10-25 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605889#comment-15605889
 ] 

Billie Rinaldi commented on YARN-5770:
--

This patch looks good to me. I will test it out locally before completing my 
review.

> Performance improvement of native-services REST API service
> ---
>
> Key: YARN-5770
> URL: https://issues.apache.org/jira/browse/YARN-5770
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-5770-yarn-native-services.phase1.001.patch, 
> YARN-5770-yarn-native-services.phase1.002.patch
>
>
> Make enhancements and bug fixes to eliminate frequent full GCs of the REST API 
> Service. This also depends on a few Slider fixes such as SLIDER-1168.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4743) ResourceManager crash because TimSort

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605908#comment-15605908
 ] 

Hadoop QA commented on YARN-4743:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 30s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 55s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835157/YARN-4743-v4.patch |
| JIRA Issue | YARN-4743 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f2990e7beca8 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dbd2057 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13503/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13503/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> ResourceManager crash because TimSort
> -
>
> Key: YARN-4743
> URL: https://issues.apache.org/jira/browse/YARN-4743
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-4743-v1.patch, YARN-4743-v2.patch, 
> YARN-4743-v3.patch, Y

[jira] [Commented] (YARN-4734) Merge branch:YARN-3368 to trunk

2016-10-25 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605982#comment-15605982
 ] 

Wangda Tan commented on YARN-4734:
--

[~aw], thanks. I think we can move the AltKerberos discussion to YARN-4006, as it 
is not closely related to this work.

Anything else for the merge? I plan to start the vote thread before Thursday; just 
let me know if you have any comments before then. Thanks.

> Merge branch:YARN-3368 to trunk
> ---
>
> Key: YARN-4734
> URL: https://issues.apache.org/jira/browse/YARN-4734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4734.1.patch, YARN-4734.10-NOT_READY.patch, 
> YARN-4734.11-NOT_READY.patch, YARN-4734.12-NOT_READY.patch, 
> YARN-4734.13.patch, YARN-4734.14.patch, YARN-4734.15.patch, 
> YARN-4734.2.patch, YARN-4734.3.patch, YARN-4734.4.patch, YARN-4734.5.patch, 
> YARN-4734.6.patch, YARN-4734.7.patch, YARN-4734.8.patch, 
> YARN-4734.9-NOT_READY.patch
>
>
> The YARN-2928 branch is planned to be merged back to trunk shortly; it depends 
> on changes from YARN-3368. This JIRA is to track the merging task.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5779) [YARN-3368] Document limits/notes of the new YARN UI

2016-10-25 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-5779:


 Summary: [YARN-3368] Document limits/notes of the new YARN UI
 Key: YARN-5779
 URL: https://issues.apache.org/jira/browse/YARN-5779
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wangda Tan
Assignee: Wangda Tan


For example, we don't guarantee that it can run in a security-enabled 
environment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3649) Allow configurable prefix for hbase table names (like prod, exp, test etc)

2016-10-25 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606006#comment-15606006
 ] 

Vrushali C commented on YARN-3649:
--

Thanks Varun, making the changes, will upload a patch shortly.

> Allow configurable prefix for hbase table names (like prod, exp, test etc)
> --
>
> Key: YARN-3649
> URL: https://issues.apache.org/jira/browse/YARN-3649
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: YARN-5355
> Attachments: YARN-3649-YARN-2928.01.patch, 
> YARN-3649-YARN-5355.002.patch, YARN-3649-YARN-5355.003.patch, 
> YARN-3649-YARN-5355.004.patch, YARN-3649-YARN-5355.01.patch
>
>
> As per [~jrottinghuis]'s suggestion in YARN-3411, it would be a good idea to 
> have a configurable prefix for hbase table names.
> This way we can easily run a staging, a test, a production, or whatever setup 
> in the same HBase instance without having to override every single table in 
> the config.
> One could simply overwrite the default prefix and you're off and running.
> For the prefix, potential candidates are "tst", "prod", "exp", etc. One can then 
> still override an individual table name if needed, but managing one whole setup 
> will be easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5428) Allow for specifying the docker client configuration directory

2016-10-25 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606009#comment-15606009
 ] 

Allen Wittenauer commented on YARN-5428:


Nope.  Given that probably no one is going to use YARN as a replacement for k8s 
or Mesos or whatever, it doesn't really matter.

> Allow for specifying the docker client configuration directory
> --
>
> Key: YARN-5428
> URL: https://issues.apache.org/jira/browse/YARN-5428
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5428.001.patch, YARN-5428.002.patch, 
> YARN-5428.003.patch, YARN-5428.004.patch, 
> YARN-5428Allowforspecifyingthedockerclientconfigurationdirectory.pdf
>
>
> The docker client allows for specifying a configuration directory that 
> contains the docker client's configuration. It is common to store "docker 
> login" credentials in this config, to avoid the need to run docker login on 
> each cluster member.
> By default the docker client config is $HOME/.docker/config.json on Linux. 
> However, this does not work with the current container executor user 
> switching and it may also be desirable to centralize this configuration 
> beyond the single user's home directory.
> Note that the command line arg is for the configuration directory NOT the 
> configuration file.
> This change will be needed to allow YARN to automatically pull images at 
> localization time or within container executor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5366) Add support for toggling the removal of completed and failed docker containers

2016-10-25 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606015#comment-15606015
 ] 

Allen Wittenauer commented on YARN-5366:


I think you misunderstood what I was pointing out. If the yarn user is part of 
the docker group, this gives the docker command enough access to the docker 
daemon that c-e is no longer needed to exec docker. Given that there are 
currently zero protections on how YARN invokes docker, this doesn't change the 
security profile at all.



> Add support for toggling the removal of completed and failed docker containers
> --
>
> Key: YARN-5366
> URL: https://issues.apache.org/jira/browse/YARN-5366
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5366.001.patch, YARN-5366.002.patch, 
> YARN-5366.003.patch, YARN-5366.004.patch, YARN-5366.005.patch, 
> YARN-5366.006.patch
>
>
> Currently, completed and failed docker containers are removed by 
> container-executor. Add a job level environment variable to 
> DockerLinuxContainerRuntime to allow the user to toggle whether they want the 
> container deleted or not and remove the logic from container-executor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4173) Ensure the final values for metrics/events are emitted/stored at APP completion time

2016-10-25 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606017#comment-15606017
 ] 

Vrushali C commented on YARN-4173:
--

In YARN-5747, the final aggregation is being done explicitly, so we need to think 
further about what else needs to be done here. I believe what needs to be 
confirmed is that these values are tagged as FINAL when they are written.

> Ensure the final values for metrics/events are emitted/stored at APP 
> completion time
> 
>
> Key: YARN-4173
> URL: https://issues.apache.org/jira/browse/YARN-4173
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: YARN-5355
>
> When an application is finishing, the final values of metrics/events need to 
> be written to the backend as final values from all the AM/RM/NM processes for 
> that app.
> For the flow run table (YARN-3901), we need to know which values are the 
> final ones for metrics so that they can be tagged accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-10-25 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606036#comment-15606036
 ] 

Vrushali C commented on YARN-5739:
--

Actually, this is not that hard to do. It requires making two queries:
- first, get the flow from the AppToFlow table (given the cluster and appId, look 
up the flow run row key in the AppToFlow table).
- second, query the entity table with the row key prefix
{code}
userId! clusterId ! flowName ! flowRunId ! appId
{code}

All the entities that belong to this appId are part of the scan for this row 
key prefix (a rough sketch of this second query follows below).
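
A minimal sketch of that prefix scan, assuming an HBase {{Connection}} is already 
open; the table name and the plain '!'-joined prefix are simplifications, since 
the real timeline service row keys use their own encoding:

{code}
import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class EntityPrefixScanSketch {
  // Scan the entity table for every row whose key starts with
  // userId!clusterId!flowName!flowRunId!appId.
  static void listEntitiesForApp(Connection conn, String userId, String clusterId,
      String flowName, long flowRunId, String appId) throws IOException {
    byte[] prefix = Bytes.toBytes(
        userId + "!" + clusterId + "!" + flowName + "!" + flowRunId + "!" + appId + "!");
    try (Table entityTable = conn.getTable(TableName.valueOf("timelineservice.entity"));
         ResultScanner scanner =
             entityTable.getScanner(new Scan().setRowPrefixFilter(prefix))) {
      for (Result result : scanner) {
        // each result is one timeline entity belonging to this appId; its
        // entity type is encoded in the remainder of the row key
      }
    }
  }
}
{code}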



> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
>
> Right now we only show a part of the available timeline entity data in the new 
> YARN UI. However, some data (especially library-specific data) cannot be 
> queried through the web UI. It would be appealing for the UI to 
> provide an "entity browser" for each YARN application. Actually, simply 
> dumping out the available timeline entities (with proper pagination, of course) 
> would be pretty helpful for UI users. 
> On the timeline side, we're not far away from this goal. Right now I believe the 
> only thing missing is listing all available entity types within one 
> application. The challenge here is that we're not storing this data for each 
> application, but given this kind of call is relatively rare (compared to 
> writes and updates) we can perform some scanning at read time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5775) Convert enums in swagger definition to uppercase

2016-10-25 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-5775:
-
Summary: Convert enums in swagger definition to uppercase  (was: Bug fixes 
in swagger definition)

> Convert enums in swagger definition to uppercase
> 
>
> Key: YARN-5775
> URL: https://issues.apache.org/jira/browse/YARN-5775
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-5775-yarn-native-services.001.patch
>
>
> All enums have been listed in lowercase. They need to be converted to 
> uppercase.
> For example, ContainerState:
> {noformat}
> enum:
>   - init
>   - ready
> {noformat}
> needs to be changed to -
> {noformat}
> enum:
>   - INIT
>   - READY
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3649) Allow configurable prefix for hbase table names (like prod, exp, test etc)

2016-10-25 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-3649:
-
Attachment: YARN-3649-YARN-5355.005.patch

Uploading v5 that addresses Varun's suggestions.

Diff between this patch and the one before this:
{noformat}

$ diff YARN-3649-YARN-5355.004.patch YARN-3649-YARN-5355.005.patch
2c2
< index 3bb73f5..9e95d68 100644
---
> index 3bb73f5..925f626 100644
28c28
< +  TIMELINE_SERVICE_PREFIX + "schema.prefix";
---
> +  TIMELINE_SERVICE_PREFIX + "hbase-schema.prefix";
32a33,55
> diff --git 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
>  
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
> index ed220c0..f1fd85b 100644
> --- 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
> +++ 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
> @@ -2258,6 +2258,18 @@
>  25920
>
>
> +  
> +
> +The value of this parameter sets the prefix for all tables that are part 
> of
> +timeline service in the hbase storage schema. It can be set to "dev."
> +or "staging." if it is to be used for development or staging instances.
> +This way the data in production tables stays in a separate set of tables
> +prefixed by "prod.".
> +
> +yarn.timeline-service.hbase-schema.prefix
> +prod.
> +  
> +
>
>
>
$

{noformat}
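
As a rough illustration of how such a prefix would be applied when resolving 
table names (the helper below is hypothetical; only the property name and its 
default come from the diff above):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;

public class SchemaPrefixSketch {
  // Hypothetical helper: prepend the configured schema prefix to a base table
  // name, so "entity" resolves to e.g. "prod.entity" or "dev.entity".
  static TableName prefixedTableName(Configuration conf, String baseName) {
    String prefix = conf.get("yarn.timeline-service.hbase-schema.prefix", "prod.");
    return TableName.valueOf(prefix + baseName);
  }
}
{code}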


> Allow configurable prefix for hbase table names (like prod, exp, test etc)
> --
>
> Key: YARN-3649
> URL: https://issues.apache.org/jira/browse/YARN-3649
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: YARN-5355
> Attachments: YARN-3649-YARN-2928.01.patch, 
> YARN-3649-YARN-5355.002.patch, YARN-3649-YARN-5355.003.patch, 
> YARN-3649-YARN-5355.004.patch, YARN-3649-YARN-5355.005.patch, 
> YARN-3649-YARN-5355.01.patch
>
>
> As per [~jrottinghuis]'s suggestion in YARN-3411, it would be a good idea to 
> have a configurable prefix for hbase table names.
> This way we can easily run a staging, a test, a production, or whatever setup 
> in the same HBase instance without having to override every single table in 
> the config.
> One could simply overwrite the default prefix and you're off and running.
> For the prefix, potential candidates are "tst", "prod", "exp", etc. One can then 
> still override an individual table name if needed, but managing one whole setup 
> will be easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5548) Random test failure TestRMRestart#testFinishedAppRemovalAfterRMRestart

2016-10-25 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606080#comment-15606080
 ] 

Bibin A Chundatt commented on YARN-5548:


If there is a delay in getting YARN-5375 in, we can push this JIRA. We recently 
observed a failure of the same test case for YARN-5545.

> Random test failure TestRMRestart#testFinishedAppRemovalAfterRMRestart
> --
>
> Key: YARN-5548
> URL: https://issues.apache.org/jira/browse/YARN-5548
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5548.0001.patch, YARN-5548.0002.patch, 
> YARN-5548.0003.patch
>
>
> https://builds.apache.org/job/PreCommit-YARN-Build/12850/testReport/org.apache.hadoop.yarn.server.resourcemanager/TestRMRestart/testFinishedAppRemovalAfterRMRestart/
> {noformat}
> Error Message
> Stacktrace
> java.lang.AssertionError: expected null, but was: application_submission_context { application_id { id: 1 cluster_timestamp: 
> 1471885197388 } application_name: "" queue: "default" priority { priority: 0 
> } am_container_spec { } cancel_tokens_when_complete: true maxAppAttempts: 2 
> resource { memory: 1024 virtual_cores: 1 } applicationType: "YARN" 
> keep_containers_across_application_attempts: false 
> attempt_failures_validity_interval: 0 am_container_resource_request { 
> priority { priority: 0 } resource_name: "*" capability { memory: 1024 
> virtual_cores: 1 } num_containers: 0 relax_locality: true 
> node_label_expression: "" execution_type_request { execution_type: GUARANTEED 
> enforce_execution_type: false } } } user: "jenkins" start_time: 1471885197417 
> application_state: RMAPP_FINISHED finish_time: 1471885197478>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testFinishedAppRemovalAfterRMRestart(TestRMRestart.java:1656)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5777) TestLogsCLI#testFetchApplictionLogsAsAnotherUser fails

2016-10-25 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606081#comment-15606081
 ] 

John Zhuge commented on YARN-5777:
--

+1 (non-binding). The change looks good to me. Thanks [~ajisakaa] for the 
investigation and fix.

> TestLogsCLI#testFetchApplictionLogsAsAnotherUser fails
> --
>
> Key: YARN-5777
> URL: https://issues.apache.org/jira/browse/YARN-5777
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-5777.01.patch
>
>
> {noformat}
> Running org.apache.hadoop.yarn.client.cli.TestLogsCLI
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 5.876 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.client.cli.TestLogsCLI
> testFetchApplictionLogsAsAnotherUser(org.apache.hadoop.yarn.client.cli.TestLogsCLI)
>   Time elapsed: 0.199 sec  <<< ERROR!
> java.io.IOException: Invalid directory or I/O error occurred for dir: 
> /Users/aajisaka/git/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/logs/priority/logs/application_1477371285256_1000
> at org.apache.hadoop.fs.FileUtil.list(FileUtil.java:1148)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:469)
> at 
> org.apache.hadoop.fs.DelegateToFileSystem.listStatus(DelegateToFileSystem.java:169)
> at org.apache.hadoop.fs.ChecksumFs.listStatus(ChecksumFs.java:519)
> at 
> org.apache.hadoop.fs.AbstractFileSystem$1.(AbstractFileSystem.java:890)
> at 
> org.apache.hadoop.fs.AbstractFileSystem.listStatusIterator(AbstractFileSystem.java:888)
> at org.apache.hadoop.fs.FileContext$22.next(FileContext.java:1492)
> at org.apache.hadoop.fs.FileContext$22.next(FileContext.java:1487)
> at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
> at org.apache.hadoop.fs.FileContext.listStatus(FileContext.java:1494)
> at 
> org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.getRemoteNodeFileDir(LogCLIHelpers.java:592)
> at 
> org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.dumpAllContainersLogs(LogCLIHelpers.java:348)
> at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.fetchApplicationLogs(LogsCLI.java:971)
> at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.runCommand(LogsCLI.java:299)
> at org.apache.hadoop.yarn.client.cli.LogsCLI.run(LogsCLI.java:106)
> at 
> org.apache.hadoop.yarn.client.cli.TestLogsCLI.testFetchApplictionLogsAsAnotherUser(TestLogsCLI.java:868)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5694) ZKRMStateStore should always start its verification thread to prevent accidental state store corruption

2016-10-25 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606155#comment-15606155
 ] 

Karthik Kambatla commented on YARN-5694:


I have minor comments (nits) on the trunk patch:
# ResourceManager: Since the method now does more than transition to standby, 
it should likely be named differently. How about {{handleFatalFailure()}}?
# ZKRMStateStore: In closeInternal, I would still prefer the null check on 
verifyActiveStatusThread (see the sketch below).

On the branch-2.7 patch, does it also include the YARN-5677 patch?
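
A minimal sketch of the null check suggested in item 2, assuming the thread is 
shut down with the usual interrupt/join pattern (the actual closeInternal body 
may differ):

{code}
// Guard in ZKRMStateStore#closeInternal: the verification thread may never
// have been started, so check for null before interrupting and joining it.
if (verifyActiveStatusThread != null) {
  verifyActiveStatusThread.interrupt();
  verifyActiveStatusThread.join(1000);
}
{code}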

> ZKRMStateStore should always start its verification thread to prevent 
> accidental state store corruption
> ---
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5694.001.patch, YARN-5694.002.patch, 
> YARN-5694.003.patch, YARN-5694.004.patch, YARN-5694.004.patch, 
> YARN-5694.005.patch, YARN-5694.branch-2.7.001.patch, 
> YARN-5694.branch-2.7.002.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-10-25 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606182#comment-15606182
 ] 

Sunil G commented on YARN-5545:
---

Currently we invoke {{activateApplications}} while recovering each application. 
Yes, as of now nodes get registered later in the flow, but the scheduler should 
not have to consider such timing cases on the RMAppManager/RM end. That being 
said, it's important to separate two issues here:
- The recovery call flow for each app in the scheduler should not invoke 
{{activateApplications}} every time.
- {{activateApplications}} itself could be improved by considering AM headroom, 
but that could be done in another ticket, as this one is focused on fixing the 
recovery call flow.

To address issue 1, we could invoke {{activateApplications}} only once after 
recovering all apps (see the rough sketch below). This removes the timing 
dependency on the RM end for recovery. With this change, even if the RM recovery 
model changes, the scheduler would complete its recovery flow without causing any 
performance issue or waiting for ResourceTrackerService to register nodes. 
Thanks [~leftnoteasy].

Thoughts?
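
A rough sketch of that idea, with illustrative method names rather than the 
actual CapacityScheduler API:

{code}
// Illustrative only: skip per-app activation while recovering, then activate
// once after all applications have been recovered.
void recoverAllApplications(List<ApplicationStateData> appStates) {
  for (ApplicationStateData appState : appStates) {
    // add the app (and its attempts) back to its queue without activating it
    recoverApplication(appState);
  }
  // one activation pass at the end, independent of when nodes register
  for (LeafQueue queue : getAllLeafQueues()) {   // hypothetical helper
    queue.activateApplications();
  }
}
{code}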

> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5545.0001.patch, YARN-5545.0002.patch, 
> YARN-5545.0003.patch, YARN-5545.004.patch, capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default alr

[jira] [Commented] (YARN-3649) Allow configurable prefix for hbase table names (like prod, exp, test etc)

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606192#comment-15606192
 ] 

Hadoop QA commented on YARN-3649:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 55s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
48s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 19s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 54s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
4s {color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
50s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
37s {color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch 
generated 0 new + 209 unchanged - 1 fixed = 209 total (was 210) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 14s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 46s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 10s 
{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s 
{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 34s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/No

[jira] [Updated] (YARN-5716) Add global scheduler interface definition and update CapacityScheduler to use it.

2016-10-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5716:
-
Attachment: YARN-5716.009.patch

Thanks [~sunilg] for comments:

For 1), yes, it looks like we're doing the UL check twice; however, the second 
time only checks one application instead of thousands of apps.
So the time spent on the UL check for accept/apply is negligible in my 
performance tests.

For 2), we will not check allocation-from-reserved containers in the outer if 
(...) check.

For 3), Jian asked the same question, see my answer 
https://issues.apache.org/jira/browse/YARN-5716?focusedCommentId=15593292&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15593292:
bq. No we cannot do this merge, because it is possible that in the previous 
reservedContainer != null if, we reserve a new container so the check is not 
valid.

For 4):
bq. a reference for resourceRequests is kept and under failure updated back to 
schedulingInfo object. I do not feel its a clean implementation... 
Actually we clone the ResourceRequest, and we need to check to make sure the 
ResourceRequest has not changed and is still required by the apps.

bq. Method is very lengthy. We could take out logic like increase request etc. 
So it ll be more easier.
Done

bq. readLock is mostly needed for increased request and for appSchedulingInfo. 
I think some optimizations here could be done separately in another ticket as 
improvement.
Yeah I agree, I would prefer to keep it as-is unless we find any performance 
issues.

bq. preferring equals instead of if (fromReservedContainer != 
reservedContainerOnNode)
That was intentional; I want to make sure the two reserved containers point 
to the same instance.

Please check ver.9 patch

> Add global scheduler interface definition and update CapacityScheduler to use 
> it.
> -
>
> Key: YARN-5716
> URL: https://issues.apache.org/jira/browse/YARN-5716
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5716.001.patch, YARN-5716.002.patch, 
> YARN-5716.003.patch, YARN-5716.004.patch, YARN-5716.005.patch, 
> YARN-5716.006.patch, YARN-5716.007.patch, YARN-5716.008.patch, 
> YARN-5716.009.patch
>
>
> Target of this JIRA:
> - Definition of interfaces / objects which will be used by global scheduling, 
> this will be shared by different schedulers.
> - Modify CapacityScheduler to use it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5777) TestLogsCLI#testFetchApplictionLogsAsAnotherUser fails

2016-10-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606203#comment-15606203
 ] 

Xiao Chen commented on YARN-5777:
-

Thanks [~ajisakaa] for the catch. The analysis makes sense, +1. Committing this.

> TestLogsCLI#testFetchApplictionLogsAsAnotherUser fails
> --
>
> Key: YARN-5777
> URL: https://issues.apache.org/jira/browse/YARN-5777
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-5777.01.patch
>
>
> {noformat}
> Running org.apache.hadoop.yarn.client.cli.TestLogsCLI
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 5.876 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.client.cli.TestLogsCLI
> testFetchApplictionLogsAsAnotherUser(org.apache.hadoop.yarn.client.cli.TestLogsCLI)
>   Time elapsed: 0.199 sec  <<< ERROR!
> java.io.IOException: Invalid directory or I/O error occurred for dir: 
> /Users/aajisaka/git/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/logs/priority/logs/application_1477371285256_1000
> at org.apache.hadoop.fs.FileUtil.list(FileUtil.java:1148)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:469)
> at 
> org.apache.hadoop.fs.DelegateToFileSystem.listStatus(DelegateToFileSystem.java:169)
> at org.apache.hadoop.fs.ChecksumFs.listStatus(ChecksumFs.java:519)
> at 
> org.apache.hadoop.fs.AbstractFileSystem$1.(AbstractFileSystem.java:890)
> at 
> org.apache.hadoop.fs.AbstractFileSystem.listStatusIterator(AbstractFileSystem.java:888)
> at org.apache.hadoop.fs.FileContext$22.next(FileContext.java:1492)
> at org.apache.hadoop.fs.FileContext$22.next(FileContext.java:1487)
> at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
> at org.apache.hadoop.fs.FileContext.listStatus(FileContext.java:1494)
> at 
> org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.getRemoteNodeFileDir(LogCLIHelpers.java:592)
> at 
> org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.dumpAllContainersLogs(LogCLIHelpers.java:348)
> at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.fetchApplicationLogs(LogsCLI.java:971)
> at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.runCommand(LogsCLI.java:299)
> at org.apache.hadoop.yarn.client.cli.LogsCLI.run(LogsCLI.java:106)
> at 
> org.apache.hadoop.yarn.client.cli.TestLogsCLI.testFetchApplictionLogsAsAnotherUser(TestLogsCLI.java:868)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5779) [YARN-3368] Document limits/notes of the new YARN UI

2016-10-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5779:
-
Attachment: YARN-5779-YARN-3368.01.patch

Attached the ver.1 patch. [~sunilg], please review.

> [YARN-3368] Document limits/notes of the new YARN UI
> 
>
> Key: YARN-5779
> URL: https://issues.apache.org/jira/browse/YARN-5779
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5779-YARN-3368.01.patch
>
>
> For example, we don't guarantee that it can run in a security-enabled 
> environment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4734) Merge branch:YARN-3368 to trunk

2016-10-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4734:
-
Attachment: YARN-4734.16.patch

Attached ver.16, which includes notes about secure environments in the doc.

> Merge branch:YARN-3368 to trunk
> ---
>
> Key: YARN-4734
> URL: https://issues.apache.org/jira/browse/YARN-4734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4734.1.patch, YARN-4734.10-NOT_READY.patch, 
> YARN-4734.11-NOT_READY.patch, YARN-4734.12-NOT_READY.patch, 
> YARN-4734.13.patch, YARN-4734.14.patch, YARN-4734.15.patch, 
> YARN-4734.16.patch, YARN-4734.2.patch, YARN-4734.3.patch, YARN-4734.4.patch, 
> YARN-4734.5.patch, YARN-4734.6.patch, YARN-4734.7.patch, YARN-4734.8.patch, 
> YARN-4734.9-NOT_READY.patch
>
>
> The YARN-2928 branch is planned to be merged back to trunk shortly; it depends 
> on changes from YARN-3368. This JIRA is to track the merging task.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4734) Merge branch:YARN-3368 to trunk

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606279#comment-15606279
 ] 

Hadoop QA commented on YARN-4734:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s {color} 
| {color:red} YARN-4734 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835184/YARN-4734.16.patch |
| JIRA Issue | YARN-4734 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13507/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Merge branch:YARN-3368 to trunk
> ---
>
> Key: YARN-4734
> URL: https://issues.apache.org/jira/browse/YARN-4734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4734.1.patch, YARN-4734.10-NOT_READY.patch, 
> YARN-4734.11-NOT_READY.patch, YARN-4734.12-NOT_READY.patch, 
> YARN-4734.13.patch, YARN-4734.14.patch, YARN-4734.15.patch, 
> YARN-4734.16.patch, YARN-4734.2.patch, YARN-4734.3.patch, YARN-4734.4.patch, 
> YARN-4734.5.patch, YARN-4734.6.patch, YARN-4734.7.patch, YARN-4734.8.patch, 
> YARN-4734.9-NOT_READY.patch
>
>
> The YARN-2928 branch is planned to be merged back to trunk shortly; it depends 
> on changes from YARN-3368. This JIRA is to track the merging task.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4734) Merge branch:YARN-3368 to trunk

2016-10-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4734:
-
Attachment: YARN-4734.17.patch

Oops, I used the wrong way to generate the patch; attaching the correct one (ver.17).

> Merge branch:YARN-3368 to trunk
> ---
>
> Key: YARN-4734
> URL: https://issues.apache.org/jira/browse/YARN-4734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4734.1.patch, YARN-4734.10-NOT_READY.patch, 
> YARN-4734.11-NOT_READY.patch, YARN-4734.12-NOT_READY.patch, 
> YARN-4734.13.patch, YARN-4734.14.patch, YARN-4734.15.patch, 
> YARN-4734.16.patch, YARN-4734.17.patch, YARN-4734.2.patch, YARN-4734.3.patch, 
> YARN-4734.4.patch, YARN-4734.5.patch, YARN-4734.6.patch, YARN-4734.7.patch, 
> YARN-4734.8.patch, YARN-4734.9-NOT_READY.patch
>
>
> YARN-2928 branch is planned to merge back to trunk shortly, it depends on 
> changes of YARN-3368. This JIRA is to track the merging task.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4734) Merge branch:YARN-3368 to trunk

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606304#comment-15606304
 ] 

Hadoop QA commented on YARN-4734:
-

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-YARN-Build/13508/console in case of 
problems.


> Merge branch:YARN-3368 to trunk
> ---
>
> Key: YARN-4734
> URL: https://issues.apache.org/jira/browse/YARN-4734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4734.1.patch, YARN-4734.10-NOT_READY.patch, 
> YARN-4734.11-NOT_READY.patch, YARN-4734.12-NOT_READY.patch, 
> YARN-4734.13.patch, YARN-4734.14.patch, YARN-4734.15.patch, 
> YARN-4734.16.patch, YARN-4734.17.patch, YARN-4734.2.patch, YARN-4734.3.patch, 
> YARN-4734.4.patch, YARN-4734.5.patch, YARN-4734.6.patch, YARN-4734.7.patch, 
> YARN-4734.8.patch, YARN-4734.9-NOT_READY.patch
>
>
> YARN-2928 branch is planned to merge back to trunk shortly, it depends on 
> changes of YARN-3368. This JIRA is to track the merging task.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5777) TestLogsCLI#testFetchApplictionLogsAsAnotherUser fails

2016-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606302#comment-15606302
 ] 

Hudson commented on YARN-5777:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10676 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10676/])
YARN-5777. TestLogsCLI#testFetchApplictionLogsAsAnotherUser fails. (xiao: rev 
c88c1dc50c0ec4521bc93f39726248026e68063a)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/LogCLIHelpers.java


> TestLogsCLI#testFetchApplictionLogsAsAnotherUser fails
> --
>
> Key: YARN-5777
> URL: https://issues.apache.org/jira/browse/YARN-5777
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5777.01.patch
>
>
> {noformat}
> Running org.apache.hadoop.yarn.client.cli.TestLogsCLI
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 5.876 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.client.cli.TestLogsCLI
> testFetchApplictionLogsAsAnotherUser(org.apache.hadoop.yarn.client.cli.TestLogsCLI)
>   Time elapsed: 0.199 sec  <<< ERROR!
> java.io.IOException: Invalid directory or I/O error occurred for dir: 
> /Users/aajisaka/git/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/logs/priority/logs/application_1477371285256_1000
> at org.apache.hadoop.fs.FileUtil.list(FileUtil.java:1148)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:469)
> at 
> org.apache.hadoop.fs.DelegateToFileSystem.listStatus(DelegateToFileSystem.java:169)
> at org.apache.hadoop.fs.ChecksumFs.listStatus(ChecksumFs.java:519)
> at 
> org.apache.hadoop.fs.AbstractFileSystem$1.<init>(AbstractFileSystem.java:890)
> at 
> org.apache.hadoop.fs.AbstractFileSystem.listStatusIterator(AbstractFileSystem.java:888)
> at org.apache.hadoop.fs.FileContext$22.next(FileContext.java:1492)
> at org.apache.hadoop.fs.FileContext$22.next(FileContext.java:1487)
> at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
> at org.apache.hadoop.fs.FileContext.listStatus(FileContext.java:1494)
> at 
> org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.getRemoteNodeFileDir(LogCLIHelpers.java:592)
> at 
> org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.dumpAllContainersLogs(LogCLIHelpers.java:348)
> at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.fetchApplicationLogs(LogsCLI.java:971)
> at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.runCommand(LogsCLI.java:299)
> at org.apache.hadoop.yarn.client.cli.LogsCLI.run(LogsCLI.java:106)
> at 
> org.apache.hadoop.yarn.client.cli.TestLogsCLI.testFetchApplictionLogsAsAnotherUser(TestLogsCLI.java:868)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-10-25 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606313#comment-15606313
 ] 

Li Lu commented on YARN-5739:
-

Thanks [~vrushalic]! Yes. One concern we had was that the scan may need to go 
through every record in an app, which is potentially a big scan. However, the 
number of distinct entity types may be limited, so we can construct a few 
scans that "jump" between entity types. I don't think we need to provide a 
separate filter, since that may introduce some overhead in deployments. 

> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
>
> Right now we only show part of the available timeline entity data in the new 
> YARN UI. However, some data (especially library-specific data) cannot be 
> queried through the web UI. It would be appealing for the UI to provide an 
> "entity browser" for each YARN application. Actually, simply dumping out the 
> available timeline entities (with proper pagination, of course) would be 
> pretty helpful for UI users. 
> On the timeline side, we're not far from this goal. Right now I believe the 
> only thing missing is a way to list all available entity types within one 
> application. The challenge here is that we're not storing this data for each 
> application, but given that this kind of call is relatively rare (compared to 
> writes and updates) we can perform some scanning at read time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5765) LinuxContainerExecutor creates appcache and its subdirectories with wrong group owner.

2016-10-25 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606322#comment-15606322
 ] 

Haibo Chen commented on YARN-5765:
--

I think doing "perm = perm | S_ISGID" will set the setGID bit regardless of the 
original permission, which may not be something we want.
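
For illustration, here is a hedged sketch in Java of the bit arithmetic in question. The real code lives in the native container-executor and is written in C; the constant value and variable names below are only stand-ins for that code.

{code:java}
public class SetgidBitSketch {
  // POSIX setgid bit (octal), as defined for mode_t in C headers.
  static final int S_ISGID = 02000;

  public static void main(String[] args) {
    int parentPerm = 0750;  // parent directory without the setgid bit
    int perm = 0750;        // permission we are about to apply

    // OR-ing the bit in unconditionally forces setgid, regardless of the original mode.
    int forced = perm | S_ISGID;
    // Copying the bit from the parent preserves the original behavior.
    int inherited = perm | (parentPerm & S_ISGID);

    System.out.printf("forced=%o inherited=%o%n", forced, inherited);
  }
}
{code}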

> LinuxContainerExecutor creates appcache and its subdirectories with wrong 
> group owner.
> --
>
> Key: YARN-5765
> URL: https://issues.apache.org/jira/browse/YARN-5765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
>
> LinuxContainerExecutor creates usercache/\{userId\}/appcache/\{appId\} with the 
> wrong group owner, causing log aggregation and the ShuffleHandler to fail because 
> the node manager process does not have permission to read the files under the 
> directory.
> This can be easily reproduced by enabling LCE and submitting a MR example 
> job. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-10-25 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606332#comment-15606332
 ] 

Vrushali C commented on YARN-5739:
--

So if we only need some entity types, then it's even easier, I think. The row 
key already contains the entity type after the app id. The entity table row key 
is
{code} userId ! clusterId ! flowName ! flowRunId ! appId ! entityType ! entityId {code}

So, if we know which entity types we are looking for, we can just add these to 
the scan's while-match filter on the row key prefix, and that should do this 
"jumping" for us. 

> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
>
> Right now we only show part of the available timeline entity data in the new 
> YARN UI. However, some data (especially library-specific data) cannot be 
> queried through the web UI. It would be appealing for the UI to provide an 
> "entity browser" for each YARN application. Actually, simply dumping out the 
> available timeline entities (with proper pagination, of course) would be 
> pretty helpful for UI users. 
> On the timeline side, we're not far from this goal. Right now I believe the 
> only thing missing is a way to list all available entity types within one 
> application. The challenge here is that we're not storing this data for each 
> application, but given that this kind of call is relatively rare (compared to 
> writes and updates) we can perform some scanning at read time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5770) Performance improvement of native-services REST API service

2016-10-25 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606339#comment-15606339
 ] 

Billie Rinaldi commented on YARN-5770:
--

[~gsaha], I built with the patch, ran unit tests, and tried starting services 
using the API. The unit test failure is due to a different issue and will be 
fixed in YARN-5690. One suggestion I have is that the new actionStatus method 
should probably be added to the SliderClientAPI interface. Maybe in the future 
we could separate this into two client interfaces, one designed for CLI and one 
for Java clients.

> Performance improvement of native-services REST API service
> ---
>
> Key: YARN-5770
> URL: https://issues.apache.org/jira/browse/YARN-5770
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-5770-yarn-native-services.phase1.001.patch, 
> YARN-5770-yarn-native-services.phase1.002.patch
>
>
> Make enhancements and bug-fixes to eliminate frequent full GC of the REST API 
> Service. Dependent on a few Slider fixes like SLIDER-1168 as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5433) Audit dependencies for Category-X

2016-10-25 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606346#comment-15606346
 ] 

Sangjin Lee commented on YARN-5433:
---

I have done a more comprehensive analysis, and am building a spreadsheet 
similar to HADOOP-12893: 
https://docs.google.com/spreadsheets/d/1D6aDHOUbQmF3SDtVj-3n4GhWlu6V8t4jD4z8HQivsq8/edit?usp=sharing

So far almost all of the newly added dependencies appear to be ASLv.2, BSD, and 
MIT. It seems only ANTLR and sqlline may need to be added to NOTICES.txt, but 
I'm not 100% certain.

cc [~xiaochen] to see if he has feedback. Thanks!

> Audit dependencies for Category-X
> -
>
> Key: YARN-5433
> URL: https://issues.apache.org/jira/browse/YARN-5433
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sean Busbey
>Assignee: Sangjin Lee
>Priority: Blocker
>
> Recently phoenix has found some category-x dependencies in their build 
> (PHOENIX-3084, PHOENIX-3091), which also showed some problems in HBase 
> (HBASE-16260).
> Since the Timeline Server work brought in both of these as dependencies, we 
> should make sure we don't have any cat-x dependencies either. From what I've 
> seen in those projects, our choice of HBase version shouldn't be impacted but 
> our Phoenix one is.
> Grepping our current dependency list for the timeline server component shows 
> some LGPL:
> {code}
> ...
> [INFO]net.sourceforge.findbugs:annotations:jar:1.3.2:compile
> ...
> {code}
> I haven't checked the rest of the dependencies that have changed since 
> HADOOP-12893 went in, so ATM I've filed this against YARN since that's where 
> this one example came in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5770) Performance improvement of native-services REST API service

2016-10-25 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606354#comment-15606354
 ] 

Gour Saha commented on YARN-5770:
-

Makes sense. Let me add it to SliderClientAPI  so that when we split it, it 
does not fall through the cracks. I will upload a new patch with this change.

> Performance improvement of native-services REST API service
> ---
>
> Key: YARN-5770
> URL: https://issues.apache.org/jira/browse/YARN-5770
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-5770-yarn-native-services.phase1.001.patch, 
> YARN-5770-yarn-native-services.phase1.002.patch
>
>
> Make enhancements and bug-fixes to eliminate frequent full GC of the REST API 
> Service. Dependent on a few Slider fixes like SLIDER-1168 as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5388) MAPREDUCE-6719 requires changes to DockerContainerExecutor

2016-10-25 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606374#comment-15606374
 ] 

Karthik Kambatla commented on YARN-5388:


My bad. I missed the fix parts of the code in the rest of the unrelated 
improvements. +1. Checking both in..

(For the unrelated code readability etc. improvements, I always wonder if we 
should do it in a separate JIRA to avoid confusing folks looking at pulling 
this patch in later.)

> MAPREDUCE-6719 requires changes to DockerContainerExecutor
> --
>
> Key: YARN-5388
> URL: https://issues.apache.org/jira/browse/YARN-5388
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5388.001.patch, YARN-5388.002.patch, 
> YARN-5388.003.patch, YARN-5388.branch-2.001.patch, 
> YARN-5388.branch-2.002.patch, YARN-5388.branch-2.003.patch
>
>
> Because the {{DockerContainerExecutor}} overrides the {{writeLaunchEnv()}} 
> method, it must also have the wildcard processing logic from 
> YARN-4958/YARN-5373 added to it.  Without it, the use of -libjars will fail 
> unless wildcarding is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5694) ZKRMStateStore should always start its verification thread to prevent accidental state store corruption

2016-10-25 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-5694:
---
Attachment: YARN-5694.006.patch

Here's a patch to add the null check back.  I don't think we need to rename the 
{{handleTransitionToStandby()}} method, because the exit happens only if we 
can't transition to standby, i.e. we're not in HA mode.  To my way of thinking, 
that doesn't change the purpose of the method.

> ZKRMStateStore should always start its verification thread to prevent 
> accidental state store corruption
> ---
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5694.001.patch, YARN-5694.002.patch, 
> YARN-5694.003.patch, YARN-5694.004.patch, YARN-5694.004.patch, 
> YARN-5694.005.patch, YARN-5694.006.patch, YARN-5694.branch-2.7.001.patch, 
> YARN-5694.branch-2.7.002.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5388) Deprecate and remove DockerContainerExecutor

2016-10-25 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5388:
---
Summary: Deprecate and remove DockerContainerExecutor  (was: MAPREDUCE-6719 
requires changes to DockerContainerExecutor)

> Deprecate and remove DockerContainerExecutor
> 
>
> Key: YARN-5388
> URL: https://issues.apache.org/jira/browse/YARN-5388
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5388.001.patch, YARN-5388.002.patch, 
> YARN-5388.003.patch, YARN-5388.branch-2.001.patch, 
> YARN-5388.branch-2.002.patch, YARN-5388.branch-2.003.patch
>
>
> Because the {{DockerContainerExecutor}} overrides the {{writeLaunchEnv()}} 
> method, it must also have the wildcard processing logic from 
> YARN-4958/YARN-5373 added to it.  Without it, the use of -libjars will fail 
> unless wildcarding is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5694) ZKRMStateStore should always start its verification thread to prevent accidental state store corruption

2016-10-25 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606382#comment-15606382
 ] 

Daniel Templeton commented on YARN-5694:


The branch-2.7 patch is out of date.  I stopped updating it until we settle on 
the trunk patch.

> ZKRMStateStore should always start its verification thread to prevent 
> accidental state store corruption
> ---
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5694.001.patch, YARN-5694.002.patch, 
> YARN-5694.003.patch, YARN-5694.004.patch, YARN-5694.004.patch, 
> YARN-5694.005.patch, YARN-5694.006.patch, YARN-5694.branch-2.7.001.patch, 
> YARN-5694.branch-2.7.002.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4330) MiniYARNCluster is showing multiple Failed to instantiate default resource calculator warning messages.

2016-10-25 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606417#comment-15606417
 ] 

Eric Badger commented on YARN-4330:
---

[~ste...@apache.org], there's talk on the mailing list of releasing 2.8. Is 
this ready to go in? If not, should we mark the target version to 2.8.0? 

cc [~ajisakaa] for 2.8 tracking purposes

> MiniYARNCluster is showing multiple  Failed to instantiate default resource 
> calculator warning messages.
> 
>
> Key: YARN-4330
> URL: https://issues.apache.org/jira/browse/YARN-4330
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test, yarn
>Affects Versions: 2.8.0
> Environment: OSX, JUnit
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Blocker
> Attachments: YARN-4330.01.patch
>
>
> Whenever I try to start a MiniYARNCluster on Branch-2 (commit #0b61cca), I 
> see multiple stack traces warning me that a resource calculator plugin could 
> not be created
> {code}
> (ResourceCalculatorPlugin.java:getResourceCalculatorPlugin(184)) - 
> java.lang.UnsupportedOperationException: Could not determine OS: Failed to 
> instantiate default resource calculator.
> java.lang.UnsupportedOperationException: Could not determine OS
> {code}
> This is a minicluster. It doesn't need resource calculation. It certainly 
> doesn't need test logs being cluttered with even more stack traces which will 
> only generate false alarms about tests failing. 
> There needs to be a way to turn this off, and the minicluster should have it 
> that way by default.
> Being ruthless and marking this as a blocker, because it's a fairly major 
> regression for anyone testing with the minicluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4330) MiniYARNCluster is showing multiple Failed to instantiate default resource calculator warning messages.

2016-10-25 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606445#comment-15606445
 ] 

Eric Badger commented on YARN-4330:
---

I applied the most recent patch to branch-2.8 and ran 
{{TestMRTimelineEventHandling}}, which uses the MiniYARNCluster (I'm running 
macOS Sierra). 2 of the 3 tests in the class consistently fail with the 
stack traces shown below. I didn't dig into the tests any further, but I 
imagine this is reproducible on other machines. 

{noformat}
Running org.apache.hadoop.mapred.TestMRTimelineEventHandling
Tests run: 3, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 93.809 sec <<< 
FAILURE! - in org.apache.hadoop.mapred.TestMRTimelineEventHandling
testMRTimelineEventHandling(org.apache.hadoop.mapred.TestMRTimelineEventHandling)
  Time elapsed: 35.821 sec  <<< FAILURE!
java.lang.AssertionError: expected:<2> but was:<3>
at 
org.apache.hadoop.mapred.TestMRTimelineEventHandling.testMRTimelineEventHandling(TestMRTimelineEventHandling.java:105)

testMapreduceJobTimelineServiceEnabled(org.apache.hadoop.mapred.TestMRTimelineEventHandling)
  Time elapsed: 32.703 sec  <<< FAILURE!
java.lang.AssertionError: expected:<2> but was:<3>
at 
org.apache.hadoop.mapred.TestMRTimelineEventHandling.testMapreduceJobTimelineServiceEnabled(TestMRTimelineEventHandling.java:162)
{noformat}

> MiniYARNCluster is showing multiple  Failed to instantiate default resource 
> calculator warning messages.
> 
>
> Key: YARN-4330
> URL: https://issues.apache.org/jira/browse/YARN-4330
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test, yarn
>Affects Versions: 2.8.0
> Environment: OSX, JUnit
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Blocker
> Attachments: YARN-4330.01.patch
>
>
> Whenever I try to start a MiniYARNCluster on Branch-2 (commit #0b61cca), I 
> see multiple stack traces warning me that a resource calculator plugin could 
> not be created
> {code}
> (ResourceCalculatorPlugin.java:getResourceCalculatorPlugin(184)) - 
> java.lang.UnsupportedOperationException: Could not determine OS: Failed to 
> instantiate default resource calculator.
> java.lang.UnsupportedOperationException: Could not determine OS
> {code}
> This is a minicluster. It doesn't need resource calculation. It certainly 
> doesn't need test logs being cluttered with even more stack traces which will 
> only generate false alarms about tests failing. 
> There needs to be a way to turn this off, and the minicluster should have it 
> that way by default.
> Being ruthless and marking this as a blocker, because it's a fairly major 
> regression for anyone testing with the minicluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4330) MiniYARNCluster is showing multiple Failed to instantiate default resource calculator warning messages.

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606444#comment-15606444
 ] 

Hadoop QA commented on YARN-4330:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} YARN-4330 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12771077/YARN-4330.01.patch |
| JIRA Issue | YARN-4330 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13510/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> MiniYARNCluster is showing multiple  Failed to instantiate default resource 
> calculator warning messages.
> 
>
> Key: YARN-4330
> URL: https://issues.apache.org/jira/browse/YARN-4330
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test, yarn
>Affects Versions: 2.8.0
> Environment: OSX, JUnit
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Blocker
> Attachments: YARN-4330.01.patch
>
>
> Whenever I try to start a MiniYARNCluster on Branch-2 (commit #0b61cca), I 
> see multiple stack traces warning me that a resource calculator plugin could 
> not be created
> {code}
> (ResourceCalculatorPlugin.java:getResourceCalculatorPlugin(184)) - 
> java.lang.UnsupportedOperationException: Could not determine OS: Failed to 
> instantiate default resource calculator.
> java.lang.UnsupportedOperationException: Could not determine OS
> {code}
> This is a minicluster. It doesn't need resource calculation. It certainly 
> doesn't need test logs being cluttered with even more stack traces which will 
> only generate false alarms about tests failing. 
> There needs to be a way to turn this off, and the minicluster should have it 
> that way by default.
> Being ruthless and marking this as a blocker, because it's a fairly major 
> regression for anyone testing with the minicluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5680) Add 2 new fields in Slider status output - image-name and is-privileged-container

2016-10-25 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-5680:
-
Attachment: YARN-5680-yarn-native-services.001.patch

> Add 2 new fields in Slider status output - image-name and 
> is-privileged-container
> -
>
> Key: YARN-5680
> URL: https://issues.apache.org/jira/browse/YARN-5680
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Attachments: YARN-5680-yarn-native-services.001.patch
>
>
> We need to add 2 new fields in Slider status output for docker provider - 
> image-name and is-privileged-container. The native services REST API needs to 
> expose these 2 attribute values to the end-users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5388) Deprecate and remove DockerContainerExecutor

2016-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606468#comment-15606468
 ] 

Hudson commented on YARN-5388:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10677 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10677/])
YARN-5388. Deprecate and remove DockerContainerExecutor. (Daniel (kasha: rev 
de6faae97c0937dcd969386b12283d60c22dcb02)
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestDockerContainerExecutor.java
* (edit) hadoop-project/src/site/site.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/DockerContainerExecutor.md.vm
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestDockerContainerExecutorWithMocks.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DockerContainerExecutor.java


> Deprecate and remove DockerContainerExecutor
> 
>
> Key: YARN-5388
> URL: https://issues.apache.org/jira/browse/YARN-5388
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5388.001.patch, YARN-5388.002.patch, 
> YARN-5388.003.patch, YARN-5388.branch-2.001.patch, 
> YARN-5388.branch-2.002.patch, YARN-5388.branch-2.003.patch
>
>
> Because the {{DockerContainerExecutor}} overrides the {{writeLaunchEnv()}} 
> method, it must also have the wildcard processing logic from 
> YARN-4958/YARN-5373 added to it.  Without it, the use of -libjars will fail 
> unless wildcarding is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5680) Add 2 new fields in Slider status output - image-name and is-privileged-container

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606496#comment-15606496
 ] 

Hadoop QA commented on YARN-5680:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
28s {color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 58s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 in yarn-native-services has 314 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 28s 
{color} | {color:red} hadoop-yarn-slider-core in yarn-native-services failed. 
{color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-yarn-slider-core in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 29s {color} 
| {color:red} hadoop-yarn-slider-core in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 17s 
{color} | {color:red} The patch generated 11 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 15s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
slider.core.registry.docstore.TestPublishedConfigurationOutputter |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835198/YARN-5680-yarn-native-services.001.patch
 |
| JIRA Issue | YARN-5680 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3527fbe0de1b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 201b0b8 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/13511/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core-warnings.html
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/13511/artifact/patchprocess/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core.txt
 |
| javadoc | 
https://builds.

[jira] [Created] (YARN-5780) [YARN native service] Allowing YARN native services to post data to timeline service V.2

2016-10-25 Thread Li Lu (JIRA)
Li Lu created YARN-5780:
---

 Summary: [YARN native service] Allowing YARN native services to 
post data to timeline service V.2
 Key: YARN-5780
 URL: https://issues.apache.org/jira/browse/YARN-5780
 Project: Hadoop YARN
  Issue Type: New Feature
Reporter: Li Lu
Assignee: Li Lu


The basic end-to-end workflow of timeline service v.2 has been merged into 
trunk. In YARN native services, we would like to post some service-specific 
data to timeline v.2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5765) LinuxContainerExecutor creates appcache and its subdirectories with wrong group owner.

2016-10-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5765:
--
Target Version/s: 2.8.0, 3.0.0-alpha2  (was: 2.8.0)

> LinuxContainerExecutor creates appcache and its subdirectories with wrong 
> group owner.
> --
>
> Key: YARN-5765
> URL: https://issues.apache.org/jira/browse/YARN-5765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
>
> LinuxContainerExecutor creates usercache/\{userId\}/appcache/\{appId\} with the 
> wrong group owner, causing log aggregation and the ShuffleHandler to fail because 
> the node manager process does not have permission to read the files under the 
> directory.
> This can be easily reproduced by enabling LCE and submitting a MR example 
> job. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5780) [YARN native service] Allowing YARN native services to post data to timeline service V.2

2016-10-25 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5780:

Assignee: (was: Li Lu)

> [YARN native service] Allowing YARN native services to post data to timeline 
> service V.2
> 
>
> Key: YARN-5780
> URL: https://issues.apache.org/jira/browse/YARN-5780
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Li Lu
>
> The basic end-to-end workflow of timeline service v.2 has been merged into 
> trunk. In YARN native services, we would like to post some service-specific 
> data to timeline v.2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5780) [YARN native service] Allowing YARN native services to post data to timeline service V.2

2016-10-25 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5780:

Attachment: YARN-5780.poc.patch

POC patch to launch the timeline client and post timeline data. This patch 
applies to the latest slider develop branch. There was some nontrivial work to 
make this patch work with Hadoop trunk (which has the timeline v.2 code), but I 
didn't include those changes since this feature should be targeted to the 
native service branch of YARN. By that time these obstacles should have been 
removed. 
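
For a rough idea of the kind of data involved, here is a hedged sketch that only 
builds a timeline v.2 entity. The entity type, event id, and info keys are made 
up for illustration, and how the entity is actually posted (through the 
application's timeline v.2 client) is branch-dependent and deliberately not shown.

{code:java}
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEvent;

public class NativeServiceTimelineSketch {
  public static void main(String[] args) {
    long now = System.currentTimeMillis();

    // Hypothetical service-specific entity for one component instance.
    TimelineEntity entity = new TimelineEntity();
    entity.setType("SERVICE_COMPONENT_INSTANCE");
    entity.setId("component_instance_000001");
    entity.setCreatedTime(now);
    entity.addInfo("componentName", "HBASE_MASTER");

    // Hypothetical lifecycle event attached to the entity.
    TimelineEvent started = new TimelineEvent();
    started.setId("COMPONENT_INSTANCE_STARTED");
    started.setTimestamp(now);
    entity.addEvent(started);

    // Posting would go through the timeline v.2 client; that API is omitted here.
    System.out.println("would post " + entity.getType() + "/" + entity.getId());
  }
}
{code}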

> [YARN native service] Allowing YARN native services to post data to timeline 
> service V.2
> 
>
> Key: YARN-5780
> URL: https://issues.apache.org/jira/browse/YARN-5780
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Li Lu
> Attachments: YARN-5780.poc.patch
>
>
> The basic end-to-end workflow of timeline service v.2 has been merged into 
> trunk. In YARN native services, we would like to post some service-specific 
> data to timeline v.2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5716) Add global scheduler interface definition and update CapacityScheduler to use it.

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606560#comment-15606560
 ] 

Hadoop QA commented on YARN-5716:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
47s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 16s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 59s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 142 
new + 1467 unchanged - 164 fixed = 1609 total (was 1631) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s 
{color} | {color:green} hadoop-yarn-project_hadoop-yarn generated 0 new + 6484 
unchanged - 10 fixed = 6484 total (was 6494) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 928 unchanged - 10 fixed = 928 total (was 938) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 54s {color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 54s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 143m 46s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.TestContainerManagerSecurity |
|   | hadoop.yarn.server.TestMiniYarnClusterNodeUtilization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:956

[jira] [Commented] (YARN-4126) RM should not issue delegation tokens in unsecure mode

2016-10-25 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606587#comment-15606587
 ] 

Daryn Sharp commented on YARN-4126:
---

The general contract for servers is to return null when tokens are not 
applicable.  This violates that contract and throws an exception instead.  How 
is a generalized client supposed to know in advance whether fetching a token 
will work?  And how should it handle a generic IOE?

I'd rather see this reverted from trunk and never integrated.  We've 
historically had lots of problems with all the security-enabled conditionals, 
which is why one of my multi-year-old endeavors is to have tokens always 
enabled and gut the security conditionals.  I've always admired the fact that 
YARN unconditionally used them...  This is a step backwards.
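
To make the client-side problem concrete, here is a hedged, self-contained 
sketch. The helper and its interface are hypothetical (not the actual YARN 
client code); they only show what a generic caller ends up doing when one 
server follows the null contract and another throws instead.

{code:java}
import java.io.IOException;

import org.apache.hadoop.security.token.Token;

public class TokenFetchSketch {

  /** Hypothetical stand-in for the RPC that fetches a delegation token. */
  interface TokenSupplier {
    Token<?> get() throws IOException;
  }

  static Token<?> fetchIfAvailable(TokenSupplier supplier) {
    try {
      // Under the usual contract, null simply means "no token needed here".
      return supplier.get();
    } catch (IOException e) {
      // With the new behavior, an insecure cluster surfaces as a generic
      // IOException, which the caller cannot tell apart from a genuine failure.
      return null;
    }
  }

  public static void main(String[] args) {
    Token<?> t1 = fetchIfAvailable(() -> null);  // server returning null in insecure mode
    Token<?> t2 = fetchIfAvailable(() -> {       // server throwing in insecure mode
      throw new IOException("illustrative error: tokens not issued in insecure mode");
    });
    System.out.println("t1=" + t1 + ", t2=" + t2);
  }
}
{code}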


> RM should not issue delegation tokens in unsecure mode
> --
>
> Key: YARN-4126
> URL: https://issues.apache.org/jira/browse/YARN-4126
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Bibin A Chundatt
> Fix For: 3.0.0-alpha1
>
> Attachments: 0001-YARN-4126.patch, 0002-YARN-4126.patch, 
> 0003-YARN-4126.patch, 0004-YARN-4126.patch, 0005-YARN-4126.patch, 
> 0006-YARN-4126.patch
>
>
> ClientRMService#getDelegationToken is currently  returning a delegation token 
> in insecure mode. We should not return the token if it's in insecure mode. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5767) Fix the order that resources are cleaned up from the local Public/Private caches

2016-10-25 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606624#comment-15606624
 ] 

Miklos Szegedi commented on YARN-5767:
--

+1 (non-binding) Thank you, [~ctrezzo]!

I have a few optional comments:

testLRUAcrossTrackers:
The test does not check whether the right resource was deleted from each list. 
You might want to use resources.getLocalRsrc().containsKey here just like in 
testPositiveRefCount.

LocalCacheCleanerStats:
It would be useful for future debugging if toStringDetailed() printed out the 
actual resource paths, not just the size per user.
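
As additional context for this review, here is a minimal, self-contained sketch 
(hypothetical types, not the real Hadoop classes) of the eviction order this 
JIRA is after: public and private cache entries are considered together and 
evicted oldest-first until the total size drops below the target.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class LruAcrossCachesSketch {

  static class Entry {
    final String path;
    final long size;
    final long lastAccess;
    final int refCount;

    Entry(String path, long size, long lastAccess, int refCount) {
      this.path = path;
      this.size = size;
      this.lastAccess = lastAccess;
      this.refCount = refCount;
    }
  }

  static List<Entry> evict(List<Entry> publicCache, List<Entry> privateCache, long targetSize) {
    long current = 0;
    List<Entry> candidates = new ArrayList<>();
    for (Entry e : publicCache) {
      current += e.size;
      if (e.refCount == 0) candidates.add(e);   // in-use resources are always retained
    }
    for (Entry e : privateCache) {
      current += e.size;
      if (e.refCount == 0) candidates.add(e);
    }
    // Oldest first, regardless of which cache an entry lives in.
    candidates.sort(Comparator.comparingLong(en -> en.lastAccess));
    List<Entry> evicted = new ArrayList<>();
    for (Entry e : candidates) {
      if (current <= targetSize) break;
      evicted.add(e);
      current -= e.size;
    }
    return evicted;
  }

  public static void main(String[] args) {
    List<Entry> pub = Arrays.asList(new Entry("/public/recent.jar", 40, 300, 0));
    List<Entry> priv = Arrays.asList(
        new Entry("/user1/old.jar", 30, 100, 0),
        new Entry("/user2/inuse.jar", 50, 200, 1));
    // Total size 120, target 60: the oldest unreferenced entry goes first.
    evict(pub, priv, 60).forEach(e -> System.out.println("evict " + e.path));
  }
}
{code}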


> Fix the order that resources are cleaned up from the local Public/Private 
> caches
> 
>
> Key: YARN-5767
> URL: https://issues.apache.org/jira/browse/YARN-5767
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.6.0, 2.7.0, 3.0.0-alpha1
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-5767-trunk-v1.patch, YARN-5767-trunk-v2.patch, 
> YARN-5767-trunk-v3.patch
>
>
> If you look at {{ResourceLocalizationService#handleCacheCleanup}}, you can 
> see that public resources are added to the {{ResourceRetentionSet}} first 
> followed by private resources:
> {code:java}
> private void handleCacheCleanup(LocalizationEvent event) {
>   ResourceRetentionSet retain =
> new ResourceRetentionSet(delService, cacheTargetSize);
>   retain.addResources(publicRsrc);
>   if (LOG.isDebugEnabled()) {
> LOG.debug("Resource cleanup (public) " + retain);
>   }
>   for (LocalResourcesTracker t : privateRsrc.values()) {
> retain.addResources(t);
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Resource cleanup " + t.getUser() + ":" + retain);
> }
>   }
>   //TODO Check if appRsrcs should also be added to the retention set.
> }
> {code}
> Unfortunately, if we look at {{ResourceRetentionSet#addResources}} we see 
> that this means public resources are deleted first until the target cache 
> size is met:
> {code:java}
> public void addResources(LocalResourcesTracker newTracker) {
>   for (LocalizedResource resource : newTracker) {
> currentSize += resource.getSize();
> if (resource.getRefCount() > 0) {
>   // always retain resources in use
>   continue;
> }
> retain.put(resource, newTracker);
>   }
> for (Iterator<Map.Entry<LocalizedResource, LocalResourcesTracker>> i =
>  retain.entrySet().iterator();
>currentSize - delSize > targetSize && i.hasNext();) {
> Map.Entry<LocalizedResource, LocalResourcesTracker> rsrc = i.next();
> LocalizedResource resource = rsrc.getKey();
> LocalResourcesTracker tracker = rsrc.getValue();
> if (tracker.remove(resource, delService)) {
>   delSize += resource.getSize();
>   i.remove();
> }
>   }
> }
> {code}
> The result of this is that resources in the private cache are only deleted in 
> the cases where:
> # The cache size is larger than the target cache size and the public cache is 
> empty.
> # The cache size is larger than the target cache size and everything in the 
> public cache is being used by a running container.
> For clusters that primarily use the public cache (i.e. make use of the shared 
> cache), this means that the most commonly used resources can be deleted 
> before old resources in the private cache. Furthermore, the private cache can 
> continue to grow over time causing more and more churn in the public cache.
> Additionally, the same problem exists within the private cache. Since 
> resources are added to the retention set on a user by user basis, resources 
> will get cleaned up one user at a time in the order that privateRsrc.values() 
> returns the LocalResourcesTracker. So if user1 has 10MB in their cache and 
> user2 has 100MB in their cache and the target size of the cache is 50MB, 
> user1 could potentially have their entire cache removed before anything is 
> deleted from the user2 cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4126) RM should not issue delegation tokens in unsecure mode

2016-10-25 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606628#comment-15606628
 ] 

Robert Kanter commented on YARN-4126:
-

That's a good point [~daryn].  
[~jianhe], what was the motivation for removing the delegation tokens and 
throwing the exception?  The JIRA description doesn't actually say.

> RM should not issue delegation tokens in unsecure mode
> --
>
> Key: YARN-4126
> URL: https://issues.apache.org/jira/browse/YARN-4126
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Bibin A Chundatt
> Fix For: 3.0.0-alpha1
>
> Attachments: 0001-YARN-4126.patch, 0002-YARN-4126.patch, 
> 0003-YARN-4126.patch, 0004-YARN-4126.patch, 0005-YARN-4126.patch, 
> 0006-YARN-4126.patch
>
>
> ClientRMService#getDelegationToken is currently  returning a delegation token 
> in insecure mode. We should not return the token if it's in insecure mode. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5433) Audit dependencies for Category-X

2016-10-25 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-5433:
--
Attachment: YARN-5433.01.patch

Attaching the first version of the patch.

- excluded findbugs annotations from the dependencies
- added notices for the new 3-clause BSD binaries (ANTLR, StringTemplate, and 
Sqlline)

Any feedback on this patch or the analysis in the linked spreadsheet is 
welcome. Thanks!

> Audit dependencies for Category-X
> -
>
> Key: YARN-5433
> URL: https://issues.apache.org/jira/browse/YARN-5433
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sean Busbey
>Assignee: Sangjin Lee
>Priority: Blocker
> Attachments: YARN-5433.01.patch
>
>
> Recently phoenix has found some category-x dependencies in their build 
> (PHOENIX-3084, PHOENIX-3091), which also showed some problems in HBase 
> (HBASE-16260).
> Since the Timeline Server work brought in both of these as dependencies, we 
> should make sure we don't have any cat-x dependencies either. From what I've 
> seen in those projects, our choice of HBase version shouldn't be impacted but 
> our Phoenix one is.
> Grepping our current dependency list for the timeline server component shows 
> some LGPL:
> {code}
> ...
> [INFO]net.sourceforge.findbugs:annotations:jar:1.3.2:compile
> ...
> {code}
> I haven't checked the rest of the dependencies that have changed since 
> HADOOP-12893 went in, so ATM I've filed this against YARN since that's where 
> this one example came in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4734) Merge branch:YARN-3368 to trunk

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606709#comment-15606709
 ] 

Hadoop QA commented on YARN-4734:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 6s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 49s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
15s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-assemblies hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 57s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
13s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 34 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 9s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-assemblies hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 36s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 56s {color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 25s 
{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 171m 27s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.namenode.TestNamenode

[jira] [Commented] (YARN-5433) Audit dependencies for Category-X

2016-10-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606711#comment-15606711
 ] 

Xiao Chen commented on YARN-5433:
-

Thanks [~sjlee0] for working on this (and the ping).

For HADOOP-12893, the 
[spreadsheet|https://docs.google.com/spreadsheets/d/1HL2b4PSdQMZDVJmum1GIKrteFr2oainApTLiJTPnfd4/edit#gid=1060066002]
 has a 'generate.py' tab, containing the code used to generate the L&N (which 
licenses to skip in the L&N, etc.). Probably overkill here, but that's more accurate 
than my memory. :)

So:
- No need to add NOTICE for BSD here. See 
https://issues.apache.org/jira/browse/HADOOP-12893?focusedCommentId=15284739&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15284739
 for details.
- MPL and CDDL are ok. We should add them to LICENSE.txt under the corresponding 
license. There's already {{servlet-api 2.5}} in there under CDDL 1.0, so we can 
skip that.
- Didn't check all, but if there are any dependencies with multiple licenses, we 
can choose the most friendly one.

> Audit dependencies for Category-X
> -
>
> Key: YARN-5433
> URL: https://issues.apache.org/jira/browse/YARN-5433
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sean Busbey
>Assignee: Sangjin Lee
>Priority: Blocker
> Attachments: YARN-5433.01.patch
>
>
> Recently Phoenix has found some category-x dependencies in their build 
> (PHOENIX-3084, PHOENIX-3091), which also showed some problems in HBase 
> (HBASE-16260).
> Since the Timeline Server work brought in both of these as dependencies, we 
> should make sure we don't have any cat-x dependencies either. From what I've 
> seen in those projects, our choice of HBase version shouldn't be impacted but 
> our Phoenix one is.
> Grepping our current dependency list for the timeline server component shows 
> some LGPL:
> {code}
> ...
> [INFO]net.sourceforge.findbugs:annotations:jar:1.3.2:compile
> ...
> {code}
> I haven't checked the rest of the dependencies that have changed since 
> HADOOP-12893 went in, so ATM I've filed this against YARN since that's where 
> this one example came in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4907) Make all MockRM#waitForState consistent.

2016-10-25 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606716#comment-15606716
 ] 

Miklos Szegedi commented on YARN-4907:
--

+1 (non-binding) I verified and it looks good to me. Thank you, [~yufeigu]!

You might want to change the wait logic in 
BaseContainerManagerTest.waitForNMContainerState as well.

> Make all MockRM#waitForState consistent. 
> -
>
> Key: YARN-4907
> URL: https://issues.apache.org/jira/browse/YARN-4907
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-4907.001.patch
>
>
> There are some inconsistencies among the {{waitForState}} methods in {{MockRM}}:
> 1. Some {{waitForState}} methods return a boolean while others don't.  
> 2. Some {{waitForState}} methods don't have a timeout, so they can wait forever. 
> 3. Some {{waitForState}} use LOG.info and others use {{System.out.println}} 
> to print messages.
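
A minimal sketch of a consistent polling helper (bounded by a timeout, returning a boolean, and logging through a logger rather than System.out.println) could look like the following. The class name, the Supplier-based state check, and the SLF4J logging are assumptions for illustration only, not the actual MockRM code:

{code}
import java.util.function.Supplier;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical helper illustrating one consistent waitForState contract:
// always bounded by a timeout, always reports whether the state was reached,
// and always logs through a logger instead of System.out.println.
public final class WaitForStateSketch {

  private static final Logger LOG =
      LoggerFactory.getLogger(WaitForStateSketch.class);

  private WaitForStateSketch() {
  }

  /**
   * Polls until the condition reports the expected state or the timeout expires.
   *
   * @return true if the expected state was reached within timeoutMs, false otherwise
   */
  public static boolean waitForState(Supplier<Boolean> reachedExpectedState,
      long pollIntervalMs, long timeoutMs) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (reachedExpectedState.get()) {
        LOG.info("Expected state reached");
        return true;
      }
      LOG.info("Still waiting for expected state");
      Thread.sleep(pollIntervalMs);
    }
    LOG.info("Timed out after {} ms waiting for expected state", timeoutMs);
    return false;
  }
}
{code}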



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


