[jira] [Updated] (YARN-6635) Merging refactored changes from yarn-native-services branch

2017-05-22 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-6635:
---
Attachment: YARN-6635.001.patch

Adding v1 patch.
Hi [~sunilg], could you please help test and review the patch?

> Merging refactored changes from yarn-native-services branch
> ---
>
> Key: YARN-6635
> URL: https://issues.apache.org/jira/browse/YARN-6635
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-6635.001.patch
>
>
> Some refactoring was done for the yarn-app pages in the new YARN UI codebase 
> on the yarn-native-services branch. This ticket intends to bring the 
> refactored UI code changes from the yarn-native-services branch into trunk.






[jira] [Created] (YARN-6635) Merging refactored changes from yarn-native-services branch

2017-05-22 Thread Akhil PB (JIRA)
Akhil PB created YARN-6635:
--

 Summary: Merging refactored changes from yarn-native-services 
branch
 Key: YARN-6635
 URL: https://issues.apache.org/jira/browse/YARN-6635
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Akhil PB
Assignee: Akhil PB


Some refactoring was done for the yarn-app pages in the new YARN UI codebase on 
the yarn-native-services branch. This ticket intends to bring the refactored 
UI code changes from the yarn-native-services branch into trunk.






[jira] [Commented] (YARN-6493) Print requested node partition in assignContainer logs

2017-05-22 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020676#comment-16020676
 ] 

Wangda Tan commented on YARN-6493:
--

[~brahmareddy], thanks for the reminder, just updated. 

> Print requested node partition in assignContainer logs
> --
>
> Key: YARN-6493
> URL: https://issues.apache.org/jira/browse/YARN-6493
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.4, 2.6.6
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Fix For: 2.9.0, 2.7.4, 2.8.1, 3.0.0-alpha3
>
> Attachments: YARN-6493.001.patch, YARN-6493.002.patch, 
> YARN-6493.003.patch, YARN-6493-branch-2.7.001.patch, 
> YARN-6493-branch-2.7.002.patch, YARN-6493-branch-2.8.001.patch, 
> YARN-6493-branch-2.8.002.patch, YARN-6493-branch-2.8.003.patch
>
>
> It would be useful to have the node's partition when logging a container 
> allocation, for tracking purposes.






[jira] [Commented] (YARN-6111) Rumen input doesn't work in SLS

2017-05-22 Thread YuJie Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020670#comment-16020670
 ] 

YuJie Huang commented on YARN-6111:
---

Thank you! My problem is the trace problem you talked about; I am the one who 
asked the question. You said it can be solved by changing something in the 
example, but I couldn't find where or how to change it. Thank you very much!

> Rumen input doesn't work in SLS
> --
>
> Key: YARN-6111
> URL: https://issues.apache.org/jira/browse/YARN-6111
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 2.6.0, 2.7.3, 3.0.0-alpha2
> Environment: ubuntu14.0.4 os
>Reporter: YuJie Huang
>Assignee: Yufei Gu
>  Labels: test
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6111.001.patch
>
>
> Hi guys,
> I am trying to learn the use of SLS.
> I would like to get the file realtimetrack.json, but it only contains "[]" 
> at the end of a simulation. This is the command I use to run the instance:
> HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json 
> --output-dir=sample-data 
> All other files, including metrics, appear to be properly populated. I can 
> also track the simulation on the web at http://localhost:10001/simulate
> Can someone help?
> Thanks






[jira] [Commented] (YARN-6628) Unexpected jackson-core-2.2.3 dependency introduced

2017-05-22 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020627#comment-16020627
 ] 

Jonathan Eagles commented on YARN-6628:
---

So we are in a catch-22: the version of fst that has the matching jackson 
dependency incorrectly advertises a GPL license in its pom (failing the rat 
check), while the version with the correct Apache License in its pom pulls in 
the newer jackson jars, which we can't introduce in 2.8.

I tried what Jason suggested, but I was unable to create an FSTConfiguration 
that didn't trigger a class-not-found error for com.fasterxml.jackson.

Instead, perhaps we should shade the com.fasterxml jackson jar so it is not 
exposed on the classpath.

> Unexpected jackson-core-2.2.3 dependency introduced
> ---
>
> Key: YARN-6628
> URL: https://issues.apache.org/jira/browse/YARN-6628
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.8.1
>Reporter: Jason Lowe
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: YARN-6628.1.patch
>
>
> The change in YARN-5894 caused jackson-core-2.2.3.jar to be added to 
> share/hadoop/yarn/lib/. This added dependency seems to be incompatible with 
> jackson-core-asl-1.9.13.jar, which is also shipped as a dependency. This new 
> jackson-core jar ends up breaking jobs that ran fine on 2.8.0.






[jira] [Updated] (YARN-6628) Unexpected jackson-core-2.2.3 dependency introduced

2017-05-22 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated YARN-6628:
--
Attachment: YARN-6628.1.patch

> Unexpected jackson-core-2.2.3 dependency introduced
> ---
>
> Key: YARN-6628
> URL: https://issues.apache.org/jira/browse/YARN-6628
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.8.1
>Reporter: Jason Lowe
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: YARN-6628.1.patch
>
>
> The change in YARN-5894 caused jackson-core-2.2.3.jar to be added to 
> share/hadoop/yarn/lib/. This added dependency seems to be incompatible with 
> jackson-core-asl-1.9.13.jar, which is also shipped as a dependency. This new 
> jackson-core jar ends up breaking jobs that ran fine on 2.8.0.






[jira] [Commented] (YARN-6584) Correct license headers in hadoop-common, hdfs, yarn and mapreduce

2017-05-22 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020605#comment-16020605
 ] 

Yeliang Cang commented on YARN-6584:


Thank you for the review, [~sunilg]

> Correct license headers in hadoop-common, hdfs, yarn and mapreduce
> --
>
> Key: YARN-6584
> URL: https://issues.apache.org/jira/browse/YARN-6584
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-6584-001.patch, YARN-6584-branch-2.001.patch, 
> YARN-6584-branch2.001.patch, YARN-6584-branch-2.002.patch
>
>
> The license headers in some Java files are not the same as in others. 
> Submitting a patch to fix this!






[jira] [Commented] (YARN-4925) ContainerRequest in AMRMClient, application should be able to specify nodes/racks together with nodeLabelExpression

2017-05-22 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020578#comment-16020578
 ] 

Jonathan Hung commented on YARN-4925:
-

I'm not able to reproduce some of the test failures: 
{noformat}---
 T E S T S
---
Running org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 79.298 sec - 
in org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA
Running org.apache.hadoop.yarn.client.TestApplicationMasterServiceProtocolOnHA
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.953 sec - in 
org.apache.hadoop.yarn.client.TestApplicationMasterServiceProtocolOnHA
Running org.apache.hadoop.yarn.client.TestGetGroups
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.963 sec - in 
org.apache.hadoop.yarn.client.TestGetGroups
Running org.apache.hadoop.yarn.client.TestResourceTrackerOnHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.216 sec - in 
org.apache.hadoop.yarn.client.TestResourceTrackerOnHA
Running org.apache.hadoop.yarn.client.TestRMFailover
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.319 sec - in 
org.apache.hadoop.yarn.client.TestRMFailover

Results :

Tests run: 32, Failures: 0, Errors: 0, Skipped: 0
{noformat}
The failures seem to be related to the Jenkins environment.

I will look into TestAMRMClient, TestYarnClient, and TestNMClient.

> ContainerRequest in AMRMClient, application should be able to specify 
> nodes/racks together with nodeLabelExpression
> ---
>
> Key: YARN-4925
> URL: https://issues.apache.org/jira/browse/YARN-4925
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: release-blocker
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: 0001-YARN-4925.patch, 0002-YARN-4925.patch, 
> YARN-4925-branch-2.7.001.patch
>
>
> Currently, with node labels, AMRMClient cannot specify node labels together 
> with node/rack requests. For applications like Spark, NODE_LOCAL requests 
> cannot be made with a label expression.
> As per the check in {{AMRMClientImpl#checkNodeLabelExpression}}:
> {noformat}
> // Don't allow specify node label against ANY request
> if ((containerRequest.getRacks() != null && 
> (!containerRequest.getRacks().isEmpty()))
> || 
> (containerRequest.getNodes() != null && 
> (!containerRequest.getNodes().isEmpty()))) {
>   throw new InvalidContainerRequestException(
>   "Cannot specify node label with rack and node");
> }
> {noformat}
> In {{AppSchedulingInfo#updateResourceRequests}} we reset the labels to those 
> of the OFF-SWITCH request.
> The above check is not required for the ContainerRequest ask. /cc [~wangda], 
> thank you for confirming.
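
A minimal sketch (hostname, resource size, and label name are assumptions for 
illustration) of a request that the quoted check rejects, because it combines 
a specific node with a node-label expression:
{noformat}
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

public class LabelWithNodeAsk {
  public static void main(String[] args) {
    AMRMClient<ContainerRequest> amrmClient = AMRMClient.createAMRMClient();
    ContainerRequest request = new ContainerRequest(
        Resource.newInstance(1024, 1),        // 1 GB, 1 vcore
        new String[] {"node1.example.com"},   // specific node, i.e. NODE_LOCAL ask
        null,                                 // no rack constraint
        Priority.newInstance(0),
        true,                                 // relaxLocality
        "labelX");                            // nodeLabelExpression
    // addContainerRequest runs checkNodeLabelExpression, which throws
    // InvalidContainerRequestException for this nodes-plus-label combination.
    amrmClient.addContainerRequest(request);
  }
}
{noformat}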






[jira] [Commented] (YARN-6584) Correct license headers in hadoop-common, hdfs, yarn and mapreduce

2017-05-22 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020570#comment-16020570
 ] 

Sunil G commented on YARN-6584:
---

The test case failures on branch-2 are not related to this patch. 

> Correct license headers in hadoop-common, hdfs, yarn and mapreduce
> --
>
> Key: YARN-6584
> URL: https://issues.apache.org/jira/browse/YARN-6584
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Fix For: 2.9.0
>
> Attachments: YARN-6584-001.patch, YARN-6584-branch-2.001.patch, 
> YARN-6584-branch2.001.patch, YARN-6584-branch-2.002.patch
>
>
> The license headers in some Java files are not the same as in others. 
> Submitting a patch to fix this!






[jira] [Commented] (YARN-6493) Print requested node partition in assignContainer logs

2017-05-22 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020565#comment-16020565
 ] 

Brahma Reddy Battula commented on YARN-6493:


[~leftnoteasy] can you please update the CHANGES.txt in {{branch-2.7}}?

> Print requested node partition in assignContainer logs
> --
>
> Key: YARN-6493
> URL: https://issues.apache.org/jira/browse/YARN-6493
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.4, 2.6.6
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Fix For: 2.9.0, 2.7.4, 2.8.1, 3.0.0-alpha3
>
> Attachments: YARN-6493.001.patch, YARN-6493.002.patch, 
> YARN-6493.003.patch, YARN-6493-branch-2.7.001.patch, 
> YARN-6493-branch-2.7.002.patch, YARN-6493-branch-2.8.001.patch, 
> YARN-6493-branch-2.8.002.patch, YARN-6493-branch-2.8.003.patch
>
>
> It would be useful to have the node's partition when logging a container 
> allocation, for tracking purposes.






[jira] [Commented] (YARN-6617) Services API delete call first attempt usually fails

2017-05-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020532#comment-16020532
 ] 

Hadoop QA commented on YARN-6617:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
39s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
53s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
56s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 in yarn-native-services has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
41s{color} | {color:green} hadoop-yarn-slider-core in the patch passed. {color} 
|
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6617 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12869370/YARN-6617-yarn-native-services.2.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c4a574deb957 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 8c3b3db |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/15997/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15997/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/

[jira] [Updated] (YARN-5608) TestAMRMClient.setup() fails with ArrayOutOfBoundsException

2017-05-22 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-5608:

Attachment: (was: YARN-5608-branch-2.7.001.patch)

> TestAMRMClient.setup() fails with ArrayOutOfBoundsException
> ---
>
> Key: YARN-5608
> URL: https://issues.apache.org/jira/browse/YARN-5608
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: test-fail
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: YARN-5608.002.patch, YARN-5608.003.patch, 
> YARN-5608.004.patch, YARN-5608.005.patch, YARN-5608.patch
>
>
> After 39 runs of the {{TestAMRMClient}} test, I encountered:
> {noformat}
> java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>   at java.util.ArrayList.rangeCheck(ArrayList.java:635)
>   at java.util.ArrayList.get(ArrayList.java:411)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.setup(TestAMRMClient.java:144)
> {noformat}
> I see it shows up occasionally in the error emails as well.






[jira] [Updated] (YARN-5565) Capacity Scheduler not assigning value correctly.

2017-05-22 Thread gurmukh singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

gurmukh singh updated YARN-5565:

Component/s: yarn

> Capacity Scheduler not assigning value correctly.
> -
>
> Key: YARN-5565
> URL: https://issues.apache.org/jira/browse/YARN-5565
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, yarn
>Affects Versions: 2.7.2
> Environment: hadoop 2.7.2
>Reporter: gurmukh singh
>  Labels: capacity-scheduler, scheduler, yarn
>
> Hi,
> I was testing and found out that the value assigned in the scheduler 
> configuration is not consistent with what the ResourceManager is assigning.
> I set the configuration as below; I understand that it is a Java float, but 
> the rounding is not correct.
> capacity-scheduler.xml
> {noformat}
> <property>
>   <name>yarn.scheduler.capacity.q1.capacity</name>
>   <value>7.142857142857143</value>
> </property>
> {noformat}
> In Java: System.err.println(7.142857142857143f) ===> 7.142587 
> But instead, the ResourceManager is assigning 7.1428566.
> Tested this on hadoop 2.7.2
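
A minimal standalone sketch (not RM code) of the precision loss described 
above: the configured decimal is parsed as a 32-bit float, which cannot 
represent it exactly, and further float arithmetic in the scheduler can shift 
the trailing digits again:
{noformat}
public class CapacityRounding {
  public static void main(String[] args) {
    String configured = "7.142857142857143";       // value from capacity-scheduler.xml
    float asFloat = Float.parseFloat(configured);  // what a float-based config getter keeps
    double asDouble = Double.parseDouble(configured);
    System.err.println(asFloat);   // shortest decimal that round-trips the float
    System.err.println(asDouble);  // the double preserves far more digits
  }
}
{noformat}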






[jira] [Updated] (YARN-5565) Capacity Scheduler not assigning value correctly.

2017-05-22 Thread gurmukh singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

gurmukh singh updated YARN-5565:

Labels: capacity-scheduler scheduler yarn  (was: capacity-scheduler 
scheduler)

> Capacity Scheduler not assigning value correctly.
> -
>
> Key: YARN-5565
> URL: https://issues.apache.org/jira/browse/YARN-5565
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, yarn
>Affects Versions: 2.7.2
> Environment: hadoop 2.7.2
>Reporter: gurmukh singh
>  Labels: capacity-scheduler, scheduler, yarn
>
> Hi,
> I was testing and found out that the value assigned in the scheduler 
> configuration is not consistent with what the ResourceManager is assigning.
> I set the configuration as below; I understand that it is a Java float, but 
> the rounding is not correct.
> capacity-scheduler.xml
> {noformat}
> <property>
>   <name>yarn.scheduler.capacity.q1.capacity</name>
>   <value>7.142857142857143</value>
> </property>
> {noformat}
> In Java: System.err.println(7.142857142857143f) ===> 7.142587 
> But instead, the ResourceManager is assigning 7.1428566.
> Tested this on hadoop 2.7.2






[jira] [Updated] (YARN-6617) Services API delete call first attempt usually fails

2017-05-22 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6617:
--
Attachment: YARN-6617-yarn-native-services.2.patch

> Services API delete call first attempt usually fails
> 
>
> Key: YARN-6617
> URL: https://issues.apache.org/jira/browse/YARN-6617
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Jian He
> Fix For: yarn-native-services
>
> Attachments: YARN-6617-yarn-native-services.1.patch, 
> YARN-6617-yarn-native-services.2.patch
>
>
> The services API is calling actionStop, which queues a stop action, 
> immediately followed by actionDestroy, which fails because the app is still 
> running. The actionStop method is ignoring the force option, so one solution 
> would be to reintroduce handling for the force option and have the services 
> API set this option.
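
A minimal sketch of the sequencing fix suggested above (method shapes are 
assumptions for illustration, not Slider's actual signatures):
{noformat}
// Today: actionStop only queues the stop, so the immediate actionDestroy
// races with it and fails while the app is still running.
// Proposed: honor the force option and wait before destroying.
boolean force = true;
client.actionStop(appName, force);  // block until the application is stopped
client.actionDestroy(appName);      // safe now: the app is no longer running
{noformat}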






[jira] [Commented] (YARN-6587) Refactor of ResourceManager#startWebApp in a Util class

2017-05-22 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020409#comment-16020409
 ] 

Giovanni Matteo Fumarola commented on YARN-6587:


TestCapacityScheduler and TestRMRestart are not related to my patch and they 
passed in my local devbox.
I am figuring out why TestDelegationTokenRenewer is failing in branch-2.

> Refactor of ResourceManager#startWebApp in a Util class
> ---
>
> Key: YARN-6587
> URL: https://issues.apache.org/jira/browse/YARN-6587
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6587-branch-2.v1.patch, YARN-6587.v1.patch, 
> YARN-6587.v2.patch
>
>
> This jira tracks refactoring ResourceManager#startWebApp into a util class, 
> since the Router in YARN-5412 has to implement the same logic for filtering 
> and authentication.






[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object

2017-05-22 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020389#comment-16020389
 ] 

Wangda Tan commented on YARN-6593:
--

Thanks [~kkaranasos],

I checked the latest patch. It doesn't create hierarchies on the Java side; 
however, I think it is a solution that keeps the Java and PB sides consistent 
and stays clear on the Java side. So in general it is a good approach to me. 
My one major suggestion concerns SimplePlacementConstraint: mixing all three 
fields (scope/target/cardinality) in its getters could still confuse 
developers.

I suggest making SimplePlacementConstraint the parent class of 
TargetPlacementConstraint/CardinalityPlacementConstraint, which would:
- Make the builders protected.
- Make the getters/setters protected.
- Move the targetConstraint/maxCardinalityConstraint, etc. static constructors 
down to the child classes. 

A child class such as TargetPlacementConstraint extends 
SimplePlacementConstraint and has the targetConstraint static constructor. So 
basically, SimplePlacementConstraint is the Java-side API in parallel with the 
PB definition, and TargetPlacementConstraint/CardinalityPlacementConstraint 
are the actual user APIs (see the sketch after this comment).

More detailed comments: 
1) PlacementConstraint:
- newInstance: could we add a check to make sure only one of Simple/Compound 
is non-null?
- setSimpleConstraint/setCompoundConstraint: could be {{@Private}}.
- It could be better if we add a getPlacementConstraint type field, what do 
you think? We could move {{SimplePlacementConstraint#ConstraintType}} to a 
separate class that includes the enums: \{ COMPOUND_CONSTRAINT / 
TARGET_CONSTRAINT / CARDINALITY_CONSTRAINT \}

I have more detailed comments, but I want to reach a consensus on the major 
suggestions before posting them. One more suggestion: since this API will be 
consumed by developers and scheduler logic, it would be better if you could 
draft example Java source code with a few examples. That would make it easier 
to tell whether the API is straightforward enough to use/consume.
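
A minimal sketch (class shapes are assumptions drawn from the suggestion 
above, not the committed API) of the proposed hierarchy:
{noformat}
// Parent mirrors the PB definition; its members stay protected so only the
// child classes act as user-facing APIs.
abstract class SimplePlacementConstraint {
  private final String scope;  // e.g. node or rack

  protected SimplePlacementConstraint(String scope) {
    this.scope = scope;
  }

  protected String getScope() {
    return scope;
  }
}

// User-facing child: owns the static constructor moved down from the parent.
final class TargetPlacementConstraint extends SimplePlacementConstraint {
  private final String[] targetExpressions;

  private TargetPlacementConstraint(String scope, String... targets) {
    super(scope);
    this.targetExpressions = targets;
  }

  static TargetPlacementConstraint targetConstraint(String scope,
      String... targets) {
    return new TargetPlacementConstraint(scope, targets);
  }
}
{noformat}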


> [API] Introduce Placement Constraint object
> ---
>
> Key: YARN-6593
> URL: https://issues.apache.org/jira/browse/YARN-6593
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6593.001.patch, YARN-6593.002.patch
>
>
> This JIRA introduces an object for defining placement constraints.






[jira] [Commented] (YARN-6493) Print requested node partition in assignContainer logs

2017-05-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020359#comment-16020359
 ] 

Hudson commented on YARN-6493:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11766 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11766/])
YARN-6493. Print requested node partition in assignContainer logs. (wangda: rev 
8e0f83e49a8987cf45a72c8a9bb8587b86e4c0ed)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/AbstractContainerAllocator.java


> Print requested node partition in assignContainer logs
> --
>
> Key: YARN-6493
> URL: https://issues.apache.org/jira/browse/YARN-6493
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.4, 2.6.6
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Fix For: 2.9.0, 2.7.4, 2.8.1, 3.0.0-alpha3
>
> Attachments: YARN-6493.001.patch, YARN-6493.002.patch, 
> YARN-6493.003.patch, YARN-6493-branch-2.7.001.patch, 
> YARN-6493-branch-2.7.002.patch, YARN-6493-branch-2.8.001.patch, 
> YARN-6493-branch-2.8.002.patch, YARN-6493-branch-2.8.003.patch
>
>
> It would be useful to have the node's partition when logging a container 
> allocation, for tracking purposes.






[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-05-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020358#comment-16020358
 ] 

Hudson commented on YARN-2113:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11766 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11766/])
YARN-2113. Add cross-user preemption within CapacityScheduler's (wangda: rev 
c583ab02c730be0a63d974039a78f2dc67dc2db6)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionContext.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/IntraQueueCandidatesSelector.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/IntraQueuePreemptionComputePlugin.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/UsersManager.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyIntraQueueUserLimit.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DefaultResourceCalculator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicyMockFramework.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyIntraQueue.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyIntraQueueWithDRF.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TempAppPerPartition.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TempQueuePerPartition.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/FifoIntraQueuePreemptionPlugin.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TempUserPerPartition.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java


> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Fix For: 3.0.0-alpha3
>
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimit

[jira] [Created] (YARN-6634) [API] Define an API for ResourceManager WebServices

2017-05-22 Thread Subru Krishnan (JIRA)
Subru Krishnan created YARN-6634:


 Summary: [API] Define an API for ResourceManager WebServices
 Key: YARN-6634
 URL: https://issues.apache.org/jira/browse/YARN-6634
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.8.0
Reporter: Subru Krishnan
Priority: Critical


The RM exposes a few REST endpoints, but there's no clear API interface 
defined. This makes it painful to build either clients or extension components 
like the Router (YARN-5412) that expose REST interfaces themselves. This jira 
proposes adding an RM WebServices protocol similar to the one we have for RPC, 
i.e. {{ApplicationClientProtocol}}.
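
A minimal sketch (interface and method names are hypothetical illustrations, 
not a committed design) of what such a WebServices protocol could look like:
{noformat}
// A single Java interface capturing the RM's REST surface, analogous to
// ApplicationClientProtocol on the RPC side; both clients and the Router
// could code against this instead of hand-building REST calls.
public interface RMWebServiceProtocol {
  ClusterInfo getClusterInfo();
  AppsInfo getApps(String user, String queue, String state);
  AppInfo getApp(String appId);
  NodesInfo getNodes(String state);
}
{noformat}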






[jira] [Assigned] (YARN-6634) [API] Define an API for ResourceManager WebServices

2017-05-22 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan reassigned YARN-6634:


Assignee: Giovanni Matteo Fumarola

> [API] Define an API for ResourceManager WebServices
> ---
>
> Key: YARN-6634
> URL: https://issues.apache.org/jira/browse/YARN-6634
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
>Priority: Critical
>
> The RM exposes a few REST endpoints, but there's no clear API interface 
> defined. This makes it painful to build either clients or extension 
> components like the Router (YARN-5412) that expose REST interfaces 
> themselves. This jira proposes adding an RM WebServices protocol similar to 
> the one we have for RPC, i.e. {{ApplicationClientProtocol}}.






[jira] [Updated] (YARN-6633) Backport YARN-4167 to branch 2.7

2017-05-22 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated YARN-6633:
--
Attachment: YARN-4167-branch-2.7.patch

> Backport YARN-4167 to branch 2.7
> 
>
> Key: YARN-6633
> URL: https://issues.apache.org/jira/browse/YARN-6633
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: YARN-4167-branch-2.7.patch
>
>







[jira] [Created] (YARN-6633) Backport YARN-4167 to branch 2.7

2017-05-22 Thread Inigo Goiri (JIRA)
Inigo Goiri created YARN-6633:
-

 Summary: Backport YARN-4167 to branch 2.7
 Key: YARN-6633
 URL: https://issues.apache.org/jira/browse/YARN-6633
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Inigo Goiri
Assignee: Inigo Goiri
 Attachments: YARN-4167-branch-2.7.patch








[jira] [Updated] (YARN-6632) Backport YARN-3425 to branch 2.7

2017-05-22 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated YARN-6632:
--
Attachment: YARN-3425-branch-2.7.patch

> Backport YARN-3425 to branch 2.7
> 
>
> Key: YARN-6632
> URL: https://issues.apache.org/jira/browse/YARN-6632
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: YARN-3425-branch-2.7.patch
>
>







[jira] [Created] (YARN-6632) Backport YARN-3425 to branch 2.7

2017-05-22 Thread Inigo Goiri (JIRA)
Inigo Goiri created YARN-6632:
-

 Summary: Backport YARN-3425 to branch 2.7
 Key: YARN-6632
 URL: https://issues.apache.org/jira/browse/YARN-6632
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Inigo Goiri
Assignee: Inigo Goiri
 Attachments: YARN-3425-branch-2.7.patch








[jira] [Assigned] (YARN-6628) Unexpected jackson-core-2.2.3 dependency introduced

2017-05-22 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles reassigned YARN-6628:
-

Assignee: Jonathan Eagles

> Unexpected jackson-core-2.2.3 dependency introduced
> ---
>
> Key: YARN-6628
> URL: https://issues.apache.org/jira/browse/YARN-6628
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.8.1
>Reporter: Jason Lowe
>Assignee: Jonathan Eagles
>Priority: Blocker
>
> The change in YARN-5894 caused jackson-core-2.2.3.jar to be added to 
> share/hadoop/yarn/lib/. This added dependency seems to be incompatible with 
> jackson-core-asl-1.9.13.jar, which is also shipped as a dependency. This new 
> jackson-core jar ends up breaking jobs that ran fine on 2.8.0.






[jira] [Commented] (YARN-6625) yarn application -list returns a tracking URL for AM that doesn't work in secured and HA environment

2017-05-22 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020319#comment-16020319
 ] 

Robert Kanter commented on YARN-6625:
-

Overall looks good.  Here are some comments:
# In {{ClientRMProxy}}, instead of {{return new Text(schedulerService + "," + 
adminService);}}, let's use {{Joiner}} like what's used below it in 
{{getTokenService}} (see the sketch after this list).
# {{ClientRMProxy#getAMRMTokenService}} is used in a few places.  Have you made 
sure that they're all okay with adding the RM admin address?
# I'm no expert on the way our RPCs work, but is {{HAServiceProtocolPB}} the 
right thing to check in {{AdminSecurityInfo}}?  Just from the name, it seems 
odd to use an "HA" protocol here, because what happens in a non-HA cluster?  
In any case, based on the {{getKerberosInfo}} above it and the name itself, 
wouldn't {{ResourceManagerAdministrationProtocolPB}} be the right thing to use?
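
A minimal sketch of the {{Joiner}} suggestion from item 1 (assuming Guava's 
{{Joiner}}, which Hadoop already depends on, and the two service-address 
strings from the quoted snippet):
{noformat}
import com.google.common.base.Joiner;

// Instead of: return new Text(schedulerService + "," + adminService);
return new Text(Joiner.on(',').join(schedulerService, adminService));
{noformat}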

> yarn application -list returns a tracking URL for AM that doesn't work in 
> secured and HA environment
> 
>
> Key: YARN-6625
> URL: https://issues.apache.org/jira/browse/YARN-6625
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: amrmproxy
>Affects Versions: 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6625.001.patch
>
>
> The tracking URL given at the command line should work whether the cluster 
> is secured or not. The tracking URLs look like http://node-2.abc.com:47014, 
> and the AM web server is supposed to redirect to an RM address like 
> http://node-1.abc.com:8088/proxy/application_1494544954891_0002/, but it 
> fails to do that because the connection is rejected when the AM talks to the 
> RM admin service to get the HA status.






[jira] [Comment Edited] (YARN-6625) yarn application -list returns a tracking URL for AM that doesn't work in secured and HA environment

2017-05-22 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020319#comment-16020319
 ] 

Robert Kanter edited comment on YARN-6625 at 5/22/17 10:21 PM:
---

Overall looks good.  Here are some comments:
# In {{ClientRMProxy}}, instead of {{return new Text(schedulerService + "," + 
adminService);}}, let's use {{Joiner}} like what's used below it in 
{{getTokenService}}.
# {{ClientRMProxy#getAMRMTokenService}} is used in a few places.  Have you made 
sure that they're all okay with adding the RM admin address?
# I'm no expert on the way our RPCs work, but is {{HAServiceProtocolPB}} the 
right thing to check in {{AdminSecurityInfo}}?  Just from the name, it seems 
odd to use an "HA" protocol here, because what happens in a non-HA cluster?  
In any case, based on the {{getKerberosInfo}} above it and the name itself, 
wouldn't {{ResourceManagerAdministrationProtocolPB}} be the right thing to use?
# It would also be good to add some kind of test, though that might be tricky.


was (Author: rkanter):
Overall looks good.  Here are some comments:
# In {{ClientRMProxy}}, instead of {{return new Text(schedulerService + "," + 
adminService);}}, let's use {{Joiner}} like what's used below it in 
{{getTokenService}}.
# {{ClientRMProxy#getAMRMTokenService}} is used in a few places.  Have you made 
sure that they're all okay with adding the RM admin address?
# I'm no expert on the way our RPCs work, but is {{HAServiceProtocolPB}} the 
right thing to check in {{AdminSecurityInfo}}?  Just from the name, it seems 
odd to use an "HA" protocol here, because what happens in a non-HA cluster?  
In any case, based on the {{getKerberosInfo}} above it and the name itself, 
wouldn't {{ResourceManagerAdministrationProtocolPB}} be the right thing to use?

> yarn application -list returns a tracking URL for AM that doesn't work in 
> secured and HA environment
> 
>
> Key: YARN-6625
> URL: https://issues.apache.org/jira/browse/YARN-6625
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: amrmproxy
>Affects Versions: 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6625.001.patch
>
>
> The tracking URL given at the command line should work whether the cluster 
> is secured or not. The tracking URLs look like http://node-2.abc.com:47014, 
> and the AM web server is supposed to redirect to an RM address like 
> http://node-1.abc.com:8088/proxy/application_1494544954891_0002/, but it 
> fails to do that because the connection is rejected when the AM talks to the 
> RM admin service to get the HA status.






[jira] [Commented] (YARN-6493) Print requested node partition in assignContainer logs

2017-05-22 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020289#comment-16020289
 ] 

Jonathan Hung commented on YARN-6493:
-

Awesome, thanks [~leftnoteasy]!

> Print requested node partition in assignContainer logs
> --
>
> Key: YARN-6493
> URL: https://issues.apache.org/jira/browse/YARN-6493
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.4, 2.6.6
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Fix For: 2.9.0, 2.7.4, 2.8.1, 3.0.0-alpha3
>
> Attachments: YARN-6493.001.patch, YARN-6493.002.patch, 
> YARN-6493.003.patch, YARN-6493-branch-2.7.001.patch, 
> YARN-6493-branch-2.7.002.patch, YARN-6493-branch-2.8.001.patch, 
> YARN-6493-branch-2.8.002.patch, YARN-6493-branch-2.8.003.patch
>
>
> It would be useful to have the node's partition when logging a container 
> allocation, for tracking purposes.






[jira] [Updated] (YARN-6493) Print requested node partition in assignContainer logs

2017-05-22 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6493:
-
Fix Version/s: 3.0.0-alpha3
   2.8.1
   2.7.4
   2.9.0

> Print requested node partition in assignContainer logs
> --
>
> Key: YARN-6493
> URL: https://issues.apache.org/jira/browse/YARN-6493
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.4, 2.6.6
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Fix For: 2.9.0, 2.7.4, 2.8.1, 3.0.0-alpha3
>
> Attachments: YARN-6493.001.patch, YARN-6493.002.patch, 
> YARN-6493.003.patch, YARN-6493-branch-2.7.001.patch, 
> YARN-6493-branch-2.7.002.patch, YARN-6493-branch-2.8.001.patch, 
> YARN-6493-branch-2.8.002.patch, YARN-6493-branch-2.8.003.patch
>
>
> It would be useful to have the node's partition when logging a container 
> allocation, for tracking purposes.






[jira] [Updated] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-05-22 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2113:
-
Component/s: (was: scheduler)
 capacity scheduler

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Fix For: 3.0.0-alpha3
>
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, 
> YARN-2113.0013.patch, YARN-2113.0014.patch, YARN-2113.0015.patch, 
> YARN-2113.0016.patch, YARN-2113.0017.patch, YARN-2113.0018.patch, 
> YARN-2113.0019.patch, YARN-2113.apply.onto.0012.ericp.patch, YARN-2113 
> Intra-QueuePreemption Behavior.pdf, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.






[jira] [Updated] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-05-22 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2113:
-
Fix Version/s: 3.0.0-alpha3

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Fix For: 3.0.0-alpha3
>
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, 
> YARN-2113.0013.patch, YARN-2113.0014.patch, YARN-2113.0015.patch, 
> YARN-2113.0016.patch, YARN-2113.0017.patch, YARN-2113.0018.patch, 
> YARN-2113.0019.patch, YARN-2113.apply.onto.0012.ericp.patch, YARN-2113 
> Intra-QueuePreemption Behavior.pdf, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.






[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-05-22 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020269#comment-16020269
 ] 

Wangda Tan commented on YARN-2113:
--

Committed to trunk; fixed the Javadoc warnings before pushing.  Thanks 
[~sunilg], thanks [~jlowe]/[~curino], and I really appreciate [~eepayne]'s 
thorough review/tests.

[~sunilg], the patch does not apply on branch-2; could you update the patch 
for branch-2?

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, 
> YARN-2113.0013.patch, YARN-2113.0014.patch, YARN-2113.0015.patch, 
> YARN-2113.0016.patch, YARN-2113.0017.patch, YARN-2113.0018.patch, 
> YARN-2113.0019.patch, YARN-2113.apply.onto.0012.ericp.patch, YARN-2113 
> Intra-QueuePreemption Behavior.pdf, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.






[jira] [Commented] (YARN-6245) Add FinalResource object to reduce overhead of Resource class instancing

2017-05-22 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020266#comment-16020266
 ] 

Arun Suresh commented on YARN-6245:
---

[~roniburd], is this similar to what you had proposed in YARN-6418?

bq.  At least as a start, it's a very simple patch that substitutes in a 
lightweight object via Resource.newInstance that simply contains 2 longs. 

I understand you had also made some changes to the ResourceCalculator.

> Add FinalResource object to reduce overhead of Resource class instancing
> 
>
> Key: YARN-6245
> URL: https://issues.apache.org/jira/browse/YARN-6245
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
> Attachments: observable-resource.patch, 
> YARN-6245.preliminary-staled.1.patch
>
>
> There is a lot of Resource object creation in the YARN scheduler. Since the 
> Resource object is backed by protobuf, creating such objects is expensive 
> and becomes a bottleneck.
> To address the problem, we can introduce a FinalResource object (is it 
> better to call it ImmutableResource?) that is not backed by PBImpl. We can 
> use this object on frequently invoked paths in the scheduler.
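
A minimal sketch (field and accessor names are assumptions, not the attached 
patch) of such a plain, immutable resource holder:
{noformat}
// Two longs with no protobuf backing: cheap to allocate on hot scheduler
// paths, and immutable, so instances can be shared safely across threads.
public final class FinalResource {
  private final long memorySize;
  private final long virtualCores;

  public FinalResource(long memorySize, long virtualCores) {
    this.memorySize = memorySize;
    this.virtualCores = virtualCores;
  }

  public long getMemorySize() { return memorySize; }

  public long getVirtualCores() { return virtualCores; }
}
{noformat}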






[jira] [Commented] (YARN-5329) ReservationAgent enhancements required to support recurring reservations in the YARN ReservationSystem

2017-05-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020245#comment-16020245
 ] 

Hadoop QA commented on YARN-5329:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 20s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 15 new + 43 unchanged - 3 fixed = 58 total (was 46) 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 21s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-5329 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868958/YARN-5329.v0.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 59c44d0c6291 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9cab42c |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/15996/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/15996/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/15996/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15996/artifact/patchproce

[jira] [Commented] (YARN-6575) Support global configuration mutation in MutableConfProvider

2017-05-22 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020232#comment-16020232
 ] 

Jonathan Hung commented on YARN-6575:
-

Attached patch containing changes discussed with [~leftnoteasy] and [~xgong]:
# Rename REST endpoint / associated objects to be queue independent
# Add support for passing map of global conf changes to REST endpoint
# Add global configuration mutation support in CS configuration provider and 
queue-admin based ACL policy

> Support global configuration mutation in MutableConfProvider
> 
>
> Key: YARN-6575
> URL: https://issues.apache.org/jira/browse/YARN-6575
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-6575-YARN-5734.001.patch
>
>
> Right now mutating configs assumes they are only queue configs. Support 
> should be added to mutate global scheduler configs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6493) Print requested node partition in assignContainer logs

2017-05-22 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6493:
-
Summary: Print requested node partition in assignContainer logs  (was: 
Print node partition in assignContainer logs)

> Print requested node partition in assignContainer logs
> --
>
> Key: YARN-6493
> URL: https://issues.apache.org/jira/browse/YARN-6493
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.4, 2.6.6
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-6493.001.patch, YARN-6493.002.patch, 
> YARN-6493.003.patch, YARN-6493-branch-2.7.001.patch, 
> YARN-6493-branch-2.7.002.patch, YARN-6493-branch-2.8.001.patch, 
> YARN-6493-branch-2.8.002.patch, YARN-6493-branch-2.8.003.patch
>
>
> It would be useful to have the node's partition when logging a container 
> allocation, for tracking purposes.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6575) Support global configuration mutation in MutableConfProvider

2017-05-22 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-6575:

Attachment: YARN-6575-YARN-5734.001.patch

> Support global configuration mutation in MutableConfProvider
> 
>
> Key: YARN-6575
> URL: https://issues.apache.org/jira/browse/YARN-6575
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-6575-YARN-5734.001.patch
>
>
> Right now mutating configs assumes they are only queue configs. Support 
> should be added to mutate global scheduler configs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5949) Add pluggable configuration ACL policy interface and implementation

2017-05-22 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020192#comment-16020192
 ] 

Jonathan Hung commented on YARN-5949:
-

Thanks [~leftnoteasy] and [~xgong] for reviews and commit!

> Add pluggable configuration ACL policy interface and implementation
> ---
>
> Key: YARN-5949
> URL: https://issues.apache.org/jira/browse/YARN-5949
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Fix For: YARN-5734
>
> Attachments: YARN-5949-YARN-5734.001.patch, 
> YARN-5949-YARN-5734.002.patch, YARN-5949-YARN-5734.003.patch, 
> YARN-5949-YARN-5734.004.patch, YARN-5949-YARN-5734.005.patch
>
>
> This will allow different policies to customize how/if configuration changes 
> should be applied (for example, a policy might restrict whether a 
> configuration change by a certain user is allowed). This will be enforced by 
> the MutableCSConfigurationProvider.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6613) Update json validation for new native services providers

2017-05-22 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020188#comment-16020188
 ] 

Jian He edited comment on YARN-6613 at 5/22/17 9:00 PM:


Thanks Billie, the patch looks good overall. Some comments, most of them 
cosmetic:
- remove the unused method AbstractClientProvider#processClientOperation ?
- ServiceApiUtil#validateConfigFile may be merged into the method 
{{validateConfigFiles(List<ConfigFile> configFiles, FileSystem fileSystem)}} in 
AbstractClientProvider ?
- create a common method for the block below so that the code is not duplicated 
in the has-no-component and has-component scenarios. 
{code}
  AbstractClientProvider compClientProvider = SliderProviderFactory
  .getClientProvider(comp.getArtifact());
  compClientProvider.validateArtifact(comp.getArtifact(), fs
  .getFileSystem());

  // resource
  if (comp.getResource() == null) {
comp.setResource(globalResource);
  }
  validateApplicationResource(comp.getResource(), comp);

  // container count
  if (comp.getNumberOfContainers() == null) {
comp.setNumberOfContainers(globalNumberOfContainers);
  }
  if (comp.getNumberOfContainers() == null
  || comp.getNumberOfContainers() < 0) {
throw new IllegalArgumentException(String.format(
RestApiErrorMessages.ERROR_CONTAINERS_COUNT_FOR_COMP_INVALID
+ ": " + comp.getNumberOfContainers(), comp.getName()));
  }
  validateConfigFile(comp.getConfiguration().getFiles(), fs
  .getFileSystem());
  compClientProvider.validateConfigFiles(comp.getConfiguration()
  .getFiles(), fs.getFileSystem());
{code}
- validateApplicationPayload is a bit overloaded. Maybe rename 
validateApplicationPayload to validateAndResolveApplicationPayload ?
- Could you add some comments to explain that the block of code below is for 
resolving an external application's components, or create a separate method for 
it, so that the reader doesn't need to read through the code to understand what 
it is doing?
{code}
for (Component comp : application.getComponents()) {
  if (componentNames.contains(comp.getName())) {
throw new IllegalArgumentException("Component name collision: " +
comp.getName());
  }
  // artifact
  if (comp.getArtifact() == null) {
comp.setArtifact(globalArtifact);
  }
  // configuration
  comp.getConfiguration().mergeFrom(globalConf);
  // If artifact is of type APPLICATION, read other application components
  if (comp.getArtifact() != null && comp.getArtifact().getType() ==
  Artifact.TypeEnum.APPLICATION) {
if (StringUtils.isEmpty(comp.getArtifact().getId())) {
  throw new IllegalArgumentException(
  RestApiErrorMessages.ERROR_ARTIFACT_ID_INVALID);
}
componentsToRemove.add(comp);
List<Component> applicationComponents = getApplicationComponents(fs,
comp.getArtifact().getId());
for (Component c : applicationComponents) {
  if (componentNames.contains(c.getName())) {
// TODO allow name collisions? see AppState#roles
// TODO or add prefix to external component names?
throw new IllegalArgumentException("Component name collision: " +
c.getName());
  }
  componentNames.add(c.getName());
}
componentsToAdd.addAll(applicationComponents);
  } else {
componentNames.add(comp.getName());
  }
}
{code}
- Application configurations are merged into components before persisting, 
which will increase the app JSON file size. For HDFS that won't be a problem, 
but ZK is relatively sensitive to file size, so it may be an issue. Is there a 
reason it needs to be resolved before persisting?
- In actionStart, why is it required to write back to hdfs?
{code}
   // write app definition on to hdfs
persistApp(appDir, application);
{code}
- looks like SliderClient#monitorAppToState is only used by monitorAppToRunning? 
We can just use monitorAppToRunning; no need for this separate method.
- rename TestConfTreeLoadExamples to something else?
- TestMiscSliderUtils can be removed? The methods it tests 
(createAppInstanceTempPath, purgeAppInstanceTempFiles) seem to be used only by 
the test itself.
- rename ExampleConfResources to ExampleAppJson
- In the Default and Tarball providers, only the filename of the dest_file is 
used to create the localized file; all parent paths are ignored, which is 
confusing if the user supplies a full path. Should we add validation that only 
a filename may be used in dest_file, or make it create the full path?


was (Author: jianhe):
Thanks Billie, patch looks good overall, some comments I had, most of them are 
cosmetic changes.
- remove the unused method AbstractCli

[jira] [Commented] (YARN-6613) Update json validation for new native services providers

2017-05-22 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020188#comment-16020188
 ] 

Jian He commented on YARN-6613:
---

Thanks Billie, the patch looks good overall. Some comments, most of them 
cosmetic:
- remove the unused method AbstractClientProvider#processClientOperation ?
- ServiceApiUtil#validateConfigFile may be merged into the method 
{{validateConfigFiles(List<ConfigFile> configFiles, FileSystem fileSystem)}} in 
AbstractClientProvider ?
- create a common method for the block below so that the code is not duplicated 
in the has-no-component and has-component scenarios (see the sketch at the end 
of this comment). 
{code}
  AbstractClientProvider compClientProvider = SliderProviderFactory
  .getClientProvider(comp.getArtifact());
  compClientProvider.validateArtifact(comp.getArtifact(), fs
  .getFileSystem());

  // resource
  if (comp.getResource() == null) {
comp.setResource(globalResource);
  }
  validateApplicationResource(comp.getResource(), comp);

  // container count
  if (comp.getNumberOfContainers() == null) {
comp.setNumberOfContainers(globalNumberOfContainers);
  }
  if (comp.getNumberOfContainers() == null
  || comp.getNumberOfContainers() < 0) {
throw new IllegalArgumentException(String.format(
RestApiErrorMessages.ERROR_CONTAINERS_COUNT_FOR_COMP_INVALID
+ ": " + comp.getNumberOfContainers(), comp.getName()));
  }
  validateConfigFile(comp.getConfiguration().getFiles(), fs
  .getFileSystem());
  compClientProvider.validateConfigFiles(comp.getConfiguration()
  .getFiles(), fs.getFileSystem());
{code}
- validateApplicationPayload is a bit overloaded. Maybe rename 
validateApplicationPayload to validateAndResolveApplicationPayload ?
- Could you add some comments to explain that the block of code below is for 
resolving an external application's components, or create a separate method for 
it, so that the reader doesn't need to read through the code to understand what 
it is doing?
{code}
for (Component comp : application.getComponents()) {
  if (componentNames.contains(comp.getName())) {
throw new IllegalArgumentException("Component name collision: " +
comp.getName());
  }
  // artifact
  if (comp.getArtifact() == null) {
comp.setArtifact(globalArtifact);
  }
  // configuration
  comp.getConfiguration().mergeFrom(globalConf);
  // If artifact is of type APPLICATION, read other application components
  if (comp.getArtifact() != null && comp.getArtifact().getType() ==
  Artifact.TypeEnum.APPLICATION) {
if (StringUtils.isEmpty(comp.getArtifact().getId())) {
  throw new IllegalArgumentException(
  RestApiErrorMessages.ERROR_ARTIFACT_ID_INVALID);
}
componentsToRemove.add(comp);
List<Component> applicationComponents = getApplicationComponents(fs,
comp.getArtifact().getId());
for (Component c : applicationComponents) {
  if (componentNames.contains(c.getName())) {
// TODO allow name collisions? see AppState#roles
// TODO or add prefix to external component names?
throw new IllegalArgumentException("Component name collision: " +
c.getName());
  }
  componentNames.add(c.getName());
}
componentsToAdd.addAll(applicationComponents);
  } else {
componentNames.add(comp.getName());
  }
}
{code}
- Application configurations are merged into components before persisting, 
which will increase the app JSON file size. For HDFS that won't be a problem, 
but ZK is relatively sensitive to file size, so it may be an issue. Is there a 
reason it needs to be resolved before persisting?
- In actionStart, why is it required to write back to hdfs?
{code}
   // write app definition on to hdfs
persistApp(appDir, application);
{code}
- looks like SliderClient#monitorAppToState is only used by monitorAppToRunning? 
We can just use monitorAppToRunning; no need for this separate method.
- rename TestConfTreeLoadExamples to something else?
- TestMiscSliderUtils can be removed? The methods it tests 
(createAppInstanceTempPath, purgeAppInstanceTempFiles) seem to be used only by 
the test itself.
- rename ExampleConfResources to ExampleAppJson
- In the Default and Tarball providers, only the filename of the dest_file is 
used to create the localized file; all parent paths are ignored, which is 
confusing if the user supplies a full path. Should we add validation that only 
a filename may be used in dest_file, or make it create the full path?
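
To make the suggestion concrete, here is a sketch of the extracted method. The 
name and signature are hypothetical (in particular the type of {{fs}} is 
assumed); the body just gathers the fields used in the quoted block above:
{code}
// Hypothetical sketch: one shared validation path per component, called from
// both the has-component and has-no-component branches.
private static void validateComponent(Component comp, SliderFileSystem fs,
    Resource globalResource, Long globalNumberOfContainers)
    throws IOException {
  AbstractClientProvider provider =
      SliderProviderFactory.getClientProvider(comp.getArtifact());
  provider.validateArtifact(comp.getArtifact(), fs.getFileSystem());

  // resource
  if (comp.getResource() == null) {
    comp.setResource(globalResource);
  }
  validateApplicationResource(comp.getResource(), comp);

  // container count
  if (comp.getNumberOfContainers() == null) {
    comp.setNumberOfContainers(globalNumberOfContainers);
  }
  if (comp.getNumberOfContainers() == null
      || comp.getNumberOfContainers() < 0) {
    throw new IllegalArgumentException(String.format(
        RestApiErrorMessages.ERROR_CONTAINERS_COUNT_FOR_COMP_INVALID
            + ": " + comp.getNumberOfContainers(), comp.getName()));
  }

  // config files
  validateConfigFile(comp.getConfiguration().getFiles(), fs.getFileSystem());
  provider.validateConfigFiles(comp.getConfiguration().getFiles(),
      fs.getFileSystem());
}
{code}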

> Update json validation for new native services providers
> 
>
> Key: YARN-6613
> URL: https://issues.apache.or

[jira] [Commented] (YARN-6245) Add FinalResource object to reduce overhead of Resource class instancing

2017-05-22 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020182#comment-16020182
 ] 

Wangda Tan commented on YARN-6245:
--

[~daryn],

Discussed with [~jlowe] offline, it looks like a great idea. It automatically 
uses the lightweight form while doing internal computations.

bq. ...  which converts the lightweight to a pb impl as required. 

Not sure if this converts the lightweight Resource instance permanently or 
temporarily; it's better to optimize the case where 
{{ProtoUtils.convertToProtoFormat(Resource)}} is invoked many times on the same 
Resource object reference. Ideally the conversion should only happen once. 
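
As an illustration of the convert-once idea, a generic sketch (class and names 
are hypothetical, not taken from any patch here):
{code}
// Hypothetical sketch: memoize an expensive conversion so repeated calls on
// the same reference pay the conversion cost only once.
final class MemoizedConversion<T> {
  private final java.util.function.Supplier<T> convert;
  private volatile T cached;  // null until the first conversion

  MemoizedConversion(java.util.function.Supplier<T> convert) {
    this.convert = convert;
  }

  T get() {
    T result = cached;
    if (result == null) {
      result = convert.get();
      cached = result;  // benign race: the conversion is idempotent
    }
    return result;
  }
}
{code}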

> Add FinalResource object to reduce overhead of Resource class instancing
> 
>
> Key: YARN-6245
> URL: https://issues.apache.org/jira/browse/YARN-6245
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
> Attachments: observable-resource.patch, 
> YARN-6245.preliminary-staled.1.patch
>
>
> There is a lot of Resource object creation in the YARN scheduler; since the 
> Resource object is backed by protobuf, creating such objects is expensive and 
> becomes a bottleneck.
> To address the problem, we can introduce a FinalResource (is it better to 
> call it ImmutableResource?) object, which is not backed by PBImpl. We can use 
> this object in frequently invoked paths in the scheduler.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6584) Correct license headers in hadoop-common, hdfs, yarn and mapreduce

2017-05-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020163#comment-16020163
 ] 

Hadoop QA commented on YARN-6584:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
39s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
40s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
27s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
33s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
24s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
58s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
50s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
10s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
9s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
20s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
27s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
51s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 12m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
44s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
46s{color} | {color:green} hadoop-auth in the patch passed with JDK v1.7.0_131. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
13s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
23s{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m  1s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
38s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
48s{c

[jira] [Commented] (YARN-5608) TestAMRMClient.setup() fails with ArrayOutOfBoundsException

2017-05-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020161#comment-16020161
 ] 

Hadoop QA commented on YARN-5608:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
31s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
48s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2489 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
36s{color} | {color:red} The patch 76 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 31s{color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.7.0_131. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | 
hadoop.yarn.client.TestApplicationMasterServiceProtocolOnHA |
|   | hadoop.yarn.client.TestApplicationClientProtocolOnHA |
|   | hadoop.yarn.client.TestResourceTrackerOnHA |
|   | hadoop.yarn.client.TestGetGroups |
| JDK v1.8.0_131 Timed out junit tests | 
org.apache.hadoop.yarn.client.TestRMFailover |
|   | org.apache.hadoop.yarn.client.api.impl.TestYarnClient |
|   | org.apache.hadoop.yarn.client.api.impl.TestAMRMClient |
|   | org.apache.hadoop.yarn.client.api.impl.TestNMClie

[jira] [Updated] (YARN-5949) Add pluggable configuration ACL policy interface and implementation

2017-05-22 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5949:
-
Summary: Add pluggable configuration ACL policy interface and 
implementation  (was: Add pluggable configuration policy interface as a 
component of MutableCSConfigurationProvider)

> Add pluggable configuration ACL policy interface and implementation
> ---
>
> Key: YARN-5949
> URL: https://issues.apache.org/jira/browse/YARN-5949
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5949-YARN-5734.001.patch, 
> YARN-5949-YARN-5734.002.patch, YARN-5949-YARN-5734.003.patch, 
> YARN-5949-YARN-5734.004.patch, YARN-5949-YARN-5734.005.patch
>
>
> This will allow different policies to customize how/if configuration changes 
> should be applied (for example, a policy might restrict whether a 
> configuration change by a certain user is allowed). This will be enforced by 
> the MutableCSConfigurationProvider.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6615) AmIpFilter drops query parameters on redirect

2017-05-22 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020069#comment-16020069
 ] 

Jason Lowe commented on YARN-6615:
--

Thanks for backporting the patches!  +1 for the branch-2.8/branch-2.7 patch, 
lgtm.  I'm not sure about the branch-2.6 patch since the findbugs warning 
sounds ominous, as in "you _always_ have a bug if this is being flagged" type 
of issue.

If there are no objections I'll commit the patches down through 2.7 tomorrow, 
but I'm looking for someone with more knowledge on the findbugs warning to see 
what needs to be done for 2.6 before that patch is committed.
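
For context, the shape of such a fix is usually to carry the query string 
through when the redirect target is built. A minimal sketch (hypothetical class, 
method, and parameter names; not the committed patch):
{code}
import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch: keep the query string when redirecting an AM web
// request instead of dropping it.
final class RedirectSketch {
  static void redirectPreservingQuery(HttpServletRequest req,
      HttpServletResponse resp, String proxyUriBase) throws IOException {
    StringBuilder redirect = new StringBuilder(proxyUriBase);
    redirect.append(req.getRequestURI());
    String query = req.getQueryString();  // null when no parameters present
    if (query != null && !query.isEmpty()) {
      redirect.append('?').append(query);
    }
    resp.sendRedirect(redirect.toString());
  }
}
{code}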


> AmIpFilter drops query parameters on redirect
> -
>
> Key: YARN-6615
> URL: https://issues.apache.org/jira/browse/YARN-6615
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha2
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-6615.1.patch, YARN-6615-branch-2.6.1.patch, 
> YARN-6615-branch-2.6.2.patch, YARN-6615-branch-2.8.1.patch
>
>
> When an AM web request is redirected to the RM the query parameters are 
> dropped from the web request.
> This happens for Spark as described in SPARK-20772.
> The repro steps are:
> - Start up the spark-shell in yarn mode and run a job
> - Try to access the job details through http://:4040/jobs/job?id=0
> - A HTTP ERROR 400 is thrown (requirement failed: missing id parameter)
> This works fine in local or standalone mode, but does not work on YARN, where 
> the query parameter is dropped. The request succeeds if the UI filter 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter is removed from 
> the config, which shows that the problem is in the filter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5608) TestAMRMClient.setup() fails with ArrayOutOfBoundsException

2017-05-22 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16019932#comment-16019932
 ] 

Jonathan Hung commented on YARN-5608:
-

Attaching the same patch with a different file name to trigger Jenkins for branch-2.7

> TestAMRMClient.setup() fails with ArrayOutOfBoundsException
> ---
>
> Key: YARN-5608
> URL: https://issues.apache.org/jira/browse/YARN-5608
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: test-fail
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: YARN-5608.002.patch, YARN-5608.003.patch, 
> YARN-5608.004.patch, YARN-5608.005.patch, YARN-5608-branch-2.7.001.patch, 
> YARN-5608.patch
>
>
> After 39 runs of the {{TestAMRMClient}} test, I encountered:
> {noformat}
> java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>   at java.util.ArrayList.rangeCheck(ArrayList.java:635)
>   at java.util.ArrayList.get(ArrayList.java:411)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.setup(TestAMRMClient.java:144)
> {noformat}
> I see it shows up occasionally in the error emails as well.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5608) TestAMRMClient.setup() fails with ArrayOutOfBoundsException

2017-05-22 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-5608:

Attachment: YARN-5608-branch-2.7.001.patch

> TestAMRMClient.setup() fails with ArrayOutOfBoundsException
> ---
>
> Key: YARN-5608
> URL: https://issues.apache.org/jira/browse/YARN-5608
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: test-fail
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: YARN-5608.002.patch, YARN-5608.003.patch, 
> YARN-5608.004.patch, YARN-5608.005.patch, YARN-5608-branch-2.7.001.patch, 
> YARN-5608.patch
>
>
> After 39 runs of the {{TestAMRMClient}} test, I encountered:
> {noformat}
> java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>   at java.util.ArrayList.rangeCheck(ArrayList.java:635)
>   at java.util.ArrayList.get(ArrayList.java:411)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.setup(TestAMRMClient.java:144)
> {noformat}
> I see it shows up occasionally in the error emails as well.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-5608) TestAMRMClient.setup() fails with ArrayOutOfBoundsException

2017-05-22 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung reopened YARN-5608:
-

> TestAMRMClient.setup() fails with ArrayOutOfBoundsException
> ---
>
> Key: YARN-5608
> URL: https://issues.apache.org/jira/browse/YARN-5608
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: test-fail
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: YARN-5608.002.patch, YARN-5608.003.patch, 
> YARN-5608.004.patch, YARN-5608.005.patch, YARN-5608-branch-2.7.001.patch, 
> YARN-5608.patch
>
>
> After 39 runs of the {{TestAMRMClient}} test, I encountered:
> {noformat}
> java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>   at java.util.ArrayList.rangeCheck(ArrayList.java:635)
>   at java.util.ArrayList.get(ArrayList.java:411)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.setup(TestAMRMClient.java:144)
> {noformat}
> I see it shows up occasionally in the error emails as well.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-05-22 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16019866#comment-16019866
 ] 

Daniel Templeton commented on YARN-6610:


I had similar thoughts.  Maybe instead of adding scarce resources, we could 
cover most use cases by doing what I did in this patch *plus* adding a 
calculator that looks at only CPU and memory, like the pre-resource-types 
{{DominantResourceCalculator}}.  (I guess the other way to look at it would be 
to have {{DominantResourceCalculator}} ignore everything except CPU and memory 
and add a new calculator to do what I did in the patch.)
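
As a rough illustration of how "dominant" generalizes to _n_ resources 
(hypothetical arrays and names, not from the patch): each resource type gets a 
share, and the comparison ranks the sorted shares instead of flipping a boolean:
{code}
// Hypothetical sketch: with n resource types, "dominant" becomes a rank over
// per-resource shares rather than a boolean choice between two resources.
final class ShareSketch {
  static double[] sortedShares(long[] used, long[] clusterTotal) {
    double[] shares = new double[used.length];
    for (int i = 0; i < used.length; i++) {
      shares[i] =
          clusterTotal[i] == 0 ? 0.0 : (double) used[i] / clusterTotal[i];
    }
    java.util.Arrays.sort(shares);  // shares[length - 1] is the dominant share
    return shares;                  // shares[0] is the most subordinate one
  }
}
{code}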

> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-6610.001.patch
>
>
> The {{dominant}} param assumes there are only two resources, i.e. true means 
> to compare the dominant, and false means to compare the subordinate.  Now 
> that there are _n_ resources, this parameter no longer makes sense.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6111) Rumen input does't work in SLS

2017-05-22 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16019828#comment-16019828
 ] 

Yufei Gu commented on YARN-6111:


[~yoyo], SLS should work well if you follow the steps in the SLS doc 
(SchedulerLoadSimulator.md). This only fixes the issue in the rumen trace 
example; the other two formats work well as far as I know.

> Rumen input does't work in SLS
> --
>
> Key: YARN-6111
> URL: https://issues.apache.org/jira/browse/YARN-6111
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 2.6.0, 2.7.3, 3.0.0-alpha2
> Environment: ubuntu14.0.4 os
>Reporter: YuJie Huang
>Assignee: Yufei Gu
>  Labels: test
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6111.001.patch
>
>
> Hi guys,
> I am trying to learn the use of SLS.
> I would like to get the file realtimetrack.json, but it only 
> contains "[]" at the end of a simulation. This is the command I use to 
> run the instance:
> HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json 
> --output-dir=sample-data 
> All other files, including metrics, appear to be properly populated. I can 
> also trace it on the web at http://localhost:10001/simulate
> Can someone help?
> Thanks



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6245) Add FinalResource object to reduce overhead of Resource class instancing

2017-05-22 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16019701#comment-16019701
 ] 

Daryn Sharp commented on YARN-6245:
---

[~jlowe] asked me to comment since we're running into 2.8 scheduler performance 
issues we believe are (in part) due to pb-impl-based objects.  I think I've 
designed a means for resources received via RPC to remain {{ResourcePBImpl}} 
while internally created resources are lightweight and only converted to a PB 
if they will be sent over the wire.

At least as a start, it's a very simple patch that substitutes in a lightweight 
object via {{Resource.newInstance}} that simply contains 2 longs.  Replaced 
usages of {{((ResourcePBImpl)r)#getProto()}} with 
{{ProtoUtils.convertToProtoFormat(Resource)}}, which converts the lightweight to 
a pb impl as required.  That's it.

We're testing today.  Will post a sample patch if it looks promising.
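
For illustration, a sketch of what such a lightweight object might look like 
(hypothetical class name; the real patch may differ):
{code}
// Hypothetical sketch of the lightweight object described above: two longs,
// no protobuf backing. It is converted to a PB impl only when it must cross
// the wire (e.g. via ProtoUtils.convertToProtoFormat).
public final class LightweightResource {
  private final long memorySize;    // in MB
  private final long virtualCores;

  public LightweightResource(long memorySize, long virtualCores) {
    this.memorySize = memorySize;
    this.virtualCores = virtualCores;
  }

  public long getMemorySize()   { return memorySize; }
  public long getVirtualCores() { return virtualCores; }
}
{code}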


> Add FinalResource object to reduce overhead of Resource class instancing
> 
>
> Key: YARN-6245
> URL: https://issues.apache.org/jira/browse/YARN-6245
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
> Attachments: observable-resource.patch, 
> YARN-6245.preliminary-staled.1.patch
>
>
> There is a lot of Resource object creation in the YARN scheduler; since the 
> Resource object is backed by protobuf, creating such objects is expensive and 
> becomes a bottleneck.
> To address the problem, we can introduce a FinalResource (is it better to 
> call it ImmutableResource?) object, which is not backed by PBImpl. We can use 
> this object in frequently invoked paths in the scheduler.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6628) Unexpected jackson-core-2.2.3 dependency introduced

2017-05-22 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16019646#comment-16019646
 ] 

Jason Lowe commented on YARN-6628:
--

Other dependency changes associated with the move from fst-2.24 to fst-2.50 
include adding java-util-1.9.0.jar and json-io-2.5.1.jar (in addition to 
jackson-core-2.2.3.jar) and removing objenesis-2.1.jar.

As for why jackson-core-2.7.8.jar is missing from 
hadoop-dist/target/hadoop-3.0.0-alpha3-SNAPSHOT/share/hadoop/yarn/lib, that's 
caused by HADOOP-12850.  That JIRA changed the way the projects are stitched 
together, and it only copies dependency jars if they aren't already somewhere 
else in the tree.  See dev-support/bin/dist-layout-stitching for details.  The 
jackson-core-2.7.8.jar is listed in 
hadoop-yarn-project/target/hadoop-3.0.0-alpha3-SNAPSHOT/share/hadoop/yarn/lib, 
so the fact that it's not in hadoop-dist is an artifact of how the dependencies 
are copied over.

The fst dependency on jackson-core says it's needed for 
createJSONConfiguration.  Is it possible that we are not calling that method 
and do not actually need this dependency in practice?
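
If testing confirms that code path is never exercised, one option would be to 
exclude the transitive jar where fst is declared. A sketch, assuming the usual 
fst coordinates ({{de.ruedigermoeller:fst}}); whether the exclusion is safe 
depends entirely on the answer to the question above:
{code}
<dependency>
  <groupId>de.ruedigermoeller</groupId>
  <artifactId>fst</artifactId>
  <exclusions>
    <exclusion>
      <!-- only needed for fst's createJSONConfiguration, which we may not call -->
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-core</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}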


> Unexpected jackson-core-2.2.3 dependency introduced
> ---
>
> Key: YARN-6628
> URL: https://issues.apache.org/jira/browse/YARN-6628
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.8.1
>Reporter: Jason Lowe
>Priority: Blocker
>
> The change in YARN-5894 caused jackson-core-2.2.3.jar to be added in 
> share/hadoop/yarn/lib/. This added dependency seems to be incompatible with 
> jackson-core-asl-1.9.13.jar which is also shipped as a dependency.  This new 
> jackson-core jar ends up breaking jobs that ran fine on 2.8.0.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6366) Refactor the NodeManager DeletionService to support additional DeletionTask types.

2017-05-22 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16019584#comment-16019584
 ] 

Varun Vasudev commented on YARN-6366:
-

+1 for the latest patch. I'll commit this tomorrow if no one objects.

> Refactor the NodeManager DeletionService to support additional DeletionTask 
> types.
> --
>
> Key: YARN-6366
> URL: https://issues.apache.org/jira/browse/YARN-6366
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-6366.001.patch, YARN-6366.002.patch, 
> YARN-6366.003.patch, YARN-6366.004.patch, YARN-6366.005.patch, 
> YARN-6366.006.patch, YARN-6366.007.patch, YARN-6366.008.patch
>
>
> The NodeManager's DeletionService only supports file-based DeletionTasks. 
> This makes sense, as files (and directories) have been the primary concern for 
> cleanup to date. With the addition of the Docker container runtime, additional 
> types of DeletionTask are likely to be required, such as deletion of Docker 
> containers and images. See YARN-5366 and YARN-5670. This issue is to refactor 
> the DeletionService to support additional DeletionTask types.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6629) NPE occurred when container allocation proposal is applied but its resource requests are removed before

2017-05-22 Thread Tao Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-6629:
---
Description: 
I wrote a test case to reproduce another problem on branch-2 and found a new 
NPE. Log: 
{code}
FATAL event.EventDispatcher (EventDispatcher.java:run(75)) - Error in handling 
event type NODE_UPDATE to the Event Dispatcher
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.allocate(AppSchedulingInfo.java:446)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:516)
at 
org.apache.hadoop.yarn.client.TestNegativePendingResource$1.answer(TestNegativePendingResource.java:225)
at 
org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:31)
at org.mockito.internal.MockHandler.handle(MockHandler.java:97)
at 
org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp$$EnhancerByMockitoWithCGLIB$$29eb8afc.apply()
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2396)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.submitResourceCommitRequest(CapacityScheduler.java:2281)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1247)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainerOnSingleNode(CapacityScheduler.java:1236)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1325)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1112)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.nodeUpdate(CapacityScheduler.java:987)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1367)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:143)
at 
org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run(EventDispatcher.java:66)
at java.lang.Thread.run(Thread.java:745)
{code}

Steps to reproduce this error, in chronological order:
1. AM started and requested 1 container with schedulerRequestKey#1 : 
ApplicationMasterService#allocate -->  CapacityScheduler#allocate --> 
SchedulerApplicationAttempt#updateResourceRequests --> 
AppSchedulingInfo#updateResourceRequests 
Added schedulerRequestKey#1 into schedulerKeyToPlacementSets
2. Scheduler allocated 1 container for this request and accepted the proposal
3. AM removed this request
ApplicationMasterService#allocate -->  CapacityScheduler#allocate --> 
SchedulerApplicationAttempt#updateResourceRequests --> 
AppSchedulingInfo#updateResourceRequests --> 
AppSchedulingInfo#addToPlacementSets --> 
AppSchedulingInfo#updatePendingResources
Removed schedulerRequestKey#1 from schedulerKeyToPlacementSets
4. Scheduler applied this proposal
CapacityScheduler#tryCommit --> FiCaSchedulerApp#apply --> 
AppSchedulingInfo#allocate 
Throws NPE when calling 
schedulerKeyToPlacementSets.get(schedulerRequestKey).allocate(schedulerKey, 
type, node);
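
A minimal sketch of a defensive fix at the failure point, using the names from 
the stack trace above (illustrative only; the actual patch may differ):
{code}
// Hypothetical sketch: tolerate a request that was removed between the
// allocation proposal and its commit, instead of dereferencing null.
SchedulingPlacementSet ps =
    schedulerKeyToPlacementSets.get(schedulerRequestKey);
if (ps == null) {
  // The AM cancelled the request before the proposal was applied;
  // skip the bookkeeping instead of throwing NPE.
  return;
}
ps.allocate(schedulerKey, type, node);
{code}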

  was:
I wrote a test case to reproduce another problem for branch-2 and found new NPE 
error,  log: 
{code}
FATAL event.EventDispatcher (EventDispatcher.java:run(75)) - Error in handling 
event type NODE_UPDATE to the Event Dispatcher
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.allocate(AppSchedulingInfo.java:446)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:516)
at 
org.apache.hadoop.yarn.client.TestNegativePendingResource$1.answer(TestNegativePendingResource.java:225)
at 
org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:31)
at org.mockito.internal.MockHandler.handle(MockHandler.java:97)
at 
org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp$$EnhancerByMockitoWithCGLIB$$29eb8afc.apply()
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2396)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.Capa

[jira] [Commented] (YARN-6584) Correct license headers in hadoop-common, hdfs, yarn and mapreduce

2017-05-22 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16019463#comment-16019463
 ] 

Yeliang Cang commented on YARN-6584:


Thank you for correcting me, [~sunilg]

> Correct license headers in hadoop-common, hdfs, yarn and mapreduce
> --
>
> Key: YARN-6584
> URL: https://issues.apache.org/jira/browse/YARN-6584
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Fix For: 2.9.0
>
> Attachments: YARN-6584-001.patch, YARN-6584-branch-2.001.patch, 
> YARN-6584-branch2.001.patch, YARN-6584-branch-2.002.patch
>
>
> The license headers in some java files are not the same as in others. 
> Submitting a patch to fix this!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6584) Correct license headers in hadoop-common, hdfs, yarn and mapreduce

2017-05-22 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16019462#comment-16019462
 ] 

Yeliang Cang commented on YARN-6584:


Sorry, I cancelled the patch by mistake. Renaming and submitting the patch again!

> Correct license headers in hadoop-common, hdfs, yarn and mapreduce
> --
>
> Key: YARN-6584
> URL: https://issues.apache.org/jira/browse/YARN-6584
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Fix For: 2.9.0
>
> Attachments: YARN-6584-001.patch, YARN-6584-branch-2.001.patch, 
> YARN-6584-branch2.001.patch, YARN-6584-branch-2.002.patch
>
>
> The license headers in some java files are not the same as in others. 
> Submitting a patch to fix this!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6584) Correct license headers in hadoop-common, hdfs, yarn and mapreduce

2017-05-22 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-6584:
---
Attachment: YARN-6584-branch-2.002.patch

> Correct license headers in hadoop-common, hdfs, yarn and mapreduce
> --
>
> Key: YARN-6584
> URL: https://issues.apache.org/jira/browse/YARN-6584
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Fix For: 2.9.0
>
> Attachments: YARN-6584-001.patch, YARN-6584-branch-2.001.patch, 
> YARN-6584-branch2.001.patch, YARN-6584-branch-2.002.patch
>
>
> The license headers in some java files are not the same as in others. 
> Submitting a patch to fix this!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6631) Refactor loader.js in new YARN-UI

2017-05-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16019454#comment-16019454
 ] 

Hadoop QA commented on YARN-6631:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  1m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6631 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12869242/YARN-6631.001.patch |
| Optional Tests |  asflicense  |
| uname | Linux c09d80f35fb2 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b6f66b0 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15992/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Refactor loader.js in new YARN-UI
> -
>
> Key: YARN-6631
> URL: https://issues.apache.org/jira/browse/YARN-6631
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-6631.001.patch
>
>
> Current loader.js file overwrites all other ENV properties configured in 
> config.env file other than "rmWebAdderss" and "timelineWebAddress". This 
> ticket is meant to fix the above issue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6631) Refactor loader.js in new YARN-UI

2017-05-22 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-6631:
---
Attachment: YARN-6631.001.patch

Adding v1 patch.
Hi [~sunilg], please help review the patch.

> Refactor loader.js in new YARN-UI
> -
>
> Key: YARN-6631
> URL: https://issues.apache.org/jira/browse/YARN-6631
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-6631.001.patch
>
>
> Current loader.js file overwrites all other ENV properties configured in 
> config.env file other than "rmWebAdderss" and "timelineWebAddress". This 
> ticket is meant to fix the above issue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6584) Correct license headers in hadoop-common, hdfs, yarn and mapreduce

2017-05-22 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6584:
--
Attachment: YARN-6584-branch-2.001.patch

Updating the same patch given by [~Cyl]; just renamed the patch file so that it 
compiles against branch-2.

> Correct license headers in hadoop-common, hdfs, yarn and mapreduce
> --
>
> Key: YARN-6584
> URL: https://issues.apache.org/jira/browse/YARN-6584
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Fix For: 2.9.0
>
> Attachments: YARN-6584-001.patch, YARN-6584-branch-2.001.patch, 
> YARN-6584-branch2.001.patch
>
>
> The license headers in some java files are not the same as in others. Submitting a 
> patch to fix this!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6631) Refactor loader.js in new YARN-UI

2017-05-22 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-6631:
---
Description: Current loader.js file overwrites all other ENV properties 
configured in config.env file other than "rmWebAdderss" and 
"timelineWebAddress". This ticket is meant to fix the above issue.  (was: 
Current loader.js file overwrites all other ENV properties configured in 
config.env file other than "rmWebAdderss" and "timelineWebAddress". This is 
ticket is meant to fix the above issue.)

> Refactor loader.js in new YARN-UI
> -
>
> Key: YARN-6631
> URL: https://issues.apache.org/jira/browse/YARN-6631
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akhil PB
>Assignee: Akhil PB
>
> Current loader.js file overwrites all other ENV properties configured in 
> config.env file other than "rmWebAdderss" and "timelineWebAddress". This 
> ticket is meant to fix the above issue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6631) Refactor loader.js in new YARN-UI

2017-05-22 Thread Akhil PB (JIRA)
Akhil PB created YARN-6631:
--

 Summary: Refactor loader.js in new YARN-UI
 Key: YARN-6631
 URL: https://issues.apache.org/jira/browse/YARN-6631
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Akhil PB
Assignee: Akhil PB


Current loader.js file overwrites all other ENV properties configured in 
config.env file other than "rmWebAdderss" and "timelineWebAddress". This 
ticket is meant to fix the above issue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6630) Container worker dir could not recover when NM restart

2017-05-22 Thread Yang Wang (JIRA)
Yang Wang created YARN-6630:
---

 Summary: Container worker dir could not recover when NM restart
 Key: YARN-6630
 URL: https://issues.apache.org/jira/browse/YARN-6630
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Yang Wang


When ContainerRetryPolicy is NEVER_RETRY, the container work dir will not be 
saved in the NM state store. Then, when the NM restarts, container.workDir is 
null, which may cause other exceptions.

{code:title=ContainerLaunch.java}
...
  private void recordContainerWorkDir(ContainerId containerId,
  String workDir) throws IOException{
container.setWorkDir(workDir);
if (container.isRetryContextSet()) {
  context.getNMStateStore().storeContainerWorkDir(containerId, workDir);
}
  }
{code}

{code:title=ContainerImpl.java}
  static class ResourceLocalizedWhileRunningTransition
  extends ContainerTransition {
...
  String linkFile = new Path(container.workDir, link).toString();
...
{code}
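
For illustration, a minimal sketch of one possible fix, assuming the work dir 
can simply be persisted unconditionally (hypothetical; not necessarily the 
patch that will land here):

{code:title=ContainerLaunch.java (hypothetical sketch)}
  private void recordContainerWorkDir(ContainerId containerId,
      String workDir) throws IOException {
    container.setWorkDir(workDir);
    // Persist the work dir unconditionally so that container.workDir can be
    // recovered after an NM restart, even when no retry context is set.
    context.getNMStateStore().storeContainerWorkDir(containerId, workDir);
  }
{code}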



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6584) Correct license headers in hadoop-common, hdfs, yarn and mapreduce

2017-05-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16019422#comment-16019422
 ] 

Hadoop QA commented on YARN-6584:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 13s{color} 
| {color:red} YARN-6584 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6584 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12869229/YARN-6584-branch2.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15991/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Correct license headers in hadoop-common, hdfs, yarn and mapreduce
> --
>
> Key: YARN-6584
> URL: https://issues.apache.org/jira/browse/YARN-6584
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Fix For: 2.9.0
>
> Attachments: YARN-6584-001.patch, YARN-6584-branch2.001.patch
>
>
> The license headers in some java files are not the same as in others. Submitting a 
> patch to fix this!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6584) Correct license headers in hadoop-common, hdfs, yarn and mapreduce

2017-05-22 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16019399#comment-16019399
 ] 

Yeliang Cang commented on YARN-6584:


Hi [~sunilg], I have uploaded a branch-2 patch. Please check it and let me 
know if you have any questions!

> Correct license headers in hadoop-common, hdfs, yarn and mapreduce
> --
>
> Key: YARN-6584
> URL: https://issues.apache.org/jira/browse/YARN-6584
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Fix For: 2.9.0
>
> Attachments: YARN-6584-001.patch, YARN-6584-branch2.001.patch
>
>
> The license headers in some java files are not the same as in others. Submitting a 
> patch to fix this!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6584) Correct license headers in hadoop-common, hdfs, yarn and mapreduce

2017-05-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16019381#comment-16019381
 ] 

Hudson commented on YARN-6584:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11764 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11764/])
YARN-6584. Correct license headers in hadoop-common, hdfs, yarn and (sunilg: 
rev b6f66b0da1cc77f4e61118404a008b4bd7e1a752)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/bloom/TestBloomFilters.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesAttempts.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/WordMean.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskAttemptContextImpl.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestShuffleProvider.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/bloom/BloomFilterCommonTester.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/WordMedian.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/recovery/records/TimelineDelegationTokenIdentifierData.java
* (edit) 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
* (edit) 
hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/shade/resource/ServicesResourceTransformer.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DirectoryListing.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/security/TestMRCredentials.java
* (edit) 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestTools.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobContextImpl.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/WordStandardDeviation.java


> Correct license headers in hadoop-common, hdfs, yarn and mapreduce
> --
>
> Key: YARN-6584
> URL: https://issues.apache.org/jira/browse/YARN-6584
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Fix For: 2.9.0
>
> Attachments: YARN-6584-001.patch, YARN-6584-branch2.001.patch
>
>
> The license headers in some java files are not the same as in others. Submitting a 
> patch to fix this!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6584) Correct license headers in hadoop-common, hdfs, yarn and mapreduce

2017-05-22 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-6584:
---
Attachment: YARN-6584-branch2.001.patch

> Correct license headers in hadoop-common, hdfs, yarn and mapreduce
> --
>
> Key: YARN-6584
> URL: https://issues.apache.org/jira/browse/YARN-6584
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6584-001.patch, YARN-6584-branch2.001.patch
>
>
> The license headers in some java files are not the same as in others. Submitting a 
> patch to fix this!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5006) ResourceManager quit due to ApplicationStateData exceed the limit size of znode in zk

2017-05-22 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16019319#comment-16019319
 ] 

Bibin A Chundatt edited comment on YARN-5006 at 5/22/17 9:00 AM:
-

Hi [~imstefanlee], does YARN-6125 solve your issue? Or are you looking for some 
implementation similar to YARN-6125?


was (Author: bibinchundatt):
Hi [~imstefanlee] , Does YARN-6125 solve your issue?? Or some implementation 
similar to YARN-6125

> ResourceManager quit due to ApplicationStateData exceed the limit  size of 
> znode in zk
> --
>
> Key: YARN-5006
> URL: https://issues.apache.org/jira/browse/YARN-5006
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0, 2.7.2
>Reporter: dongtingting
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5006.001.patch, YARN-5006.002.patch
>
>
> A client submits a job, and this job adds 1 file into the DistributedCache. When 
> the job is submitted, the ResourceManager stores ApplicationStateData into zk, and 
> the ApplicationStateData exceeds the znode size limit, so the RM exits with code 1.
> The related code in RMStateStore.java:
> {code}
>   private static class StoreAppTransition
>   implements SingleArcTransition {
> @Override
> public void transition(RMStateStore store, RMStateStoreEvent event) {
>   if (!(event instanceof RMStateStoreAppEvent)) {
> // should never happen
> LOG.error("Illegal event type: " + event.getClass());
> return;
>   }
>   ApplicationState appState = ((RMStateStoreAppEvent) 
> event).getAppState();
>   ApplicationId appId = appState.getAppId();
>   ApplicationStateData appStateData = ApplicationStateData
>   .newInstance(appState);
>   LOG.info("Storing info for app: " + appId);
>   try {  
> store.storeApplicationStateInternal(appId, appStateData);  //store 
> the appStateData
> store.notifyApplication(new RMAppEvent(appId,
>RMAppEventType.APP_NEW_SAVED));
>   } catch (Exception e) {
> LOG.error("Error storing app: " + appId, e);
> store.notifyStoreOperationFailed(e);   //handle fail event, system 
> exit 
>   }
> };
>   }
> {code}
> The Exception log:
> {code}
>  ...
> 2016-04-20 11:26:35,732 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore 
> AsyncDispatcher event handler: Maxed out ZK retries. Giving up!
> 2016-04-20 11:26:35,732 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore 
> AsyncDispatcher event handler: Error storing app: 
> application_1461061795989_17671
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss
> at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
> at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:931)
> at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:911)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:936)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:933)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1075)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1096)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:933)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:947)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.createWithRetries(ZKRMStateStore.java:956)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.storeApplicationStateInternal(ZKRMStateStore.java:626)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:138)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:123)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java

[jira] [Commented] (YARN-5006) ResourceManager quit due to ApplicationStateData exceed the limit size of znode in zk

2017-05-22 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16019319#comment-16019319
 ] 

Bibin A Chundatt commented on YARN-5006:


Hi [~imstefanlee] , Does YARN-6125 solve your issue?? Or some implementation 
similar to YARN-6125
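
For context, YARN-6125 added a limit on the size of application data the state 
store will accept. A minimal sketch of that kind of guard, assuming a 
configurable byte limit (illustrative only; not the actual YARN-6125 patch):

{code:title=Hypothetical size guard in the RM state store (sketch)}
  // Fail fast when the serialized ApplicationStateData would exceed the znode
  // size limit, instead of exhausting ZK retries and bringing down the RM.
  private void checkAppStateSize(ApplicationId appId,
      ApplicationStateData appStateData, int znodeLimitBytes)
      throws IOException {
    byte[] data = appStateData.getProto().toByteArray();
    if (data.length > znodeLimitBytes) { // e.g. ZK jute.maxbuffer, ~1 MB default
      throw new IOException("ApplicationStateData for " + appId + " is "
          + data.length + " bytes, which exceeds the configured znode limit of "
          + znodeLimitBytes + " bytes");
    }
  }
{code}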

> ResourceManager quit due to ApplicationStateData exceed the limit  size of 
> znode in zk
> --
>
> Key: YARN-5006
> URL: https://issues.apache.org/jira/browse/YARN-5006
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0, 2.7.2
>Reporter: dongtingting
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5006.001.patch, YARN-5006.002.patch
>
>
> A client submits a job, and this job adds 1 file into the DistributedCache. When 
> the job is submitted, the ResourceManager stores ApplicationStateData into zk, and 
> the ApplicationStateData exceeds the znode size limit, so the RM exits with code 1.
> The related code in RMStateStore.java:
> {code}
>   private static class StoreAppTransition
>   implements SingleArcTransition {
> @Override
> public void transition(RMStateStore store, RMStateStoreEvent event) {
>   if (!(event instanceof RMStateStoreAppEvent)) {
> // should never happen
> LOG.error("Illegal event type: " + event.getClass());
> return;
>   }
>   ApplicationState appState = ((RMStateStoreAppEvent) 
> event).getAppState();
>   ApplicationId appId = appState.getAppId();
>   ApplicationStateData appStateData = ApplicationStateData
>   .newInstance(appState);
>   LOG.info("Storing info for app: " + appId);
>   try {  
> store.storeApplicationStateInternal(appId, appStateData);  //store 
> the appStateData
> store.notifyApplication(new RMAppEvent(appId,
>RMAppEventType.APP_NEW_SAVED));
>   } catch (Exception e) {
> LOG.error("Error storing app: " + appId, e);
> store.notifyStoreOperationFailed(e);   //handle fail event, system 
> exit 
>   }
> };
>   }
> {code}
> The Exception log:
> {code}
>  ...
> 2016-04-20 11:26:35,732 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore 
> AsyncDispatcher event handler: Maxed out ZK retries. Giving up!
> 2016-04-20 11:26:35,732 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore 
> AsyncDispatcher event handler: Error storing app: 
> application_1461061795989_17671
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss
> at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
> at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:931)
> at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:911)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:936)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:933)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1075)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1096)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:933)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:947)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.createWithRetries(ZKRMStateStore.java:956)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.storeApplicationStateInternal(ZKRMStateStore.java:626)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:138)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:123)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.handleStoreEvent(RMStateStore.java:806)
> at 
> org.apache.hadoop.yarn.server.resourcemanager

[jira] [Reopened] (YARN-6584) Correct license headers in hadoop-common, hdfs, yarn and mapreduce

2017-05-22 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G reopened YARN-6584:
---

branch-2 patch is needed.

> Correct license headers in hadoop-common, hdfs, yarn and mapreduce
> --
>
> Key: YARN-6584
> URL: https://issues.apache.org/jira/browse/YARN-6584
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6584-001.patch
>
>
> The license headers in some java files are not the same as in others. Submitting a 
> patch to fix this!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6629) NPE occurred when container allocation proposal is applied but its resource requests are removed before

2017-05-22 Thread Tao Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-6629:
---
Description: 
I wrote a test case to reproduce another problem on branch-2 and found a new NPE 
error; log:
{code}
FATAL event.EventDispatcher (EventDispatcher.java:run(75)) - Error in handling 
event type NODE_UPDATE to the Event Dispatcher
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.allocate(AppSchedulingInfo.java:446)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:516)
at 
org.apache.hadoop.yarn.client.TestNegativePendingResource$1.answer(TestNegativePendingResource.java:225)
at 
org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:31)
at org.mockito.internal.MockHandler.handle(MockHandler.java:97)
at 
org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp$$EnhancerByMockitoWithCGLIB$$29eb8afc.apply()
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2396)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.submitResourceCommitRequest(CapacityScheduler.java:2281)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1247)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainerOnSingleNode(CapacityScheduler.java:1236)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1325)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1112)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.nodeUpdate(CapacityScheduler.java:987)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1367)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:143)
at 
org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run(EventDispatcher.java:66)
at java.lang.Thread.run(Thread.java:745)
{code}

Reproduce this error in chronological order:
1. AM started and requested 1 container with schedulerRequestKey#1 : 
ApplicationMasterService#allocate -->  CapacityScheduler#allocate --> 
SchedulerApplicationAttempt#updateResourceRequests --> 
AppSchedulingInfo#updateResourceRequests 
Added schedulerRequestKey#1 into schedulerKeyToPlacementSets
2. Scheduler allocated 1 container for this request and accepted the proposal
3. AM removed this request
ApplicationMasterService#allocate -->  CapacityScheduler#allocate --> 
SchedulerApplicationAttempt#updateResourceRequests --> 
AppSchedulingInfo#updateResourceRequests --> 
AppSchedulingInfo#addToPlacementSets --> 
AppSchedulingInfo#updatePendingResources
Removed schedulerRequestKey#1 from schedulerKeyToPlacementSets
4. Scheduler applied this proposal and wanted to deduct the pending resource
CapacityScheduler#tryCommit --> FiCaSchedulerApp#apply --> 
AppSchedulingInfo#allocate 
Throw NPE when called 
schedulerKeyToPlacementSets.get(schedulerRequestKey).allocate(schedulerKey, 
type, node);

  was:
I wrote a test case to test other problem for branch-2 and found new NPE error, 
 log: 
{code}
FATAL event.EventDispatcher (EventDispatcher.java:run(75)) - Error in handling 
event type NODE_UPDATE to the Event Dispatcher
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.allocate(AppSchedulingInfo.java:446)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:516)
at 
org.apache.hadoop.yarn.client.TestNegativePendingResource$1.answer(TestNegativePendingResource.java:225)
at 
org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:31)
at org.mockito.internal.MockHandler.handle(MockHandler.java:97)
at 
org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp$$EnhancerByMockitoWithCGLIB$$29eb8afc.apply()
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2396)
at 
org.apache.hadoop.yarn.server.reso

[jira] [Updated] (YARN-6629) NPE occurred when container allocation proposal is applied but its resource requests are removed before

2017-05-22 Thread Tao Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-6629:
---
Description: 
I wrote a test case to test other problem for branch-2 and found new NPE error, 
 log: 
{code}
FATAL event.EventDispatcher (EventDispatcher.java:run(75)) - Error in handling 
event type NODE_UPDATE to the Event Dispatcher
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.allocate(AppSchedulingInfo.java:446)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:516)
at 
org.apache.hadoop.yarn.client.TestNegativePendingResource$1.answer(TestNegativePendingResource.java:225)
at 
org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:31)
at org.mockito.internal.MockHandler.handle(MockHandler.java:97)
at 
org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp$$EnhancerByMockitoWithCGLIB$$29eb8afc.apply()
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2396)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.submitResourceCommitRequest(CapacityScheduler.java:2281)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1247)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainerOnSingleNode(CapacityScheduler.java:1236)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1325)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1112)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.nodeUpdate(CapacityScheduler.java:987)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1367)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:143)
at 
org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run(EventDispatcher.java:66)
at java.lang.Thread.run(Thread.java:745)
{code}

Reproduce this error in chronological order:
1. AM started and requested 1 container with schedulerRequestKey#1 : 
ApplicationMasterService#allocate -->  CapacityScheduler#allocate --> 
SchedulerApplicationAttempt#updateResourceRequests --> 
AppSchedulingInfo#updateResourceRequests 
Added schedulerRequestKey#1 into schedulerKeyToPlacementSets
2. Scheduler allocated 1 container for this request and accepted the proposal
3. AM removed this request
ApplicationMasterService#allocate -->  CapacityScheduler#allocate --> 
SchedulerApplicationAttempt#updateResourceRequests --> 
AppSchedulingInfo#updateResourceRequests --> 
AppSchedulingInfo#addToPlacementSets --> 
AppSchedulingInfo#updatePendingResources
Removed schedulerRequestKey#1 from schedulerKeyToPlacementSets
4. Scheduler applied this proposal and wanted to deduct the pending resource
CapacityScheduler#tryCommit --> FiCaSchedulerApp#apply --> 
AppSchedulingInfo#allocate 
Throw NPE when called 
schedulerKeyToPlacementSets.get(schedulerRequestKey).allocate(schedulerKey, 
type, node);

  was:
Error log:
{code}
FATAL event.EventDispatcher (EventDispatcher.java:run(75)) - Error in handling 
event type NODE_UPDATE to the Event Dispatcher
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.allocate(AppSchedulingInfo.java:446)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:516)
at 
org.apache.hadoop.yarn.client.TestNegativePendingResource$1.answer(TestNegativePendingResource.java:225)
at 
org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:31)
at org.mockito.internal.MockHandler.handle(MockHandler.java:97)
at 
org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp$$EnhancerByMockitoWithCGLIB$$29eb8afc.apply()
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2396)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.submitResourceCommitRequest(Capacit

[jira] [Updated] (YARN-6584) Correct license headers in hadoop-common, hdfs, yarn and mapreduce

2017-05-22 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6584:
--
Summary: Correct license headers in hadoop-common, hdfs, yarn and mapreduce 
 (was: Some license modification in codes)

> Correct license headers in hadoop-common, hdfs, yarn and mapreduce
> --
>
> Key: YARN-6584
> URL: https://issues.apache.org/jira/browse/YARN-6584
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-6584-001.patch
>
>
> The license headers in some java files are not the same as in others. Submitting a 
> patch to fix this!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6629) NPE occurred when container allocation proposal is applied but its resource requests are removed before

2017-05-22 Thread Tao Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-6629:
---
Attachment: YARN-6629.001.patch

Attaching a patch for review.

> NPE occurred when container allocation proposal is applied but its resource 
> requests are removed before
> ---
>
> Key: YARN-6629
> URL: https://issues.apache.org/jira/browse/YARN-6629
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-6629.001.patch
>
>
> Error log:
> {code}
> FATAL event.EventDispatcher (EventDispatcher.java:run(75)) - Error in 
> handling event type NODE_UPDATE to the Event Dispatcher
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.allocate(AppSchedulingInfo.java:446)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:516)
> at 
> org.apache.hadoop.yarn.client.TestNegativePendingResource$1.answer(TestNegativePendingResource.java:225)
> at 
> org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:31)
> at org.mockito.internal.MockHandler.handle(MockHandler.java:97)
> at 
> org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp$$EnhancerByMockitoWithCGLIB$$29eb8afc.apply()
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2396)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.submitResourceCommitRequest(CapacityScheduler.java:2281)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1247)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainerOnSingleNode(CapacityScheduler.java:1236)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1325)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1112)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.nodeUpdate(CapacityScheduler.java:987)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1367)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:143)
> at 
> org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run(EventDispatcher.java:66)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Reproduce this error in chronological order:
> 1. AM started and requested 1 container with schedulerRequestKey#1 : 
> ApplicationMasterService#allocate -->  CapacityScheduler#allocate --> 
> SchedulerApplicationAttempt#updateResourceRequests --> 
> AppSchedulingInfo#updateResourceRequests 
> Added schedulerRequestKey#1 into schedulerKeyToPlacementSets
> 2. Scheduler allocated 1 container for this request and accepted the proposal
> 3. AM removed this request
> ApplicationMasterService#allocate -->  CapacityScheduler#allocate --> 
> SchedulerApplicationAttempt#updateResourceRequests --> 
> AppSchedulingInfo#updateResourceRequests --> 
> AppSchedulingInfo#addToPlacementSets --> 
> AppSchedulingInfo#updatePendingResources
> Removed schedulerRequestKey#1 from schedulerKeyToPlacementSets
> 4. Scheduler applied this proposal and wanted to deduct the pending resource
> CapacityScheduler#tryCommit --> FiCaSchedulerApp#apply --> 
> AppSchedulingInfo#allocate 
> Throw NPE when called 
> schedulerKeyToPlacementSets.get(schedulerRequestKey).allocate(schedulerKey, 
> type, node);



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6629) NPE occurred when container allocation proposal is applied but its resource requests are removed before

2017-05-22 Thread Tao Yang (JIRA)
Tao Yang created YARN-6629:
--

 Summary: NPE occurred when container allocation proposal is 
applied but its resource requests are removed before
 Key: YARN-6629
 URL: https://issues.apache.org/jira/browse/YARN-6629
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0-alpha2, 2.9.0
Reporter: Tao Yang
Assignee: Tao Yang


Error log:
{code}
FATAL event.EventDispatcher (EventDispatcher.java:run(75)) - Error in handling 
event type NODE_UPDATE to the Event Dispatcher
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.allocate(AppSchedulingInfo.java:446)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:516)
at 
org.apache.hadoop.yarn.client.TestNegativePendingResource$1.answer(TestNegativePendingResource.java:225)
at 
org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:31)
at org.mockito.internal.MockHandler.handle(MockHandler.java:97)
at 
org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp$$EnhancerByMockitoWithCGLIB$$29eb8afc.apply()
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2396)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.submitResourceCommitRequest(CapacityScheduler.java:2281)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1247)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainerOnSingleNode(CapacityScheduler.java:1236)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1325)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1112)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.nodeUpdate(CapacityScheduler.java:987)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1367)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:143)
at 
org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run(EventDispatcher.java:66)
at java.lang.Thread.run(Thread.java:745)
{code}

Reproduce this error in chronological order:
1. AM started and requested 1 container with schedulerRequestKey#1 : 
ApplicationMasterService#allocate -->  CapacityScheduler#allocate --> 
SchedulerApplicationAttempt#updateResourceRequests --> 
AppSchedulingInfo#updateResourceRequests 
Added schedulerRequestKey#1 into schedulerKeyToPlacementSets
2. Scheduler allocated 1 container for this request and accepted the proposal
3. AM removed this request
ApplicationMasterService#allocate -->  CapacityScheduler#allocate --> 
SchedulerApplicationAttempt#updateResourceRequests --> 
AppSchedulingInfo#updateResourceRequests --> 
AppSchedulingInfo#addToPlacementSets --> 
AppSchedulingInfo#updatePendingResources
Removed schedulerRequestKey#1 from schedulerKeyToPlacementSets
4. Scheduler applied this proposal and wanted to deduct the pending resource
CapacityScheduler#tryCommit --> FiCaSchedulerApp#apply --> 
AppSchedulingInfo#allocate 
Throw NPE when called 
schedulerKeyToPlacementSets.get(schedulerRequestKey).allocate(schedulerKey, 
type, node);
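
A minimal sketch of the kind of defensive check that could avoid the NPE, 
assuming we simply skip the allocation when the placement set was already 
removed (hypothetical; the attached patch may take a different approach; names 
follow the description above):

{code:title=AppSchedulingInfo.java (hypothetical sketch)}
    // The request may have been removed (step 3 above) between the proposal
    // being accepted and being applied, so guard the lookup instead of
    // dereferencing a possibly-null placement set.
    SchedulingPlacementSet<SchedulerNode> ps =
        schedulerKeyToPlacementSets.get(schedulerRequestKey);
    if (ps == null) {
      LOG.warn("Placement set for " + schedulerRequestKey + " was removed"
          + " before the allocation proposal was applied; skipping allocation");
      return null; // assuming the enclosing allocate(...) returns a list
    }
    return ps.allocate(schedulerKey, type, node);
{code}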



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6555) Enable flow context read (& corresponding write) for recovering application with NM restart

2017-05-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16019224#comment-16019224
 ] 

Hadoop QA commented on YARN-6555:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
45s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 65 unchanged - 0 fixed = 66 total (was 65) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
12s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6555 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12869201/YARN-6555.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux bf11670fd959 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / fcbdecc |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/15990/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15990/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15990/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-pr

[jira] [Commented] (YARN-6111) Rumen input does't work in SLS

2017-05-22 Thread YuJie Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16019215#comment-16019215
 ] 

YuJie Huang commented on YARN-6111:
---

Thank you all very much. Since I am a beginner, could you explain in detail how 
to use it correctly? Thank you very much!

> Rumen input does't work in SLS
> --
>
> Key: YARN-6111
> URL: https://issues.apache.org/jira/browse/YARN-6111
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 2.6.0, 2.7.3, 3.0.0-alpha2
> Environment: ubuntu14.0.4 os
>Reporter: YuJie Huang
>Assignee: Yufei Gu
>  Labels: test
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6111.001.patch
>
>
> Hi guys,
> I am trying to learn the use of SLS.
> I would like to get the file realtimetrack.json, but it only 
> contains "[]" at the end of a simulation. This is the command I use to 
> run the instance:
> HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json 
> --output-dir=sample-data 
> All other files, including metrics, appear to be properly populated. I can 
> also trace it on the web at http://localhost:10001/simulate
> Can someone help?
> Thanks



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org