[jira] [Commented] (YARN-7237) Cleanup usages of ResourceProfiles

2017-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195559#comment-16195559
 ] 

Hadoop QA commented on YARN-7237:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
45s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  6s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 184 unchanged - 3 fixed = 186 total (was 187) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
53s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
58s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 36s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 30m 32s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}179m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestApplicationMasterService |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
|   | hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter |
|  

[jira] [Commented] (YARN-7294) TestSignalContainer#testSignalRequestDeliveryToNM fails intermittently with Fair scheduler

2017-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195556#comment-16195556
 ] 

Hadoop QA commented on YARN-7294:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m  0s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestApplicationMasterService |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
|   | hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppAttempt |
|   | hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7294 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890845/YARN-7294.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 30f43f83ddea 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2b08a1f |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/17813/artifact/patchprocess/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 

[jira] [Commented] (YARN-6608) Backport all SLS improvements from trunk to branch-2

2017-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195514#comment-16195514
 ] 

Hadoop QA commented on YARN-6608:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 22 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
43s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
24s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-sls in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  6m 
25s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  6m 25s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 32s{color} | {color:orange} root: The patch generated 33 new + 163 unchanged 
- 234 fixed = 196 total (was 397) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-sls in the patch failed. {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
0s{color} | {color:red} The patch generated 2 new + 2 unchanged - 22 fixed = 4 
total (was 24) {color} |
| {color:orange}-0{color} | {color:orange} shelldocs {color} | {color:orange}  
0m  8s{color} | {color:orange} The patch generated 16 new + 47 unchanged - 0 
fixed = 63 total (was 47) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 13s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 33s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-rumen in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 20s{color} 
| {color:red} hadoop-sls in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
26s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}187m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
|   | hadoop.yarn.server.resourcemanager.TestApplicationACLs |
|   | 

[jira] [Commented] (YARN-2497) Changes for fair scheduler to support allocate resource respect labels

2017-10-06 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195488#comment-16195488
 ] 

Yufei Gu commented on YARN-2497:


Thanks for working on this, [~templedf]. I have a pretty long and detailed set 
of comments, but I want to discuss with you before I post them. 

Node labeling inevitably partitions the cluster, which impacts the scheduler a 
lot. It can lead to lower utilization, unfairness, infinite loops, or 
deadlock/livelock. The lower utilization and unfairness seem benign but are 
hard to avoid; we definitely want to prevent the infinite loops and 
deadlock/livelock, though. The current solution supports labeling for each 
queue, which makes the schedulers much more complicated and makes support and 
maintenance much harder. Based on my experience with the MaxAMShare of a 
queue, the community spent years and a bunch of JIRAs to get it right after it 
was introduced into the fair scheduler. However, MaxAMShare issues still pop 
up and bite us from time to time. Queue-level labeling is much more complex 
than that in terms of the calculation of fair share, max share, and max AM 
share. We probably don't want to do that if we can avoid it.

Instead of queue labeling, I suggest application-level labeling. It would be 
pretty similar to what we did for data locality. Data locality doesn't affect 
queue management or any of the calculations related to queue properties; it's 
all about resource requests and node properties, which is similar to node 
labeling. In that sense, node labeling wouldn't affect queue management at 
all. 
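
To make the analogy concrete, here is a minimal, hypothetical sketch (mine, 
not from any patch) of an application-level label check performed per node at 
allocation time, the way a locality check is; the class and field names are 
illustrative:

{code}
// Hypothetical sketch: application-level label matching, analogous to a
// locality check. AppLabelMatcher and labelExpression are illustrative names.
import java.util.Set;

final class AppLabelMatcher {
  private final String labelExpression; // e.g. "gpu", set per application

  AppLabelMatcher(String labelExpression) {
    this.labelExpression = labelExpression;
  }

  // Called while iterating schedulable nodes, like a locality check; the
  // queue's fair share / max share / max AM share math is never involved.
  boolean canAllocateOn(Set<String> nodeLabels) {
    if (labelExpression == null || labelExpression.isEmpty()) {
      return nodeLabels.isEmpty(); // unlabeled apps stay on unlabeled nodes
    }
    return nodeLabels.contains(labelExpression);
  }
}
{code}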

What do you think? 

> Changes for fair scheduler to support allocate resource respect labels
> --
>
> Key: YARN-2497
> URL: https://issues.apache.org/jira/browse/YARN-2497
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Wangda Tan
>Assignee: Daniel Templeton
> Attachments: YARN-2497.001.patch, YARN-2497.002.patch, 
> YARN-2497.003.patch, YARN-2497.004.patch, YARN-2497.005.patch, 
> YARN-2497.006.patch, YARN-2497.007.patch, YARN-2497.008.patch, 
> YARN-2497.009.patch, YARN-2497.010.patch, YARN-2499.WIP01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7294) TestSignalContainer#testSignalRequestDeliveryToNM fails intermittently with Fair scheduler

2017-10-06 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7294:
-
Attachment: YARN-7294.000.patch

> TestSignalContainer#testSignalRequestDeliveryToNM fails intermittently with 
> Fair scheduler
> --
>
> Key: YARN-7294
> URL: https://issues.apache.org/jira/browse/YARN-7294
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7294.000.patch
>
>
> This issue exists because the Fair Scheduler needs an update after 
> allocation, and more node updates, before all of the requests are fulfilled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7294) TestSignalContainer#testSignalRequestDeliveryToNM fails intermittently with Fair scheduler

2017-10-06 Thread Miklos Szegedi (JIRA)
Miklos Szegedi created YARN-7294:


 Summary: TestSignalContainer#testSignalRequestDeliveryToNM fails 
intermittently with Fair scheduler
 Key: YARN-7294
 URL: https://issues.apache.org/jira/browse/YARN-7294
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Miklos Szegedi
Assignee: Miklos Szegedi


This issue exists because the Fair Scheduler needs an update after allocation, 
and more node updates, before all of the requests are fulfilled.
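
One way a test can cope with this (a sketch under my assumptions, not the 
attached patch): keep heartbeating the node and calling FairScheduler#update() 
until the expected containers arrive, instead of assuming one heartbeat is 
enough. MockRM/MockNM/MockAM are the existing RM test utilities; the loop 
shape is the assumption here.

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ResourceRequest;
import org.apache.hadoop.yarn.server.resourcemanager.MockAM;
import org.apache.hadoop.yarn.server.resourcemanager.MockNM;
import org.apache.hadoop.yarn.server.resourcemanager.MockRM;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler;

final class FsAllocationTestUtil {
  // Drive heartbeats and scheduler updates until `expected` containers arrive.
  static List<Container> waitForContainers(MockRM rm, MockNM nm, MockAM am,
      int expected) throws Exception {
    List<Container> allocated = new ArrayList<Container>();
    for (int i = 0; i < 100 && allocated.size() < expected; i++) {
      nm.nodeHeartbeat(true);                               // offer the node
      ((FairScheduler) rm.getResourceScheduler()).update(); // refresh shares
      AllocateResponse resp = am.allocate(
          new ArrayList<ResourceRequest>(), new ArrayList<ContainerId>());
      allocated.addAll(resp.getAllocatedContainers());
      Thread.sleep(100);
    }
    return allocated;
  }
}
{code}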



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7254) UI and metrics changes related to absolute resource configuration

2017-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195476#comment-16195476
 ] 

Hadoop QA commented on YARN-7254:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-7254 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7254 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890507/YARN-7254.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17811/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> UI and metrics changes related to absolute resource configuration
> -
>
> Key: YARN-7254
> URL: https://issues.apache.org/jira/browse/YARN-7254
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-7254.001.patch, YARN-7254.002.patch
>
>
> Impact on UI and metrics related to absolute resource configuration on CS



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7237) Cleanup usages of ResourceProfiles

2017-10-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7237:
-
Attachment: YARN-7237.007.patch

Test failures are not related; fixed a few other checkstyle issues (ver.7). 
[~templedf], would you mind blessing the latest patch?

> Cleanup usages of ResourceProfiles
> --
>
> Key: YARN-7237
> URL: https://issues.apache.org/jira/browse/YARN-7237
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7237.001.patch, YARN-7237.002.patch, 
> YARN-7237.003.patch, YARN-7237.004.patch, YARN-7237.005.patch, 
> YARN-7237.006.patch, YARN-7237.007.patch
>
>
> While doing tests, I found a couple of issues:
> 1) When using {{ProfileCapability#getProfileCapabilityOverride}}, it 
> overwrites whatever is specified in resource-profiles.json when the value is 
> >= 0, which is different from the javadocs of {{ProfileCapability}}: 
> bq. For example, if you have a resource profile "small" that maps to <4096M, 
> 2 cores, 1 gpu> and you set the capability override to <8192M, 0 cores, 0 
> gpu>, then the actual resource allocation on the ResourceManager will be 
> <8192M, 2 cores, 1 gpu>
> To me, the correct behavior should be to overwrite only when the value is 
> > 0. The reason is that, by default, a resource value is set to 0. For 
> example, assume we have a profile {{"a" = (mem=3, vcore=5, res_1=7)}} and 
> create a capability override (capability = new Resource(8)). The final 
> result should be (mem=8, vcore=5, res_1=7), instead of (mem=8, vcore=0, 
> res_1=0).
> 2) ResourceProfileManager now loads the minimum/maximum profiles from a 
> config file (resource-profiles.json). To me this is not correct, because the 
> minimum/maximum allocation for each resource type is already specified 
> inside {{resource-types.xml}}. We should always use 
> {{ResourceUtils#getResourceTypesMinimum/MaximumAllocation}} to get them from 
> resource-types.xml and yarn-site.xml. These values will be added to the 
> profiles so clients can get these configs.
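
A minimal sketch of the override rule proposed in 1), using a plain map in 
place of the real Resource/ProfileCapability types (illustrative only, not 
the patch):

{code}
// Sketch of the proposed rule: only values > 0 override the profile; 0 (the
// default) keeps the profile's value. The map-based representation is an
// assumption for illustration, not the real Resource API.
import java.util.HashMap;
import java.util.Map;

final class ProfileOverrideSketch {
  static Map<String, Long> merge(Map<String, Long> profile,
      Map<String, Long> override) {
    Map<String, Long> result = new HashMap<String, Long>(profile);
    for (Map.Entry<String, Long> e : override.entrySet()) {
      if (e.getValue() > 0) {       // proposed: > 0, not >= 0
        result.put(e.getKey(), e.getValue());
      }
    }
    return result;
  }

  public static void main(String[] args) {
    Map<String, Long> profileA = new HashMap<String, Long>();
    profileA.put("mem", 3L);
    profileA.put("vcore", 5L);
    profileA.put("res_1", 7L);
    Map<String, Long> override = new HashMap<String, Long>();
    override.put("mem", 8L);
    override.put("vcore", 0L);
    override.put("res_1", 0L);
    // Prints the merged values mem=8, vcore=5, res_1=7 (map order may vary),
    // matching the example above.
    System.out.println(merge(profileA, override));
  }
}
{code}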



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7138) Fix incompatible API change for YarnScheduler involved by YARN-5521

2017-10-06 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195408#comment-16195408
 ] 

Wangda Tan commented on YARN-7138:
--

[~rkanter], +1 to your proposal.

> Fix incompatible API change for YarnScheduler involved by YARN-5521
> ---
>
> Key: YARN-7138
> URL: https://issues.apache.org/jira/browse/YARN-7138
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Reporter: Junping Du
>Priority: Critical
>
> From the JACC report for 2.8.2 against 2.7.4, it appears that we have an 
> incompatible change in YarnScheduler:
> {noformat}
> hadoop-yarn-server-resourcemanager-2.7.4.jar, YarnScheduler.class
> package org.apache.hadoop.yarn.server.resourcemanager.scheduler
> YarnScheduler.allocate ( ApplicationAttemptId p1, List p2, 
> List p3, List p4, List p5 ) [abstract]  :  
> Allocation 
> {noformat}
> The root cause is YARN-5221. We should change it back, or work around this 
> by adding back the original API (marked as deprecated if it is not used any 
> more).
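
One possible shape of that workaround (a sketch under simplified types, not 
[~rkanter]'s actual proposal): restore the old overload as a deprecated 
default method that delegates to the new signature with an empty list of 
update requests.

{code}
// Hypothetical sketch only; parameter types are simplified placeholders for
// the real ResourceRequest/ContainerId/UpdateContainerRequest lists.
import java.util.Collections;
import java.util.List;

interface SchedulerSketch {
  final class Allocation { } // placeholder for the real Allocation type

  // New signature introduced by the incompatible change.
  Allocation allocate(String attemptId, List<String> asks,
      List<String> releases, List<String> blacklistAdds,
      List<String> blacklistRemovals, List<String> updateRequests);

  // Old signature, added back for compatibility and deprecated.
  @Deprecated
  default Allocation allocate(String attemptId, List<String> asks,
      List<String> releases, List<String> blacklistAdds,
      List<String> blacklistRemovals) {
    return allocate(attemptId, asks, releases, blacklistAdds,
        blacklistRemovals, Collections.<String>emptyList());
  }
}
{code}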



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6744) Recover component information on YARN native services AM restart

2017-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195329#comment-16195329
 ] 

Hadoop QA commented on YARN-6744:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
31s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
8s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core:
 The patch generated 7 new + 62 unchanged - 3 fixed = 69 total (was 65) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 46s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
17s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-6744 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890823/YARN-6744-yarn-native-services.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fb6ebd11a275 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 4993b8a |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17809/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt
 |
|  Test Results | 

[jira] [Updated] (YARN-6608) Backport all SLS improvements from trunk to branch-2

2017-10-06 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-6608:
---
Attachment: YARN-6608-branch-2.v4.patch

> Backport all SLS improvements from trunk to branch-2
> 
>
> Key: YARN-6608
> URL: https://issues.apache.org/jira/browse/YARN-6608
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-6608-branch-2.v0.patch, 
> YARN-6608-branch-2.v1.patch, YARN-6608-branch-2.v2.patch, 
> YARN-6608-branch-2.v3.patch, YARN-6608-branch-2.v4.patch
>
>
> The SLS has received lots of attention in trunk, but only some of it made it 
> back to branch-2. This patch is a "raw" fork-lift of the trunk development 
> from hadoop-tools/hadoop-sls.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7190) Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user classpath

2017-10-06 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-7190:
-
Target Version/s: 2.9.0
 Description: 
[~jlowe] had a good observation about the user classpath getting extra jars in 
Hadoop 2.x brought in with TSv2.  If users start picking up Hadoop 2.x's 
version of the HBase jars instead of the ones they shipped with their job, it 
could be a problem.

So when TSv2 is to be used in 2.x, the HBase-related jars should come onto 
only the NM classpath, not the user classpath.

Here is a list of some of the jars:

{code}
commons-csv-1.0.jar
commons-el-1.0.jar
commons-httpclient-3.1.jar
disruptor-3.3.0.jar
findbugs-annotations-1.3.9-1.jar
hbase-annotations-1.2.6.jar
hbase-client-1.2.6.jar
hbase-common-1.2.6.jar
hbase-hadoop2-compat-1.2.6.jar
hbase-hadoop-compat-1.2.6.jar
hbase-prefix-tree-1.2.6.jar
hbase-procedure-1.2.6.jar
hbase-protocol-1.2.6.jar
hbase-server-1.2.6.jar
htrace-core-3.1.0-incubating.jar
jamon-runtime-2.4.1.jar
jasper-compiler-5.5.23.jar
jasper-runtime-5.5.23.jar
jcodings-1.0.8.jar
joni-2.1.2.jar
jsp-2.1-6.1.14.jar
jsp-api-2.1-6.1.14.jar
jsr311-api-1.1.1.jar
metrics-core-2.2.0.jar
servlet-api-2.5-6.1.14.jar
{code}


  was:

[~jlowe] had a good observation about the user classpath getting extra jars in 
Hadoop 2.x brought in with TSv2.  If users start picking up Hadoop 2.x's 
version of the HBase jars instead of the ones they shipped with their job, it 
could be a problem.

So when TSv2 is to be used in 2.x, the HBase-related jars should come onto 
only the NM classpath, not the user classpath.

Here is a list of some of the jars:

{code}
commons-csv-1.0.jar
commons-el-1.0.jar
commons-httpclient-3.1.jar
disruptor-3.3.0.jar
findbugs-annotations-1.3.9-1.jar
hbase-annotations-1.2.6.jar
hbase-client-1.2.6.jar
hbase-common-1.2.6.jar
hbase-hadoop2-compat-1.2.6.jar
hbase-hadoop-compat-1.2.6.jar
hbase-prefix-tree-1.2.6.jar
hbase-procedure-1.2.6.jar
hbase-protocol-1.2.6.jar
hbase-server-1.2.6.jar
htrace-core-3.1.0-incubating.jar
jamon-runtime-2.4.1.jar
jasper-compiler-5.5.23.jar
jasper-runtime-5.5.23.jar
jcodings-1.0.8.jar
joni-2.1.2.jar
jsp-2.1-6.1.14.jar
jsp-api-2.1-6.1.14.jar
jsr311-api-1.1.1.jar
metrics-core-2.2.0.jar
servlet-api-2.5-6.1.14.jar
{code}



> Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user 
> classpath
> 
>
> Key: YARN-7190
> URL: https://issues.apache.org/jira/browse/YARN-7190
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Varun Saxena
>
> [~jlowe] had a good observation about the user classpath getting extra jars 
> in Hadoop 2.x brought in with TSv2.  If users start picking up Hadoop 2.x's 
> version of the HBase jars instead of the ones they shipped with their job, 
> it could be a problem.
> So when TSv2 is to be used in 2.x, the HBase-related jars should come onto 
> only the NM classpath, not the user classpath.
> Here is a list of some of the jars:
> {code}
> commons-csv-1.0.jar
> commons-el-1.0.jar
> commons-httpclient-3.1.jar
> disruptor-3.3.0.jar
> findbugs-annotations-1.3.9-1.jar
> hbase-annotations-1.2.6.jar
> hbase-client-1.2.6.jar
> hbase-common-1.2.6.jar
> hbase-hadoop2-compat-1.2.6.jar
> hbase-hadoop-compat-1.2.6.jar
> hbase-prefix-tree-1.2.6.jar
> hbase-procedure-1.2.6.jar
> hbase-protocol-1.2.6.jar
> hbase-server-1.2.6.jar
> htrace-core-3.1.0-incubating.jar
> jamon-runtime-2.4.1.jar
> jasper-compiler-5.5.23.jar
> jasper-runtime-5.5.23.jar
> jcodings-1.0.8.jar
> joni-2.1.2.jar
> jsp-2.1-6.1.14.jar
> jsp-api-2.1-6.1.14.jar
> jsr311-api-1.1.1.jar
> metrics-core-2.2.0.jar
> servlet-api-2.5-6.1.14.jar
> {code}
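
An illustrative way to verify the separation from inside a task (my sketch, 
not part of any patch): print any entries on the JVM's classpath that look 
like HBase jars. After the change, a user container should print nothing 
unless the job shipped its own HBase.

{code}
// Illustrative check: list classpath entries that look like HBase jars.
import java.io.File;

public final class ClasspathLeakCheck {
  public static void main(String[] args) {
    String cp = System.getProperty("java.class.path");
    for (String entry : cp.split(File.pathSeparator)) {
      if (entry.toLowerCase().contains("hbase")) {
        System.out.println("leaked: " + entry);
      }
    }
  }
}
{code}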



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6744) Recover component information on YARN native services AM restart

2017-10-06 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-6744:
-
Attachment: YARN-6744-yarn-native-services.001.patch

> Recover component information on YARN native services AM restart
> 
>
> Key: YARN-6744
> URL: https://issues.apache.org/jira/browse/YARN-6744
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-6744-yarn-native-services.001.patch
>
>
> The new RoleInstance#Container constructor does not populate all the 
> information needed for a RoleInstance. This is the constructor used when 
> recovering running containers in AppState#addRestartedContainer. We will have 
> to figure out a way to determine this information for a running container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7169) Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)

2017-10-06 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-7169:
-
Priority: Critical  (was: Major)

> Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)
> --
>
> Key: YARN-7169
> URL: https://issues.apache.org/jira/browse/YARN-7169
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Critical
> Attachments: YARN-7169-YARN-5355_branch2.0001.patch
>
>
> JIRA to track the backport of the new yarn-ui onto branch-2. Right now it is 
> being added into Timeline Service v2's branch-2, which is YARN-5355_branch2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7170) Investigate bower dependencies for YARN UI v2

2017-10-06 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-7170:
-
Priority: Critical  (was: Major)

> Investigate bower dependencies for YARN UI v2
> -
>
> Key: YARN-7170
> URL: https://issues.apache.org/jira/browse/YARN-7170
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-7170.001.patch
>
>
> [INFO] bower ember#2.2.0   progress Receiving
> objects:  50% (38449/75444), 722.46 MiB | 3.30 MiB/s
> ...
> [INFO] bower ember#2.2.0   progress Receiving
> objects:  99% (75017/75444), 1.56 GiB | 3.31 MiB/s
> Investigate the dependencies to reduce the download size and speed up 
> compilation.
> cc/ [~Sreenath] and [~akhilpb]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7169) Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)

2017-10-06 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-7169:
-
Target Version/s: 2.9.0
 Description: 
JIRA to track the backport of the new yarn-ui onto branch-2. Right now it is 
being added into Timeline Service v2's branch-2, which is YARN-5355_branch2.



  was:

JIRA to track the backport of the new yarn-ui onto branch-2. Right now it is 
being added into Timeline Service v2's branch-2, which is YARN-5355_branch2.




> Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)
> --
>
> Key: YARN-7169
> URL: https://issues.apache.org/jira/browse/YARN-7169
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Critical
> Attachments: YARN-7169-YARN-5355_branch2.0001.patch
>
>
> JIRA to track the backport of the new yarn-ui onto branch-2. Right now it is 
> being added into Timeline Service v2's branch-2, which is YARN-5355_branch2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7170) Investigate bower dependencies for YARN UI v2

2017-10-06 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-7170:
-
Target Version/s: 2.9.0

> Investigate bower dependencies for YARN UI v2
> -
>
> Key: YARN-7170
> URL: https://issues.apache.org/jira/browse/YARN-7170
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-7170.001.patch
>
>
> [INFO] bower ember#2.2.0   progress Receiving
> objects:  50% (38449/75444), 722.46 MiB | 3.30 MiB/s
> ...
> [INFO] bower ember#2.2.0   progress Receiving
> objects:  99% (75017/75444), 1.56 GiB | 3.31 MiB/s
> Investigate the dependencies to reduce the download size and speed up 
> compilation.
> cc/ [~Sreenath] and [~akhilpb]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7169) Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)

2017-10-06 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-7169:
-
Summary: Backport new yarn-ui to branch2 code (starting with 
YARN-5355_branch2)  (was: Backport new yarn-ui to TSv2's branch2 code 
(YARN-5355_branch2))

> Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)
> --
>
> Key: YARN-7169
> URL: https://issues.apache.org/jira/browse/YARN-7169
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: YARN-7169-YARN-5355_branch2.0001.patch
>
>
> JIRA to track the backport of the new yarn-ui onto branch-2. Right now it is 
> being added into Timeline Service v2's branch-2, which is YARN-5355_branch2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6744) Recover component information on YARN native services AM restart

2017-10-06 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi reassigned YARN-6744:


Assignee: Billie Rinaldi

> Recover component information on YARN native services AM restart
> 
>
> Key: YARN-6744
> URL: https://issues.apache.org/jira/browse/YARN-6744
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
>
> The new RoleInstance#Container constructor does not populate all the 
> information needed for a RoleInstance. This is the constructor used when 
> recovering running containers in AppState#addRestartedContainer. We will have 
> to figure out a way to determine this information for a running container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7241) Merge YARN-5734 to trunk/branch-2

2017-10-06 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195208#comment-16195208
 ] 

Jonathan Hung commented on YARN-7241:
-

branch-2 test failures don't seem related. They pass locally for me, except 
TestFederationRMFailoverProxyProvider (which seems to be caused by YARN-6932) 
and TestContainerManager, which fails both before and after the feature is 
applied.

> Merge YARN-5734 to trunk/branch-2
> -
>
> Key: YARN-7241
> URL: https://issues.apache.org/jira/browse/YARN-7241
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-7241.001.patch, YARN-7241.002.patch, 
> YARN-7241.003.patch, YARN-7241.004.patch, YARN-7241.005.patch, 
> YARN-7241.006.patch, YARN-7241-branch-2.001.patch, 
> YARN-7241-branch-2.002.patch, YARN-7241-branch-2.003.patch, 
> YARN-7241-branch-3.0.001.patch
>
>
> Ticket for Jenkins pre-commit of the full diff.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7215) REST API to list all deployed services by the same user

2017-10-06 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7215:

Attachment: YARN-7215.yarn-native-services.003.patch

- Ensure the solrj client is part of the Hadoop distro tarball.

> REST API to list all deployed services by the same user
> ---
>
> Key: YARN-7215
> URL: https://issues.apache.org/jira/browse/YARN-7215
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, applications
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7215.yarn-native-services.001.patch, 
> YARN-7215.yarn-native-services.002.patch, 
> YARN-7215.yarn-native-services.003.patch
>
>
> In Slider, it is possible to list the applications deployed by the same user 
> by using:
> {code}
> slider list
> {code}
> This API can help the UI to display the applications and services deployed 
> by the same user.
> ApiServer does not have the ability to list all applications/services at 
> this time.  This API requires a fast response when listing all applications 
> because it is a common UI operation.  ApiServer-deployed applications 
> persist their configuration in HDFS, similarly to Slider, but using a 
> directory listing to display deployed applications might cost too much 
> overhead on the NameNode.  We may want to use an alternative storage 
> mechanism to cache deployed application configuration and accelerate the 
> response time of listing deployed applications.
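
A hedged JAX-RS sketch of what such a list endpoint could look like, with an 
in-memory cache standing in for whatever storage mechanism is chosen; the 
path, class, and field names are assumptions, not the real ApiServer code:

{code}
// Illustrative per-user list endpoint backed by a cache, so the common UI
// call never triggers an HDFS directory listing on the NameNode.
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/services/v1/applications")
public class ServiceListResource {
  // user -> deployed service names, refreshed out of band from HDFS
  private static final Map<String, List<String>> CACHE =
      new ConcurrentHashMap<String, List<String>>();

  @GET
  @Produces(MediaType.APPLICATION_JSON)
  public Response list(@QueryParam("user") String user) {
    List<String> services = CACHE.get(user);
    return Response.ok(services == null
        ? Collections.<String>emptyList() : services).build();
  }
}
{code}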



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7215) REST API to list all deployed services by the same user

2017-10-06 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7215:

Attachment: YARN-7215.yarn-native-services.002.patch

Rebased the patch based on Jian's last-minute changes to the YARN-6626 patch.

> REST API to list all deployed services by the same user
> ---
>
> Key: YARN-7215
> URL: https://issues.apache.org/jira/browse/YARN-7215
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, applications
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7215.yarn-native-services.001.patch, 
> YARN-7215.yarn-native-services.002.patch
>
>
> In Slider, it is possible to list the applications deployed by the same user 
> by using:
> {code}
> slider list
> {code}
> This API can help the UI to display the applications and services deployed 
> by the same user.
> ApiServer does not have the ability to list all applications/services at 
> this time.  This API requires a fast response when listing all applications 
> because it is a common UI operation.  ApiServer-deployed applications 
> persist their configuration in HDFS, similarly to Slider, but using a 
> directory listing to display deployed applications might cost too much 
> overhead on the NameNode.  We may want to use an alternative storage 
> mechanism to cache deployed application configuration and accelerate the 
> response time of listing deployed applications.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7202) End-to-end UT for api-server

2017-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194957#comment-16194957
 ] 

Hadoop QA commented on YARN-7202:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  6m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
18s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
16s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 3s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
12s{color} | {color:green} root: The patch generated 0 new + 7 unchanged - 3 
fixed = 7 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
33s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Commented] (YARN-7207) Cache the RM proxy server address

2017-10-06 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194869#comment-16194869
 ] 

Yufei Gu commented on YARN-7207:


Committed to trunk, branch-3.0 and branch-2. Thanks for the review, [~rkanter] 
and [~aw].

> Cache the RM proxy server address
> -
>
> Key: YARN-7207
> URL: https://issues.apache.org/jira/browse/YARN-7207
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 2.9.0, 3.0.0, 3.1.0
>
> Attachments: YARN-7207.001.patch, YARN-7207.002.patch, 
> YARN-7207.branch-2.patch, YARN-7207.branch-3.0.patch
>
>
> {{getLocalHostName()}} is invoked to generate the report for each 
> application, which means it is called 1000 times for each 
> {{getApplications()}} call if there are 1000 apps in the RM. Some users have 
> hit a performance issue when {{getLocalHostName()}} is slow under certain 
> network environments.
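
The fix amounts to resolving the value once and reusing it. A minimal sketch 
of that caching pattern (field and method names are illustrative, not the 
actual RM code):

{code}
// Minimal caching sketch: resolve the host name at most once instead of on
// every application report. Names are illustrative, not the RM's code.
import java.net.InetAddress;
import java.net.UnknownHostException;

final class ProxyHostCache {
  private volatile String cachedHostName;

  String getLocalHostName() throws UnknownHostException {
    String host = cachedHostName;
    if (host == null) {
      synchronized (this) {
        if (cachedHostName == null) {
          cachedHostName = InetAddress.getLocalHost().getHostName();
        }
        host = cachedHostName;
      }
    }
    return host;
  }
}
{code}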



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7207) Cache the RM proxy server address

2017-10-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194871#comment-16194871
 ] 

Hudson commented on YARN-7207:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13043 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13043/])
YARN-7207. Cache the RM proxy server address. (Yufei Gu) (yufei: rev 
72d22b753abde4d07a727479d3f3d5d84d5dd6b2)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/TestRMAppAttemptTransitions.java


> Cache the RM proxy server address
> -
>
> Key: YARN-7207
> URL: https://issues.apache.org/jira/browse/YARN-7207
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 2.9.0, 3.0.0, 3.1.0
>
> Attachments: YARN-7207.001.patch, YARN-7207.002.patch, 
> YARN-7207.branch-2.patch, YARN-7207.branch-3.0.patch
>
>
> {{getLocalHostName()}} is invoked to generate the report for each 
> application, which means it is called 1000 times for each 
> {{getApplications()}} call if there are 1000 apps in the RM. Some users have 
> hit a performance issue when {{getLocalHostName()}} is slow under certain 
> network environments.






[jira] [Updated] (YARN-7207) Cache the RM proxy server address

2017-10-06 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7207:
---
Attachment: YARN-7207.branch-3.0.patch

> Cache the RM proxy server address
> -
>
> Key: YARN-7207
> URL: https://issues.apache.org/jira/browse/YARN-7207
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7207.001.patch, YARN-7207.002.patch, 
> YARN-7207.branch-2.patch, YARN-7207.branch-3.0.patch
>
>
> {{getLocalHostName()}} is invoked when generating the report for each 
> application, which means it is called 1000 times per {{getApplications()}} 
> call if there are 1000 apps in the RM. Some users have hit performance 
> issues when {{getLocalHostName()}} is slow in certain network environments.






[jira] [Updated] (YARN-7207) Cache the RM proxy server address

2017-10-06 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7207:
---
Attachment: YARN-7207.branch-2.patch

> Cache the RM proxy server address
> -
>
> Key: YARN-7207
> URL: https://issues.apache.org/jira/browse/YARN-7207
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7207.001.patch, YARN-7207.002.patch, 
> YARN-7207.branch-2.patch
>
>
> {{getLocalHostName()}} is invoked when generating the report for each 
> application, which means it is called 1000 times per {{getApplications()}} 
> call if there are 1000 apps in the RM. Some users have hit performance 
> issues when {{getLocalHostName()}} is slow in certain network environments.






[jira] [Updated] (YARN-7207) Cache the RM proxy server address

2017-10-06 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7207:
---
Summary: Cache the RM proxy server address  (was: Cache the local host name 
when getting application list in RM)

> Cache the RM proxy server address
> -
>
> Key: YARN-7207
> URL: https://issues.apache.org/jira/browse/YARN-7207
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7207.001.patch, YARN-7207.002.patch
>
>
> {{getLocalHostName()}} is invoked when generating the report for each 
> application, which means it is called 1000 times per {{getApplications()}} 
> call if there are 1000 apps in the RM. Some users have hit performance 
> issues when {{getLocalHostName()}} is slow in certain network environments.






[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-10-06 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194825#comment-16194825
 ] 

Yufei Gu commented on YARN-2162:


Committed to branch-2 and branch-3.0.

> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: test-400nm-200app-2k_NODE_UPDATE.timecost.svg, 
> YARN-2162.001.patch, YARN-2162.002.patch, YARN-2162.003.patch, 
> YARN-2162.004.patch, YARN-2162.005.patch, YARN-2162.006.patch, 
> YARN-2162.007.patch, YARN-2162.008.patch, YARN-2162.branch-2.010.patch, 
> YARN-2162.branch-3.0.009.patch
>
>
> minResources and maxResources in the Fair Scheduler configs are expressed as 
> absolute numbers (X mb, Y vcores).
> As a result, whenever we expand or shrink our Hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is 
> inconvenient.
> We can avoid this problem by optionally configuring these properties as a 
> percentage of cluster capacity.
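As a rough illustration of the approach (a hypothetical helper, not the actual
implementation, which lives in the new {{ConfigurableResource}}), a percentage
setting can be stored as a fraction and resolved against the current cluster
capacity at scheduling time, so the effective limit tracks cluster size:

{code:java}
// Illustrative only: a "50%" setting is kept as a fraction and resolved
// against the live cluster resource, so expanding or shrinking the
// cluster needs no config change.
public class PercentageResource {
  private final double fraction;

  public PercentageResource(String value) {
    // Assumes a simple "N%" form; the real config parsing is richer.
    this.fraction = Double.parseDouble(value.trim().replace("%", "")) / 100.0;
  }

  public long resolveMemoryMb(long clusterMemoryMb) {
    return (long) (clusterMemoryMb * fraction);
  }

  public int resolveVcores(int clusterVcores) {
    return (int) (clusterVcores * fraction);
  }
}
{code}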






[jira] [Updated] (YARN-7293) ContainerLaunch.cleanupContainer may leak a starting container process

2017-10-06 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7293:
-
Summary: ContainerLaunch.cleanupContainer may leak a starting container 
process  (was: ContainerLaunch.cleanupContainer may miss a starting container)

> ContainerLaunch.cleanupContainer may leak a starting container process
> --
>
> Key: YARN-7293
> URL: https://issues.apache.org/jira/browse/YARN-7293
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>
> Currently, the NM can leave a starting container running even though it was 
> supposed to be killed, if the kill event is handled right before the 
> container pid file is created. The relevant part of YARN-7009 (having 
> ContainerLaunch send a kill event when the container process is not found) 
> needs to be backported to branch-2.
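As a minimal, self-contained model of the race (not the actual
{{ContainerLaunch}} code), cleanup must not assume the pid file exists; if the
process is not found yet, it records that a kill was requested so the launch
path can tear the container down once the pid appears:

{code:java}
import java.nio.file.Files;
import java.nio.file.Path;

// Model of the race: a kill handled before the pid file is written must
// not be dropped, or the container process leaks.
public class ContainerCleanup {
  private volatile boolean killRequested;

  public void cleanup(Path pidFile) throws Exception {
    if (Files.exists(pidFile)) {
      kill(Files.readString(pidFile).trim()); // normal path
    } else {
      // Process not found yet: remember the kill so it is re-issued.
      killRequested = true;
    }
  }

  // Called by the launch path right after it writes the pid file.
  public void onPidFileCreated(Path pidFile) throws Exception {
    if (killRequested) {
      kill(Files.readString(pidFile).trim());
    }
  }

  private void kill(String pid) throws Exception {
    new ProcessBuilder("kill", "-9", pid).start().waitFor();
  }
}
{code}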






[jira] [Updated] (YARN-7293) ContainerLaunch.cleanupContainer may miss a starting container

2017-10-06 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7293:
-
Description: Currently, the NM can leave a starting container running even 
though it was supposed to be killed, if the kill event is handled right before 
the container pid file is created. The relevant part of YARN-7009 (having 
ContainerLaunch send a kill event when the container process is not found) 
needs to be backported to branch-2  (was: The relevant part of YARN-7009 needs 
to be backported to branch-2)

> ContainerLaunch.cleanupContainer may miss a starting container
> --
>
> Key: YARN-7293
> URL: https://issues.apache.org/jira/browse/YARN-7293
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>
> Currently, the NM can leave a starting container running even though it was 
> supposed to be killed, if the kill event is handled right before the 
> container pid file is created. The relevant part of YARN-7009 (having 
> ContainerLaunch send a kill event when the container process is not found) 
> needs to be backported to branch-2.






[jira] [Updated] (YARN-7202) End-to-end UT for api-server

2017-10-06 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7202:

Attachment: YARN-7202.yarn-native-services.008.patch

Rebased the patch against the changes Jian He made in YARN-6626.

> End-to-end UT for api-server
> 
>
> Key: YARN-7202
> URL: https://issues.apache.org/jira/browse/YARN-7202
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Eric Yang
> Attachments: YARN-7202.yarn-native-services.001.patch, 
> YARN-7202.yarn-native-services.002.patch, 
> YARN-7202.yarn-native-services.003.patch, 
> YARN-7202.yarn-native-services.004.patch, 
> YARN-7202.yarn-native-services.005.patch, 
> YARN-7202.yarn-native-services.006.patch, 
> YARN-7202.yarn-native-services.007.patch, 
> YARN-7202.yarn-native-services.008.patch
>
>







[jira] [Commented] (YARN-7272) Enable timeline collector fault tolerance

2017-10-06 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194763#comment-16194763
 ] 

Jason Lowe commented on YARN-7272:
--

Leveldb seems like a great fit for this, IMO.  It has high write performance 
and works quite well in the nodemanager use case.  This case seems identical 
in that the collector would write to the database and only read upon 
recovery.  Leveldb is already a dependency used in multiple places, and I'd 
hate to see us add yet another dependency or reinvent the wheel here.
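For reference, the write-ahead pattern being suggested is small. A minimal
sketch using the same leveldbjni/iq80 API the NM recovery store already
depends on (the key layout and serialization here are purely illustrative):

{code:java}
import java.io.File;
import java.util.Map;
import java.util.function.BiConsumer;
import org.fusesource.leveldbjni.JniDBFactory;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.DBIterator;
import org.iq80.leveldb.Options;

// Write on the hot path, read only during recovery.
public class CollectorStateStore implements AutoCloseable {
  private final DB db;

  public CollectorStateStore(File dir) throws Exception {
    Options options = new Options();
    options.createIfMissing(true);
    db = JniDBFactory.factory.open(dir, options);
  }

  // Hot path: record each entity write before acknowledging it.
  public void recordEntity(String entityId, byte[] serialized) {
    db.put(JniDBFactory.bytes(entityId), serialized);
  }

  // Recovery path: replay everything written before the failure.
  public void recover(BiConsumer<String, byte[]> replay) throws Exception {
    try (DBIterator it = db.iterator()) {
      for (it.seekToFirst(); it.hasNext(); ) {
        Map.Entry<byte[], byte[]> e = it.next();
        replay.accept(JniDBFactory.asString(e.getKey()), e.getValue());
      }
    }
  }

  @Override
  public void close() throws Exception {
    db.close();
  }
}
{code}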


> Enable timeline collector fault tolerance
> -
>
> Key: YARN-7272
> URL: https://issues.apache.org/jira/browse/YARN-7272
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Rohith Sharma K S
>
> If an NM goes down, taking with it the timeline collector aux service for a 
> running YARN app, we would like that app to re-establish a connection with a 
> new timeline collector.






[jira] [Commented] (YARN-7275) NM Statestore cleanup for Container updates

2017-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194605#comment-16194605
 ] 

Hadoop QA commented on YARN-7275:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 10 new + 242 unchanged - 0 fixed = 252 total (was 242) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 57s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.scheduler.TestDistributedScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7275 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890729/YARN-7275.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8c537ca76de0 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 49ae538 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17807/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17807/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 

[jira] [Commented] (YARN-7245) In Cap Sched UI, Max AM Resource column in Active Users Info section should be per-user

2017-10-06 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194559#comment-16194559
 ] 

Sunil G commented on YARN-7245:
---

Looks fine, thank you.
I will commit the patch later today if there are no objections.

> In Cap Sched UI, Max AM Resource column in Active Users Info section should 
> be per-user
> ---
>
> Key: YARN-7245
> URL: https://issues.apache.org/jira/browse/YARN-7245
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, yarn
>Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: CapSched UI Showing Inaccurate Per User Max AM 
> Resource.png, Max AM Resource Per User -- Fixed.png, YARN-7245.001.patch
>
>







[jira] [Updated] (YARN-7275) NM Statestore cleanup for Container updates

2017-10-06 Thread kartheek muthyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kartheek muthyala updated YARN-7275:

Attachment: YARN-7275.002.patch

Updating the patch, as the previous one missed a commit.

> NM Statestore cleanup for Container updates
> ---
>
> Key: YARN-7275
> URL: https://issues.apache.org/jira/browse/YARN-7275
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
>Priority: Blocker
> Attachments: YARN-7275.001.patch, YARN-7275.002.patch
>
>
> Currently, only resource updates are recorded in the NM state store; we need 
> to add ExecutionType updates as well.
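As a minimal sketch of the missing piece (a hypothetical storage layer, not
the real NM statestore records): on any container update, resource or
ExecutionType, the updated container token is persisted under one key, so
recovery replays the latest state of both:

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Stand-in for the statestore: persist the updated container token on
// every update, covering ExecutionType changes as well as resources.
public class ContainerUpdateStore {
  private final ConcurrentMap<String, byte[]> db = new ConcurrentHashMap<>();

  public void storeContainerUpdateToken(String containerId, byte[] token) {
    db.put(containerId + "/updateToken", token);
  }

  public byte[] recoverContainerUpdateToken(String containerId) {
    return db.get(containerId + "/updateToken");
  }
}
{code}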






[jira] [Commented] (YARN-6373) [YARN-3368] Improvements in cluster-overview page in YARN-UI

2017-10-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194287#comment-16194287
 ] 

Gergely Novák commented on YARN-6373:
-

[~sunilg] I could reproduce your issue on trunk both with AND without my patch.

> [YARN-3368] Improvements in cluster-overview page in YARN-UI
> 
>
> Key: YARN-6373
> URL: https://issues.apache.org/jira/browse/YARN-6373
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Gergely Novák
> Attachments: YARN-6373.001.patch, YARN-6373.002.patch, 
> YARN-6373.003.patch
>
>
> # Make appId and queueName clickable to navigate to the respective pages.
> # Use a flow layout for the panels on the cluster-overview page.






[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194253#comment-16194253
 ] 

Hadoop QA commented on YARN-2162:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} YARN-2162 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-2162 |
| GITHUB PR | https://github.com/apache/hadoop/pull/261 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17806/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: test-400nm-200app-2k_NODE_UPDATE.timecost.svg, 
> YARN-2162.001.patch, YARN-2162.002.patch, YARN-2162.003.patch, 
> YARN-2162.004.patch, YARN-2162.005.patch, YARN-2162.006.patch, 
> YARN-2162.007.patch, YARN-2162.008.patch, YARN-2162.branch-2.010.patch, 
> YARN-2162.branch-3.0.009.patch
>
>
> minResources and maxResources in the Fair Scheduler configs are expressed as 
> absolute numbers (X mb, Y vcores).
> As a result, whenever we expand or shrink our Hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is 
> inconvenient.
> We can avoid this problem by optionally configuring these properties as a 
> percentage of cluster capacity.






[jira] [Updated] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-10-06 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-2162:
---
Attachment: YARN-2162.branch-2.010.patch

Committed to trunk and uploaded patches for branch-2 and branch-3.0.

> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: test-400nm-200app-2k_NODE_UPDATE.timecost.svg, 
> YARN-2162.001.patch, YARN-2162.002.patch, YARN-2162.003.patch, 
> YARN-2162.004.patch, YARN-2162.005.patch, YARN-2162.006.patch, 
> YARN-2162.007.patch, YARN-2162.008.patch, YARN-2162.branch-2.010.patch, 
> YARN-2162.branch-3.0.009.patch
>
>
> minResources and maxResources in the Fair Scheduler configs are expressed as 
> absolute numbers (X mb, Y vcores).
> As a result, whenever we expand or shrink our Hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is 
> inconvenient.
> We can avoid this problem by optionally configuring these properties as a 
> percentage of cluster capacity.






[jira] [Updated] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-10-06 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-2162:
---
Attachment: YARN-2162.branch-3.0.009.patch

> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: test-400nm-200app-2k_NODE_UPDATE.timecost.svg, 
> YARN-2162.001.patch, YARN-2162.002.patch, YARN-2162.003.patch, 
> YARN-2162.004.patch, YARN-2162.005.patch, YARN-2162.006.patch, 
> YARN-2162.007.patch, YARN-2162.008.patch, YARN-2162.branch-3.0.009.patch
>
>
> minResources and maxResources in the Fair Scheduler configs are expressed as 
> absolute numbers (X mb, Y vcores).
> As a result, whenever we expand or shrink our Hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is 
> inconvenient.
> We can avoid this problem by optionally configuring these properties as a 
> percentage of cluster capacity.






[jira] [Commented] (YARN-7275) NM Statestore cleanup for Container updates

2017-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194246#comment-16194246
 ] 

Hadoop QA commented on YARN-7275:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
21s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 21s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 12 unchanged - 242 fixed = 14 total (was 254) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  4m  
2s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
40s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 40s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7275 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890658/YARN-7275.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 85a2a56c6a69 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 25f31d9 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 

[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-10-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194243#comment-16194243
 ] 

Hudson commented on YARN-2162:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13041 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13041/])
YARN-2162. Add ability in Fair Scheduler to optionally configure (yufei: rev 
49ae538164bf631ebf961006a06c8ba0f3469b89)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestConfigurableResource.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/ConfigurableResource.java


> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: test-400nm-200app-2k_NODE_UPDATE.timecost.svg, 
> YARN-2162.001.patch, YARN-2162.002.patch, YARN-2162.003.patch, 
> YARN-2162.004.patch, YARN-2162.005.patch, YARN-2162.006.patch, 
> YARN-2162.007.patch, YARN-2162.008.patch
>
>
> minResources and maxResources in the Fair Scheduler configs are expressed as 
> absolute numbers (X mb, Y vcores).
> As a result, whenever we expand or shrink our Hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is 
> inconvenient.
> We can avoid this problem by optionally configuring these properties as a 
> percentage of cluster capacity.






[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-10-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194235#comment-16194235
 ] 

Hudson commented on YARN-2162:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #13040 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13040/])
YARN-2162. Add ability in Fair Scheduler to optionally configure (yufei: rev 
99292adcefdc6b8f280b8e100605fb39f755c38a)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestQueueManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFSParentQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/TestFairSchedulerQueueInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestAllocationFileLoaderService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSParentQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairSchedulerConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSLeafQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/QueueManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestMaxRunningAppsEnforcer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFSLeafQueue.java


> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: test-400nm-200app-2k_NODE_UPDATE.timecost.svg, 
> YARN-2162.001.patch, YARN-2162.002.patch, YARN-2162.003.patch, 
> YARN-2162.004.patch, YARN-2162.005.patch, YARN-2162.006.patch, 
> YARN-2162.007.patch, YARN-2162.008.patch
>
>
> minResources and maxResources in the Fair Scheduler configs are expressed as 
> absolute numbers (X mb, Y vcores).
> As a result, whenever we expand or shrink our Hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is 
> inconvenient.
> We can avoid this problem by optionally configuring these properties as a 
> percentage of cluster capacity.





[jira] [Comment Edited] (YARN-7275) NM Statestore cleanup for Container updates

2017-10-06 Thread kartheek muthyala (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194198#comment-16194198
 ] 

kartheek muthyala edited comment on YARN-7275 at 10/6/17 6:15 AM:
--

Submitting the initial version of the patch, which makes the following changes:
1. Remove the queued record from the statestore when the container transitions 
to the running state.
2. Add the container update token to the statestore whenever the resource or 
execution type is updated.

[~asuresh], can you please review this change?



was (Author: kartheek):
Submitting the initial version of the patch, which makes the following changes:
1. Remove the queued record from the statestore when the container transitions 
to the running state.
2. Add the container update token to the statestore whenever the resource or 
execution type is updated.


> NM Statestore cleanup for Container updates
> ---
>
> Key: YARN-7275
> URL: https://issues.apache.org/jira/browse/YARN-7275
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
>Priority: Blocker
> Attachments: YARN-7275.001.patch
>
>
> Currently, only resource updates are recorded in the NM state store; we need 
> to add ExecutionType updates as well.






[jira] [Updated] (YARN-7275) NM Statestore cleanup for Container updates

2017-10-06 Thread kartheek muthyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kartheek muthyala updated YARN-7275:

Attachment: YARN-7275.001.patch

Submitting the initial version of the patch, which makes the following changes:
1. Remove the queued record from the statestore when the container transitions 
to the running state.
2. Add the container update token to the statestore whenever the resource or 
execution type is updated.


> NM Statestore cleanup for Container updates
> ---
>
> Key: YARN-7275
> URL: https://issues.apache.org/jira/browse/YARN-7275
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
>Priority: Blocker
> Attachments: YARN-7275.001.patch
>
>
> Currently, only resource updates are recorded in the NM state store; we need 
> to add ExecutionType updates as well.


