[jira] [Commented] (YARN-5798) Handle FSPreemptionThread crashing due to a RuntimeException

2017-02-15 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869445#comment-15869445
 ] 

Karthik Kambatla commented on YARN-5798:


+1

> Handle FSPreemptionThread crashing due to a RuntimeException
> 
>
> Key: YARN-5798
> URL: https://issues.apache.org/jira/browse/YARN-5798
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
> Attachments: YARN-5798.001.patch, YARN-5798.002.patch, 
> YARN-5798.003.patch, YARN-5798.004.patch
>
>
> YARN-5605 added an FSPreemptionThread. The run() method catches 
> InterruptedException. If this were to run into a RuntimeException, the 
> preemption thread would crash. We should probably fail the RM itself (or 
> transition to standby) when this happens. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5798) Set UncaughtExceptionHandler for all FairScheduler threads

2017-02-15 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5798:
---
Summary: Set UncaughtExceptionHandler for all FairScheduler threads  (was: 
Handle FSPreemptionThread crashing due to a RuntimeException)

> Set UncaughtExceptionHandler for all FairScheduler threads
> --
>
> Key: YARN-5798
> URL: https://issues.apache.org/jira/browse/YARN-5798
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
> Attachments: YARN-5798.001.patch, YARN-5798.002.patch, 
> YARN-5798.003.patch, YARN-5798.004.patch
>
>
> YARN-5605 added an FSPreemptionThread. The run() method catches 
> InterruptedException. If this were to run into a RuntimeException, the 
> preemption thread would crash. We should probably fail the RM itself (or 
> transition to standby) when this happens. 
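The retitle points at the fix direction: an uncaught-exception handler on the scheduler threads. As a rough, hypothetical sketch (not the committed patch) of how a RuntimeException escaping run() can be intercepted:

{code}
// Minimal sketch only; YARN's actual handler wiring may differ.
Thread preemptionThread = new Thread(() -> {
    // preemption loop: without a handler, an unexpected
    // RuntimeException would silently kill this thread
});
preemptionThread.setName("FSPreemptionThread");
preemptionThread.setUncaughtExceptionHandler((t, e) -> {
    // in the RM this could log and fail fast, or trigger a
    // transition to standby, as the description suggests
    System.err.println("Thread " + t.getName() + " died: " + e);
});
preemptionThread.start();
{code}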






[jira] [Commented] (YARN-6164) Expose maximum-am-resource-percent in YarnClient

2017-02-15 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869435#comment-15869435
 ] 

Sunil G commented on YARN-6164:
---

[~bibinchundatt], I am still doubtful about that approach. Since much of the 
configuration-related information is already exposed by QueueInfo, 
*Properties* may not become a better logical grouping unless we move all 
existing config items to a new subclass. That would break compatibility, so we 
won't be able to do it. I also feel we may confuse users more if we add a 
*Properties* subsection, as it would not be a single grouping for all 
configuration-related items.

> Expose maximum-am-resource-percent in YarnClient
> 
>
> Key: YARN-6164
> URL: https://issues.apache.org/jira/browse/YARN-6164
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Benson Qiu
>Assignee: Benson Qiu
> Attachments: YARN-6164.001.patch, YARN-6164.002.patch, 
> YARN-6164.003.patch
>
>
> `yarn.scheduler.capacity.maximum-am-resource-percent` is exposed through the 
> [Cluster Scheduler 
> API|http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Scheduler_API],
>  but not through 
> [YarnClient|https://hadoop.apache.org/docs/current/api/org/apache/hadoop/yarn/client/api/YarnClient.html].
> Since YarnClient and RM REST APIs depend on different ports (8032 vs 8088 by 
> default), it would be nice to expose `maximum-am-resource-percent` in 
> YarnClient as well. 
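For context, a minimal sketch of what YarnClient exposes for a queue today ("default" is an assumed queue name; QueueInfo has no accessor for the AM resource percent as of this discussion):

{code}
YarnClient client = YarnClient.createYarnClient();
client.init(new YarnConfiguration());
client.start();
QueueInfo info = client.getQueueInfo("default");
System.out.println("capacity = " + info.getCapacity());
// maximum-am-resource-percent is only reachable via the REST API today
client.stop();
{code}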






[jira] [Commented] (YARN-4212) FairScheduler: Can't create a DRF queue under a FAIR policy queue

2017-02-15 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869430#comment-15869430
 ] 

Karthik Kambatla commented on YARN-4212:


+1. Checking this in. 

> FairScheduler: Can't create a DRF queue under a FAIR policy queue
> -
>
> Key: YARN-4212
> URL: https://issues.apache.org/jira/browse/YARN-4212
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Yufei Gu
>  Labels: fairscheduler
> Attachments: YARN-4212.002.patch, YARN-4212.003.patch, 
> YARN-4212.004.patch, YARN-4212.005.patch, YARN-4212.006.patch, 
> YARN-4212.007.patch, YARN-4212.008.patch, YARN-4212.009.patch, 
> YARN-4212.010.patch, YARN-4212.011.patch, YARN-4212.1.patch
>
>
> The Fair Scheduler, while performing a {{recomputeShares()}} during an 
> {{update()}} call, uses the parent queue's policy to distribute shares to its 
> children.
> If the parent queue's policy is 'fair', it only computes a weight for memory and 
> sets the vcores fair share of its children to 0.
> In a situation where we have one parent queue with policy 'fair' and 
> multiple leaf queues with policy 'drf', any app submitted to the child queues 
> with a vcore requirement > 1 will always be above its fair share, since during the 
> recomputeShares process the child queues were all assigned 0 for fair-share 
> vcores.
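For illustration, the setup described above corresponds to a minimal fair-scheduler.xml like the following (queue names are made up):

{code}
<allocations>
  <queue name="parent">
    <schedulingPolicy>fair</schedulingPolicy>
    <queue name="childA">
      <schedulingPolicy>drf</schedulingPolicy>
    </queue>
  </queue>
</allocations>
{code}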






[jira] [Updated] (YARN-4212) FairScheduler: Can't create a DRF queue under a FAIR policy queue

2017-02-15 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-4212:
---
Summary: FairScheduler: Can't create a DRF queue under a FAIR policy queue  
(was: FairScheduler: Parent queues is not allowed to be 'Fair' policy if its 
children have the "drf" policy)

> FairScheduler: Can't create a DRF queue under a FAIR policy queue
> -
>
> Key: YARN-4212
> URL: https://issues.apache.org/jira/browse/YARN-4212
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Yufei Gu
>  Labels: fairscheduler
> Attachments: YARN-4212.002.patch, YARN-4212.003.patch, 
> YARN-4212.004.patch, YARN-4212.005.patch, YARN-4212.006.patch, 
> YARN-4212.007.patch, YARN-4212.008.patch, YARN-4212.009.patch, 
> YARN-4212.010.patch, YARN-4212.011.patch, YARN-4212.1.patch
>
>
> The Fair Scheduler, while performing a {{recomputeShares()}} during an 
> {{update()}} call, uses the parent queue's policy to distribute shares to its 
> children.
> If the parent queue's policy is 'fair', it only computes a weight for memory and 
> sets the vcores fair share of its children to 0.
> In a situation where we have one parent queue with policy 'fair' and 
> multiple leaf queues with policy 'drf', any app submitted to the child queues 
> with a vcore requirement > 1 will always be above its fair share, since during the 
> recomputeShares process the child queues were all assigned 0 for fair-share 
> vcores.






[jira] [Updated] (YARN-6193) FairScheduler might not trigger preemption when using DRF

2017-02-15 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-6193:
---
Attachment: YARN-6193.001.patch

> FairScheduler might not trigger preemption when using DRF
> -
>
> Key: YARN-6193
> URL: https://issues.apache.org/jira/browse/YARN-6193
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: YARN-6193.000.patch, YARN-6193.001.patch
>
>
> {{FSAppAttempt#canContainerBePreempted}} verifies that preempting a container 
> doesn't lead to new starvation ({{Resources.fitsIn}}). When using DRF, this 
> leads to verifying both resources instead of just the dominant resource. 
> Ideally, the check should be based on the policy. 
> Note that the current implementation of 
> {{DominantResourceFairnessPolicy#checkIfUsageOverFairShare}} is broken. 
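As a rough paraphrase (not the actual method body) of why both resources get checked, under stated assumptions about the surrounding FSAppAttempt accessors:

{code}
// Resources.fitsIn(smaller, bigger) requires memory AND vcores to fit,
// so under DRF the non-dominant resource can also veto the preemption.
Resource usageAfterPreemption =
    Resources.subtract(getResourceUsage(), container.getAllocatedResource());
if (!Resources.fitsIn(getFairShare(), usageAfterPreemption)) {
  return false; // preempting would be treated as new starvation
}
{code}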






[jira] [Commented] (YARN-6163) FS Preemption is a trickle for severely starved applications

2017-02-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869392#comment-15869392
 ] 

ASF GitHub Bot commented on YARN-6163:
--

Github user kambatla closed the pull request at:

https://github.com/apache/hadoop/pull/192


> FS Preemption is a trickle for severely starved applications
> 
>
> Key: YARN-6163
> URL: https://issues.apache.org/jira/browse/YARN-6163
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 2.9.0
>
> Attachments: YARN-6163.004.patch, YARN-6163.005.patch, 
> YARN-6163.006.patch, yarn-6163-1.patch, yarn-6163-2.patch
>
>
> With the current logic, only one resource request (RR) is considered each time an 
> application is marked starved. This marking happens only in the update call that runs 
> every 500ms. Due to this, an application that is severely starved takes 
> forever to reach its fair share through preemption.






[jira] [Commented] (YARN-6163) FS Preemption is a trickle for severely starved applications

2017-02-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869391#comment-15869391
 ] 

ASF GitHub Bot commented on YARN-6163:
--

Github user kambatla commented on the issue:

https://github.com/apache/hadoop/pull/192
  
Committed this to trunk and branch-2.


> FS Preemption is a trickle for severely starved applications
> 
>
> Key: YARN-6163
> URL: https://issues.apache.org/jira/browse/YARN-6163
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 2.9.0
>
> Attachments: YARN-6163.004.patch, YARN-6163.005.patch, 
> YARN-6163.006.patch, yarn-6163-1.patch, yarn-6163-2.patch
>
>
> With the current logic, only one resource request (RR) is considered each time an 
> application is marked starved. This marking happens only in the update call that runs 
> every 500ms. Due to this, an application that is severely starved takes 
> forever to reach its fair share through preemption.






[jira] [Commented] (YARN-6125) The application attempt's diagnostic message should have a maximum size

2017-02-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869348#comment-15869348
 ] 

Hadoop QA commented on YARN-6125:
-

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 45s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| 0 | mvndep | 1m 25s | Maven dependency ordering for branch |
| +1 | mvninstall | 18m 52s | trunk passed |
| +1 | compile | 10m 49s | trunk passed |
| +1 | checkstyle | 1m 5s | trunk passed |
| +1 | mvnsite | 2m 32s | trunk passed |
| +1 | mvneclipse | 1m 17s | trunk passed |
| +1 | findbugs | 4m 17s | trunk passed |
| +1 | javadoc | 1m 58s | trunk passed |
| 0 | mvndep | 0m 12s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 57s | the patch passed |
| +1 | compile | 8m 10s | the patch passed |
| +1 | javac | 8m 10s | the patch passed |
| +1 | checkstyle | 1m 0s | the patch passed |
| +1 | mvnsite | 2m 25s | the patch passed |
| +1 | mvneclipse | 1m 10s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 5s | The patch has no ill-formed XML file. |
| +1 | findbugs | 4m 39s | the patch passed |
| +1 | javadoc | 1m 57s | the patch passed |
| +1 | unit | 0m 42s | hadoop-yarn-api in the patch passed. |
| +1 | unit | 3m 0s | hadoop-yarn-common in the patch passed. |
| -1 | unit | 64m 59s | hadoop-yarn-server-resourcemanager in the patch failed. |
| +1 | asflicense | 1m 14s | The patch does not generate ASF License warnings. |
| | | 144m 3s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption |
| | hadoop.yarn.server.resourcemanager.TestRMRestart |
| | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6125 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12852973/YARN-6125.009.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml |
| uname | Linux 5b33bd3662a6 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a136936 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit

[jira] [Updated] (YARN-6050) AMs can't be scheduled on racks or nodes

2017-02-15 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-6050:

Attachment: YARN-6050.008.patch

The 008 patch:
- Fixes a javadoc mistake
- Fixes a javac warning
- Fixes {{TestRMRestart}}, where I had written an if statement backwards in 
{{RMAppManager#recoverApplication}}
- {{TestDelegationTokenRenewer}} failed because it's flaky and unrelated 
(see YARN-5816)
- Rebased on the latest trunk, which had some minor conflicts with YARN-6156
-- I moved the AM blacklist related code to where the node label code for this 
is in {{RMServerUtils}} and added tests for both

> AMs can't be scheduled on racks or nodes
> 
>
> Key: YARN-6050
> URL: https://issues.apache.org/jira/browse/YARN-6050
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-6050.001.patch, YARN-6050.002.patch, 
> YARN-6050.003.patch, YARN-6050.004.patch, YARN-6050.005.patch, 
> YARN-6050.006.patch, YARN-6050.007.patch, YARN-6050.008.patch
>
>
> YARN itself supports rack/node aware scheduling for AMs; however, there 
> are currently two problems:
> # To specify hard or soft rack/node requests, you have to specify more than 
> one {{ResourceRequest}}.  For example, if you want to schedule an AM only on 
> "rackA", you have to create two {{ResourceRequest}}s, like this:
> {code}
> ResourceRequest.newInstance(PRIORITY, ANY, CAPABILITY, NUM_CONTAINERS, false);
> ResourceRequest.newInstance(PRIORITY, "rackA", CAPABILITY, NUM_CONTAINERS, 
> true);
> {code}
> The problem is that the YARN API doesn't actually allow you to specify more 
> than one {{ResourceRequest}} in the {{ApplicationSubmissionContext}}.  The 
> current behavior is to build one either from {{getResource}} or directly from 
> {{getAMContainerResourceRequest}}, depending on whether 
> {{getAMContainerResourceRequest}} is null or not.  We'll need to add a third 
> method, say {{getAMContainerResourceRequests}}, which takes a list of 
> {{ResourceRequest}} so that clients can specify multiple resource 
> requests; a sketch follows below.
> # There are some places where things are hardcoded to overwrite what the 
> client specifies.  These are pretty straightforward to fix.
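To make item 1 concrete, a hedged sketch of how a client might pass both requests once a list-valued method exists (the setter name mirrors the proposed getter and did not exist at the time of this discussion; PRIORITY, CAPABILITY, and NUM_CONTAINERS are the placeholders from the snippet above):

{code}
List<ResourceRequest> amRequests = Arrays.asList(
    ResourceRequest.newInstance(PRIORITY, ResourceRequest.ANY, CAPABILITY,
        NUM_CONTAINERS, false),
    ResourceRequest.newInstance(PRIORITY, "rackA", CAPABILITY,
        NUM_CONTAINERS, true));
// proposed API, not yet existing here:
submissionContext.setAMContainerResourceRequests(amRequests);
{code}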






[jira] [Commented] (YARN-6184) [YARN-3368] Introduce loading icon in each page of YARN-UI

2017-02-15 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869320#comment-15869320
 ] 

Sunil G commented on YARN-6184:
---

*yarn-queues* could still override from the *abstract* route. We don't have to 
remove that.

> [YARN-3368] Introduce loading icon in each page of YARN-UI
> --
>
> Key: YARN-6184
> URL: https://issues.apache.org/jira/browse/YARN-6184
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-6184.001.patch, YARN-6184.002.patch
>
>
> Add loading icon in each page in new YARN-UI.






[jira] [Commented] (YARN-6177) Yarn client should exit with an informative error message if an incompatible Jersey library is used at client

2017-02-15 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869304#comment-15869304
 ] 

Li Lu commented on YARN-6177:
-

bq. Set yarn.timeline-service.client.best-effort to true with this patch, so 
yarn client doesn't treat such failure as a fatal error.
This is actually my concern... My feeling is that we may not want to handle Errors as 
part of best effort. Not sure about this, cc/[~jlowe]...
Hi Jason, I saw you committed the original timelineBestEffort patch, so just a 
quick inquiry to see if you think handling this Error under best-effort mode is a 
good idea. Thanks! 
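For reference, the knob being debated is an existing client-side property; programmatically it amounts to:

{code}
YarnConfiguration conf = new YarnConfiguration();
// makes timeline-client failures non-fatal to the YARN client
conf.setBoolean("yarn.timeline-service.client.best-effort", true);
{code}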

> Yarn client should exit with an informative error message if an incompatible 
> Jersey library is used at client
> -
>
> Key: YARN-6177
> URL: https://issues.apache.org/jira/browse/YARN-6177
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: spark2-job-output-after-besteffort.out, 
> spark2-job-output-after.out, spark2-job-output-before.out, 
> YARN-6177.01.patch, YARN-6177.02.patch, YARN-6177.03.patch, 
> YARN-6177.04.patch, YARN-6177.05.patch
>
>
> Per discussion in YARN-5271, let's provide an error message suggesting the user 
> disable the timeline service instead of disabling it for them.






[jira] [Commented] (YARN-4675) Reorganize TimelineClient and TimelineClientImpl into separate classes for ATSv1.x and ATSv2

2017-02-15 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869268#comment-15869268
 ] 

Naganarasimha G R commented on YARN-4675:
-

Yes [~sjlee0], working on it; almost done, will upload shortly!

> Reorganize TimelineClient and TimelineClientImpl into separate classes for 
> ATSv1.x and ATSv2
> 
>
> Key: YARN-4675
> URL: https://issues.apache.org/jira/browse/YARN-4675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: YARN-4675.v2.002.patch, YARN-4675.v2.003.patch, 
> YARN-4675.v2.004.patch, YARN-4675.v2.005.patch, YARN-4675.v2.006.patch, 
> YARN-4675.v2.007.patch, YARN-4675.v2.008.patch, YARN-4675.v2.009.patch, 
> YARN-4675.v2.010.patch, YARN-4675.v2.011.patch, 
> YARN-4675-YARN-2928.v1.001.patch
>
>
> We need to reorganize TimelineClientImpl into TimelineClientV1Impl, 
> TimelineClientV2Impl and, if required, a base class, so that it is clear which part 
> of the code belongs to which version and the code is thus better maintainable.
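A hedged skeleton of the proposed layout (class names follow the description; the base-class name is made up, and the committed hierarchy may differ):

{code}
abstract class TimelineClientBase { /* shared connection/retry plumbing */ }
class TimelineClientV1Impl extends TimelineClientBase { /* ATSv1.x REST writes */ }
class TimelineClientV2Impl extends TimelineClientBase { /* ATSv2 collector writes */ }
{code}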






[jira] [Resolved] (YARN-5878) About page in RM UI displays misleading HA state

2017-02-15 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang resolved YARN-5878.
---
Resolution: Won't Fix

This no longer seems important since the new YARN UI is coming; closing this one 
now.

> About page in RM UI displays misleading HA state 
> -
>
> Key: YARN-5878
> URL: https://issues.apache.org/jira/browse/YARN-5878
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, yarn
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: About_page_RM_non-HA.jpg, YARN-5878.01.patch, 
> YARN-5878.02.patch, YARN-5878.03.patch
>
>
> In a non-HA environment, the RM UI (v1) displays *ResourceManager HA state: 
> active*, which is misleading.






[jira] [Commented] (YARN-6184) [YARN-3368] Introduce loading icon in each page of YARN-UI

2017-02-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869247#comment-15869247
 ] 

Hadoop QA commented on YARN-6184:
-

| (/) *+1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 19s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | asflicense | 0m 15s | The patch does not generate ASF License warnings. |
| | | 0m 50s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6184 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12852981/YARN-6184.002.patch |
| Optional Tests | asflicense |
| uname | Linux f41865853785 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a136936 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/14976/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Introduce loading icon in each page of YARN-UI
> --
>
> Key: YARN-6184
> URL: https://issues.apache.org/jira/browse/YARN-6184
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-6184.001.patch, YARN-6184.002.patch
>
>
> Add loading icon in each page in new YARN-UI.






[jira] [Updated] (YARN-6184) [YARN-3368] Introduce loading icon in each page of YARN-UI

2017-02-15 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-6184:
---
Attachment: YARN-6184.002.patch

> [YARN-3368] Introduce loading icon in each page of YARN-UI
> --
>
> Key: YARN-6184
> URL: https://issues.apache.org/jira/browse/YARN-6184
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-6184.001.patch, YARN-6184.002.patch
>
>
> Add loading icon in each page in new YARN-UI.






[jira] [Updated] (YARN-5939) FSDownload leaks FileSystem resources

2017-02-15 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-5939:
--
Labels: leak  (was: )

> FSDownload leaks FileSystem resources
> -
>
> Key: YARN-5939
> URL: https://issues.apache.org/jira/browse/YARN-5939
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.5.1, 2.7.3
>Reporter: liuxiangwei
>Assignee: Weiwei Yang
>  Labels: leak
> Attachments: YARN-5939.01.patch, YARN-5939.02.patch, 
> YARN-5939.03.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Background:
> To use our self-defined FileSystem class, the configuration item 
> "fs.%s.impl.disable.cache" should be set to true.
> In YARN's source code, the class 
> "org.apache.hadoop.yarn.util.FSDownload" calls getFileSystem but never closes the 
> result, which leads to a file descriptor leak because our self-defined FileSystem 
> class releases the file descriptor only when its close function is invoked.
> My questions:
> 1. Is invoking "getFileSystem" but never closing it YARN's expected 
> behavior?
> 2. What should we do in our self-defined FileSystem to resolve it?
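As an illustration of the expectation in question, a sketch assuming a custom "myfs" scheme (scheme and path are placeholders):

{code}
Configuration conf = new Configuration();
conf.setBoolean("fs.myfs.impl.disable.cache", true); // "myfs" is a placeholder scheme
Path resource = new Path("myfs:///apps/resource.jar"); // made-up path
try (FileSystem fs = resource.getFileSystem(conf)) {
  // with caching disabled, each getFileSystem() returns a fresh instance,
  // so the caller must close it to release the descriptor
}
{code}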






[jira] [Commented] (YARN-4675) Reorganize TimelineClient and TimelineClientImpl into separate classes for ATSv1.x and ATSv2

2017-02-15 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869215#comment-15869215
 ] 

Sangjin Lee commented on YARN-4675:
---

The latest patch LGTM (v.11). I'll commit it soon. [~Naganarasimha], could you 
also post a patch for YARN-5355 if the trunk version does not apply cleanly?

> Reorganize TimelineClient and TimelineClientImpl into separate classes for 
> ATSv1.x and ATSv2
> 
>
> Key: YARN-4675
> URL: https://issues.apache.org/jira/browse/YARN-4675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: YARN-4675.v2.002.patch, YARN-4675.v2.003.patch, 
> YARN-4675.v2.004.patch, YARN-4675.v2.005.patch, YARN-4675.v2.006.patch, 
> YARN-4675.v2.007.patch, YARN-4675.v2.008.patch, YARN-4675.v2.009.patch, 
> YARN-4675.v2.010.patch, YARN-4675.v2.011.patch, 
> YARN-4675-YARN-2928.v1.001.patch
>
>
> We need to reorganize TimelineClientImpl into TimelineClientV1Impl, 
> TimelineClientV2Impl and, if required, a base class, so that it is clear which part 
> of the code belongs to which version and the code is thus better maintainable.






[jira] [Commented] (YARN-4675) Reorganize TimelineClient and TimelineClientImpl into separate classes for ATSv1.x and ATSv2

2017-02-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869197#comment-15869197
 ] 

Hadoop QA commented on YARN-4675:
-

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 20s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. |
| 0 | mvndep | 1m 53s | Maven dependency ordering for branch |
| +1 | mvninstall | 14m 14s | trunk passed |
| +1 | compile | 14m 1s | trunk passed |
| +1 | checkstyle | 2m 5s | trunk passed |
| +1 | mvnsite | 3m 6s | trunk passed |
| +1 | mvneclipse | 2m 14s | trunk passed |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests |
| +1 | findbugs | 4m 29s | trunk passed |
| +1 | javadoc | 2m 49s | trunk passed |
| 0 | mvndep | 0m 18s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 18s | the patch passed |
| +1 | compile | 10m 34s | the patch passed |
| +1 | javac | 10m 34s | the patch passed |
| +1 | checkstyle | 2m 0s | the patch passed |
| +1 | mvnsite | 3m 14s | the patch passed |
| +1 | mvneclipse | 2m 19s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests |
| +1 | findbugs | 4m 34s | the patch passed |
| +1 | javadoc | 0m 39s | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common generated 0 new + 4575 unchanged - 4 fixed = 4575 total (was 4579) |
| +1 | javadoc | 0m 28s | hadoop-yarn-server-nodemanager in the patch passed. |
| +1 | javadoc | 0m 21s | hadoop-yarn-server-tests in the patch passed. |
| +1 | javadoc | 0m 25s | hadoop-yarn-client in the patch passed. |
| +1 | javadoc | 0m 24s | hadoop-yarn-applications-distributedshell in the patch passed. |
| +1 | javadoc | 0m 26s | hadoop-mapreduce-client-app in the patch passed. |
| +1 | unit | 2m 32s | hadoop-yarn-common in the patch passed. |
| +1 | unit | 13m 55s | hadoop-yarn-server-nodemanager in the patch passed. |
| -1 | unit | 6m 41s

[jira] [Commented] (YARN-6069) CORS support in timeline v2

2017-02-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869176#comment-15869176
 ] 

Hadoop QA commented on YARN-6069:
-

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 35s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| 0 | mvndep | 0m 16s | Maven dependency ordering for branch |
| +1 | mvninstall | 6m 54s | YARN-5355 passed |
| +1 | compile | 2m 23s | YARN-5355 passed |
| +1 | checkstyle | 0m 37s | YARN-5355 passed |
| +1 | mvnsite | 0m 32s | YARN-5355 passed |
| +1 | mvneclipse | 0m 22s | YARN-5355 passed |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| +1 | findbugs | 0m 25s | YARN-5355 passed |
| +1 | javadoc | 0m 24s | YARN-5355 passed |
| 0 | mvndep | 0m 10s | Maven dependency ordering for patch |
| +1 | mvninstall | 0m 22s | the patch passed |
| +1 | compile | 2m 19s | the patch passed |
| +1 | javac | 2m 19s | the patch passed |
| +1 | checkstyle | 0m 39s | the patch passed |
| +1 | mvnsite | 0m 30s | the patch passed |
| +1 | mvneclipse | 0m 20s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -1 | whitespace | 0m 0s | The patch 1 line(s) with tabs. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| +1 | findbugs | 0m 32s | the patch passed |
| +1 | javadoc | 0m 22s | the patch passed |
| +1 | unit | 0m 43s | hadoop-yarn-server-timelineservice in the patch passed. |
| +1 | unit | 0m 9s | hadoop-yarn-site in the patch passed. |
| +1 | asflicense | 0m 19s | The patch does not generate ASF License warnings. |
| | | 26m 45s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-6069 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12852972/YARN-6069-YARN-5355.0002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 1c725553d96c 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11

[jira] [Commented] (YARN-6164) Expose maximum-am-resource-percent in YarnClient

2017-02-15 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869174#comment-15869174
 ] 

Bibin A Chundatt commented on YARN-6164:


For every new property added to a queue, we end up handling it separately in 
QueueInfo. If we expose a Properties object in QueueInfo and add all static 
configs to it, there is no need to handle each one separately.
{quote}
already have mix set of informations such as capacity, application, preemption 
disable 
{quote}
As you said, since QueueInfo already exposes all of the above, we might have to 
continue doing the same.

> Expose maximum-am-resource-percent in YarnClient
> 
>
> Key: YARN-6164
> URL: https://issues.apache.org/jira/browse/YARN-6164
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Benson Qiu
>Assignee: Benson Qiu
> Attachments: YARN-6164.001.patch, YARN-6164.002.patch, 
> YARN-6164.003.patch
>
>
> `yarn.scheduler.capacity.maximum-am-resource-percent` is exposed through the 
> [Cluster Scheduler 
> API|http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Scheduler_API],
>  but not through 
> [YarnClient|https://hadoop.apache.org/docs/current/api/org/apache/hadoop/yarn/client/api/YarnClient.html].
> Since YarnClient and RM REST APIs depend on different ports (8032 vs 8088 by 
> default), it would be nice to expose `maximum-am-resource-percent` in 
> YarnClient as well. 






[jira] [Commented] (YARN-6069) CORS support in timeline v2

2017-02-15 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869154#comment-15869154
 ] 

Sunil G commented on YARN-6069:
---

In RM and NM, we just enable a configuration to decide whether CORS support 
is to be provided at the server end. All remaining handling is done in hadoop-common, 
and the various configurations from *core-site.xml* achieve the same. I am in 
favor of such a linear approach that reuses existing capabilities; it is just a 
matter of having more configs.
The new patch is along that line, and I feel this is better. Thoughts?

> CORS support in timeline v2
> ---
>
> Key: YARN-6069
> URL: https://issues.apache.org/jira/browse/YARN-6069
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Sreenath Somarajapuram
>Assignee: Rohith Sharma K S
> Attachments: YARN-6069-YARN-5355.0001.patch, 
> YARN-6069-YARN-5355.0002.patch
>
>
> By default the browser prevents accessing resources from multiple domains. In 
> most cases the UIs would be loaded form a domain different from that of  
> timeline server. Hence without CORS support, it would be difficult for the 
> UIs to load data from timeline v2.
> YARN-2277 must provide more info on the implementation.






[jira] [Commented] (YARN-6164) Expose maximum-am-resource-percent in YarnClient

2017-02-15 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869149#comment-15869149
 ] 

Sunil G commented on YARN-6164:
---

[~bibinchundatt]

{{org.apache.hadoop.yarn.api.records.QueueInfo}} already has a mixed set of 
information such as capacity, application, and preemption-disable info, so I 
thought of adding another param inside QueueInfo for the AM percentage as well. I 
guess you are suggesting a subclass inside *QueueInfo* and pushing all of these 
under it? Could you please explain a bit more.

> Expose maximum-am-resource-percent in YarnClient
> 
>
> Key: YARN-6164
> URL: https://issues.apache.org/jira/browse/YARN-6164
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Benson Qiu
>Assignee: Benson Qiu
> Attachments: YARN-6164.001.patch, YARN-6164.002.patch, 
> YARN-6164.003.patch
>
>
> `yarn.scheduler.capacity.maximum-am-resource-percent` is exposed through the 
> [Cluster Scheduler 
> API|http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Scheduler_API],
>  but not through 
> [YarnClient|https://hadoop.apache.org/docs/current/api/org/apache/hadoop/yarn/client/api/YarnClient.html].
> Since YarnClient and RM REST APIs depend on different ports (8032 vs 8088 by 
> default), it would be nice to expose `maximum-am-resource-percent` in 
> YarnClient as well. 






[jira] [Updated] (YARN-6125) The application attempt's diagnostic message should have a maximum size

2017-02-15 Thread Andras Piros (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Piros updated YARN-6125:
---
Attachment: YARN-6125.009.patch

[~templedf], patch 009 addresses all the mentioned issues.

> The application attempt's diagnostic message should have a maximum size
> ---
>
> Key: YARN-6125
> URL: https://issues.apache.org/jira/browse/YARN-6125
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.7.0
>Reporter: Daniel Templeton
>Assignee: Andras Piros
>Priority: Critical
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6125.000.patch, YARN-6125.001.patch, 
> YARN-6125.002.patch, YARN-6125.003.patch, YARN-6125.004.patch, 
> YARN-6125.005.patch, YARN-6125.006.patch, YARN-6125.007.patch, 
> YARN-6125.008.patch, YARN-6125.009.patch
>
>
> We've found through experience that the diagnostic message can grow 
> unbounded.  I've seen attempts that have diagnostic messages over 1MB.  Since 
> the message is stored in the state store, it's a bad idea to allow the 
> message to grow unbounded.  Instead, there should be a property that sets a 
> maximum size on the message.
> I suspect that some of the ZK state store issues we've seen in the past were 
> due to the size of the diagnostic messages and not to the size of the 
> classpath, as is the current prevailing opinion.
> An open question is how best to prune the message once it grows too large.  
> Should we
> # truncate the tail,
> # truncate the head,
> # truncate the middle,
> # add another property to make the behavior selectable, or
> # none of the above?
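To make option 2 (truncate the head) concrete, a hedged sketch; the helper name and limit are made up:

{code}
// keeps the tail of the message, where the failure cause usually is
static String pruneDiagnostics(String diagnostics, int maxChars) {
  if (diagnostics.length() <= maxChars) {
    return diagnostics;
  }
  return "..." + diagnostics.substring(diagnostics.length() - maxChars);
}
{code}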






[jira] [Updated] (YARN-6069) CORS support in timeline v2

2017-02-15 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-6069:

Attachment: YARN-6069-YARN-5355.0002.patch

Updated a straightforward patch reusing the hadoop-common code. 

> CORS support in timeline v2
> ---
>
> Key: YARN-6069
> URL: https://issues.apache.org/jira/browse/YARN-6069
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Sreenath Somarajapuram
>Assignee: Rohith Sharma K S
> Attachments: YARN-6069-YARN-5355.0001.patch, 
> YARN-6069-YARN-5355.0002.patch
>
>
> By default the browser prevents accessing resources from multiple domains. In 
> most cases the UIs would be loaded form a domain different from that of  
> timeline server. Hence without CORS support, it would be difficult for the 
> UIs to load data from timeline v2.
> YARN-2277 must provide more info on the implementation.






[jira] [Comment Edited] (YARN-6164) Expose maximum-am-resource-percent in YarnClient

2017-02-15 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869129#comment-15869129
 ] 

Bibin A Chundatt edited comment on YARN-6164 at 2/16/17 4:16 AM:
-

[~sunilg]
Thoughts on getting these as {{Properties}}? That way, all additional info 
could be sent per queue. 


was (Author: bibinchundatt):
[~sunilg]
Thoughts on getting these are {{Properties}} ??

> Expose maximum-am-resource-percent in YarnClient
> 
>
> Key: YARN-6164
> URL: https://issues.apache.org/jira/browse/YARN-6164
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Benson Qiu
>Assignee: Benson Qiu
> Attachments: YARN-6164.001.patch, YARN-6164.002.patch, 
> YARN-6164.003.patch
>
>
> `yarn.scheduler.capacity.maximum-am-resource-percent` is exposed through the 
> [Cluster Scheduler 
> API|http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Scheduler_API],
>  but not through 
> [YarnClient|https://hadoop.apache.org/docs/current/api/org/apache/hadoop/yarn/client/api/YarnClient.html].
> Since YarnClient and RM REST APIs depend on different ports (8032 vs 8088 by 
> default), it would be nice to expose `maximum-am-resource-percent` in 
> YarnClient as well. 






[jira] [Commented] (YARN-6164) Expose maximum-am-resource-percent in YarnClient

2017-02-15 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869129#comment-15869129
 ] 

Bibin A Chundatt commented on YARN-6164:


[~sunilg]
Thoughts on getting these as {{Properties}}?

> Expose maximum-am-resource-percent in YarnClient
> 
>
> Key: YARN-6164
> URL: https://issues.apache.org/jira/browse/YARN-6164
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Benson Qiu
>Assignee: Benson Qiu
> Attachments: YARN-6164.001.patch, YARN-6164.002.patch, 
> YARN-6164.003.patch
>
>
> `yarn.scheduler.capacity.maximum-am-resource-percent` is exposed through the 
> [Cluster Scheduler 
> API|http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Scheduler_API],
>  but not through 
> [YarnClient|https://hadoop.apache.org/docs/current/api/org/apache/hadoop/yarn/client/api/YarnClient.html].
> Since YarnClient and RM REST APIs depend on different ports (8032 vs 8088 by 
> default), it would be nice to expose `maximum-am-resource-percent` in 
> YarnClient as well. 






[jira] [Commented] (YARN-6069) CORS support in timeline v2

2017-02-15 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869117#comment-15869117
 ] 

Rohith Sharma K S commented on YARN-6069:
-

I am not a fan of adding a daemon-specific code snippet to the yarn-server-common 
package with a timeline-service prefix. Since the reader server's REST APIs are 
GET, we need not override any configurations, so 
HttpCrossOriginFilterInitializer can be used directly. 

> CORS support in timeline v2
> ---
>
> Key: YARN-6069
> URL: https://issues.apache.org/jira/browse/YARN-6069
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Sreenath Somarajapuram
>Assignee: Rohith Sharma K S
> Attachments: YARN-6069-YARN-5355.0001.patch
>
>
> By default the browser prevents accessing resources from multiple domains. In 
> most cases the UIs would be loaded form a domain different from that of  
> timeline server. Hence without CORS support, it would be difficult for the 
> UIs to load data from timeline v2.
> YARN-2277 must provide more info on the implementation.






[jira] [Comment Edited] (YARN-6069) CORS support in timeline v2

2017-02-15 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869094#comment-15869094
 ] 

Varun Saxena edited comment on YARN-6069 at 2/16/17 3:53 AM:
-

As of now, for the timeline service, we use the timeline-specific configurations if 
they are configured, and fall back to the common configurations in 
HttpCrossOriginFilterInitializer if not.
Now for ATSv2 we can potentially use the common configurations themselves. As ATSv2 
is new, we can probably define it this way.
However, this does not give us the flexibility of switching off CORS for some 
modules and not for others (if the same configuration is used). But I wonder why 
someone would do that, i.e. switch off CORS for some and not for others.


was (Author: varun_saxena):
As of now for timeline service, we use the timeline specific configurations if 
they are configured and fall back to common configurations in 
HttpCrossOriginFilterInitializer, if not configured.
Now for ATSv2 we can potentially use common configurations itself. As ATSv2 is 
new, we can probably define it this way.

> CORS support in timeline v2
> ---
>
> Key: YARN-6069
> URL: https://issues.apache.org/jira/browse/YARN-6069
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Sreenath Somarajapuram
>Assignee: Rohith Sharma K S
> Attachments: YARN-6069-YARN-5355.0001.patch
>
>
> By default the browser prevents accessing resources from multiple domains. In 
> most cases the UIs would be loaded form a domain different from that of  
> timeline server. Hence without CORS support, it would be difficult for the 
> UIs to load data from timeline v2.
> YARN-2277 must provide more info on the implementation.






[jira] [Commented] (YARN-6069) CORS support in timeline v2

2017-02-15 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869094#comment-15869094
 ] 

Varun Saxena commented on YARN-6069:


As of now, for the timeline service, we use the timeline-specific configurations if 
they are configured, and fall back to the common configurations in 
HttpCrossOriginFilterInitializer if not.
Now for ATSv2 we can potentially use the common configurations themselves. As ATSv2 
is new, we can probably define it this way.

> CORS support in timeline v2
> ---
>
> Key: YARN-6069
> URL: https://issues.apache.org/jira/browse/YARN-6069
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Sreenath Somarajapuram
>Assignee: Rohith Sharma K S
> Attachments: YARN-6069-YARN-5355.0001.patch
>
>
> By default the browser prevents accessing resources from multiple domains. In 
> most cases the UIs would be loaded from a domain different from that of the 
> timeline server. Hence without CORS support, it would be difficult for the 
> UIs to load data from timeline v2.
> YARN-2277 must provide more info on the implementation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6069) CORS support in timeline v2

2017-02-15 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869084#comment-15869084
 ] 

Varun Saxena commented on YARN-6069:


bq. I think you are missing one key thing: the PREFIX is different, i.e. 
yarn.timeline-service.http-cross-origin, which is the same as ATSv1.
Sorry, I did not get you. The config used for ATSv2 is the same as ATSv1, so we 
can use the same class, right? In fact, if the configurations were different, 
my comment would have been to keep them the same. We reuse configurations 
across ATSv1 and ATSv2.
Hence I suggested moving CrossOriginFilterInitializer to 
hadoop-yarn-server-common and letting 
ATSv1 (the hadoop-yarn-server-applicationhistoryservice package) and 
ATSv2 (the hadoop-yarn-server-timelineservice package) refer to it.

bq. Maybe I can remove the CrossOriginFilterInitializer class and keep 
HttpCrossOriginFilterInitializer itself for ATSv2 
The reason we have CrossOriginFilterInitializer, which extends 
HttpCrossOriginFilterInitializer, is to override the sibling configurations of 
HttpCrossOriginFilterInitializer with timeline-service-specific configurations. 
So we would need a derived class, right?
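
For readers following along, a self-contained sketch of the derived-class 
pattern being discussed; the class and method names below are illustrative 
stand-ins, not Hadoop's exact API:
{code}
// Base initializer reads the common hadoop.http.cross-origin.* keys.
class HttpCrossOriginFilterInitializerSketch {
  protected String getPrefix() {
    return "hadoop.http.cross-origin.";
  }
}

// The derived initializer overrides the prefix so the timeline-specific
// keys shadow the common ones, which is why a derived class is needed.
class TimelineCrossOriginFilterInitializerSketch
    extends HttpCrossOriginFilterInitializerSketch {
  @Override
  protected String getPrefix() {
    return "yarn.timeline-service.http-cross-origin.";
  }
}
{code}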

> CORS support in timeline v2
> ---
>
> Key: YARN-6069
> URL: https://issues.apache.org/jira/browse/YARN-6069
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Sreenath Somarajapuram
>Assignee: Rohith Sharma K S
> Attachments: YARN-6069-YARN-5355.0001.patch
>
>
> By default the browser prevents accessing resources from multiple domains. In 
> most cases the UIs would be loaded from a domain different from that of the 
> timeline server. Hence without CORS support, it would be difficult for the 
> UIs to load data from timeline v2.
> YARN-2277 must provide more info on the implementation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6069) CORS support in timeline v2

2017-02-15 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869054#comment-15869054
 ] 

Rohith Sharma K S commented on YARN-6069:
-

[~varun_saxena] I think you are missing one key thing: the PREFIX is 
different, i.e. *yarn.timeline-service.http-cross-origin*, which is the same 
as ATSv1. This is the reason we added a new class. Maybe I can remove the 
CrossOriginFilterInitializer class and keep HttpCrossOriginFilterInitializer 
itself for ATSv2, with the property 
*yarn.timeline-service.http-cross-origin.enabled*. This would be much simpler. 
I will update with a different patch which uses the common code. 

[~sunilg] Currently I have referred to and borrowed it. It is done the same 
way as ATSv1. Let's change it to a different approach, in line with the RM 
and NM. 

> CORS support in timeline v2
> ---
>
> Key: YARN-6069
> URL: https://issues.apache.org/jira/browse/YARN-6069
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Sreenath Somarajapuram
>Assignee: Rohith Sharma K S
> Attachments: YARN-6069-YARN-5355.0001.patch
>
>
> By default the browser prevents accessing resources from multiple domains. In 
> most cases the UIs would be loaded from a domain different from that of the 
> timeline server. Hence without CORS support, it would be difficult for the 
> UIs to load data from timeline v2.
> YARN-2277 must provide more info on the implementation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6177) Yarn client should exit with an informative error message if an incompatible Jersey library is used at client

2017-02-15 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869045#comment-15869045
 ] 

Weiwei Yang commented on YARN-6177:
---

Hi [~gtCarrera9]

Yes, there are 2 approaches:

# Set {{yarn.timeline-service.client.best-effort}} to {{true}} with this patch, 
so the yarn client doesn't treat such a failure as a fatal error. The default 
value is false, so the user always hits the error with the prompted message 
the first time.
# Set {{yarn.timeline-service.enabled}} to {{false}} to disable the timeline 
service from the client-side config. 

I was suggesting #1 because it explains the workaround better. Disabling the 
timeline service is a bit confusing because that property is more like a 
server-side config.
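
For illustration, the two workarounds as client-side settings. This is a 
hedged sketch only; the property names are the ones quoted above, and the 
surrounding class is hypothetical:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class TimelineWorkaroundSketch {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();
    // Approach 1: keep the timeline service on, but treat publish
    // failures as non-fatal on the client.
    conf.setBoolean("yarn.timeline-service.client.best-effort", true);
    // Approach 2: disable the timeline service from the client side.
    // conf.setBoolean("yarn.timeline-service.enabled", false);
  }
}
{code}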

> Yarn client should exit with an informative error message if an incompatible 
> Jersey library is used at client
> -
>
> Key: YARN-6177
> URL: https://issues.apache.org/jira/browse/YARN-6177
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: spark2-job-output-after-besteffort.out, 
> spark2-job-output-after.out, spark2-job-output-before.out, 
> YARN-6177.01.patch, YARN-6177.02.patch, YARN-6177.03.patch, 
> YARN-6177.04.patch, YARN-6177.05.patch
>
>
> Per the discussion in YARN-5271, let's provide an error message suggesting 
> the user disable the timeline service instead of disabling it for them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4518) [YARN-3368] Support rendering statistic-by-node-label for queues/apps page

2017-02-15 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-4518:
--
Attachment: YARN-4518.0001.patch

A w.i.p patch

> [YARN-3368] Support rendering statistic-by-node-label for queues/apps page
> --
>
> Key: YARN-4518
> URL: https://issues.apache.org/jira/browse/YARN-4518
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Attachments: YARN-4518.0001.patch, YARN-4518-YARN-3368.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6069) CORS support in timeline v2

2017-02-15 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869030#comment-15869030
 ] 

Sunil G commented on YARN-6069:
---

[~rohithsharma]
Do we have 
{{org.apache.hadoop.yarn.server.timeline.webapp.CrossOriginFilterInitializer}} 
already? Could we refer to that? I think Varun is also suggesting the same.

> CORS support in timeline v2
> ---
>
> Key: YARN-6069
> URL: https://issues.apache.org/jira/browse/YARN-6069
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Sreenath Somarajapuram
>Assignee: Rohith Sharma K S
> Attachments: YARN-6069-YARN-5355.0001.patch
>
>
> By default the browser prevents accessing resources from multiple domains. In 
> most cases the UIs would be loaded from a domain different from that of the 
> timeline server. Hence without CORS support, it would be difficult for the 
> UIs to load data from timeline v2.
> YARN-2277 must provide more info on the implementation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6163) FS Preemption is a trickle for severely starved applications

2017-02-15 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869019#comment-15869019
 ] 

Daniel Templeton commented on YARN-6163:


Alright, LGTM.  +1

> FS Preemption is a trickle for severely starved applications
> 
>
> Key: YARN-6163
> URL: https://issues.apache.org/jira/browse/YARN-6163
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: YARN-6163.004.patch, YARN-6163.005.patch, 
> YARN-6163.006.patch, yarn-6163-1.patch, yarn-6163-2.patch
>
>
> With the current logic, only one RR is considered per instance of marking an 
> application starved. This marking happens only on the update call that runs 
> every 500ms. Due to this, an application that is severely starved takes 
> forever to reach its fairshare via preemptions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6188) Fix OOM issue with decommissioningNodesWatcher in the case of clusters with large number of nodes

2017-02-15 Thread Ajay Jadhav (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869018#comment-15869018
 ] 

Ajay Jadhav commented on YARN-6188:
---

In the for loop, the string is appended to a couple of times: first to get the 
node information such as node id, status, etc., and then, for each node, it 
goes through all the app ids to get more information such as state, 
application type, etc.

StringBuilder looked cleaner with all the formatting, which is why I kept it 
that way. Let me know if there are any particular concerns.
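
A minimal sketch of the scoping in question (the data shapes and slf4j logger 
are illustrative, not the actual DecommissioningNodesWatcher code): the 
builder lives inside the loop, so each node's buffer becomes collectable as 
soon as it is logged:
{code}
import java.util.List;
import java.util.Map;
import org.slf4j.Logger;

class DecommissioningLogSketch {
  static void logStatus(Map<String, List<String>> appsByNode, Logger log) {
    for (Map.Entry<String, List<String>> node : appsByNode.entrySet()) {
      // scoped per node: under memory pressure, GC can reclaim each
      // builder after its log line is emitted
      StringBuilder sb = new StringBuilder();
      sb.append("Node ").append(node.getKey()).append(" apps: ");
      for (String appId : node.getValue()) {
        sb.append(appId).append(' ');
      }
      log.info(sb.toString());
    }
  }
}
{code}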

> Fix OOM issue with decommissioningNodesWatcher in the case of clusters with 
> large number of nodes
> -
>
> Key: YARN-6188
> URL: https://issues.apache.org/jira/browse/YARN-6188
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.9.0
>Reporter: Ajay Jadhav
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: YARN-6188.001.patch, YARN-6188.002.patch
>
>
> The LogDecommissioningNodesStatus method in DecommissioningNodesWatcher uses 
> a StringBuilder to append the status of all
> decommissioning nodes for logging purposes.
> In the case of a large number of decommissioning nodes, this leads to an OOM 
> exception. The fix scopes the StringBuilder so that, under memory pressure, 
> GC can kick in and free up the memory.
> This is supposed to fix a bug introduced in 
> https://issues.apache.org/jira/browse/YARN-4676



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4675) Reorganize TimelineClient and TimelineClientImpl into separate classes for ATSv1.x and ATSv2

2017-02-15 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-4675:

Attachment: YARN-4675.v2.011.patch

Thanks [~sjlee0] for pointing it out; apart from the one you mentioned, I 
could not find any others of that kind in the modified code. The javadoc 
errors point to unmodified code too, and I faced a similar issue in another 
jira.
Attaching the fix for trunk.

> Reorganize TimelineClient and TimelineClientImpl into separate classes for 
> ATSv1.x and ATSv2
> 
>
> Key: YARN-4675
> URL: https://issues.apache.org/jira/browse/YARN-4675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: YARN-4675.v2.002.patch, YARN-4675.v2.003.patch, 
> YARN-4675.v2.004.patch, YARN-4675.v2.005.patch, YARN-4675.v2.006.patch, 
> YARN-4675.v2.007.patch, YARN-4675.v2.008.patch, YARN-4675.v2.009.patch, 
> YARN-4675.v2.010.patch, YARN-4675.v2.011.patch, 
> YARN-4675-YARN-2928.v1.001.patch
>
>
> We need to reorganize TimeClientImpl into TimeClientV1Impl ,  
> TimeClientV2Impl and if required a base class, so that its clear which part 
> of the code belongs to which version and thus better maintainable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5946) Create YarnConfigurationStore interface and InMemoryConfigurationStore class

2017-02-15 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868965#comment-15868965
 ] 

Jonathan Hung edited comment on YARN-5946 at 2/16/17 1:53 AM:
--

Thanks [~leftnoteasy] for the comments.
bq. It actually means the "last confirmed" transaction id, correct? I found in 
step 5 it gets increased even if the update failed.
It is the txnid for which all logs with a lesser txnid do not need to be 
replayed on recovery. Either this means the log has been persisted to the store 
in case of successful refresh, or the mutation has been deemed invalid in case 
of failure to refresh (which is why it is incremented even if update fails). So 
in this case perhaps confirmMutation(long id) should be confirmMutation(long 
id, boolean isValid). 
bq. So I suggest to persist a transaction-id in addition to "last good" 
configuration to table-1.
Sure, I think this is implementation-dependent; in general, though, we can have 
a configuration entry with key="transaction.id" or something similar.
bq. Who will generate "id" for each logItem?
I think the YarnConfigurationStore should maintain the current id and generate 
new ones, which are returned upon logMutation calls. So when MCM receives a 
mutation, it will log it, which will then return an incremented id "id", then 
MCM will try to refresh, and will call confirmMutation("id", true/false).

Here the YarnConfigurationStore can store a map of "id" to LogMutation in 
memory, so it can quickly store the LogMutation into table1 if 
confirmMutation(id, true) is called.
bq. YarnConfigurationStore#retrieve, does it mean get from table-1 or get from 
table-1/2/3 (which described by your "for the failover case ..." in your 
previous comment)? I would prefer the latter one.
On failover MCM would call retrieve (which returns a "conf"), and 
getPendingMutations, apply each pendingMutation one by one to "conf", and 
confirmMutation(pendingMutation.id, true/false) if refresh is 
successful/unsuccessful. So YarnConfigurationStore#retrieve on its own returns 
from table1 which may not have all logs applied, but MCM will reconstruct the 
updated configuration from getPendingMutations. So not sure if 
retrieveLatestConf is necessary (the third API in previous comment).

Since MCM stores an in-memory configuration, YarnConfigurationStore#retrieve 
and getPendingMutations should only be called once, on failover.
So my proposal is: {noformat}1) initialize(Configuration conf, Map schedConf);
2) retrieve which returns conf stored in table1
3) long logMutation(LogMutation) returns id to save the new mutation in table2
4) confirmMutation(long id, boolean isValid) to increment txnid stored in 
table1, and persist the logged mutation if isValid==true
5) List getPendingMutations(void) for getting unconfirmed 
mutations{noformat}
I think we can add getConfirmedConfHistory in a later patch.

If there are no concerns with this approach, I will upload a patch. Thanks!
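
As a sketch only, the five calls above as a Java interface. LogMutation's 
shape is left open here, and all names follow the proposal in this comment 
rather than any committed code:
{code}
import java.util.List;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;

interface YarnConfigurationStoreSketch {
  /** Pending scheduler-conf change plus the txn id assigned on logging. */
  class LogMutation { }

  void initialize(Configuration conf, Map<String, String> schedConf);
  /** Returns the "last good" configuration persisted in table1. */
  Map<String, String> retrieve();
  /** Appends the mutation to table2 and returns its new txn id. */
  long logMutation(LogMutation mutation);
  /** Bumps the stored txn id; persists the mutation only if isValid. */
  void confirmMutation(long id, boolean isValid);
  /** Unconfirmed mutations, replayed by MCM on failover. */
  List<LogMutation> getPendingMutations();
}
{code}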


was (Author: jhung):
Thanks [~leftnoteasy] for the comments.
bq. It actually means the "last confirmed" transaction id, correct? I found in 
step 5 it gets increased even if the update failed.
It is the txnid for which all logs with a lesser txnid do not need to be 
replayed on recovery. Either this means the log has been persisted to the store 
in case of successful refresh, or the mutation has been deemed invalid in case 
of failure to refresh (which is why it is incremented even if update fails). So 
in this case perhaps confirmMutation(long id) should be confirmMutation(long 
id, boolean isValid). 
bq. So I suggest to persist a transaction-id in addition to "last good" 
configuration to table-1.
Sure, I think this is implementation-dependent; in general, though, we can have 
a configuration entry with key="transaction.id" or something similar.
bq. Who will generate "id" for each logItem?
I think the YarnConfigurationStore should maintain the current id and generate 
new ones, which are returned upon logMutation calls. So when MCM receives a 
mutation, it will log it, which will then return an incremented id "id", then 
MCM will try to refresh, and will call confirmMutation("id", true/false).

Here the YarnConfigurationStore can store a map of "id" to LogMutation in 
memory, so it can quickly store the LogMutation into table1 if 
confirmMutation(id, true) is called.
bq. YarnConfigurationStore#retrieve, does it mean get from table-1 or get from 
table-1/2/3 (which described by your "for the failover case ..." in your 
previous comment)? I would prefer the latter one.
On failover MCM would call retrieve (which returns a "conf"), and 
getPendingMutations, apply each pendingMutation one by one to "conf", and 
confirmMutation(pendingMutation.id, true/false) if refresh is 
successful/unsuccessful. So YarnConfigurationStore#retrieve on its own returns 
from table1 which may not have all logs applied, but MCM will reconstruct the 
updated configuration from getPendingMutations.

[jira] [Commented] (YARN-5946) Create YarnConfigurationStore interface and InMemoryConfigurationStore class

2017-02-15 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868965#comment-15868965
 ] 

Jonathan Hung commented on YARN-5946:
-

Thanks [~leftnoteasy] for the comments.
bq. It actually means the "last confirmed" transaction id, correct? I found in 
step 5 it gets increased even if the update failed.
It is the txnid for which all logs with a lesser txnid do not need to be 
replayed on recovery. Either this means the log has been persisted to the store 
in case of successful refresh, or the mutation has been deemed invalid in case 
of failure to refresh (which is why it is incremented even if update fails). So 
in this case perhaps confirmMutation(long id) should be confirmMutation(long 
id, boolean isValid). 
bq. So I suggest to persist a transaction-id in addition to "last good" 
configuration to table-1.
Sure, I think this is implementation-dependent; in general, though, we can have 
a configuration entry with key="transaction.id" or something similar.
bq. Who will generate "id" for each logItem?
I think the YarnConfigurationStore should maintain the current id and generate 
new ones, which are returned upon logMutation calls. So when MCM receives a 
mutation, it will log it, which will then return an incremented id "id", then 
MCM will try to refresh, and will call confirmMutation("id", true/false).

Here the YarnConfigurationStore can store a map of "id" to LogMutation in 
memory, so it can quickly store the LogMutation into table1 if 
confirmMutation(id, true) is called.
bq. YarnConfigurationStore#retrieve, does it mean get from table-1 or get from 
table-1/2/3 (which described by your "for the failover case ..." in your 
previous comment)? I would prefer the latter one.
On failover MCM would call retrieve (which returns a "conf"), and 
getPendingMutations, apply each pendingMutation one by one to "conf", and 
confirmMutation(pendingMutation.id, true/false) if refresh is 
successful/unsuccessful. So YarnConfigurationStore#retrieve on its own returns 
from table1 which may not have all logs applied, but MCM will reconstruct the 
updated configuration from getPendingMutations. So not sure if 
retrieveLatestConf is necessary (the third API in previous comment).

Since MCM stores an in-memory configuration, YarnConfigurationStore#retrieve 
and getPendingMutations should only be called once, on failover.
So my proposal is: {noformat}1) initialize(Configuration conf, Map schedConf);
2) retrieve which returns conf stored in table1
3) logMutation to save the new mutation in table2
4) confirmMutation(long id, boolean isValid) to increment txnid stored in 
table1, and persist the logged mutation if isValid==true
5) List getPendingMutations(void) for getting unconfirmed 
mutations{noformat}
I think we can add getConfirmedConfHistory in a later patch.

If there are no concerns with this approach, I will upload a patch. Thanks!

> Create YarnConfigurationStore interface and InMemoryConfigurationStore class
> 
>
> Key: YARN-5946
> URL: https://issues.apache.org/jira/browse/YARN-5946
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5946.001.patch, YARN-5946-YARN-5734.002.patch
>
>
> This class provides the interface to persist YARN configurations in a backing 
> store.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6196) [YARN-3368] Invalid information in Node pages and improve Resource Donut chart with better label

2017-02-15 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868960#comment-15868960
 ] 

Sunil G commented on YARN-6196:
---

Adding a few more comments from Vinod:

1. Sorting by “Mem Used” is not a numerical sort. The same applies to “Mem 
Avail”, “VCores Used”, and “VCores Avail”.
2. Heatmap - hovering on the box shows the info, but hovering on the hostname 
text inside it doesn’t.

> [YARN-3368] Invalid information in Node pages and improve Resource Donut 
> chart with better label
> 
>
> Key: YARN-6196
> URL: https://issues.apache.org/jira/browse/YARN-6196
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akhil PB
>Assignee: Akhil PB
>
> In nodes page:
> # Change 'Nodes Table' label to 'Information'
> # Change 'VCores Avail' label to 'VCores Available'
> # Show Health Report as N/A if not available
> # Change 'Mem Used' to 'Memory Used'
> # Change 'Mem Avail' to 'Memory Available'
> In node page:
> # Node Health Report missing
> # NodeManager Start Time shows Invalid Date
> # Reverse colors in the 'Resource - Memory' and 'Resource - VCores' donut 
> charts
> # Convert Resource Memory into GB/TB
> # In List of Apps or List of Containers page, Node link is broken
> # Diagnostics is empty in Container Information



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6182) [YARN-3368] Fix alignment issues and missing information in Queue pages

2017-02-15 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868957#comment-15868957
 ] 

Sunil G commented on YARN-6182:
---

A few more comments (from Vinod):
1. The title of the side-bar in the per-queue page is incorrectly set to 
“Application”. 
2. When an individual queue is hovered over, the titles of the modules should 
show the queue name.
3. The queue tree graph takes too much space. We should reduce both the 
vertical and horizontal spacing.

> [YARN-3368] Fix alignment issues and missing information in Queue pages
> ---
>
> Key: YARN-6182
> URL: https://issues.apache.org/jira/browse/YARN-6182
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-6182.001.patch
>
>
> 1. Queue -> Queue Capacities : Absolute Max Capacity should be aligned better.
> 2. Queue -> Queue Information: State is coming empty



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6201) Add node label as a query param for nodes REST end point

2017-02-15 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868945#comment-15868945
 ] 

Sunil G commented on YARN-6201:
---

Yes. It could help the new UI as well.

> Add node label as a query param for nodes REST end point
> 
>
> Key: YARN-6201
> URL: https://issues.apache.org/jira/browse/YARN-6201
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sunil G
>
> For the {{/nodes}} endpoint, label could be a query param.
> This could help to retrieve node information for a given label (with 
> resources etc.).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6171) ConcurrentModificationException in ApplicationMasterService.allocate

2017-02-15 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868919#comment-15868919
 ] 

Miklos Szegedi commented on YARN-6171:
--

Thank you, [~kasha]. Yes, it does. In order to be consistent and atomic, we 
have to return one single copy from the critical section, at the expense of 
some memory and CPU.
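
A minimal sketch of the copy-inside-the-critical-section idea (illustrative 
types, not the FairScheduler code): the caller gets a private snapshot, so its 
iteration can never race with scheduler-side mutation:
{code}
import java.util.ArrayList;
import java.util.List;

class UpdatedNodesSnapshotSketch {
  private final List<String> updatedNodes = new ArrayList<>();

  List<String> pullUpdatedNodes() {
    synchronized (updatedNodes) {
      // copy while holding the lock: costs some memory and CPU, but the
      // returned list can be iterated safely with no further locking
      List<String> snapshot = new ArrayList<>(updatedNodes);
      updatedNodes.clear();
      return snapshot;
    }
  }
}
{code}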

> ConcurrentModificationException in ApplicationMasterService.allocate
> 
>
> Key: YARN-6171
> URL: https://issues.apache.org/jira/browse/YARN-6171
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-6171.000.patch, YARN-6171.001.patch, 
> YARN-6171.002.patch, YARN-6171.003.patch
>
>
> I have noticed an exception that closes the Application Master occasionally 
> with Fair scheduler.
> {code}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.util.ConcurrentModificationException):
>  java.util.ConcurrentModificationException
>   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
>   at java.util.HashMap$KeyIterator.next(HashMap.java:956)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.allocate(FairScheduler.java:1005)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:532)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6202) Configuration item Dispatcher.DISPATCHER_EXIT_ON_ERROR_KEY is disregarded

2017-02-15 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6202:
---
Component/s: resourcemanager
 nodemanager

> Configuration item Dispatcher.DISPATCHER_EXIT_ON_ERROR_KEY is disregarded
> -
>
> Key: YARN-6202
> URL: https://issues.apache.org/jira/browse/YARN-6202
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, resourcemanager
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> Dispatcher.DISPATCHER_EXIT_ON_ERROR_KEY (yarn.dispatcher.exit-on-error) is 
> always true, no matter what value is set in the configuration files. This 
> misleads users. Two solutions: 
> # Remove the configuration item but still allow it to be false to enable 
> some unit tests; the assumption is that there is no need for a false value 
> in a production environment.
> # Make it default to true instead of false, so that we don't need to 
> hard-code it in the RM and NM while it stays configurable. 
> Other than that, the code around it needs refactoring: {{public static 
> final}} for a variable of an interface isn't necessary, and YARN-related 
> configuration items should live in the YarnConfiguration class.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6202) Configuration item Dispatcher.DISPATCHER_EXIT_ON_ERROR_KEY is disregarded

2017-02-15 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6202:
---
Description: 
Dispatcher.DISPATCHER_EXIT_ON_ERROR_KEY (yarn.dispatcher.exit-on-error) is 
always true, no matter what value is set in the configuration files. This 
misleads users. Two solutions: 
# Remove the configuration item but still allow it to be false to enable some 
unit tests; the assumption is that there is no need for a false value in a 
production environment.
# Make it default to true instead of false, so that we don't need to 
hard-code it in the RM and NM while it stays configurable. 

Other than that, the code around it needs refactoring: {{public static final}} 
for a variable of an interface isn't necessary, and YARN-related configuration 
items should live in the YarnConfiguration class.
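
An illustrative sketch of solution 2 (the key name mirrors the existing 
constant; the hosting class and helper here are hypothetical, not the 
committed change): the key stays configurable with a default of true, so 
nothing needs to be hard-coded in the RM or NM:
{code}
import org.apache.hadoop.conf.Configuration;

class DispatcherConfigSketch {
  static final String DISPATCHER_EXIT_ON_ERROR_KEY =
      "yarn.dispatcher.exit-on-error";
  static final boolean DEFAULT_DISPATCHER_EXIT_ON_ERROR = true;

  static boolean shouldExitOnError(Configuration conf) {
    // default true: unit tests may still set it to false explicitly
    return conf.getBoolean(DISPATCHER_EXIT_ON_ERROR_KEY,
        DEFAULT_DISPATCHER_EXIT_ON_ERROR);
  }
}
{code}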

> Configuration item Dispatcher.DISPATCHER_EXIT_ON_ERROR_KEY is disregarded
> -
>
> Key: YARN-6202
> URL: https://issues.apache.org/jira/browse/YARN-6202
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> Dispatcher.DISPATCHER_EXIT_ON_ERROR_KEY (yarn.dispatcher.exit-on-error) is 
> always true, no matter what value is set in the configuration files. This 
> misleads users. Two solutions: 
> # Remove the configuration item but still allow it to be false to enable 
> some unit tests; the assumption is that there is no need for a false value 
> in a production environment.
> # Make it default to true instead of false, so that we don't need to 
> hard-code it in the RM and NM while it stays configurable. 
> Other than that, the code around it needs refactoring: {{public static 
> final}} for a variable of an interface isn't necessary, and YARN-related 
> configuration items should live in the YarnConfiguration class.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6190) Bug in LocalityMulticastAMRMProxyPolicy argument validation

2017-02-15 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6190:
-
Summary: Bug in LocalityMulticastAMRMProxyPolicy argument validation  (was: 
Bug fixes in federation polices)

> Bug in LocalityMulticastAMRMProxyPolicy argument validation
> ---
>
> Key: YARN-6190
> URL: https://issues.apache.org/jira/browse/YARN-6190
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-6190-YARN-2915.v1.patch
>
>
> A bug fix in LocalityMulticastAMRMProxyPolicy for the policy array condition 
> check, along with misc cleanups. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6203) Occasional test failure in TestWeightedRandomRouterPolicy

2017-02-15 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6203:
-
Issue Type: Sub-task  (was: Bug)
Parent: YARN-2915

> Occasional test failure in TestWeightedRandomRouterPolicy
> -
>
> Key: YARN-6203
> URL: https://issues.apache.org/jira/browse/YARN-6203
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Carlo Curino
>Priority: Minor
>
> Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.432 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.yarn.server.federation.policies.router.TestWeightedRandomRouterPolicy
> testClusterChosenWithRightProbability(org.apache.hadoop.yarn.server.federation.policies.router.TestWeightedRandomRouterPolicy)
>   Time elapsed: 7.437 sec  <<< FAILURE!
> java.lang.AssertionError: Id sc5 Actual weight: 0.00228 expected weight: 
> 0.0018106138
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.yarn.server.federation.policies.router.TestWeightedRandomRouterPolicy.testClusterChosenWithRightProbability(TestWeightedRandomRouterPolicy.java:118)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6093) Invalid AMRM token exception when using FederationRMFailoverProxyProvider at AMRMtoken renewal during a RM failover

2017-02-15 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868900#comment-15868900
 ] 

Botong Huang commented on YARN-6093:


Sure, YARN-6203 created and assigned to [~curino]

> Invalid AMRM token exception when using FederationRMFailoverProxyProvider at 
> AMRMtoken renewal during a RM failover
> ---
>
> Key: YARN-6093
> URL: https://issues.apache.org/jira/browse/YARN-6093
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: amrmproxy, federation
>Affects Versions: YARN-2915
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: 
> YARN-6093-08dc09581230ba595ce48fe7d3bc4eb2b6f98091.v4.patch, 
> YARN-6093-git08dc09581230ba595ce48fe7d3bc4eb2b6f98091.v4.patch, 
> YARN-6093.v1.patch, YARN-6093-YARN-2915.v1.patch, 
> YARN-6093-YARN-2915.v2.patch, YARN-6093-YARN-2915.v3.patch, 
> YARN-6093-YARN-2915.v4.patch
>
>
> AMRMProxy uses expired AMRMToken to talk to RM, leading to the "Invalid 
> AMRMToken" exception. The bug is triggered when both conditions are met: 
> 1. RM rolls master key and renews AMRMToken for a running AM.
> 2. Existing RPC connection between AMRMProxy and RM drops and attempt to 
> reconnect via failover in FederationRMFailoverProxyProvider. 
> Here's what happened: 
> In DefaultRequestInterceptor.init(), we create a proxy ugi, load it with the 
> initial AMRMToken issued by RM, and used it for initiating rmClient. Then we 
> arrive at FederationRMFailoverProxyProvider.init(), a full copy of ugi tokens 
> are saved locally, create an actual RM proxy and setup the RPC connection. 
> Later when RM rolls master key and issues a new AMRMToken, 
> DefaultRequestInterceptor.updateAMRMToken() updates it into the proxy ugi. 
> However the new token is never used, until the existing RPC connection 
> between AMRMProxy and RM drops for other reasons (say master RM crashes). 
> When we try to reconnect, since the service name of the new AMRMToken is not 
> yet set correctly in DefaultRequestInterceptor.updateAMRMToken(), RPC found 
> no valid AMRMToken when trying to setup a new connection. We first hit a 
> "Client cannot authenticate via:[TOKEN]" exception. This is expected. 
> Next, FederationRMFailoverProxyProvider fails over, we reset the service 
> token via ClientRMProxy.getRMAddress() and reconnect. Supposedly this would 
> have worked. 
> However, since DefaultRequestInterceptor does not use the proxy user for 
> later calls to rmClient, when performing failover in 
> FederationRMFailoverProxyProvider we are not running as the proxy user. 
> Currently the code solves the problem by reloading the current ugi with all tokens saved 
> locally in originalTokens in method addOriginalTokens(). The problem is that 
> the original AMRMToken loaded is no longer accepted by RM, and thus we keep 
> hitting the "Invalid AMRMToken" exception until AM fails. 
> The correct way is that rather than saving the original tokens in the proxy 
> ugi, we save the original ugi itself. Every time we perform failover and 
> create the new RM proxy, we use the original ugi, which is always loaded with 
> the up-to-date AMRMToken. 
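
A hedged sketch of the fix described in the last paragraph (the class and 
method shapes are illustrative, not the actual FederationRMFailoverProxyProvider 
signatures): the provider keeps the original ugi itself, so each failover 
re-creates the proxy under a ugi that already carries the renewed AMRMToken:
{code}
import java.security.PrivilegedAction;
import org.apache.hadoop.security.UserGroupInformation;

class FailoverUgiSketch<T> {
  // saved once at init: the live proxy ugi, not a copy of its tokens
  private final UserGroupInformation originalUgi;

  FailoverUgiSketch(UserGroupInformation proxyUgi) {
    this.originalUgi = proxyUgi;
  }

  T recreateProxyOnFailover(PrivilegedAction<T> proxyFactory) {
    // updateAMRMToken() keeps refreshing tokens on this same ugi, so
    // no stale-token reload is needed here
    return originalUgi.doAs(proxyFactory);
  }
}
{code}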



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6203) Occasional test failure in TestWeightedRandomRouterPolicy

2017-02-15 Thread Botong Huang (JIRA)
Botong Huang created YARN-6203:
--

 Summary: Occasional test failure in TestWeightedRandomRouterPolicy
 Key: YARN-6203
 URL: https://issues.apache.org/jira/browse/YARN-6203
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Botong Huang
Assignee: Carlo Curino
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6203) Occasional test failure in TestWeightedRandomRouterPolicy

2017-02-15 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-6203:
---
Description: 
Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.432 sec <<< 
FAILURE! - in 
org.apache.hadoop.yarn.server.federation.policies.router.TestWeightedRandomRouterPolicy
testClusterChosenWithRightProbability(org.apache.hadoop.yarn.server.federation.policies.router.TestWeightedRandomRouterPolicy)
  Time elapsed: 7.437 sec  <<< FAILURE!
java.lang.AssertionError: Id sc5 Actual weight: 0.00228 expected weight: 
0.0018106138
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.yarn.server.federation.policies.router.TestWeightedRandomRouterPolicy.testClusterChosenWithRightProbability(TestWeightedRandomRouterPolicy.java:118)

> Occasional test failure in TestWeightedRandomRouterPolicy
> -
>
> Key: YARN-6203
> URL: https://issues.apache.org/jira/browse/YARN-6203
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Carlo Curino
>Priority: Minor
>
> Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.432 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.yarn.server.federation.policies.router.TestWeightedRandomRouterPolicy
> testClusterChosenWithRightProbability(org.apache.hadoop.yarn.server.federation.policies.router.TestWeightedRandomRouterPolicy)
>   Time elapsed: 7.437 sec  <<< FAILURE!
> java.lang.AssertionError: Id sc5 Actual weight: 0.00228 expected weight: 
> 0.0018106138
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.yarn.server.federation.policies.router.TestWeightedRandomRouterPolicy.testClusterChosenWithRightProbability(TestWeightedRandomRouterPolicy.java:118)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6202) Configuration item Dispatcher.DISPATCHER_EXIT_ON_ERROR_KEY is disregarded

2017-02-15 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-6202:
--

 Summary: Configuration item 
Dispatcher.DISPATCHER_EXIT_ON_ERROR_KEY is disregarded
 Key: YARN-6202
 URL: https://issues.apache.org/jira/browse/YARN-6202
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0-alpha2, 2.9.0
Reporter: Yufei Gu
Assignee: Yufei Gu






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2962) ZKRMStateStore: Limit the number of znodes under a znode

2017-02-15 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868892#comment-15868892
 ] 

Daniel Templeton commented on YARN-2962:


[~varun_saxena], can we get this one closed?

> ZKRMStateStore: Limit the number of znodes under a znode
> 
>
> Key: YARN-2962
> URL: https://issues.apache.org/jira/browse/YARN-2962
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Varun Saxena
>Priority: Critical
> Attachments: YARN-2962.006.patch, YARN-2962.01.patch, 
> YARN-2962.04.patch, YARN-2962.05.patch, YARN-2962.2.patch, YARN-2962.3.patch
>
>
> We ran into this issue where we were hitting the default ZK server message 
> size configs, primarily because the message had too many znodes, even though 
> individually they were all small.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6059) Update paused container state in the state store

2017-02-15 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868890#comment-15868890
 ] 

Konstantinos Karanasos commented on YARN-6059:
--

Just checked the updated patch, [~hrsharma].
Some comments:
* The paused container has to go past the {{scheduleContainer()}} method of 
{{ContainerScheduler}} to reach your newly-added codepath. For this to happen, 
resources have to be available for the container to be scheduled, whereas what 
we really want to do is simply kill the recovered paused container. You see 
what I mean?
* {{RecoverPausedContainerLaunch}}: by sending the ContainerExitEvent inside 
the try statement, if there is a problem in the commands before it, the 
ContainerExitEvent will never be sent. So let's move it after the try/catch 
statement (see the sketch after these comments).
* {{ContainerLaunch}}: 
** storeContainerQueued -> storeContainerPaused (LOG.warn message under it 
needs fixing too);
** I think you need to add a similar call in the resumeContainer() to put it 
back to the running state.
* {{NMLeveldbStateStoreService}}: I don't see why we need the if's in lines 
248, 252-254, and 258-260. I think the rcs.status can be nothing but REQUESTED 
at this point.
* Fix checkstyle issues.
* Can you run the failing tests locally without your changes and make sure 
they fail there as well?

Nits:
* {{ContainerImpl}}: let's not introduce the new line (it can cause merge 
conflicts for other people)
* {{ContainersLauncher}}: please put the RECOVER_PAUSED_CONTAINER case below 
the RECOVER_CONTAINER case
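
A sketch of the ContainerExitEvent point above (simplified shapes, not the 
actual NM types, and the cleanup helper is hypothetical): the exit code is 
computed in the try/catch, and the event is sent after it, so a failure in the 
earlier commands cannot swallow it:
{code}
class RecoverPausedContainerSketch {
  interface ExitHandler { void handle(String containerId, int exitCode); }

  private final ExitHandler dispatcher;

  RecoverPausedContainerSketch(ExitHandler dispatcher) {
    this.dispatcher = dispatcher;
  }

  void run(String containerId) {
    int exitCode;
    try {
      exitCode = cleanupPausedContainer(containerId); // hypothetical helper
    } catch (RuntimeException e) {
      exitCode = -1; // a real implementation would use a named LOST code
    }
    // sent after the try/catch, so it fires even when cleanup throws
    dispatcher.handle(containerId, exitCode);
  }

  private int cleanupPausedContainer(String id) {
    return 0;
  }
}
{code}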

> Update paused container state in the state store
> 
>
> Key: YARN-6059
> URL: https://issues.apache.org/jira/browse/YARN-6059
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Hitesh Sharma
>Assignee: Hitesh Sharma
> Attachments: YARN-5216-YARN-6059.001.patch, 
> YARN-6059-YARN-5972.001.patch, YARN-6059-YARN-5972.002.patch, 
> YARN-6059-YARN-5972.003.patch, YARN-6059-YARN-5972.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5410) Bootstrap Router module

2017-02-15 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868882#comment-15868882
 ] 

Giovanni Matteo Fumarola commented on YARN-5410:


The failed tests are unrelated to the patch.

> Bootstrap Router module
> ---
>
> Key: YARN-5410
> URL: https://issues.apache.org/jira/browse/YARN-5410
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5410-YARN-2915-v1.patch, 
> YARN-5410-YARN-2915-v2.patch, YARN-5410-YARN-2915-v3.patch, 
> YARN-5410-YARN-2915-v4.patch, YARN-5410-YARN-2915-v5.patch, 
> YARN-5410-YARN-2915-v6.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate ResourceManager(s). This 
> JIRA tracks the creation of a new sub-module for the Router.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5068) Expose scheduler queue to application master

2017-02-15 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868851#comment-15868851
 ] 

Junping Du commented on YARN-5068:
--

bq. Since this JIRA is committed to 2.8 as well, reverting this change is 
required to go to 2.8 as well; otherwise a compatibility issue would occur.
Agree. +1 to revert this patch from 2.8.0 as well. Thanks [~vinodkv] for 
identifying this issue and [~rohithsharma] for working on it.

> Expose scheduler queue to application master
> 
>
> Key: YARN-5068
> URL: https://issues.apache.org/jira/browse/YARN-5068
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Harish Jaiprakash
>Assignee: Harish Jaiprakash
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: MAPREDUCE-6692.patch, YARN-5068.1.patch, 
> YARN-5068.2.patch, YARN-5068-branch-2.1.patch
>
>
> The AM needs to know the queue name in which it was launched.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6163) FS Preemption is a trickle for severely starved applications

2017-02-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868850#comment-15868850
 ] 

Hadoop QA commented on YARN-6163:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
26s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 42m 
53s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6163 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852919/YARN-6163.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 88acd98ba951 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0741dd3 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14970/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14970/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FS Preemption is a trickle for severely starved applications
> 

[jira] [Commented] (YARN-5798) Handle FSPreemptionThread crashing due to a RuntimeException

2017-02-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868831#comment-15868831
 ] 

Hadoop QA commented on YARN-5798:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 22s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestSchedulingPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5798 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852923/YARN-5798.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7f466440534a 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0741dd3 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14972/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14972/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14972/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Handle FSPreemptionThread crashing due to a RuntimeException
> 

[jira] [Commented] (YARN-6188) Fix OOM issue with decommissioningNodesWatcher in the case of clusters with large number of nodes

2017-02-15 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868825#comment-15868825
 ] 

Daniel Templeton commented on YARN-6188:


{quote}I have tested this by creating 4 large clusters (2000+ nodes) and 
executing long-running Hive and Spark (on YARN) jobs.
While the jobs were running, I resized the cluster to reduce the number of nodes.
The resource manager didn't restart and no OOM exception was seen in the 
logs.{quote}

And does the same operation without the patch cause an OOM?  (Sounds obvious, 
but I gotta check...)

If the {{StringBuilder}} is pulled inside the loop, does it still need to be a 
{{StringBuilder}}?  Can't we just log the formatted string directly?
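For illustration, a minimal sketch of what pulling the builder inside the loop 
could look like (all identifiers here - LOG, decomNodes, 
DecommissioningNodeContext - are assumptions, not the actual patch code):
{code}
// Hypothetical sketch only. The builder is scoped to a single iteration, so
// each node's status string becomes garbage-collectable as soon as it is
// logged, instead of one buffer accumulating the status of every node.
private void logDecommissioningNodesStatus() {
  if (!LOG.isDebugEnabled()) {
    return;
  }
  for (DecommissioningNodeContext context : decomNodes.values()) {
    StringBuilder sb = new StringBuilder();
    sb.append("Decommissioning node: ").append(context.getNodeId())
        .append(", remaining containers: ")
        .append(context.getNumActiveContainers());
    LOG.debug(sb.toString());
    // Or, answering the second question, skip the builder entirely:
    // LOG.debug("Decommissioning node: " + context.getNodeId() + ", ...");
  }
}
{code}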

> Fix OOM issue with decommissioningNodesWatcher in the case of clusters with 
> large number of nodes
> -
>
> Key: YARN-6188
> URL: https://issues.apache.org/jira/browse/YARN-6188
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.9.0
>Reporter: Ajay Jadhav
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: YARN-6188.001.patch, YARN-6188.002.patch
>
>
> The logDecommissioningNodesStatus method in DecommissioningNodesWatcher uses a 
> StringBuilder to append the status of all
> decommissioning nodes for logging purposes.
> In the case of a large number of decommissioning nodes, this leads to an OOM 
> exception. The fix narrows the scope of the StringBuilder so that, under 
> memory pressure, GC can kick in and free up the memory.
> This is supposed to fix a bug introduced in 
> https://issues.apache.org/jira/browse/YARN-4676



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6177) Yarn client should exit with an informative error message if an incompatible Jersey library is used at client

2017-02-15 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868817#comment-15868817
 ] 

Li Lu commented on YARN-6177:
-

OK, I see. Is it possible to disable the timeline service for those affected 
clients? 

> Yarn client should exit with an informative error message if an incompatible 
> Jersey library is used at client
> -
>
> Key: YARN-6177
> URL: https://issues.apache.org/jira/browse/YARN-6177
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: spark2-job-output-after-besteffort.out, 
> spark2-job-output-after.out, spark2-job-output-before.out, 
> YARN-6177.01.patch, YARN-6177.02.patch, YARN-6177.03.patch, 
> YARN-6177.04.patch, YARN-6177.05.patch
>
>
> Per discussion in YARN-5271, let's provide an error message suggesting the 
> user disable the timeline service instead of disabling it for them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6177) Yarn client should exit with an informative error message if an incompatible Jersey library is used at client

2017-02-15 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868801#comment-15868801
 ] 

Weiwei Yang commented on YARN-6177:
---

Hi [~gtCarrera9]

Yes, if the user turns on best effort, the client gives a warning message and 
proceeds past this particular error; that is the workaround for the issue I 
described earlier. Option 2 is to disable the timeline service in the client 
config, but I think this one is better, as this property was introduced to help 
clients continue to run when a failure in the timeline client is not fatal. 
Without these lines, option 2 is the only workaround.
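
For reference, a sketch of the two client-side options (both constants exist in 
{{YarnConfiguration}}, but double-check the exact keys against the release in 
use):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class TimelineClientOptions {
  public static Configuration clientConf() {
    Configuration conf = new Configuration();
    // Option 1 (what this patch leans on): keep the timeline service enabled,
    // but treat timeline client failures as non-fatal, best-effort operations.
    conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_CLIENT_BEST_EFFORT, true);
    // Option 2: disable the timeline service in the client config altogether.
    // conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, false);
    return conf;
  }
}
{code}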

> Yarn client should exit with an informative error message if an incompatible 
> Jersey library is used at client
> -
>
> Key: YARN-6177
> URL: https://issues.apache.org/jira/browse/YARN-6177
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: spark2-job-output-after-besteffort.out, 
> spark2-job-output-after.out, spark2-job-output-before.out, 
> YARN-6177.01.patch, YARN-6177.02.patch, YARN-6177.03.patch, 
> YARN-6177.04.patch, YARN-6177.05.patch
>
>
> Per discussion in YARN-5271, let's provide an error message suggesting the 
> user disable the timeline service instead of disabling it for them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5410) Bootstrap Router module

2017-02-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868802#comment-15868802
 ] 

Hadoop QA commented on YARN-5410:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
25s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
35s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
53s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
11s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
hadoop-yarn-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
hadoop-yarn-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 44s{color} 
| {color:red} hadoop-yarn-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 26m  0s{color} 
| {color:red} hadoop-yarn-project in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.timeline.webapp.TestTimelineWebServices |
|   | hadoop.yarn.server.timeline.webapp.TestTimelineWebServices |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| 

[jira] [Commented] (YARN-5798) Handle FSPreemptionThread crashing due to a RuntimeException

2017-02-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868783#comment-15868783
 ] 

Hadoop QA commented on YARN-5798:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
48s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 41m 45s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5798 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852912/YARN-5798.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d885833b1cbf 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0741dd3 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14969/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14969/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14969/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Handle FSPreemptionThread crashing due to a RuntimeException
> 

[jira] [Commented] (YARN-3659) Federation Router (hiding multiple RMs for ApplicationClientProtocol)

2017-02-15 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868774#comment-15868774
 ] 

Subru Krishnan commented on YARN-3659:
--

Thanks [~luhuichun] for offering to help! We have prototype code which you 
can see in the uber patch [here | 
https://issues.apache.org/jira/secure/attachment/12744108/federation-prototype.patch],
 which [~giovanni.fumarola] has been polishing/testing. He's close to posting 
the patch, so it will be good if you can help out with the review etc.

> Federation Router (hiding multiple RMs for ApplicationClientProtocol)
> -
>
> Key: YARN-3659
> URL: https://issues.apache.org/jira/browse/YARN-3659
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-3659.pdf
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> ApplicationClientProtocol requests to the appropriate
> RM(s) in a federated YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6176) Test whether FS preemption consider child queues over fair share if the parent is under

2017-02-15 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868771#comment-15868771
 ] 

Yufei Gu commented on YARN-6176:


YARN-6163 makes some changes to the TestFairSchedulerPreemption class. I'll 
wait until it is in.

> Test whether FS preemption consider child queues over fair share if the 
> parent is under
> ---
>
> Key: YARN-6176
> URL: https://issues.apache.org/jira/browse/YARN-6176
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> Port the test case in YARN-6151 to trunk.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5718) TimelineClient (and other places in YARN) shouldn't over-write HDFS client retry settings which could cause unexpected behavior

2017-02-15 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5718:

Hadoop Flags: Incompatible change

> TimelineClient (and other places in YARN) shouldn't over-write HDFS client 
> retry settings which could cause unexpected behavior
> ---
>
> Key: YARN-5718
> URL: https://issues.apache.org/jira/browse/YARN-5718
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, timelineclient
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5718.patch, YARN-5718-v2.1.patch, YARN-5718-v2.patch
>
>
> In one HA cluster, after the NN failed over, we noticed that jobs were failing 
> because TimelineClient could not retry the connection to the proper NN. This 
> is because we overwrite HDFS client settings, hard-coding the retry policy to 
> be enabled, which conflicts with the NN failover case - the HDFS client should 
> fail fast so it can retry on another NN.
> We shouldn't assume any retry policy for the HDFS client anywhere in YARN. 
> This should stay consistent with the HDFS settings, which use different retry 
> policies in different deployment cases. Thus, we should clean up these 
> hard-coded settings in YARN, including: FileSystemTimelineWriter, 
> FileSystemRMStateStore and FileSystemNodeLabelsStore.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5798) Handle FSPreemptionThread crashing due to a RuntimeException

2017-02-15 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5798:
---
Attachment: YARN-5798.004.patch

Patch v4 adds a handler for the continuous scheduling thread.
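
For context, the general shape of such a handler, as a sketch only; 
{{ExitUtil.terminate}} is real Hadoop API, but the wiring around it is 
illustrative and not the exact patch code:
{code}
// Sketch: install an UncaughtExceptionHandler so a RuntimeException in a
// FairScheduler thread (preemption or continuous scheduling) fails fast
// instead of silently killing the thread.
private void startCriticalThread(Thread t) {
  t.setUncaughtExceptionHandler((thread, throwable) -> {
    LOG.fatal("Critical thread " + thread.getName()
        + " crashed with an uncaught exception", throwable);
    // Fail the RM (or trigger a transition to standby) rather than keep
    // running without preemption / continuous scheduling.
    org.apache.hadoop.util.ExitUtil.terminate(-1, throwable);
  });
  t.start();
}
{code}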

> Handle FSPreemptionThread crashing due to a RuntimeException
> 
>
> Key: YARN-5798
> URL: https://issues.apache.org/jira/browse/YARN-5798
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
> Attachments: YARN-5798.001.patch, YARN-5798.002.patch, 
> YARN-5798.003.patch, YARN-5798.004.patch
>
>
> YARN-5605 added an FSPreemptionThread. The run() method catches 
> InterruptedException. If this were to run into a RuntimeException, the 
> preemption thread would crash. We should probably fail the RM itself (or 
> transition to standby) when this happens. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6125) The application attempt's diagnostic message should have a maximum size

2017-02-15 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868744#comment-15868744
 ] 

Daniel Templeton commented on YARN-6125:


Thanks, [~andras.piros].  A few more comments. :)

* The {{BoundedAppender.toString()}} method is missing a javadoc summary
* While you're making changes, maybe the error message in 
{{getDiagnosticsLimitKCOrThrow()}} should be a constant.
* It just occurred to me to wonder if there's a sensible way to test this patch 
in action, by testing the {{RMAppAttemptImpl}}...
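
For illustration, the core idea of a size-bounded diagnostics buffer might look 
like the following sketch (a head-truncating variant; the {{BoundedAppender}} 
in the patch may differ in details):
{code}
// Sketch of a size-bounded appender that keeps the most recent output by
// truncating the head once the limit is exceeded. Purely illustrative.
public final class BoundedAppender {
  private final int limit;                          // max characters retained
  private final StringBuilder messages = new StringBuilder();

  public BoundedAppender(int limit) {
    this.limit = limit;
  }

  public BoundedAppender append(CharSequence csq) {
    messages.append(csq);
    int overflow = messages.length() - limit;
    if (overflow > 0) {
      // Drop the oldest characters so the buffer never exceeds the limit.
      messages.delete(0, overflow);
    }
    return this;
  }

  @Override
  public String toString() {
    return messages.toString();
  }
}
{code}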

> The application attempt's diagnostic message should have a maximum size
> ---
>
> Key: YARN-6125
> URL: https://issues.apache.org/jira/browse/YARN-6125
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.7.0
>Reporter: Daniel Templeton
>Assignee: Andras Piros
>Priority: Critical
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6125.000.patch, YARN-6125.001.patch, 
> YARN-6125.002.patch, YARN-6125.003.patch, YARN-6125.004.patch, 
> YARN-6125.005.patch, YARN-6125.006.patch, YARN-6125.007.patch, 
> YARN-6125.008.patch
>
>
> We've found through experience that the diagnostic message can grow 
> unbounded.  I've seen attempts that have diagnostic messages over 1MB.  Since 
> the message is stored in the state store, it's a bad idea to allow the 
> message to grow unbounded.  Instead, there should be a property that sets a 
> maximum size on the message.
> I suspect that some of the ZK state store issues we've seen in the past were 
> due to the size of the diagnostic messages and not to the size of the 
> classpath, as is the current prevailing opinion.
> An open question is how best to prune the message once it grows too large.  
> Should we
> # truncate the tail,
> # truncate the head,
> # truncate the middle,
> # add another property to make the behavior selectable, or
> # none of the above?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API

2017-02-15 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868746#comment-15868746
 ] 

Li Lu commented on YARN-6027:
-

Thanks [~rohithsharma]! Generally fine, but one nit is that we're exposing a lot 
of intermediate values in the parsing process of {{FlowActivityEntityReader}}. I 
understand managing those values after the split would be troublesome, but I 
think continuing to expose them may cause future issues. Any plans for some 
centralized management of those values? 

> Support fromid(offset) filter for /flows API
> 
>
> Key: YARN-6027
> URL: https://issues.apache.org/jira/browse/YARN-6027
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6027-YARN-5355.0001.patch, 
> YARN-6027-YARN-5355.0002.patch
>
>
> In YARN-5585, fromId is supported for retrieving entities. We need a similar 
> filter for flow-run apps, flow runs, and flows as well. 
> Along with supporting fromId, this JIRA should also discuss the following points
> * Should we throw an exception on entities/entity retrieval if duplicates are 
> found?
> * TimelineEntity:
> ** Should the equals method also check idPrefix?
> ** Is idPrefix part of the identifiers?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API

2017-02-15 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868739#comment-15868739
 ] 

Sangjin Lee commented on YARN-6027:
---

On a related note, how would the client provide the value for fromId? Does it 
need to compose the fromId itself, or can we add a UID for flow activity in the 
info so clients can just grab it and send it as fromId? I thought we were doing 
something like that for other pagination.

If it is the latter, can we not use FLOW_UID for this?

> Support fromid(offset) filter for /flows API
> 
>
> Key: YARN-6027
> URL: https://issues.apache.org/jira/browse/YARN-6027
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6027-YARN-5355.0001.patch, 
> YARN-6027-YARN-5355.0002.patch
>
>
> In YARN-5585, fromId is supported for retrieving entities. We need a similar 
> filter for flow-run apps, flow runs, and flows as well. 
> Along with supporting fromId, this JIRA should also discuss the following points
> * Should we throw an exception on entities/entity retrieval if duplicates are 
> found?
> * TimelineEntity:
> ** Should the equals method also check idPrefix?
> ** Is idPrefix part of the identifiers?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6193) FairScheduler might not trigger preemption when using DRF

2017-02-15 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868719#comment-15868719
 ] 

Karthik Kambatla commented on YARN-6193:


Thanks for the quick review, Daniel. The patch applies on top of YARN-6163. 
I'll fix the review comments and post a new patch once that gets in. 

> FairScheduler might not trigger preemption when using DRF
> -
>
> Key: YARN-6193
> URL: https://issues.apache.org/jira/browse/YARN-6193
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: YARN-6193.000.patch
>
>
> {{FSAppAttempt#canContainerBePreempted}} verifies preempting a container 
> doesn't lead to new starvation ({{Resources.fitsIn}}). When using DRF, this 
> leads to verifying both resources instead of just the dominant resource. 
> Ideally, the check should be based on policy. 
> Note that the current implementation of 
> {{DominantResourceFairnessPolicy#checkIfUsageOverFairShare}} is broken. 
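
For illustration, the difference between the two checks, with all identifiers 
assumed rather than taken from the actual code:
{code}
// Resources.fitsIn requires EVERY resource of the post-preemption usage to
// stay at or above fair share; a DRF-aware check would delegate to the
// scheduling policy and compare only the dominant resource.
Resource usageAfterPreemption = Resources.subtract(
    getResourceUsage(), container.getAllocatedResource());

// Current behavior (simplified): both memory and vcores must still fit.
boolean preemptable = Resources.fitsIn(getFairShare(), usageAfterPreemption);

// Policy-based alternative (sketch): let the policy decide, e.g. something
// along the lines of
// !policy.checkIfUsageOverFairShare(usageAfterPreemption, getFairShare());
{code}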



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6163) FS Preemption is a trickle for severely starved applications

2017-02-15 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-6163:
---
Attachment: YARN-6163.006.patch

> FS Preemption is a trickle for severely starved applications
> 
>
> Key: YARN-6163
> URL: https://issues.apache.org/jira/browse/YARN-6163
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: YARN-6163.004.patch, YARN-6163.005.patch, 
> YARN-6163.006.patch, yarn-6163-1.patch, yarn-6163-2.patch
>
>
> With the current logic, only one RR is considered per instance of marking an 
> application starved. This marking happens only on the update call that runs 
> every 500ms. Due to this, an application that is severely starved takes 
> forever to reach its fair share through preemption.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6163) FS Preemption is a trickle for severely starved applications

2017-02-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868690#comment-15868690
 ] 

ASF GitHub Bot commented on YARN-6163:
--

Github user kambatla commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/192#discussion_r101399054
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
 ---
@@ -1774,4 +1774,8 @@ protected void decreaseContainer(
   public float getReservableNodesRatio() {
 return reservableNodesRatio;
   }
+
+  long getNMHeartbeatInterval() {
--- End diff --

Trivial, unambiguous @Private method that is very obvious. Don't think 
javadoc adds any information. 


> FS Preemption is a trickle for severely starved applications
> 
>
> Key: YARN-6163
> URL: https://issues.apache.org/jira/browse/YARN-6163
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: YARN-6163.004.patch, YARN-6163.005.patch, 
> yarn-6163-1.patch, yarn-6163-2.patch
>
>
> With the current logic, only one RR is considered per instance of marking an 
> application starved. This marking happens only on the update call that runs 
> every 500ms. Due to this, an application that is severely starved takes 
> forever to reach its fair share through preemption.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6163) FS Preemption is a trickle for severely starved applications

2017-02-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868686#comment-15868686
 ] 

ASF GitHub Bot commented on YARN-6163:
--

Github user kambatla commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/192#discussion_r101398794
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
 ---
@@ -1123,6 +1201,13 @@ public Resource getDemand() {
 return demand;
   }
 
+  /**
+   * Get the current app's unsatisfied demand.
+   */
--- End diff --

Trivial, unambiguous @Private method that is package-private. The @return 
is not of much use. 


> FS Preemption is a trickle for severely starved applications
> 
>
> Key: YARN-6163
> URL: https://issues.apache.org/jira/browse/YARN-6163
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: YARN-6163.004.patch, YARN-6163.005.patch, 
> yarn-6163-1.patch, yarn-6163-2.patch
>
>
> With the current logic, only one RR is considered per instance of marking an 
> application starved. This marking happens only on the update call that runs 
> every 500ms. Due to this, an application that is severely starved takes 
> forever to reach its fair share through preemption.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6177) Yarn client should exit with an informative error message if an incompatible Jersey library is used at client

2017-02-15 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868682#comment-15868682
 ] 

Li Lu commented on YARN-6177:
-

Thanks [~cheersyang]. My concern is with these lines:
{code}
379 } catch (NoClassDefFoundError e) {
380   if (timelineServiceBestEffort) {
381     LOG.warn("Ignore a NoClassDefFoundError when attempting to get"
382         + " delegation token from the timeline server: " + e.getMessage());
383     return null;
384   }
385 
{code}

So if {{timelineServiceBestEffort}} is set to true, we'll leave a message and 
then proceed? I was thinking we may not need to treat 
{{timelineServiceBestEffort}} separately here, since even with best effort we do 
not need to keep running on errors. 

> Yarn client should exit with an informative error message if an incompatible 
> Jersey library is used at client
> -
>
> Key: YARN-6177
> URL: https://issues.apache.org/jira/browse/YARN-6177
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: spark2-job-output-after-besteffort.out, 
> spark2-job-output-after.out, spark2-job-output-before.out, 
> YARN-6177.01.patch, YARN-6177.02.patch, YARN-6177.03.patch, 
> YARN-6177.04.patch, YARN-6177.05.patch
>
>
> Per discussion in YARN-5271, let's provide an error message suggesting the 
> user disable the timeline service instead of disabling it for them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6177) Yarn client should exit with an informative error message if an incompatible Jersey library is used at client

2017-02-15 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868669#comment-15868669
 ] 

Weiwei Yang commented on YARN-6177:
---

Hi [~gtCarrera9]

Yes, I understand your concern, but I think the patch provides a reasonable 
workaround. If clients ship a Jersey 2 lib, as Spark 2 does (more in 
SPARK-15343), they will 100% hit this. Such services don't really use the 
timeline service, but without this fix the YARN client exits with nothing 
prompted, and the user has to study the code and manually disable the timeline 
client. The error message provided in this patch makes that process easier. The 
patch is not "swallowing" any error (the new UT guarantees that); it just aims 
to provide useful information plus a workaround, and the user remains the 
decision maker. Does that make sense?

> Yarn client should exit with an informative error message if an incompatible 
> Jersey library is used at client
> -
>
> Key: YARN-6177
> URL: https://issues.apache.org/jira/browse/YARN-6177
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: spark2-job-output-after-besteffort.out, 
> spark2-job-output-after.out, spark2-job-output-before.out, 
> YARN-6177.01.patch, YARN-6177.02.patch, YARN-6177.03.patch, 
> YARN-6177.04.patch, YARN-6177.05.patch
>
>
> Per discussion in YARN-5271, let's provide an error message suggesting the 
> user disable the timeline service instead of disabling it for them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4675) Reorganize TimelineClient and TimelineClientImpl into separate classes for ATSv1.x and ATSv2

2017-02-15 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868665#comment-15868665
 ] 

Sangjin Lee commented on YARN-4675:
---

It's kind of hard to parse but the javadoc error is related to the patch:
{noformat}
[ERROR] 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java:698:
 error: unexpected text
[ERROR] * @throws YarnException, when this method is invoked even when ATS V2 
is not
[ERROR] ^
{noformat}
It might be the comma after YarnException that's the problem.
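
That is, the tag likely needs to lose the comma; roughly (signature 
approximated from the patch context, body elided):
{code}
/**
 * Sketch of the corrected tag: doclint rejects "@throws YarnException,"
 * because of the comma right after the exception type.
 *
 * @throws YarnException when this method is invoked even when ATS V2 is not
 *           configured
 */
public void registerTimelineV2Client(TimelineV2Client client)
    throws YarnException {
  // body elided; only the javadoc tag is the point here
}
{code}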

> Reorganize TimelineClient and TimelineClientImpl into separate classes for 
> ATSv1.x and ATSv2
> 
>
> Key: YARN-4675
> URL: https://issues.apache.org/jira/browse/YARN-4675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: YARN-4675.v2.002.patch, YARN-4675.v2.003.patch, 
> YARN-4675.v2.004.patch, YARN-4675.v2.005.patch, YARN-4675.v2.006.patch, 
> YARN-4675.v2.007.patch, YARN-4675.v2.008.patch, YARN-4675.v2.009.patch, 
> YARN-4675.v2.010.patch, YARN-4675-YARN-2928.v1.001.patch
>
>
> We need to reorganize TimelineClientImpl into TimelineClientV1Impl, 
> TimelineClientV2Impl and, if required, a base class, so that it's clear which 
> part of the code belongs to which version and is thus better maintainable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5798) Handle FSPreemptionThread crashing due to a RuntimeException

2017-02-15 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5798:
---
Attachment: YARN-5798.003.patch

> Handle FSPreemptionThread crashing due to a RuntimeException
> 
>
> Key: YARN-5798
> URL: https://issues.apache.org/jira/browse/YARN-5798
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
> Attachments: YARN-5798.001.patch, YARN-5798.002.patch, 
> YARN-5798.003.patch
>
>
> YARN-5605 added an FSPreemptionThread. The run() method catches 
> InterruptedException. If this were to run into a RuntimeException, the 
> preemption thread would crash. We should probably fail the RM itself (or 
> transition to standby) when this happens. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6069) CORS support in timeline v2

2017-02-15 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868628#comment-15868628
 ] 

Varun Saxena commented on YARN-6069:


Thanks [~rohithsharma] for the patch.
# I think we can move CrossOriginFilterInitializer to hadoop-yarn-server-common 
to avoid code duplication; both AHS and ATSv2 will then use the same piece of 
code. You are additionally checking whether CORS is enabled and printing a 
message in ATSv2, but the same change should be fine for AHS too. 
# We don't need to override getEnabledConfigKey. The base class method is 
protected, hence accessible, and it also calls getPrefix(), so it will pick up 
the correct prefix. Consequently, we do not need the ENABLED_SUFFIX field in 
CrossOriginFilterInitializer.
# The getPrefix method can simply return PREFIX. There is no need to access it 
via the class name, as the local constant is accessible (see the sketch below).
# PREFIX can be private.
# Checkstyle can be fixed as well.
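
A sketch of the simplified initializer; the base-class method names come from 
{{HttpCrossOriginFilterInitializer}} in hadoop-common, and the prefix value is 
an assumption:
{code}
public class CrossOriginFilterInitializer
    extends org.apache.hadoop.security.HttpCrossOriginFilterInitializer {

  // Assumed prefix value, for illustration only; it can stay private since
  // nothing outside this class needs it.
  private static final String PREFIX =
      "yarn.timeline-service.http-cross-origin.";

  @Override
  protected String getPrefix() {
    // Return the local constant directly; no class-name qualification and no
    // getEnabledConfigKey override needed, because the base class derives the
    // enabled key from this prefix.
    return PREFIX;
  }
}
{code}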

> CORS support in timeline v2
> ---
>
> Key: YARN-6069
> URL: https://issues.apache.org/jira/browse/YARN-6069
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Sreenath Somarajapuram
>Assignee: Rohith Sharma K S
> Attachments: YARN-6069-YARN-5355.0001.patch
>
>
> By default the browser prevents accessing resources from multiple domains. In 
> most cases the UIs would be loaded from a domain different from that of the 
> timeline server. Hence, without CORS support, it would be difficult for the 
> UIs to load data from timeline v2.
> YARN-2277 should provide more info on the implementation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4675) Reorganize TimelineClient and TimelineClientImpl into separate classes for ATSv1.x and ATSv2

2017-02-15 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868601#comment-15868601
 ] 

Varun Saxena commented on YARN-4675:


The javadoc failure looks weird; it wasn't there in the last build report. Most 
of the failures look unrelated anyway.

> Reorganize TimelineClient and TimelineClientImpl into separate classes for 
> ATSv1.x and ATSv2
> 
>
> Key: YARN-4675
> URL: https://issues.apache.org/jira/browse/YARN-4675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: YARN-4675.v2.002.patch, YARN-4675.v2.003.patch, 
> YARN-4675.v2.004.patch, YARN-4675.v2.005.patch, YARN-4675.v2.006.patch, 
> YARN-4675.v2.007.patch, YARN-4675.v2.008.patch, YARN-4675.v2.009.patch, 
> YARN-4675.v2.010.patch, YARN-4675-YARN-2928.v1.001.patch
>
>
> We need to reorganize TimelineClientImpl into TimelineClientV1Impl, 
> TimelineClientV2Impl and, if required, a base class, so that it's clear which 
> part of the code belongs to which version and is thus better maintainable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5798) Handle FSPreemptionThread crashing due to a RuntimeException

2017-02-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868596#comment-15868596
 ] 

Hadoop QA commented on YARN-5798:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 13s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 11s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5798 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852904/YARN-5798.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 46b8cdc8dab2 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0741dd3 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/14967/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/14967/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| javac | 

[jira] [Commented] (YARN-4675) Reorganize TimelineClient and TimelineClientImpl into separate classes for ATSv1.x and ATSv2

2017-02-15 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868593#comment-15868593
 ] 

Li Lu commented on YARN-4675:
-

V10 patch looks good to me. 

> Reorganize TimelineClient and TimelineClientImpl into separate classes for 
> ATSv1.x and ATSv2
> 
>
> Key: YARN-4675
> URL: https://issues.apache.org/jira/browse/YARN-4675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: YARN-4675.v2.002.patch, YARN-4675.v2.003.patch, 
> YARN-4675.v2.004.patch, YARN-4675.v2.005.patch, YARN-4675.v2.006.patch, 
> YARN-4675.v2.007.patch, YARN-4675.v2.008.patch, YARN-4675.v2.009.patch, 
> YARN-4675.v2.010.patch, YARN-4675-YARN-2928.v1.001.patch
>
>
> We need to reorganize TimelineClientImpl into TimelineClientV1Impl, 
> TimelineClientV2Impl and, if required, a base class, so that it's clear which 
> part of the code belongs to which version and is thus better maintainable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4675) Reorganize TimelineClient and TimelineClientImpl into separate classes for ATSv1.x and ATSv2

2017-02-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868571#comment-15868571
 ] 

Hadoop QA commented on YARN-4675:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common 
generated 0 new + 4575 unchanged - 4 fixed = 4575 total (was 4579) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-server-tests in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
39s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m  
8s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 37s{color} 
| 

[jira] [Commented] (YARN-6042) Fairscheduler: Dump scheduler state in log

2017-02-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868562#comment-15868562
 ] 

Hadoop QA commented on YARN-6042:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m  8s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6042 |
| GITHUB PR | https://github.com/apache/hadoop/pull/193 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4cac1b7941d3 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 627da6f |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14966/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14966/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14966/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fairscheduler: Dump scheduler state in log
> --
>
> 

[jira] [Updated] (YARN-5798) Handle FSPreemptionThread crashing due to a RuntimeException

2017-02-15 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5798:
---
Attachment: (was: YARN-5789.002.patch)

> Handle FSPreemptionThread crashing due to a RuntimeException
> 
>
> Key: YARN-5798
> URL: https://issues.apache.org/jira/browse/YARN-5798
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
> Attachments: YARN-5798.001.patch, YARN-5798.002.patch
>
>
> YARN-5605 added an FSPreemptionThread. The run() method catches 
> InterruptedException. If this were to run into a RuntimeException, the 
> preemption thread would crash. We should probably fail the RM itself (or 
> transition to standby) when this happens. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6163) FS Preemption is a trickle for severely starved applications

2017-02-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868558#comment-15868558
 ] 

Hadoop QA commented on YARN-6163:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
30s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 15s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6163 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852881/YARN-6163.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6d9cc9b66b59 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 627da6f |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14965/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14965/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 

[jira] [Updated] (YARN-5798) Handle FSPreemptionThread crashing due to a RuntimeException

2017-02-15 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5798:
---
Target Version/s: 3.0.0-alpha2, 2.9.0  (was: 2.9.0)

> Handle FSPreemptionThread crashing due to a RuntimeException
> 
>
> Key: YARN-5798
> URL: https://issues.apache.org/jira/browse/YARN-5798
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
> Attachments: YARN-5798.001.patch, YARN-5798.002.patch
>
>
> YARN-5605 added an FSPreemptionThread. The run() method catches 
> InterruptedException. If this were to run into a RuntimeException, the 
> preemption thread would crash. We should probably fail the RM itself (or 
> transition to standby) when this happens. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4675) Reorganize TimelineClient and TimelineClientImpl into separate classes for ATSv1.x and ATSv2

2017-02-15 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868554#comment-15868554
 ] 

Sangjin Lee commented on YARN-4675:
---

The latest version (v.10) looks good to me. I'll wait for Jenkins before 
proceeding.

> Reorganize TimelineClient and TimelineClientImpl into separate classes for 
> ATSv1.x and ATSv2
> 
>
> Key: YARN-4675
> URL: https://issues.apache.org/jira/browse/YARN-4675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: YARN-4675.v2.002.patch, YARN-4675.v2.003.patch, 
> YARN-4675.v2.004.patch, YARN-4675.v2.005.patch, YARN-4675.v2.006.patch, 
> YARN-4675.v2.007.patch, YARN-4675.v2.008.patch, YARN-4675.v2.009.patch, 
> YARN-4675.v2.010.patch, YARN-4675-YARN-2928.v1.001.patch
>
>
> We need to reorganize TimelineClientImpl into TimelineClientV1Impl and 
> TimelineClientV2Impl, plus a base class if required, so that it is clear 
> which part of the code belongs to which version and the code is easier to 
> maintain.
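
For illustration, the split could take roughly the following shape (class and 
method names below are assumptions for the sketch, not the contents of the 
patch):

{code}
// Hypothetical sketch of the reorganization; names are illustrative only.
abstract class TimelineClient {
  // Shared concerns (service lifecycle, security, retries) belong here.
  abstract void putEntities(Object... entities);
}

class TimelineV1ClientImpl extends TimelineClient {
  @Override
  void putEntities(Object... entities) {
    // ATSv1.x path: synchronous REST calls to the timeline server.
  }
}

class TimelineV2ClientImpl extends TimelineClient {
  @Override
  void putEntities(Object... entities) {
    // ATSv2 path: asynchronous publishing to the per-app timeline collector.
  }
}
{code}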



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5798) Handle FSPreemptionThread crashing due to a RuntimeException

2017-02-15 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868551#comment-15868551
 ] 

Yufei Gu commented on YARN-5798:


Uploaded patch v2 based on YARN-6061, which also handles the update thread in 
FS.
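
For context, the general pattern in question (installing an 
{{UncaughtExceptionHandler}} so a RuntimeException cannot silently kill a 
critical scheduler thread) looks roughly like the minimal sketch below; 
exiting the JVM here is an assumption, and the RM could instead transition to 
standby:

{code}
// Minimal sketch: fail fast when a critical thread dies unexpectedly.
public class UncaughtHandlerDemo {
  public static void main(String[] args) {
    Thread worker = new Thread(() -> {
      throw new RuntimeException("simulated preemption-thread failure");
    }, "FSPreemptionThread");
    worker.setUncaughtExceptionHandler((t, e) -> {
      // In the RM this would log and exit (or trigger a standby transition)
      // instead of letting the thread die silently.
      System.err.println("Critical thread " + t.getName() + " died: " + e);
      System.exit(1);
    });
    worker.start();
  }
}
{code}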

> Handle FSPreemptionThread crashing due to a RuntimeException
> 
>
> Key: YARN-5798
> URL: https://issues.apache.org/jira/browse/YARN-5798
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
> Attachments: YARN-5798.001.patch, YARN-5798.002.patch
>
>
> YARN-5605 added an FSPreemptionThread. The run() method catches 
> InterruptedException. If this were to run into a RuntimeException, the 
> preemption thread would crash. We should probably fail the RM itself (or 
> transition to standby) when this happens. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5798) Handle FSPreemptionThread crashing due to a RuntimeException

2017-02-15 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5798:
---
Attachment: YARN-5798.002.patch

> Handle FSPreemptionThread crashing due to a RuntimeException
> 
>
> Key: YARN-5798
> URL: https://issues.apache.org/jira/browse/YARN-5798
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
> Attachments: YARN-5798.001.patch, YARN-5798.002.patch
>
>
> YARN-5605 added an FSPreemptionThread. The run() method catches 
> InterruptedException. If this were to run into a RuntimeException, the 
> preemption thread would crash. We should probably fail the RM itself (or 
> transition to standby) when this happens. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5410) Bootstrap Router module

2017-02-15 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-5410:
---
Attachment: YARN-5410-YARN-2915-v6.patch

> Bootstrap Router module
> ---
>
> Key: YARN-5410
> URL: https://issues.apache.org/jira/browse/YARN-5410
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5410-YARN-2915-v1.patch, 
> YARN-5410-YARN-2915-v2.patch, YARN-5410-YARN-2915-v3.patch, 
> YARN-5410-YARN-2915-v4.patch, YARN-5410-YARN-2915-v5.patch, 
> YARN-5410-YARN-2915-v6.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate ResourceManager(s). 
> This JIRA tracks the creation of a new sub-module for the Router.
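
As a purely illustrative sketch of the routing idea (the interface, class, and 
address below are hypothetical, not the contents of the new sub-module):

{code}
// Hypothetical sketch of the routing idea; all names are illustrative.
interface RequestRouter {
  // Decide which ResourceManager should serve a given client request.
  String selectResourceManager(String applicationId);
}

class SingleClusterRouter implements RequestRouter {
  @Override
  public String selectResourceManager(String applicationId) {
    // A real policy would consult federation state; this stub always
    // returns the same RM address.
    return "rm-1.example.com:8032";
  }
}
{code}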



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5798) Handle FSPreemptionThread crashing due to a RuntimeException

2017-02-15 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5798:
---
Attachment: YARN-5789.002.patch

> Handle FSPreemptionThread crashing due to a RuntimeException
> 
>
> Key: YARN-5798
> URL: https://issues.apache.org/jira/browse/YARN-5798
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
> Attachments: YARN-5798.001.patch, YARN-5798.002.patch
>
>
> YARN-5605 added an FSPreemptionThread. The run() method catches 
> InterruptedException. If this were to run into a RuntimeException, the 
> preemption thread would crash. We should probably fail the RM itself (or 
> transition to standby) when this happens. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6195) Export UsedCapacity and AbsoluteUsedCapacity to JMX

2017-02-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868518#comment-15868518
 ] 

Hadoop QA commented on YARN-6195:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 8 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
46s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 47s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6195 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852846/YARN-6195.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 004c679b35af 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0fc6f38 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/14961/artifact/patchprocess/whitespace-tabs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14961/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14961/testReport/ |
| modules | C: 

[jira] [Commented] (YARN-6163) FS Preemption is a trickle for severely starved applications

2017-02-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868508#comment-15868508
 ] 

Hadoop QA commented on YARN-6163:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
27s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 46s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6163 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852881/YARN-6163.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 900dfcf8dfaa 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 627da6f |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14963/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14963/testReport/ |
| modules | C: 

[jira] [Updated] (YARN-6042) Fairscheduler: Dump scheduler state in log

2017-02-15 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6042:
---
Attachment: YARN-6042.006.patch

Thanks [~wilfreds] for the review. Fixed the issue in patch v6.

> Fairscheduler: Dump scheduler state in log
> --
>
> Key: YARN-6042
> URL: https://issues.apache.org/jira/browse/YARN-6042
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6042.001.patch, YARN-6042.002.patch, 
> YARN-6042.003.patch, YARN-6042.004.patch, YARN-6042.005.patch, 
> YARN-6042.006.patch
>
>
> To improve the debugging of scheduler issues, it would help greatly to be 
> able to dump the scheduler state into a log on request. 
> Dumping the scheduler state at a point in time would allow debugging of a 
> scheduler that is not hung (deadlocked) but is still not assigning 
> containers. Currently we do not have a proper overview of what state the 
> scheduler and the queues are in, and we have to make assumptions or guess 
> (a rough sketch of such a dump follows the list below).
> The scheduler and queue state needed would include (not exhaustive):
> - instantaneous and steady fair share (app / queue)
> - AM share and resources
> - weight
> - app demand
> - application run state (runnable/non runnable)
> - last time at fair/min share
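
A minimal illustration of what a greppable state dump could look like (class, 
field, and logger names here are assumptions for the sketch, not the actual 
patch):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative sketch only: one log line per queue keeps the dump greppable.
class SchedulerStateDumper {
  private static final Logger STATE_DUMP_LOG =
      LoggerFactory.getLogger("FairSchedulerStateDump");

  void dumpQueueState(String queueName, long fairShareMb, long demandMb,
      boolean runnable) {
    STATE_DUMP_LOG.debug("queue={} fairShareMB={} demandMB={} runnable={}",
        queueName, fairShareMb, demandMb, runnable);
  }
}
{code}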



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6177) Yarn client should exit with an informative error message if an incompatible Jersey library is used at client

2017-02-15 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868451#comment-15868451
 ] 

Li Lu commented on YARN-6177:
-

Thanks for the hard work [~cheersyang]! Keeping 
{{YarnConfiguration.TIMELINE_SERVICE_CLIENT_BEST_EFFORT}} looks fine to me. 
However, I'm still a little hesitant about swallowing the error when 
{{timelineServiceBestEffort}} is set to true. To me, handling errors (as 
opposed to exceptions) is beyond the scope of our "best effort". I would like 
to understand whether there's anything I'm missing that makes the community 
find this especially appealing.
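
Concretely, the distinction I have in mind is roughly the following (a minimal 
sketch of best-effort handling, not the actual patch):

{code}
// Sketch of the exception-vs-error distinction; not the actual patch.
class BestEffortPublish {
  static void publish(Runnable putEntities, boolean bestEffort) {
    try {
      putEntities.run();
    } catch (Exception e) {
      if (!bestEffort) {
        throw new RuntimeException(e);
      }
      // "Best effort" plausibly covers exceptions: log and continue.
      System.err.println("Timeline publish failed, continuing: " + e);
    }
    // java.lang.Error (e.g. NoSuchMethodError from an incompatible Jersey
    // jar) is deliberately not caught: swallowing it hides real breakage.
  }
}
{code}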

Other than this, the patch LGTM. 

> Yarn client should exit with an informative error message if an incompatible 
> Jersey library is used at client
> -
>
> Key: YARN-6177
> URL: https://issues.apache.org/jira/browse/YARN-6177
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: spark2-job-output-after-besteffort.out, 
> spark2-job-output-after.out, spark2-job-output-before.out, 
> YARN-6177.01.patch, YARN-6177.02.patch, YARN-6177.03.patch, 
> YARN-6177.04.patch, YARN-6177.05.patch
>
>
> Per the discussion in YARN-5271, let's provide an error message suggesting 
> that users disable the timeline service, instead of disabling it for them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6201) Add node label as a query param for nodes REST end point

2017-02-15 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868422#comment-15868422
 ] 

Varun Saxena edited comment on YARN-6201 at 2/15/17 7:24 PM:
-

[~sunilg], is this primarily for the new WebUI?


was (Author: varun_saxena):
[~sunilg], this is for WebUI ?

> Add node label as a query param for nodes REST end point
> 
>
> Key: YARN-6201
> URL: https://issues.apache.org/jira/browse/YARN-6201
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sunil G
>
> For the {{/nodes}} end point, label could be a query param.
> This could help retrieve node information for a given label (with 
> resources etc.).
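
For illustration, a client call against the proposed endpoint might look like 
the following (the {{label}} query param is the proposal here, not an existing 
API; host, port, and label value are placeholders):

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class NodesByLabel {
  public static void main(String[] args) throws Exception {
    // "label" is the proposed query param; host and label are placeholders.
    URL url = new URL("http://rm-host:8088/ws/v1/cluster/nodes?label=gpu");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/json");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line); // JSON describing nodes with that label
      }
    }
  }
}
{code}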



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6201) Add node label as a query param for nodes REST end point

2017-02-15 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868422#comment-15868422
 ] 

Varun Saxena commented on YARN-6201:


[~sunilg], is this for the WebUI?

> Add node label as a query param for nodes REST end point
> 
>
> Key: YARN-6201
> URL: https://issues.apache.org/jira/browse/YARN-6201
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sunil G
>
> For the {{/nodes}} end point, label could be a query param.
> This could help retrieve node information for a given label (with 
> resources etc.).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5579) Resourcemanager should surface failed state store operation prominently

2017-02-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated YARN-5579:
-
Description: 
I found the following in the ResourceManager log while trying to figure out 
why an application got stuck in the NEW_SAVING state.

{code}
2016-08-29 18:14:23,486 INFO  recovery.ZKRMStateStore 
(ZKRMStateStore.java:runWithRetries(1242)) - Maxed out ZK retries. Giving up!
2016-08-29 18:14:23,486 ERROR recovery.RMStateStore 
(RMStateStore.java:transition(205)) - Error storing app: 
application_1470517915158_0001
org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = 
AuthFailed
at org.apache.zookeeper.KeeperException.create(KeeperException.java:123)
at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:935)
at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:915)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$5.run(ZKRMStateStore.java:998)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$5.run(ZKRMStateStore.java:995)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1174)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1207)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doStoreMultiWithRetries(ZKRMStateStore.java:995)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doStoreMultiWithRetries(ZKRMStateStore.java:1009)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.createWithRetries(ZKRMStateStore.java:1042)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.storeApplicationStateInternal(ZKRMStateStore.java:639)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:201)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:183)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.handleStoreEvent(RMStateStore.java:955)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:1036)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:1031)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
at java.lang.Thread.run(Thread.java:745)
2016-08-29 18:14:23,486 ERROR recovery.RMStateStore 
(RMStateStore.java:notifyStoreOperationFailedInternal(987)) - State store 
operation failed
org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = 
AuthFailed
at org.apache.zookeeper.KeeperException.create(KeeperException.java:123)
at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:935)
at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:915)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$5.run(ZKRMStateStore.java:998)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$5.run(ZKRMStateStore.java:995)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1174)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1207)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doStoreMultiWithRetries(ZKRMStateStore.java:995)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doStoreMultiWithRetries(ZKRMStateStore.java:1009)
{code}
The ResourceManager should surface the above error prominently.
Subsequent application submissions would likely encounter the same error.
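
As a rough sketch of what surfacing this prominently could mean (all names and 
the fail-fast action below are assumptions, not an actual RM API):

{code}
// Illustrative only: one way a failed state-store operation could be
// surfaced prominently.
class StateStoreFailureReporter {
  private volatile String lastStoreError; // could back a metric or web UI field

  void onStoreOperationFailed(Exception cause, boolean failFast) {
    lastStoreError = cause.toString();
    // Re-logging at the highest severity makes the failure hard to miss.
    System.err.println("FATAL: RM state store operation failed: " + cause);
    if (failFast) {
      // Better to exit (or transition to standby) than to keep accepting
      // applications that can never be persisted.
      System.exit(1);
    }
  }
}
{code}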

  was:
I found the following in Resourcemanager log when I tried to figure out why 
application got stuck in NEW_SAVING state.
{code}
2016-08-29 18:14:23,486 INFO  recovery.ZKRMStateStore 
(ZKRMStateStore.java:runWithRetries(1242)) - Maxed out ZK retries. Giving up!
2016-08-29 18:14:23,486 ERROR recovery.RMStateStore 
(RMStateStore.java:transition(205)) - Error storing app: 

[jira] [Commented] (YARN-6186) Handle InvalidResourceRequestException in native services AM onError

2017-02-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868393#comment-15868393
 ] 

Hadoop QA commented on YARN-6186:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
31s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-yarn-slider-core in the patch passed. {color} 
|
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6186 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852879/YARN-6186-yarn-native-services.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 283d22aea2ba 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 752b548 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14964/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14964/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Handle InvalidResourceRequestException in native services AM onError
> 
>
> Key: YARN-6186
> URL: 
