[jira] [Commented] (YARN-5814) Add druid as storage backend in YARN Timeline Service

2016-11-17 Thread Bingxue Qiu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15676080#comment-15676080
 ] 

Bingxue Qiu commented on YARN-5814:
---

Thanks [~sjlee0] for your suggestions!

On the Druid reader side, queries go through Drill, so conditions such as 
filter lists can be supported via self-joins and left joins. For example:

{code}
select F.* FROM druid.timeline_service_app F, druid.timeline_service_app S 
WHERE F.appId = S.appId AND F.startTime > 1479440083000 AND S.finishTime > 0 
AND F.appId = 'application_1476875405903_49989';
{code}

I am also very grateful that you reminded me of the new issues. Druid supports 
ordering by column, so maybe adding a column named "idPrefix" makes sense?
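
For example, with such a column in place, a sorted query through Drill could 
look like the sketch below (the "idPrefix" column is hypothetical for now):

{code}
SELECT * FROM druid.timeline_service_app
ORDER BY idPrefix, appId
LIMIT 100;
{code}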

>  Add druid as storage backend in YARN Timeline Service
> --
>
> Key: YARN-5814
> URL: https://issues.apache.org/jira/browse/YARN-5814
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: ATSv2
>Affects Versions: 3.0.0-alpha2
>Reporter: Bingxue Qiu
> Attachments: Add-Druid-in-YARN-Timeline-Service.pdf
>
>
> h3. Introduction
> I propose to add Druid as a storage backend in YARN Timeline Service.
> We run more than 6000 applications and generate 450 million metrics daily in 
> Alibaba clusters with thousands of nodes. We need to collect and store 
> meta/events/metrics data, analyze utilization reports across various 
> dimensions online, and display the trends of allocated/used resources for 
> the cluster by joining and aggregating data. This helps us manage and 
> optimize the cluster by tracking resource utilization.
> To achieve our goal, we have switched to Druid as the storage backend 
> instead of HBase, and have achieved sub-second OLAP performance in our 
> production environment for a few months. 
> h3. Analysis
> Currently, YARN Timeline Service only supports aggregating metrics at a) the 
> flow level, via FlowRunCoprocessor, and b) the application level, via 
> AppLevelTimelineCollector. Offline (time-based periodic) aggregation for 
> flows/users/queues for reporting and analysis is planned but not yet 
> implemented. YARN Timeline Service uses Apache HBase as the primary storage 
> backend, and as we all know, HBase is not a good fit for OLAP.
>  For arbitrary exploration of data, such as online analysis of utilization 
> reports across various dimensions (Queue, Flow, Users, Application, CPU, 
> Memory) by joining and aggregating data, Druid's custom column format enables 
> ad-hoc queries without pre-computation. The format also enables fast scans on 
> columns, which is important for good aggregation performance.
> To achieve our goal of analyzing utilization reports across various 
> dimensions online, displaying the variation trends of allocated/used 
> resources for the cluster, and allowing arbitrary exploration of data, we 
> propose to add Druid storage and implement a DruidWriter/DruidReader in YARN 
> Timeline Service.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5866) [YARN-3368] Fix few issues reported by jshint in new YARN UI

2016-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15676019#comment-15676019
 ] 

Hadoop QA commented on YARN-5866:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  0m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5866 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839518/YARN-5866.002.patch |
| Optional Tests |  asflicense  |
| uname | Linux 4d810b9324a4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c0b1a44 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13971/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Fix few issues reported by jshint in new YARN UI
> 
>
> Key: YARN-5866
> URL: https://issues.apache.org/jira/browse/YARN-5866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-5866.001.patch, YARN-5866.002.patch
>
>
> There are a few minor issues reported by jshint (a JavaScript lint tool).
> This JIRA is to track and fix those issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5866) [YARN-3368] Fix few issues reported by jshint in new YARN UI

2016-11-17 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-5866:
---
Attachment: YARN-5866.002.patch

> [YARN-3368] Fix few issues reported by jshint in new YARN UI
> 
>
> Key: YARN-5866
> URL: https://issues.apache.org/jira/browse/YARN-5866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-5866.001.patch, YARN-5866.002.patch
>
>
> There are a few minor issues reported by jshint (a JavaScript lint tool).
> This JIRA is to track and fix those issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5911) DrainDispatcher does not drain all events on stop even if setDrainEventsOnStop is true

2016-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675964#comment-15675964
 ] 

Hadoop QA commented on YARN-5911:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
26s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5911 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839509/YARN-5911.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux df8c3dfab8f6 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c0b1a44 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13970/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13970/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DrainDispatcher does not drain all events on stop even if 
> setDrainEventsOnStop is true
> --
>
> Key: YARN-5911
> URL: https://issues.apache.org/jira/browse/YARN-5911
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-5911.01.patch
>
>
> DrainDispatcher#serviceStop sets the stopped flag before draining the 
> event queue.
> This means that the thread terminates as soon as it encounters the stopped 
> flag set to true and does not continue to process leftover events in the 
> queue, something it should do if setDrainEventsOnStop is set.

[jira] [Commented] (YARN-5911) DrainDispatcher does not drain all events on stop even if setDrainEventsOnStop is true

2016-11-17 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675927#comment-15675927
 ] 

Varun Saxena commented on YARN-5911:


Added the test case in TestAsyncDispatcher itself instead of adding a new test 
class.

> DrainDispatcher does not drain all events on stop even if 
> setDrainEventsOnStop is true
> --
>
> Key: YARN-5911
> URL: https://issues.apache.org/jira/browse/YARN-5911
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-5911.01.patch
>
>
> DrainDispatcher#serviceStop sets the stopped flag before draining the 
> event queue.
> This means that the thread terminates as soon as it encounters the stopped 
> flag set to true and does not continue to process leftover events in the 
> queue, something it should do if setDrainEventsOnStop is set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5911) DrainDispatcher does not drain all events on stop even if setDrainEventsOnStop is true

2016-11-17 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5911:
---
Description: 
DrainDispatcher#serviceStop sets the stopped flag before draining the 
event queue.
This means that the thread terminates as soon as it encounters the stopped 
flag set to true and does not continue to process leftover events in the 
queue, something it should do if setDrainEventsOnStop is set.
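
A minimal sketch of the intended ordering, with illustrative field names (this 
is not the actual DrainDispatcher code):

{code}
protected void serviceStop() throws Exception {
  if (drainEventsOnStop) {
    // Drain first: let the handler thread empty the queue while the
    // stopped flag is still false.
    while (!eventQueue.isEmpty()) {
      Thread.sleep(100);
    }
  }
  stopped = true;  // only now may the dispatcher thread exit
  eventHandlingThread.interrupt();
  eventHandlingThread.join();
}
{code}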

> DrainDispatcher does not drain all events on stop even if 
> setDrainEventsOnStop is true
> --
>
> Key: YARN-5911
> URL: https://issues.apache.org/jira/browse/YARN-5911
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-5911.01.patch
>
>
> DrainDispatcher#serviceStop sets the stopped flag before draining the 
> event queue.
> This means that the thread terminates as soon as it encounters the stopped 
> flag set to true and does not continue to process leftover events in the 
> queue, something it should do if setDrainEventsOnStop is set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5911) DrainDispatcher does not drain all events on stop even if setDrainEventsOnStop is true

2016-11-17 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5911:
---
Attachment: YARN-5911.01.patch

> DrainDispatcher does not drain all events on stop even if 
> setDrainEventsOnStop is true
> --
>
> Key: YARN-5911
> URL: https://issues.apache.org/jira/browse/YARN-5911
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-5911.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5911) DrainDispatcher does not drain all events on stop even if setDrainEventsOnStop is true

2016-11-17 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5911:
---
Summary: DrainDispatcher does not drain all events on stop even if 
setDrainEventsOnStop is true  (was: DrainDispatcher does not drain on all 
events on stop even if setDrainEventsOnStop is true)

> DrainDispatcher does not drain all events on stop even if 
> setDrainEventsOnStop is true
> --
>
> Key: YARN-5911
> URL: https://issues.apache.org/jira/browse/YARN-5911
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5911) DrainDispatcher does not drain on all events on stop even if setDrainEventsOnStop is true

2016-11-17 Thread Varun Saxena (JIRA)
Varun Saxena created YARN-5911:
--

 Summary: DrainDispatcher does not drain on all events on stop even 
if setDrainEventsOnStop is true
 Key: YARN-5911
 URL: https://issues.apache.org/jira/browse/YARN-5911
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Varun Saxena
Assignee: Varun Saxena






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5888) [YARN-3368] Add test cases in new YARN UI

2016-11-17 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-5888:
---
Summary: [YARN-3368] Add test cases in new YARN UI  (was: Add test cases in 
new YARN UI)

> [YARN-3368] Add test cases in new YARN UI
> -
>
> Key: YARN-5888
> URL: https://issues.apache.org/jira/browse/YARN-5888
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
>
> Add missing test cases in the new YARN UI.
> Fix test case errors in the new YARN UI. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5866) [YARN-3368] Fix few issues reported by jshint in new YARN UI

2016-11-17 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-5866:
---
Summary: [YARN-3368] Fix few issues reported by jshint in new YARN UI  
(was: Fix few issues reported by jshint in new YARN UI)

> [YARN-3368] Fix few issues reported by jshint in new YARN UI
> 
>
> Key: YARN-5866
> URL: https://issues.apache.org/jira/browse/YARN-5866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-5866.001.patch
>
>
> There are a few minor issues reported by jshint (a JavaScript lint tool).
> This JIRA is to track and fix those issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5792) adopt the id prefix for YARN, MR, and DS entities

2016-11-17 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5792:
---
Attachment: YARN-5792-YARN-5355.06.patch

> adopt the id prefix for YARN, MR, and DS entities
> -
>
> Key: YARN-5792
> URL: https://issues.apache.org/jira/browse/YARN-5792
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-5355
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
> Attachments: YARN-5792-YARN-5355.01.patch, 
> YARN-5792-YARN-5355.02.patch, YARN-5792-YARN-5355.03.patch, 
> YARN-5792-YARN-5355.04.patch, YARN-5792-YARN-5355.05.patch, 
> YARN-5792-YARN-5355.06.patch
>
>
> We introduced the entity id prefix to support flexible entity sorting 
> (YARN-5715). We should adopt the id prefix for YARN entities, MR entities, 
> and DS entities to take advantage of the id prefix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5271) ATS client doesn't work with Jersey 2 on the classpath

2016-11-17 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675854#comment-15675854
 ] 

Weiwei Yang commented on YARN-5271:
---

Great, thanks [~jojochuang] :)

> ATS client doesn't work with Jersey 2 on the classpath
> --
>
> Key: YARN-5271
> URL: https://issues.apache.org/jira/browse/YARN-5271
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, timelineserver
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Weiwei Yang
>  Labels: oct16-medium
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: YARN-5271-branch-2.8.01.patch, YARN-5271.01.patch, 
> YARN-5271.02.patch, YARN-5271.branch-2.01.patch
>
>
> see SPARK-15343: once Jersey 2 is on the CP, you can't instantiate a 
> timeline client, *even if the server is an ATS1.5 server and publishing is 
> via the FS*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5271) ATS client doesn't work with Jersey 2 on the classpath

2016-11-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675749#comment-15675749
 ] 

Hudson commented on YARN-5271:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10860 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10860/])
YARN-5271. ATS client doesn't work with Jersey 2 on the classpath.  (weichiu: 
rev 09520cb439f8b002e3f2f3d8f5080ffc34f4bd5c)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestYarnClient.java


> ATS client doesn't work with Jersey 2 on the classpath
> --
>
> Key: YARN-5271
> URL: https://issues.apache.org/jira/browse/YARN-5271
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, timelineserver
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Weiwei Yang
>  Labels: oct16-medium
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: YARN-5271-branch-2.8.01.patch, YARN-5271.01.patch, 
> YARN-5271.02.patch, YARN-5271.branch-2.01.patch
>
>
> see SPARK-15343: once Jersey 2 is on the CP, you can't instantiate a 
> timeline client, *even if the server is an ATS1.5 server and publishing is 
> via the FS*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5271) ATS client doesn't work with Jersey 2 on the classpath

2016-11-17 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated YARN-5271:
--
Release Note: 
A workaround to avoid a dependency conflict with Spark 2, until a full 
classpath isolation solution is implemented.
Skip instantiating a Timeline Service client if encountering 
NoClassDefFoundError.

  was:Skip instantiating a Timeline Service client if encountering 
NoClassDefFoundError.
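
A hedged sketch of what the workaround looks like; the method and field names 
here are illustrative, not the exact YarnClientImpl code:

{code}
try {
  timelineClient = createTimelineClient();
  timelineServiceEnabled = true;
} catch (NoClassDefFoundError error) {
  // With Jersey 2 on the classpath, the Jersey 1 classes the timeline
  // client needs are missing; degrade gracefully instead of failing.
  LOG.warn("Timeline service client could not be instantiated, "
      + "disabling timeline publishing", error);
  timelineServiceEnabled = false;
}
{code}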


> ATS client doesn't work with Jersey 2 on the classpath
> --
>
> Key: YARN-5271
> URL: https://issues.apache.org/jira/browse/YARN-5271
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, timelineserver
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Weiwei Yang
>  Labels: oct16-medium
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: YARN-5271-branch-2.8.01.patch, YARN-5271.01.patch, 
> YARN-5271.02.patch, YARN-5271.branch-2.01.patch
>
>
> see SPARK-15343: once Jersey 2 is on the CP, you can't instantiate a 
> timeline client, *even if the server is an ATS1.5 server and publishing is 
> via the FS*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-11-17 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675728#comment-15675728
 ] 

Varun Saxena commented on YARN-5739:


Well, I found a link to the HBase book which recommends using both 
KeyOnlyFilter and FirstKeyOnlyFilter. Not sure why, but anyway let's use both, 
as the book says. Probably the internal implementation of HBase warrants using 
both for optimal performance. Quoting from the book:

{code}
101.7. Optimal Loading of Row Keys
When performing a table scan where only the row keys are needed (no families, 
qualifiers, values or timestamps), add a FilterList with a MUST_PASS_ALL 
operator to the scanner using setFilter. The filter list should include both a 
FirstKeyOnlyFilter and a KeyOnlyFilter. Using this filter combination will 
result in a worst case scenario of a RegionServer reading a single value from 
disk and minimal network traffic to the client for a single row.
{code}
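
In HBase client code, the recommended combination looks roughly like the 
sketch below (the class and method are illustrative; only the filter 
combination comes from the book):

{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
import org.apache.hadoop.hbase.filter.KeyOnlyFilter;

public class KeyOnlyScanExample {
  public static Scan keyOnlyScan() {
    FilterList filters = new FilterList(FilterList.Operator.MUST_PASS_ALL);
    filters.addFilter(new FirstKeyOnlyFilter()); // at most one cell per row
    filters.addFilter(new KeyOnlyFilter());      // strip value bytes from that cell
    Scan scan = new Scan();
    scan.setFilter(filters);
    return scan;
  }
}
{code}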

> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5739-YARN-5355.001.patch, 
> YARN-5739-YARN-5355.002.patch
>
>
> Right now we only show a part of the available timeline entity data in the 
> new YARN UI. However, some data (especially library-specific data) cannot be 
> queried through the web UI. It would be appealing for the UI to provide an 
> "entity browser" for each YARN application. Actually, simply dumping out the 
> available timeline entities (with proper pagination, of course) 
> would be pretty helpful for UI users. 
> On the timeline side, we're not far from this goal. Right now I believe the 
> only thing missing is to list all available entity types within one 
> application. The challenge here is that we're not storing this data for each 
> application, but given that this kind of call is relatively rare (compared 
> to writes and updates), we can perform some scanning at read time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-11-17 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675693#comment-15675693
 ] 

Varun Saxena edited comment on YARN-5739 at 11/18/16 4:29 AM:
--

[~vrushalic], going by the description of each filter, FirstKeyOnlyFilter will 
return the first KV from each row and KeyOnlyFilter only the key. So shouldn't 
KeyOnlyFilter be enough? I had removed FirstKeyOnlyFilter and then ran the 
tests which Li had written, and those passed.

bq. This filter is used to limit the number of results to a specific page size. 
So it will terminate the scanning once the number of filter-passed rows is > 
the given page size on that particular Region Server.
That should be fine, I guess. We apply the limit (coming as a query param on 
the reader side) using this filter elsewhere on the reader side as well, 
because we only need one row. Even if we get one row per Region Server, the 
result will be a superset, and once the result set is created it will be 
sorted to ensure we get keys in order, and we will fetch only the first one.
However, setCaching should be fine in our use case. But I am not sure why we 
are not using it to apply the limit instead of using PageFilter. Do you know 
the pros and cons of one over the other?


was (Author: varun_saxena):
[~vrushalic], going by the description of each filter, FirstKeyOnlyFilter will 
return the first KV from each row and KeyOnlyFilter only the key. So shouldn't 
KeyOnlyFilter be enough?

bq. This filter is used to limit the number of results to a specific page size. 
So it will terminate the scanning once the number of filter-passed rows is > 
the given page size on that particular Region Server.
That should be fine, I guess. We apply the limit (coming as a query param on 
the reader side) using this filter elsewhere on the reader side as well, 
because we only need one row. Even if we get one row per Region Server, the 
result will be a superset, and once the result set is created it will be 
sorted to ensure we get keys in order, and we will fetch only the first one.
However, setCaching should be fine in our use case. But I am not sure why we 
are not using it to apply the limit instead of using PageFilter. Do you know 
the pros and cons of one over the other?

> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5739-YARN-5355.001.patch, 
> YARN-5739-YARN-5355.002.patch
>
>
> Right now we only show a part of the available timeline entity data in the 
> new YARN UI. However, some data (especially library-specific data) cannot be 
> queried through the web UI. It would be appealing for the UI to provide an 
> "entity browser" for each YARN application. Actually, simply dumping out the 
> available timeline entities (with proper pagination, of course) 
> would be pretty helpful for UI users. 
> On the timeline side, we're not far from this goal. Right now I believe the 
> only thing missing is to list all available entity types within one 
> application. The challenge here is that we're not storing this data for each 
> application, but given that this kind of call is relatively rare (compared 
> to writes and updates), we can perform some scanning at read time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5271) ATS client doesn't work with Jersey 2 on the classpath

2016-11-17 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated YARN-5271:
--
Release Note: Skip instantiating a Timeline Service client if encountering 
NoClassDefFoundError.

> ATS client doesn't work with Jersey 2 on the classpath
> --
>
> Key: YARN-5271
> URL: https://issues.apache.org/jira/browse/YARN-5271
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, timelineserver
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Weiwei Yang
>  Labels: oct16-medium
> Attachments: YARN-5271-branch-2.8.01.patch, YARN-5271.01.patch, 
> YARN-5271.02.patch, YARN-5271.branch-2.01.patch
>
>
> see SPARK-15343: once Jersey 2 is on the CP, you can't instantiate a 
> timeline client, *even if the server is an ATS1.5 server and publishing is 
> via the FS*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-11-17 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675693#comment-15675693
 ] 

Varun Saxena commented on YARN-5739:


[~vrushalic], going by the description of each filter, FirstKeyOnlyFilter will 
return the first KV from each row and KeyOnlyFilter only the key. So shouldn't 
KeyOnlyFilter be enough?

bq. This filter is used to limit the number of results to a specific page size. 
So it will terminate the scanning once the number of filter-passed rows is > 
the given page size on that particular Region Server.
That should be fine, I guess. We apply the limit (coming as a query param on 
the reader side) using this filter elsewhere on the reader side as well, 
because we only need one row. Even if we get one row per Region Server, the 
result will be a superset, and once the result set is created it will be 
sorted to ensure we get keys in order, and we will fetch only the first one.
However, setCaching should be fine in our use case. But I am not sure why we 
are not using it to apply the limit instead of using PageFilter. Do you know 
the pros and cons of one over the other?
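
A hedged sketch contrasting the two options (the page size of 1 is 
illustrative, since we only need one row here):

{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PageFilter;

// Option 1: PageFilter stops the scan on each RegionServer after N
// rows, so the client may still see up to N rows per region in total.
Scan pageScan = new Scan();
pageScan.setFilter(new PageFilter(1));

// Option 2: setCaching only sizes the RPC batch; it does not bound the
// result set, so the caller must stop iterating once it has enough rows.
Scan cachingScan = new Scan();
cachingScan.setCaching(1);
{code}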

> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5739-YARN-5355.001.patch, 
> YARN-5739-YARN-5355.002.patch
>
>
> Right now we only show a part of the available timeline entity data in the 
> new YARN UI. However, some data (especially library-specific data) cannot be 
> queried through the web UI. It would be appealing for the UI to provide an 
> "entity browser" for each YARN application. Actually, simply dumping out the 
> available timeline entities (with proper pagination, of course) 
> would be pretty helpful for UI users. 
> On the timeline side, we're not far from this goal. Right now I believe the 
> only thing missing is to list all available entity types within one 
> application. The challenge here is that we're not storing this data for each 
> application, but given that this kind of call is relatively rare (compared 
> to writes and updates), we can perform some scanning at read time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5646) Documentation for scheduling of OPPORTUNISTIC containers

2016-11-17 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675677#comment-15675677
 ] 

Konstantinos Karanasos commented on YARN-5646:
--

Please let's wait for a few days before committing this.

> Documentation for scheduling of OPPORTUNISTIC containers
> 
>
> Key: YARN-5646
> URL: https://issues.apache.org/jira/browse/YARN-5646
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>Priority: Blocker
> Attachments: YARN-5646.001.patch
>
>
> This is for adding documentation regarding the scheduling of OPPORTUNISTIC 
> containers.
> It includes both the centralized (YARN-5220) and the distributed (YARN-2877) 
> scheduling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5646) Documentation for scheduling of OPPORTUNISTIC containers

2016-11-17 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5646:
-
Attachment: YARN-5646.001.patch

Attaching documentation.

> Documentation for scheduling of OPPORTUNISTIC containers
> 
>
> Key: YARN-5646
> URL: https://issues.apache.org/jira/browse/YARN-5646
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>Priority: Blocker
> Attachments: YARN-5646.001.patch
>
>
> This is for adding documentation regarding the scheduling of OPPORTUNISTIC 
> containers.
> It includes both the centralized (YARN-5220) and the distributed (YARN-2877) 
> scheduling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5667) Move HBase backend code in ATS v2 into its separate module

2016-11-17 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675617#comment-15675617
 ] 

Sangjin Lee commented on YARN-5667:
---

Originally this was targeted for our feature branch (YARN-5355) which isn't 
merging to 3.0.0-alpha2.

We can pull this in for 3.0.0-alpha2 if needed. The implication then is that 
we need to prepare two separate patches: one for trunk (for 3.0.0-alpha2) and 
another for the YARN-5355 branch. I expect the two to be somewhat different 
from each other. [~haibochen], thoughts? Can you look into it?

> Move HBase backend code in ATS v2  into its separate module
> ---
>
> Key: YARN-5667
> URL: https://issues.apache.org/jira/browse/YARN-5667
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
> Attachments: New module structure.png, part1.yarn5667.prelim.patch, 
> part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, 
> part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, 
> pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, 
> pt4.yarn5667.001.patch, pt5.yarn5667.001.patch, pt6.yarn5667.001.patch, 
> pt9.yarn5667.001.patch, yarn5667-001.tar.gz
>
>
> The HBase backend code currently lives along with the core ATS v2 code in 
> hadoop-yarn-server-timelineservice module. Because Resource Manager depends 
> on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM 
> module on HBase modules is introduced (HBase backend is pluggable, so we do 
> not need to directly pull in HBase jars). 
> In our internal effort to try ATS v2 with HBase 2.0, which depends on Hadoop 
> 3, we encountered a circular dependency during our builds between HBase 2.0 
> and Hadoop 3 artifacts:
> {code}
> [hadoop-mapreduce-client-common, hadoop-yarn-client, 
> hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, 
> hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, 
> hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common]
> {code}
> This jira proposes we move all HBase-backend-related code from 
> hadoop-yarn-server-timelineservice into its own module (possible name is 
> yarn-server-timelineservice-storage) so that core RM modules do not depend on 
> HBase modules any more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5271) ATS client doesn't work with Jersey 2 on the classpath

2016-11-17 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675616#comment-15675616
 ] 

Weiwei Yang commented on YARN-5271:
---

Thank you!

> ATS client doesn't work with Jersey 2 on the classpath
> --
>
> Key: YARN-5271
> URL: https://issues.apache.org/jira/browse/YARN-5271
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, timelineserver
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Weiwei Yang
>  Labels: oct16-medium
> Attachments: YARN-5271-branch-2.8.01.patch, YARN-5271.01.patch, 
> YARN-5271.02.patch, YARN-5271.branch-2.01.patch
>
>
> see SPARK-15343 : once Jersey 2 is on the CP, you can't instantiate a 
> timeline client, *even if the server is an ATS1.5 server and publishing is 
> via the FS*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-4206) Add life time value in Application report and CLI

2016-11-17 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675011#comment-15675011
 ] 

Jian He edited comment on YARN-4206 at 11/18/16 3:07 AM:
-

Looks good overall, thanks Rohith. A few comments:
- What is this code for? Can we remove it?
{code}
  if (this.applicationTimeouts.isEmpty()) {

  } else {

  }
{code}
- Could you add comments for the API in ApplicationTimeout?
- updateTimeout: maybe call it updateLifeTimeout?


was (Author: jianhe):
Looks good overall, thanks Rohith. A few comments:
- What is this code for? Can we remove it?
{code}
  if (this.applicationTimeouts.isEmpty()) {

  } else {

  }
{code}
- Could you add comments for the API in ApplicationTimeout?

> Add life time value in Application report and CLI
> -
>
> Key: YARN-4206
> URL: https://issues.apache.org/jira/browse/YARN-4206
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Attachments: YARN-4506.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5271) ATS client doesn't work with Jersey 2 on the classpath

2016-11-17 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675562#comment-15675562
 ] 

Wei-Chiu Chuang commented on YARN-5271:
---

Thanks for reminder. +1 and will commit soon.

> ATS client doesn't work with Jersey 2 on the classpath
> --
>
> Key: YARN-5271
> URL: https://issues.apache.org/jira/browse/YARN-5271
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, timelineserver
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Weiwei Yang
>  Labels: oct16-medium
> Attachments: YARN-5271-branch-2.8.01.patch, YARN-5271.01.patch, 
> YARN-5271.02.patch, YARN-5271.branch-2.01.patch
>
>
> see SPARK-15343: once Jersey 2 is on the CP, you can't instantiate a 
> timeline client, *even if the server is an ATS1.5 server and publishing is 
> via the FS*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5271) ATS client doesn't work with Jersey 2 on the classpath

2016-11-17 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1567#comment-1567
 ] 

Weiwei Yang commented on YARN-5271:
---

Hello [~jojochuang]

Is there anything else you want me to work on for this patch? Please let me 
know.

> ATS client doesn't work with Jersey 2 on the classpath
> --
>
> Key: YARN-5271
> URL: https://issues.apache.org/jira/browse/YARN-5271
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, timelineserver
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Weiwei Yang
>  Labels: oct16-medium
> Attachments: YARN-5271-branch-2.8.01.patch, YARN-5271.01.patch, 
> YARN-5271.02.patch, YARN-5271.branch-2.01.patch
>
>
> see SPARK-15343: once Jersey 2 is on the CP, you can't instantiate a 
> timeline client, *even if the server is an ATS1.5 server and publishing is 
> via the FS*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5600) Add a parameter to ContainerLaunchContext to emulate yarn.nodemanager.delete.debug-delay-sec on a per-application basis

2016-11-17 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675547#comment-15675547
 ] 

Miklos Szegedi commented on YARN-5600:
--

Thank you, [~vvasudev] for the comment!
We will get 600 in all three cases above. The current rule is 
Math.max(CONF, Math.min(CUSTOM, MAX_CONF)), where CONF is 
yarn.nodemanager.delete.debug-delay-sec, CUSTOM is DEBUG_DELETE_DELAY and 
MAX_CONF is delete.max-per-application-debug-delay-sec.
1) We choose 600 over 200. This makes sense, since the administrator expressed 
that they want to keep any file longer than what the applications specified.
2) We choose 600 over 300. We did not let the client choose 400 because of the 
maximum limit, but the maximum does not apply to the default, so the 
administrator does not need to change two values for one configuration option.
3) We choose 600 over 300. A custom application request came in, so by using 
the maximum we fulfill both the administrator's and the client's requests.
Thank you!
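
A minimal sketch of the rule, with hypothetical values chosen to match the 
three cases above (the real constants live in the patch):

{code}
public class DebugDelayRule {
  // CONF = yarn.nodemanager.delete.debug-delay-sec
  // CUSTOM = the per-application DEBUG_DELETE_DELAY request
  // MAX_CONF = delete.max-per-application-debug-delay-sec
  static long effectiveDelay(long conf, long custom, long maxConf) {
    return Math.max(conf, Math.min(custom, maxConf));
  }

  public static void main(String[] args) {
    System.out.println(effectiveDelay(600, 200, 300)); // case 1 -> 600
    System.out.println(effectiveDelay(600, 400, 300)); // case 2 -> 600
    System.out.println(effectiveDelay(300, 600, 600)); // case 3 -> 600
  }
}
{code}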


> Add a parameter to ContainerLaunchContext to emulate 
> yarn.nodemanager.delete.debug-delay-sec on a per-application basis
> ---
>
> Key: YARN-5600
> URL: https://issues.apache.org/jira/browse/YARN-5600
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Miklos Szegedi
>  Labels: oct16-medium
> Attachments: YARN-5600.000.patch, YARN-5600.001.patch, 
> YARN-5600.002.patch, YARN-5600.003.patch, YARN-5600.004.patch, 
> YARN-5600.005.patch, YARN-5600.006.patch, YARN-5600.007.patch, 
> YARN-5600.008.patch, YARN-5600.009.patch, YARN-5600.010.patch, 
> YARN-5600.011.patch, YARN-5600.012.patch, YARN-5600.013.patch, 
> YARN-5600.014.patch
>
>
> To make debugging application launch failures simpler, I'd like to add a 
> parameter to the CLC to allow an application owner to request delayed 
> deletion of the application's launch artifacts.
> This JIRA solves largely the same problem as YARN-5599, but for cases where 
> ATS is not in use, e.g. branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5904) Reduce the number of default server threads for AMRMProxyService

2016-11-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675525#comment-15675525
 ] 

Hudson commented on YARN-5904:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10859 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10859/])
YARN-5904. Reduce the number of default server threads for (subru: rev 
140b9939da71ec51c178162501740a429b344cac)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java


> Reduce the number of default server threads for AMRMProxyService
> 
>
> Key: YARN-5904
> URL: https://issues.apache.org/jira/browse/YARN-5904
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5904-v1.patch
>
>
> The default value of the number of server threads for AMRMProxy uses the 
> standard default, viz. 25. This is far too many, as the maximum number we 
> need is the number of concurrently active AMs on the node. So this JIRA 
> proposes to reduce the default to 3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5676) Add a HashBasedRouterPolicy, that routes jobs based on queue name hash.

2016-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675458#comment-15675458
 ] 

Hadoop QA commented on YARN-5676:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
32s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m  
6s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 6 new + 206 unchanged - 0 fixed = 212 total (was 206) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
7s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5676 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839477/YARN-5676-YARN-2915.02.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2618347c026f 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 4c6ba54 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13968/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13968/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 

[jira] [Commented] (YARN-5890) FairScheduler should log information about AM-resource-usage and max-AM-share for queues

2016-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675436#comment-15675436
 ] 

Hadoop QA commented on YARN-5890:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 6 new + 241 unchanged - 0 fixed = 247 total (was 241) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 934 unchanged - 1 fixed = 934 total (was 935) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 43m 
16s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5890 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839475/YARN-5890.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8b5af6e550fc 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b4f1971 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13967/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13967/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13967/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FairScheduler should log information about AM-resource-usage and max-AM-share 
> for queues

[jira] [Commented] (YARN-5904) Reduce the number of default server threads for AMRMProxyService

2016-11-17 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675420#comment-15675420
 ] 

Arun Suresh commented on YARN-5904:
---

+1, LGTM

> Reduce the number of default server threads for AMRMProxyService
> 
>
> Key: YARN-5904
> URL: https://issues.apache.org/jira/browse/YARN-5904
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>Priority: Minor
> Attachments: YARN-5904-v1.patch
>
>
> The default value of the number of server threads for AMRMProxy uses the 
> standard default, viz. 25. This is far too many, as the maximum number we need 
> is the number of concurrently active AMs on the node. So this JIRA proposes to 
> reduce the default to 3.
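
For illustration, a minimal sketch of applying the lower thread count, assuming 
the key {{yarn.nodemanager.amrmproxy.client.thread-count}} (an assumption; 
verify against YarnConfiguration for your release). Operators would normally 
set this in yarn-site.xml rather than in code:

{code}
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class AmrmProxyThreadTuning {
  public static void main(String[] args) {
    YarnConfiguration conf = new YarnConfiguration();
    // Assumed key name; the fallback of 25 mirrors the current default.
    conf.setInt("yarn.nodemanager.amrmproxy.client.thread-count", 3);
    System.out.println("AMRMProxy threads: "
        + conf.getInt("yarn.nodemanager.amrmproxy.client.thread-count", 25));
  }
}
{code}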



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5898) Container can not stop, because the call stopContainer NMClient method appears DIGEST-MD5 exception, onGetContainerStatusError NMClientAsync method is also the same

2016-11-17 Thread gaoyanfu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675413#comment-15675413
 ] 

gaoyanfu commented on YARN-5898:


The AppMaster did not restart. Usually, after the AppMaster has been running 
for some time, some containers hit this exception.
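
For context, a minimal sketch of the NMClientAsync callback surface where this 
failure shows up; the logging bodies are illustrative only, not from any patch:

{code}
import java.nio.ByteBuffer;
import java.util.Map;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerStatus;
import org.apache.hadoop.yarn.client.api.async.NMClientAsync;

public class LoggingNMCallbackHandler implements NMClientAsync.CallbackHandler {
  private static final Log LOG =
      LogFactory.getLog(LoggingNMCallbackHandler.class);

  @Override
  public void onContainerStarted(ContainerId id, Map<String, ByteBuffer> resp) {
    LOG.info("Started " + id);
  }

  @Override
  public void onContainerStatusReceived(ContainerId id, ContainerStatus status) {
    LOG.info("Status of " + id + ": " + status.getState());
  }

  @Override
  public void onContainerStopped(ContainerId id) {
    LOG.info("Stopped " + id);
  }

  @Override
  public void onStartContainerError(ContainerId id, Throwable t) {
    LOG.error("Failed to start " + id, t);
  }

  // This is the callback that fires with the DIGEST-MD5 SaslException above.
  @Override
  public void onGetContainerStatusError(ContainerId id, Throwable t) {
    LOG.error("Failed to get status of " + id, t);
  }

  @Override
  public void onStopContainerError(ContainerId id, Throwable t) {
    LOG.error("Failed to stop " + id, t);
  }
}
{code}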

> Container can not stop, because the call stopContainer NMClient method 
> appears DIGEST-MD5 exception, onGetContainerStatusError NMClientAsync method 
> is also the same
> 
>
> Key: YARN-5898
> URL: https://issues.apache.org/jira/browse/YARN-5898
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 2.6.0
> Environment: cdh5.5,java 7
>Reporter: gaoyanfu
>  Labels: DIGEST-MD5, getContainerStatuses, 
> onGetContainerStatusError, stopContainer
> Fix For: 2.6.0
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Calling the NMClientAsync getContainerStatusAsync method triggers the 
> corresponding onGetContainerStatusError callback with a DIGEST-MD5 
> SaslException, so the ContainerStatus cannot be retrieved; calling the 
> NMClient stopContainer method hits the same exception, so the container 
> cannot be stopped.
> ---REST API---
> request:
> http://server3.xdpp.boco:8042/ws/v1/node/containers
> response:
> {"containers":{"container":[
> {"id":"container_e07_1477704520017_0001_01_04","state":"RUNNING","exitCode":-1000,"diagnostics":"","user":"xdpp","totalMemoryNeededMB":8704,"totalVCoresNeeded":1,"containerLogsLink":"http://server3.xdpp.boco:8042/node/containerlogs/container_e07_1477704520017_0001_01_04/xdpp","nodeId":"server3.xdpp.boco:8041"},
> {"id":"container_e09_1477719748865_0003_01_25","state":"RUNNING","exitCode":-1000,"diagnostics":"","user":"xdpp","totalMemoryNeededMB":1536,"totalVCoresNeeded":1,"containerLogsLink":"http://server3.xdpp.boco:8042/node/containerlogs/container_e09_1477719748865_0003_01_25/xdpp","nodeId":"server3.xdpp.boco:8041"},
> {"id":"container_e09_1477719748865_0004_02_000103","state":"RUNNING","exitCode":-1000,"diagnostics":"","user":"xdpp","totalMemoryNeededMB":6656,"totalVCoresNeeded":1,"containerLogsLink":"http://server3.xdpp.boco:8042/node/containerlogs/container_e09_1477719748865_0004_02_000103/xdpp","nodeId":"server3.xdpp.boco:8041"}
> ]}}
> ---exception--
> 2016-11-14 11:17:12.725 ERROR containerStatusLogger 
> [ContainerManager.java:484] *Container onGetContainerStatusError deal 
> begin.containerId:container_e09_1477719748865_0003_01_25
> javax.security.sasl.SaslException: DIGEST-MD5: digest response format 
> violation. Mismatched response.
>   at sun.reflect.GeneratedConstructorAccessor59.newInstance(Unknown 
> Source) ~[na:na]
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  ~[na:1.7.0_79]
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526) 
> ~[na:1.7.0_79]
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53) 
> ~[hadoop-yarn-common-2.6.0.jar:na]
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104) 
> ~[hadoop-yarn-common-2.6.0.jar:na]
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.getContainerStatuses(ContainerManagementProtocolPBClientImpl.java:127)
>  ~[hadoop-yarn-common-2.6.0.jar:na]
>   at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source) ~[na:na]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.7.0_79]
>   at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_79]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>  ~[hadoop-common-2.6.0.jar:na]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>  ~[hadoop-common-2.6.0.jar:na]
>   at com.sun.proxy.$Proxy23.getContainerStatuses(Unknown Source) ~[na:na]
>   at 
> org.apache.hadoop.yarn.client.api.impl.NMClientImpl.getContainerStatus(NMClientImpl.java:267)
>  ~[hadoop-yarn-client-2.6.0.jar:na]
>   at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:534)
>  ~[hadoop-yarn-client-2.6.0.jar:na]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_79]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_79]
>   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
> Caused by: 

[jira] [Commented] (YARN-3538) TimelineServer doesn't catch/translate all exceptions raised

2016-11-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675343#comment-15675343
 ] 

Hudson commented on YARN-3538:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10858 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10858/])
YARN-3538. TimelineWebService doesn't catch runtime exception. (junping_du: rev 
f05a9ceb4a9623517aa1c8d995805e26ae1bde5a)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/webapp/TimelineWebServices.java


> TimelineServer doesn't catch/translate all exceptions raised
> 
>
> Key: YARN-3538
> URL: https://issues.apache.org/jira/browse/YARN-3538
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: oct16-easy
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-3538-001.patch, YARN-3538.002.patch
>
>
> Not all exceptions in TimelineServer are uprated to web exceptions; only IOEs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5905) Update the RM webapp host that is reported as part of Federation membership to current primary RM's IP

2016-11-17 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675333#comment-15675333
 ] 

Subru Krishnan commented on YARN-5905:
--

Thanks [~curino] for the quick review. And good point on testing (I missed 
mentioning it): [~giovanni.fumarola] did test the patch.

I'll wait for Yetus and commit if I get an all clear.

> Update the RM webapp host that is reported as part of Federation membership 
> to current primary RM's IP
> --
>
> Key: YARN-5905
> URL: https://issues.apache.org/jira/browse/YARN-5905
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>Priority: Minor
> Attachments: YARN-5905-YARN-2915-v1.patch, YARN-5905-v1.patch
>
>
> Currently when RM HA is enabled, the webapp host is randomly picked from one 
> of the ensemble RMs and relies on redirect to pick the active primary RM. 
> This has a few shortcomings:
>   * There's the overhead of an additional network hop.
>   * Sometimes the rmId selected might be an instance which is 
> inactive/decommissioned.
>   * In a few of our clusters, we have redirects disabled (either on the client 
> or server side) and then the invocation fails.
> This JIRA proposes updating the RM webapp host that is reported as part of 
> Federation membership to the current primary RM's IP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5676) Add a HashBasedRouterPolicy, that routes jobs based on queue name hash.

2016-11-17 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5676:
---
Attachment: YARN-5676-YARN-2915.02.patch

> Add a HashBasedRouterPolicy, that routes jobs based on queue name hash.
> ---
>
> Key: YARN-5676
> URL: https://issues.apache.org/jira/browse/YARN-5676
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5676-YARN-2915.01.patch, 
> YARN-5676-YARN-2915.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5905) Update the RM webapp host that is reported as part of Federation membership to current primary RM's IP

2016-11-17 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675324#comment-15675324
 ] 

Carlo Curino commented on YARN-5905:


+1 pending yetus. It would be good to test this running in a live federation 
cluster before commit.

> Update the RM webapp host that is reported as part of Federation membership 
> to current primary RM's IP
> --
>
> Key: YARN-5905
> URL: https://issues.apache.org/jira/browse/YARN-5905
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>Priority: Minor
> Attachments: YARN-5905-YARN-2915-v1.patch, YARN-5905-v1.patch
>
>
> Currently when RM HA is enabled, the webapp host is randomly picked from one 
> of the ensemble RMs and relies on redirect to pick the active primary RM. 
> This has a few shortcomings:
>   * There's the overhead of an additional network hop.
>   * Sometimes the rmId selected might be an instance which is 
> inactive/decommissioned.
>   * In a few of our clusters, we have redirects disabled (either on the client 
> or server side) and then the invocation fails.
> This JIRA proposes updating the RM webapp host that is reported as part of 
> Federation membership to the current primary RM's IP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4770) Auto-restart of containers should work across NM restarts.

2016-11-17 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675321#comment-15675321
 ] 

Jian He commented on YARN-4770:
---

[~hex108],
bq. If container crashed during the NM reboot, container would transit to 
RELAUNCHING state. I will check it again.
Is this working now? If so, we can close this.

> Auto-restart of containers should work across NM restarts.
> --
>
> Key: YARN-4770
> URL: https://issues.apache.org/jira/browse/YARN-4770
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>
> See my comment 
> [here|https://issues.apache.org/jira/browse/YARN-3998?focusedCommentId=15133367=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15133367]
>  on YARN-3998. Need to take care of two things:
> The relaunch feature needs to work across NM restarts, so we should save 
> the retry-context and policy per container into the state-store and reload it 
> to continue relaunching after NM restart.
>  - We should also handle restarting of any containers that may have crashed 
> during the NM reboot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5890) FairScheduler should log information about AM-resource-usage and max-AM-share for queues

2016-11-17 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5890:
---
Attachment: YARN-5890.001.patch

> FairScheduler should log information about AM-resource-usage and max-AM-share 
> for queues
> 
>
> Key: YARN-5890
> URL: https://issues.apache.org/jira/browse/YARN-5890
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5890.001.patch
>
>
> There are several cases where jobs in a queue are stuck, likely because of 
> maxAMShare. It is hard to debug these issues without any information.
> At the very least, we need to log both AM-resource-usage and max-AM-share for 
> queues. 
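
A hedged sketch of the kind of log line being proposed; the accessors 
{{getAmResourceUsage()}} and {{getMaxAMShare()}} are assumed names for 
illustration, not necessarily what the patch uses:

{code}
// Illustrative only; accessor names are assumptions.
private void logAmShare(FSLeafQueue queue) {
  LOG.info("Queue " + queue.getName()
      + ": AM resource usage = " + queue.getAmResourceUsage()
      + ", maxAMShare = " + queue.getMaxAMShare());
}
{code}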



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5355) YARN Timeline Service v.2: alpha 2

2016-11-17 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee reassigned YARN-5355:
-

Assignee: Sangjin Lee

> YARN Timeline Service v.2: alpha 2
> --
>
> Key: YARN-5355
> URL: https://issues.apache.org/jira/browse/YARN-5355
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Attachments: Timeline Service v2_ Ideas for Next Steps.pdf, 
> YARN-5355-branch-2.01.patch
>
>
> This is an umbrella JIRA for the alpha 2 milestone for YARN Timeline Service 
> v.2.
> This is developed on feature branches: {{YARN-5355}} for the trunk-based 
> development and {{YARN-5355-branch-2}} to maintain backports to branch-2. Any 
> subtask work on this JIRA will be committed to those 2 branches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5667) Move HBase backend code in ATS v2 into its separate module

2016-11-17 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5667:
--
Affects Version/s: 3.0.0-alpha1
 Target Version/s: 3.0.0-alpha2
 Priority: Blocker  (was: Major)

I'm upgrading this to a 3.0.0-alpha2 blocker, since we need it for HBase to compile.

> Move HBase backend code in ATS v2  into its separate module
> ---
>
> Key: YARN-5667
> URL: https://issues.apache.org/jira/browse/YARN-5667
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
> Attachments: New module structure.png, part1.yarn5667.prelim.patch, 
> part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, 
> part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, 
> pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, 
> pt4.yarn5667.001.patch, pt5.yarn5667.001.patch, pt6.yarn5667.001.patch, 
> pt9.yarn5667.001.patch, yarn5667-001.tar.gz
>
>
> The HBase backend code currently lives along with the core ATS v2 code in 
> hadoop-yarn-server-timelineservice module. Because Resource Manager depends 
> on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM 
> module on HBase modules is introduced (HBase backend is pluggable, so we do 
> not need to directly pull in HBase jars). 
> In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop 
> 3, we encountered a circular dependency during our builds between HBase2.0 
> and Hadoop3 artifacts.
> {code}
> [hadoop-mapreduce-client-common, hadoop-yarn-client, 
> hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, 
> hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, 
> hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common]
> {code}
> This jira proposes we move all HBase-backend-related code from 
> hadoop-yarn-server-timelineservice into its own module (possible name is 
> yarn-server-timelineservice-storage) so that core RM modules do not depend on 
> HBase modules any more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5774) MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set yarn.scheduler.minimum-allocation-mb to 0.

2016-11-17 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675216#comment-15675216
 ] 

Daniel Templeton commented on YARN-5774:


I'm still not in love with that exception.  Maybe log it as an error and assume 
the minimum as the increment instead?

Couple of additional comments:

Let's make your messages a little clearer. How about:

{{"StepFactor memory size cannot be zero!"}} -> {{"Memory cannot be allocated 
in increments of zero. Assuming " + minimumResource.getMemorySize() + "MB 
increment size. Please ensure the scheduler configuration is correct."}}

In {{DominantResourceCalculator}}, I think you'll need to test for memory and 
vcores separately and then use the same message as above.

I'd also love to see some tests that validate the changes you made.

It would be good to have javadoc for 
{{AbstractYarnScheduler.normalizeRequests()}}.

There was something else, but I can't think of it now...
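
To make the increment-fallback suggestion concrete, a sketch using the names 
from this discussion ({{stepFactor}}, {{minimumResource}}); illustrative only, 
not the final patch:

{code}
// Log and substitute the minimum as the increment instead of throwing.
if (stepFactor.getMemorySize() == 0) {
  LOG.error("Memory cannot be allocated in increments of zero. Assuming "
      + minimumResource.getMemorySize() + "MB increment size. "
      + "Please ensure the scheduler configuration is correct.");
  stepFactor = Resources.clone(minimumResource);
}
// DominantResourceCalculator would check getVirtualCores() == 0 separately,
// with an analogous message.
{code}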

> MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set 
> yarn.scheduler.minimum-allocation-mb to 0.
> 
>
> Key: YARN-5774
> URL: https://issues.apache.org/jira/browse/YARN-5774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>  Labels: oct16-easy
> Attachments: YARN-5774.001.patch, YARN-5774.002.patch, 
> YARN-5774.003.patch, YARN-5774.004.patch
>
>
> MR Job stuck in ACCEPTED status without any progress in Fair Scheduler 
> because there is no resource request for the AM. This happened when you 
> configure {{yarn.scheduler.minimum-allocation-mb}} to zero.
> The problem is in the code used by both Capacity Scheduler and Fair 
> Scheduler. {{scheduler.increment-allocation-mb}} is a concept in FS, but not 
> CS. So the common code in class RMAppManager passes the 
> {{yarn.scheduler.minimum-allocation-mb}} as incremental one because there is 
> no incremental one for CS when it tried to normalize the resource requests.
> {code}
>  SchedulerUtils.normalizeRequest(amReq, scheduler.getResourceCalculator(),
>   scheduler.getClusterResource(),
>   scheduler.getMinimumResourceCapability(),
>   scheduler.getMaximumResourceCapability(),
>   scheduler.getMinimumResourceCapability());  --> incrementResource 
> should be passed here.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-11-17 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675199#comment-15675199
 ] 

Vrushali C commented on YARN-5739:
--

Thanks [~gtCarrera9] for the patch.

[~varun_saxena] I think we do need both FirstKeyOnlyFilter and KeyOnlyFilter 
for this case. 

I think we should not be setting PageFilter to 1. This filter is used to limit 
the number of results to a specific page size. So it will terminate the 
scanning once the number of filter-passed rows is > the given page size on that 
particular Region Server. 

At line 100 in HBaseTimelineReaderImpl, do we need that warning? Or should it 
be just a debug/info message? Is it an error if we see the same entity type 
again? 
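
To illustrate the filter combination discussed here, a sketch against the 
HBase client API (not the patch itself):

{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
import org.apache.hadoop.hbase.filter.KeyOnlyFilter;

public class EntityTypeScanSketch {
  /**
   * Keys only, first cell per row, and no PageFilter -- per the concern that
   * PageFilter caps filter-passed rows per region server, not globally.
   */
  public static Scan buildEntityTypeScan() {
    Scan scan = new Scan();
    scan.setFilter(new FilterList(FilterList.Operator.MUST_PASS_ALL,
        new FirstKeyOnlyFilter(), new KeyOnlyFilter()));
    return scan;
  }
}
{code}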

> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5739-YARN-5355.001.patch, 
> YARN-5739-YARN-5355.002.patch
>
>
> Right now we only show a part of the available timeline entity data in the new 
> YARN UI. However, some data (especially library-specific data) cannot be 
> queried through the web UI. It would be appealing for the UI to provide an 
> "entity browser" for each YARN application. Actually, simply dumping out 
> available timeline entities (with proper pagination, of course) would be 
> pretty helpful for UI users. 
> On the timeline side, we're not far away from this goal. Right now I believe 
> the only thing missing is to list all available entity types within one 
> application. The challenge here is that we're not storing this data for each 
> application, but given that this kind of call is relatively rare (compared to 
> writes and updates) we can perform some scanning at read time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5905) Update the RM webapp host that is reported as part of Federation membership to current primary RM's IP

2016-11-17 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5905:
-
Attachment: YARN-5905-YARN-2915-v1.patch

Missed the branch suffix so uploading the same patch with the right branch name.

> Update the RM webapp host that is reported as part of Federation membership 
> to current primary RM's IP
> --
>
> Key: YARN-5905
> URL: https://issues.apache.org/jira/browse/YARN-5905
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>Priority: Minor
> Attachments: YARN-5905-YARN-2915-v1.patch, YARN-5905-v1.patch
>
>
> Currently when RM HA is enabled, the webapp host is randomly picked from one 
> of the ensemble RMs and relies on redirect to pick the active primary RM. 
> This has a few shortcomings:
>   * There's the overhead of an additional network hop.
>   * Sometimes the rmId selected might be an instance which is 
> inactive/decommissioned.
>   * In a few of our clusters, we have redirects disabled (either on the client 
> or server side) and then the invocation fails.
> This JIRA proposes updating the RM webapp host that is reported as part of 
> Federation membership to the current primary RM's IP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5905) Update the RM webapp host that is reported as part of Federation membership to current primary RM's IP

2016-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675190#comment-15675190
 ] 

Hadoop QA commented on YARN-5905:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} YARN-5905 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-5905 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839459/YARN-5905-v1.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13965/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update the RM webapp host that is reported as part of Federation membership 
> to current primary RM's IP
> --
>
> Key: YARN-5905
> URL: https://issues.apache.org/jira/browse/YARN-5905
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>Priority: Minor
> Attachments: YARN-5905-v1.patch
>
>
> Currently when RM HA is enabled, the webapp host is randomly picked from one 
> of the ensemble RMs and relies on redirect to pick the active primary RM. 
> This has a few shortcomings:
>   * There's the overhead of an additional network hop.
>   * Sometimes the rmId selected might be an instance which is 
> inactive/decommissioned.
>   * In a few of our clusters, we have redirects disabled (either on the client 
> or server side) and then the invocation fails.
> This JIRA proposes updating the RM webapp host that is reported as part of 
> Federation membership to the current primary RM's IP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5905) Update the RM webapp host that is reported as part of Federation membership to current primary RM's IP

2016-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675110#comment-15675110
 ] 

Hadoop QA commented on YARN-5905:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} YARN-5905 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-5905 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839459/YARN-5905-v1.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13964/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update the RM webapp host that is reported as part of Federation membership 
> to current primary RM's IP
> --
>
> Key: YARN-5905
> URL: https://issues.apache.org/jira/browse/YARN-5905
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>Priority: Minor
> Attachments: YARN-5905-v1.patch
>
>
> Currently when RM HA is enabled, the webapp host is randomly picked from one 
> of the ensemble RMs and relies on redirect to pick the active primary RM. 
> This has a few shortcomings:
>   * There's the overhead of an additional network hop.
>   * Sometimes the rmId selected might be an instance which is 
> inactive/decommissioned.
>   * In a few of our clusters, we have redirects disabled (either on the client 
> or server side) and then the invocation fails.
> This JIRA proposes updating the RM webapp host that is reported as part of 
> Federation membership to the current primary RM's IP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5910) Support for multi-cluster delegation tokens

2016-11-17 Thread Clay B. (JIRA)
Clay B. created YARN-5910:
-

 Summary: Support for multi-cluster delegation tokens
 Key: YARN-5910
 URL: https://issues.apache.org/jira/browse/YARN-5910
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: security
Reporter: Clay B.
Priority: Minor


As an administrator running many secure (kerberized) clusters, some of which 
have peer clusters managed by other teams, I am looking for a way to run jobs 
which may require services running on other clusters. A particular case where 
this rears its head is running something as core as a distcp between two kerberized 
clusters (e.g. {{hadoop --config /home/user292/conf/ distcp 
hdfs://LOCALCLUSTER/user/user292/test.out 
hdfs://REMOTECLUSTER/user/user292/test.out.result}}).

Thanks to YARN-3021, one can run for a while, but if the delegation token for 
the remote cluster needs renewal the job will fail[1]. One can pre-configure 
the {{hdfs-site.xml}} loaded by the YARN RM to know of all possible HDFSes 
available, but that requires coordination that is not always feasible, 
especially as a cluster's peers grow into the tens of clusters or span 
management teams. Ideally, core systems could be configured this way, but jobs 
could also specify their own token handling and management when needed.
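
For reference, a sketch of that pre-configuration using the standard HDFS HA 
client keys (host names below are placeholders):

{code}
import org.apache.hadoop.conf.Configuration;

public class RemoteClusterTokenConf {
  /** Teach the RM's local config about the remote HA nameservice. */
  public static Configuration withRemoteCluster(Configuration conf) {
    conf.set("dfs.nameservices", "LOCALCLUSTER,REMOTECLUSTER");
    conf.set("dfs.ha.namenodes.REMOTECLUSTER", "nn1,nn2");
    conf.set("dfs.namenode.rpc-address.REMOTECLUSTER.nn1",
        "remote-nn1.example.com:8020");
    conf.set("dfs.namenode.rpc-address.REMOTECLUSTER.nn2",
        "remote-nn2.example.com:8020");
    conf.set("dfs.client.failover.proxy.provider.REMOTECLUSTER",
        "org.apache.hadoop.hdfs.server.namenode.ha."
            + "ConfiguredFailoverProxyProvider");
    return conf;
  }
}
{code}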

[1]: Example stack trace when the RM is unaware of a remote service:

{code}
2016-03-23 14:59:50,528 INFO 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: 
application_1458441356031_3317 found existing hdfs token Kind: 
HDFS_DELEGATION_TOKEN, Service: ha-hdfs:REMOTECLUSTER, Ident: 
(HDFS_DELEGATION_TOKEN token
 10927 for user292)
2016-03-23 14:59:50,557 WARN 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: 
Unable to add the application to the delegation token renewer.
java.io.IOException: Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, 
Service: ha-hdfs:REMOTECLUSTER, Ident: (HDFS_DELEGATION_TOKEN token 10927 for 
user292)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:427)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$700(DelegationTokenRenewer.java:78)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.handleDTRenewerAppSubmitEvent(DelegationTokenRenewer.java:781)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:762)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.IOException: Unable to map logical nameservice URI 
'hdfs://REMOTECLUSTER' to a NameNode. Local configuration does not have a 
failover proxy provider configured.
at org.apache.hadoop.hdfs.DFSClient$Renewer.getNNProxy(DFSClient.java:1164)
at org.apache.hadoop.hdfs.DFSClient$Renewer.renew(DFSClient.java:1128)
at org.apache.hadoop.security.token.Token.renew(Token.java:377)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:516)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:513)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.renewToken(DelegationTokenRenewer.java:511)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:425)
... 6 more
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5905) Update the RM webapp host that is reported as part of Federation membership to current primary RM's IP

2016-11-17 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5905:
-
Attachment: YARN-5905-v1.patch

Attaching a simple patch that updates the RM webapp host that is reported as 
part of Federation membership to the current primary RM's IP.

> Update the RM webapp host that is reported as part of Federation membership 
> to current primary RM's IP
> --
>
> Key: YARN-5905
> URL: https://issues.apache.org/jira/browse/YARN-5905
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>Priority: Minor
> Attachments: YARN-5905-v1.patch
>
>
> Currently when RM HA is enabled, the webapp host is randomly picked from one 
> of the ensemble RMs and relies on redirect to pick the active primary RM. 
> This has a few shortcomings:
>   * There's the overhead of an additional network hop.
>   * Sometimes the rmId selected might be an instance which is 
> inactive/decommissioned.
>   * In a few of our clusters, we have redirects disabled (either on the client 
> or server side) and then the invocation fails.
> This JIRA proposes updating the RM webapp host that is reported as part of 
> Federation membership to the current primary RM's IP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5909) Upgrade framework jetty version

2016-11-17 Thread Jian He (JIRA)
Jian He created YARN-5909:
-

 Summary: Upgrade framework jetty version 
 Key: YARN-5909
 URL: https://issues.apache.org/jira/browse/YARN-5909
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He


Hadoop core has upgraded its Jetty version to 9, so the framework (Slider AM) 
also needs to upgrade its Jetty version. The problem is that some legacy agent 
code uses classes which only exist in the old Jetty. Perhaps it's time to 
remove all the agent-related code?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5478) [YARN-4902] Define Java API for generalized & unified scheduling-strategies.

2016-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675072#comment-15675072
 ] 

Hadoop QA commented on YARN-5478:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 41 new + 0 unchanged - 0 fixed = 41 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
17s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 13 new + 0 unchanged - 0 fixed = 13 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api 
generated 2 new + 123 unchanged - 0 fixed = 125 total (was 123) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
28s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 4 new + 935 unchanged - 0 fixed = 939 total (was 935) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 50s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.prr.api.PlacementStrategy.getAffinityTargets()
 may expose internal representation by returning 
PlacementStrategy.affinityTargets  At PlacementStrategy.java:by returning 
PlacementStrategy.affinityTargets  At PlacementStrategy.java:[line 73] |
|  |  

[jira] [Commented] (YARN-5676) Add a HashBasedRouterPolicy, that routes jobs based on queue name hash.

2016-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675051#comment-15675051
 ] 

Hadoop QA commented on YARN-5676:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
58s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
23s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 6 new + 206 unchanged - 0 fixed = 212 total (was 206) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
58s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common
 |
|  |  Unread field:HashBroadcastPolicyManager.java:[line 40] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5676 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838639/YARN-5676-YARN-2915.01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 23e167adf97d 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 4c6ba54 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Commented] (YARN-5906) Update AppSchedulingInfo to use SchedulingPlacementSet

2016-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675024#comment-15675024
 ] 

Hadoop QA commented on YARN-5906:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 13 new + 46 unchanged - 2 fixed = 59 total (was 48) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
11s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 10s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.updateResourceRequests(List,
 boolean) makes inefficient use of keySet iterator instead of entrySet iterator 
 At AppSchedulingInfo.java:of keySet iterator instead of entrySet iterator  At 
AppSchedulingInfo.java:[line 428] |
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5906 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839447/YARN-5906.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a2d8a0cf12dc 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bd37355 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13961/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| findbugs | 

[jira] [Commented] (YARN-4206) Add life time value in Application report and CLI

2016-11-17 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675011#comment-15675011
 ] 

Jian He commented on YARN-4206:
---

Looks good overall, thanks Rohith. A few comments:
- What is this code for? Can we remove it?
{code}
  if (this.applicationTimeouts.isEmpty()) {

  } else {

  }
{code}
- Could you add comments for the API in ApplicationTimeout?

> Add life time value in Application report and CLI
> -
>
> Key: YARN-4206
> URL: https://issues.apache.org/jira/browse/YARN-4206
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Attachments: YARN-4506.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-4206) Add life time value in Application report and CLI

2016-11-17 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675011#comment-15675011
 ] 

Jian He edited comment on YARN-4206 at 11/17/16 10:35 PM:
--

Looks good overall, thanks Rohith. A few comments:
- What is this code for? Can we remove it?
{code}
  if (this.applicationTimeouts.isEmpty()) {

  } else {

  }
{code}
- Could you add comments for the API in ApplicationTimeout?


was (Author: jianhe):
Looks good overall, thanks Rohith. A few comments:
- What is this code for? Can we remove it?
{cod}
  if (this.applicationTimeouts.isEmpty()) {

  } else {

  }
{code}
- Could you add comments for the API in ApplicationTimeout?

> Add life time value in Application report and CLI
> -
>
> Key: YARN-4206
> URL: https://issues.apache.org/jira/browse/YARN-4206
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Attachments: YARN-4506.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5634) Simplify initialization/use of RouterPolicy via a RouterPolicyFacade

2016-11-17 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674951#comment-15674951
 ] 

Carlo Curino commented on YARN-5634:


Thanks [~subru] for reviewing and committing.

> Simplify initialization/use of RouterPolicy via a RouterPolicyFacade 
> -
>
> Key: YARN-5634
> URL: https://issues.apache.org/jira/browse/YARN-5634
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>  Labels: oct16-medium
> Fix For: YARN-2915
>
> Attachments: YARN-5634-YARN-2915.01.patch, 
> YARN-5634-YARN-2915.02.patch, YARN-5634-YARN-2915.03.patch, 
> YARN-5634-YARN-2915.04.patch, YARN-5634-YARN-2915.05.patch, 
> YARN-5634-YARN-2915.06.patch, YARN-5634-YARN-2915.07.patch, 
> YARN-5634-YARN-2915.08.patch
>
>
> The current set of policies require some machinery to (re)initialize based on 
> changes in the SubClusterPolicyConfiguration. This JIRA tracks the effort to 
> hide much of that behind a simple RouterPolicyFacade, making lifecycle and 
> usage of the policies easier to consumers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5908) Add affinity/anti-affinity field to ResourceRequest API

2016-11-17 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-5908:


 Summary: Add affinity/anti-affinity field to ResourceRequest API
 Key: YARN-5908
 URL: https://issues.apache.org/jira/browse/YARN-5908
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wangda Tan
Assignee: Wangda Tan






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5907) [Umbrella] [YARN-1042] add ability to specify affinity/anti-affinity in container requests

2016-11-17 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5907:
-
Issue Type: New Feature  (was: Sub-task)
Parent: (was: YARN-397)

> [Umbrella] [YARN-1042] add ability to specify affinity/anti-affinity in 
> container requests
> --
>
> Key: YARN-5907
> URL: https://issues.apache.org/jira/browse/YARN-5907
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Steve Loughran
>Assignee: Wangda Tan
>
> Container requests to the AM should be able to request anti-affinity to 
> ensure that things like Region Servers don't come up in the same failure 
> zones. 
> Similarly, you may want to specify affinity to the same host or rack 
> without naming a specific host/rack. Example: bringing up a small 
> Giraph cluster in a large YARN cluster would benefit from having the 
> processes in the same rack purely for bandwidth reasons.
> {color:red}
> This JIRA is cloned umbrella JIRA of YARN-1042, discussions / designs / POC 
> patches, etc. please refer to YARN-1042.
> {color}
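
To make the two use cases above concrete, here is a purely hypothetical sketch 
of how such constraints might be expressed; none of these types exist in the 
YARN API at the time of this discussion:

{code}
// Illustrative only: the shape of the two requests from the description,
// i.e. spread region servers across failure zones (anti-affinity) vs. pack
// a small Giraph job into one rack (affinity).
final class PlacementSpecSketch {
  enum Scope { NODE, RACK }

  final int numContainers;
  final Scope scope;          // granularity the constraint applies at
  final boolean antiAffinity; // true = spread apart, false = pack together
  final String targetTag;     // application tag the constraint is against

  private PlacementSpecSketch(int n, Scope s, boolean anti, String tag) {
    this.numContainers = n;
    this.scope = s;
    this.antiAffinity = anti;
    this.targetTag = tag;
  }

  static PlacementSpecSketch spreadAcross(Scope s, String tag, int n) {
    return new PlacementSpecSketch(n, s, true, tag);
  }

  static PlacementSpecSketch packWithin(Scope s, String tag, int n) {
    return new PlacementSpecSketch(n, s, false, tag);
  }
}
{code}

For instance, {{spreadAcross(Scope.NODE, "regionserver", 10)}} would capture 
the Region Server case, while {{packWithin(Scope.RACK, "giraph-worker", 5)}} 
would capture the Giraph case.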



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5907) [Umbrella] [YARN-1042] add ability to specify affinity/anti-affinity in container requests

2016-11-17 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5907:
-
Description: 
Container requests to the AM should be able to request anti-affinity to ensure 
that things like Region Servers don't come up in the same failure zones. 

Similarly, you may want to specify affinity to the same host or rack 
without naming a specific host/rack. Example: bringing up a small 
Giraph cluster in a large YARN cluster would benefit from having the processes 
in the same rack purely for bandwidth reasons.

{color:red}
This JIRA is cloned umbrella JIRA of YARN-1042, discussions / designs / POC 
patches, etc. please refer to YARN-1042.
{color}

  was:
Container requests to the AM should be able to request anti-affinity to ensure 
that things like Region Servers don't come up in the same failure zones. 

Similarly, you may want to specify affinity to the same host or rack 
without naming a specific host/rack. Example: bringing up a small 
Giraph cluster in a large YARN cluster would benefit from having the processes 
in the same rack purely for bandwidth reasons.


> [Umbrella] [YARN-1042] add ability to specify affinity/anti-affinity in 
> container requests
> --
>
> Key: YARN-5907
> URL: https://issues.apache.org/jira/browse/YARN-5907
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Steve Loughran
>Assignee: Wangda Tan
>
> Container requests to the AM should be able to request anti-affinity to 
> ensure that things like Region Servers don't come up in the same failure 
> zones. 
> Similarly, you may want to specify affinity to the same host or rack 
> without naming a specific host/rack. Example: bringing up a small 
> Giraph cluster in a large YARN cluster would benefit from having the 
> processes in the same rack purely for bandwidth reasons.
> {color:red}
> This JIRA is cloned umbrella JIRA of YARN-1042, discussions / designs / POC 
> patches, etc. please refer to YARN-1042.
> {color}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1042) add ability to specify affinity/anti-affinity in container requests

2016-11-17 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674879#comment-15674879
 ] 

Wangda Tan commented on YARN-1042:
--

{color:red}
Opened YARN-5907 as umbrella JIRA of this ticket.
{color}

> add ability to specify affinity/anti-affinity in container requests
> ---
>
> Key: YARN-1042
> URL: https://issues.apache.org/jira/browse/YARN-1042
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Steve Loughran
>Assignee: Wangda Tan
> Attachments: YARN-1042-demo.patch, YARN-1042-design-doc.pdf, 
> YARN-1042-global-scheduling.poc.1.patch, YARN-1042.001.patch, 
> YARN-1042.002.patch
>
>
> Container requests to the AM should be able to request anti-affinity to 
> ensure that things like Region Servers don't come up in the same failure 
> zones. 
> Similarly, you may want to specify affinity to the same host or rack 
> without naming a specific host/rack. Example: bringing up a small 
> Giraph cluster in a large YARN cluster would benefit from having the 
> processes in the same rack purely for bandwidth reasons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5907) [Umbrella] [YARN-1042] add ability to specify affinity/anti-affinity in container requests

2016-11-17 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-5907:


 Summary: [Umbrella] [YARN-1042] add ability to specify 
affinity/anti-affinity in container requests
 Key: YARN-5907
 URL: https://issues.apache.org/jira/browse/YARN-5907
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 3.0.0-alpha1
Reporter: Steve Loughran
Assignee: Wangda Tan


Container requests to the AM should be able to request anti-affinity to ensure 
that things like Region Servers don't come up in the same failure zones. 

Similarly, you may want to specify affinity to the same host or rack 
without naming a specific host/rack. Example: bringing up a small 
Giraph cluster in a large YARN cluster would benefit from having the processes 
in the same rack purely for bandwidth reasons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5478) [YARN-4902] Define Java API for generalized & unified scheduling-strategies.

2016-11-17 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5478:
-
Attachment: YARN-5478.1.patch

Attached ver.1 patch. This patch is targeted to be used inside the scheduler, so 
it doesn't come with PBImpl classes, etc.

Please check {{prr/example/Examples.java}} for examples of how to use this API.

+ [~kkaranasos], [~kasha], [~asuresh], [~jianhe], [~subru].

> [YARN-4902] Define Java API for generalized & unified scheduling-strategies.
> 
>
> Key: YARN-5478
> URL: https://issues.apache.org/jira/browse/YARN-5478
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5478.1.patch, YARN-5478.preliminary-poc.1.patch, 
> YARN-5478.preliminary-poc.2.patch
>
>
> Define Java API for application to specify generic scheduling requirements 
> described in YARN-4902 design doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5906) Update AppSchedulingInfo to use SchedulingPlacementSet

2016-11-17 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5906:
-
Attachment: YARN-5906.1.patch

Attached ver.1 patch for review.

+ [~jianhe], [~kasha], [~asuresh], [~subru] please feel free to share your 
thoughts on the patch and the overall plan.

> Update AppSchedulingInfo to use SchedulingPlacementSet
> --
>
> Key: YARN-5906
> URL: https://issues.apache.org/jira/browse/YARN-5906
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5906.1.patch
>
>
> Currently AppSchedulingInfo simply stores resource requests, and the scheduler 
> makes decisions according to the stored requests. For example, CS/FS use 
> slightly different approaches to get pending resource requests and make delay 
> scheduling decisions. 
> There are several benefits to moving the pending resource request data 
> structure to SchedulingPlacementSet:
> 1) Delay scheduling logic should be agnostic to the scheduler; for example, CS 
> supports count-based delay and FS supports both count-based and time-based 
> delay. Ideally a scheduler should be able to choose which delay scheduling 
> policy to use.
> 2) In addition to 1), YARN-4902 proposes supporting pluggable delay 
> scheduling behavior beyond the locality-based one (host->rack->offswitch), 
> which requires more flexibility.
> 3) To make YARN-4902 real, instead of directly adding the new 
> resource request API to the client, we can have the scheduler use it 
> internally to make sure it is well defined. 
> AppSchedulingInfo/SchedulingPlacementSet will be the perfect place to isolate 
> which ResourceRequest implementation to use.
> 4) Different scheduling requirements need different behavior when checking the 
> ResourceRequest table.
> This JIRA is the first of several refactorings; it moves all 
> ResourceRequest data structures and logic to SchedulingPlacementSet. We need 
> follow-up changes to make it better structured:
> - Make delay scheduling a plugin of SchedulingPlacementSet
> - After YARN-4902 gets committed, change SchedulingPlacementSet to use 
> YARN-4902 internally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5906) Update AppSchedulingInfo to use SchedulingPlacementSet

2016-11-17 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-5906:


 Summary: Update AppSchedulingInfo to use SchedulingPlacementSet
 Key: YARN-5906
 URL: https://issues.apache.org/jira/browse/YARN-5906
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wangda Tan
Assignee: Wangda Tan


Currently AppSchedulingInfo simply stores resource requests, and the scheduler 
makes decisions according to the stored requests. For example, CS/FS use 
slightly different approaches to get pending resource requests and make delay 
scheduling decisions. 

There are several benefits to moving the pending resource request data structure 
to SchedulingPlacementSet:

1) Delay scheduling logic should be agnostic to the scheduler; for example, CS 
supports count-based delay and FS supports both count-based and time-based 
delay. Ideally a scheduler should be able to choose which delay scheduling 
policy to use.
2) In addition to 1), YARN-4902 proposes supporting pluggable delay scheduling 
behavior beyond the locality-based one (host->rack->offswitch), which requires 
more flexibility.
3) To make YARN-4902 real, instead of directly adding the new resource request 
API to the client, we can have the scheduler use it internally to make sure 
it is well defined. AppSchedulingInfo/SchedulingPlacementSet will be the 
perfect place to isolate which ResourceRequest implementation to use.
4) Different scheduling requirements need different behavior when checking the 
ResourceRequest table.

This JIRA is the first of several refactorings; it moves all 
ResourceRequest data structures and logic to SchedulingPlacementSet. We need 
follow-up changes to make it better structured:
- Make delay scheduling a plugin of SchedulingPlacementSet
- After YARN-4902 gets committed, change SchedulingPlacementSet to use YARN-4902 
internally.
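
As a rough, hypothetical sketch of the division of labor described above 
(signatures are illustrative, not the attached patch): the placement set owns 
the pending request table and the delay policy, so CS/FS ask it for decisions 
instead of reading ResourceRequests directly.

{code}
import java.util.List;

interface DelayPolicySketch {
  // Count-based (CS-style) or time-based (FS-style) implementations plug in.
  boolean canRelaxLocality(int missedOpportunities, long waitedMillis);
}

interface SchedulingPlacementSetSketch<N> {
  // Replace/merge the pending requests for one scheduler key.
  void updateResourceRequests(List<ResourceRequestStub> requests);

  // Ask whether this node can satisfy a pending request; the pluggable
  // delay policy is applied internally, keeping the scheduler agnostic.
  ResourceRequestStub canAllocate(N node, DelayPolicySketch delay);
}

class ResourceRequestStub {
  String resourceName;   // host, rack, or "*" (off-switch)
  int numContainers;
}
{code}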



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5904) Reduce the number of default server threads for AMRMProxyService

2016-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674847#comment-15674847
 ] 

Hadoop QA commented on YARN-5904:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5904 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839441/YARN-5904-v1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5d3686ab4811 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bd37355 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13960/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13960/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Reduce the number of default server threads for AMRMProxyService
> 
>
> Key: YARN-5904
> URL: https://issues.apache.org/jira/browse/YARN-5904
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>Priority: Minor
>   

[jira] [Updated] (YARN-5904) Reduce the number of default server threads for AMRMProxyService

2016-11-17 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5904:
-
Summary: Reduce the number of default server threads for AMRMProxyService  
(was: Reduce the number of server threads for AMRMProxy)

> Reduce the number of default server threads for AMRMProxyService
> 
>
> Key: YARN-5904
> URL: https://issues.apache.org/jira/browse/YARN-5904
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>Priority: Minor
>
> The number of server threads for the AMRMProxy uses the 
> standard default, viz. 25. This is way too many, as the maximum we need is 
> the number of concurrently active AMs on the node. So this JIRA proposes to 
> reduce the default to 3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5904) Reduce the number of default server threads for AMRMProxyService

2016-11-17 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5904:
-
Attachment: YARN-5904-v1.patch

Attaching a trivial patch that updates the default number of server threads for 
AMRMProxyService to 3.

FYI, there are no test case updates, as this patch only touches the default value.

> Reduce the number of default server threads for AMRMProxyService
> 
>
> Key: YARN-5904
> URL: https://issues.apache.org/jira/browse/YARN-5904
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>Priority: Minor
> Attachments: YARN-5904-v1.patch
>
>
> The number of server threads for the AMRMProxy uses the 
> standard default, viz. 25. This is way too many, as the maximum we need is 
> the number of concurrently active AMs on the node. So this JIRA proposes to 
> reduce the default to 3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5905) Update the RM webapp host that is reported as part of Federation membership to current primary RM's IP

2016-11-17 Thread Subru Krishnan (JIRA)
Subru Krishnan created YARN-5905:


 Summary: Update the RM webapp host that is reported as part of 
Federation membership to current primary RM's IP
 Key: YARN-5905
 URL: https://issues.apache.org/jira/browse/YARN-5905
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: federation, resourcemanager
Affects Versions: YARN-2915
Reporter: Subru Krishnan
Assignee: Subru Krishnan
Priority: Minor


Currently when RM HA is enabled, the webapp host is randomly picked from one of 
the ensemble RMs and relies on redirects to reach the active primary RM. This has 
a few shortcomings:
  * There's the overhead of an additional network hop.
  * Sometimes the rmId selected might be an instance which is 
inactive/decommissioned.
  * In a few of our clusters, we have redirects disabled (on either the client or 
server side), and then the invocation fails.

This JIRA proposes updating the RM webapp host that is reported as part of 
Federation membership to the current primary RM's IP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5257) Fix all Bad Practices

2016-11-17 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5257:
---
Summary: Fix all Bad Practices  (was: Fix all Bad Practices flagged in 
Fortify)

> Fix all Bad Practices
> -
>
> Key: YARN-5257
> URL: https://issues.apache.org/jira/browse/YARN-5257
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> The following code contains potential problems:
> {code}
> Unreleased Resource: Streams  TopCLI.java:738
> Unreleased Resource: Streams  Graph.java:189
> Unreleased Resource: Streams  CgroupsLCEResourcesHandler.java:291
> Unreleased Resource: Streams  UnmanagedAMLauncher.java:195
> Unreleased Resource: Streams  CGroupsHandlerImpl.java:319
> Unreleased Resource: Streams  TrafficController.java:629
> Portability Flaw: Locale Dependent Comparison TimelineWebServices.java:421
> Null Dereference  ApplicationImpl.java:465
> Null Dereference  VisualizeStateMachine.java:52
> Null Dereference  ContainerImpl.java:1089
> Null Dereference  QueueManager.java:219
> Null Dereference  QueueManager.java:232
> Null Dereference  ResourceLocalizationService.java:1016
> Null Dereference  ResourceLocalizationService.java:1023
> Null Dereference  ResourceLocalizationService.java:1040
> Null Dereference  ResourceLocalizationService.java:1052
> Null Dereference  ProcfsBasedProcessTree.java:802
> Null Dereference  TimelineClientImpl.java:639
> Null Dereference  LocalizedResource.java:206
> Code Correctness: Double-Checked Locking  ResourceHandlerModule.java:142
> Code Correctness: Double-Checked Locking  RMPolicyProvider.java:51
> {code}
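
For illustration, generic fixes for two of the flagged categories might look 
like the following (these are textbook patterns, not the actual YARN-5257 
changes to TopCLI, RMPolicyProvider, etc.):

{code}
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

class BadPracticeFixes {

  // "Unreleased Resource: Streams": try-with-resources closes the stream
  // on every path, including exceptions.
  static String readFirstLine(String path) throws IOException {
    try (BufferedReader in = new BufferedReader(new FileReader(path))) {
      return in.readLine();
    }
  }

  // "Code Correctness: Double-Checked Locking": the lazily-initialized
  // field must be volatile, or other threads may observe a partially
  // constructed object.
  private static volatile BadPracticeFixes instance;

  static BadPracticeFixes get() {
    if (instance == null) {
      synchronized (BadPracticeFixes.class) {
        if (instance == null) {
          instance = new BadPracticeFixes();
        }
      }
    }
    return instance;
  }
}
{code}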



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5904) Reduce the number of server threads for AMRMProxy

2016-11-17 Thread Subru Krishnan (JIRA)
Subru Krishnan created YARN-5904:


 Summary: Reduce the number of server threads for AMRMProxy
 Key: YARN-5904
 URL: https://issues.apache.org/jira/browse/YARN-5904
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.0.0-alpha1, 2.8.0
Reporter: Subru Krishnan
Assignee: Subru Krishnan
Priority: Minor


The number of server threads for the AMRMProxy uses the standard 
default, viz. 25. This is way too many, as the maximum we need is the 
number of concurrently active AMs on the node. So this JIRA proposes to reduce 
the default to 3.
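
As a sketch, the corresponding yarn-site.xml override would presumably look 
like the following (property name as published for the AMRMProxy in 
yarn-default.xml; verify against your Hadoop version):

{code}
<!-- yarn-site.xml: lower the AMRMProxy server thread count to 3 -->
<property>
  <name>yarn.nodemanager.amrmproxy.client.thread-count</name>
  <value>3</value>
</property>
{code}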



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3409) Add constraint node labels

2016-11-17 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-3409:

Attachment: Constraint-Node-Labels-Requirements-Design-doc_v1.pdf

As discussed offline with [~wangda], I will handle this JIRA and am therefore 
assigning it to myself. Uploading the initial design doc; I will try to upload a 
POC patch over the weekend to give a glimpse of how constraint expressions can 
be evaluated. 
Hoping for some early reviews!

> Add constraint node labels
> --
>
> Key: YARN-3409
> URL: https://issues.apache.org/jira/browse/YARN-3409
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Wangda Tan
>Assignee: Naganarasimha G R
> Attachments: Constraint-Node-Labels-Requirements-Design-doc_v1.pdf
>
>
> Specifying only one label for each node (in other words, partitioning a 
> cluster) is a way to determine how the resources of a particular set of nodes 
> can be shared by a group of entities (like teams, departments, etc.). 
> Partitions of a cluster have the following characteristics:
> - The cluster is divided into several disjoint sub-clusters.
> - ACLs/priority can apply to a partition (only the market team has 
> priority to use the partition).
> - Percentages of capacity can apply to a partition (the market team has 40% 
> minimum capacity and the dev team has 60% minimum capacity of the partition).
> Constraints are orthogonal to partitions; they describe attributes of a 
> node's hardware/software just for affinity. Some examples of constraints:
> - glibc version
> - JDK version
> - Type of CPU (x86_64/i686)
> - Type of OS (Windows, Linux, etc.)
> With this, an application can ask for resources that have (glibc.version >= 
> 2.20 && JDK.version >= 8u20 && x86_64).
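
As a purely illustrative POC-style sketch (not the attached design doc or any 
real patch), evaluating such a conjunction against a node's reported attributes 
could look like this:

{code}
import java.util.Map;

// Evaluate minimum-version constraints plus an exact architecture match.
// Caveat: the naive lexicographic compareTo is NOT correct version ordering
// (e.g. "2.20" < "2.9" lexicographically); a real implementation needs a
// proper version parser.
class ConstraintEvalSketch {
  static boolean satisfies(Map<String, String> nodeAttrs,
                           Map<String, String> minVersions,
                           String requiredArch) {
    for (Map.Entry<String, String> c : minVersions.entrySet()) {
      String actual = nodeAttrs.get(c.getKey());
      if (actual == null || actual.compareTo(c.getValue()) < 0) {
        return false;   // attribute missing or below the required minimum
      }
    }
    return requiredArch.equals(nodeAttrs.get("arch"));
  }
}
{code}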



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-3409) Add constraint node labels

2016-11-17 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R reassigned YARN-3409:
---

Assignee: Naganarasimha G R  (was: Wangda Tan)

> Add constraint node labels
> --
>
> Key: YARN-3409
> URL: https://issues.apache.org/jira/browse/YARN-3409
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Wangda Tan
>Assignee: Naganarasimha G R
>
> Specifying only one label for each node (in other words, partitioning a 
> cluster) is a way to determine how the resources of a particular set of nodes 
> can be shared by a group of entities (like teams, departments, etc.). 
> Partitions of a cluster have the following characteristics:
> - The cluster is divided into several disjoint sub-clusters.
> - ACLs/priority can apply to a partition (only the market team has 
> priority to use the partition).
> - Percentages of capacity can apply to a partition (the market team has 40% 
> minimum capacity and the dev team has 60% minimum capacity of the partition).
> Constraints are orthogonal to partitions; they describe attributes of a 
> node's hardware/software just for affinity. Some examples of constraints:
> - glibc version
> - JDK version
> - Type of CPU (x86_64/i686)
> - Type of OS (Windows, Linux, etc.)
> With this, an application can ask for resources that have (glibc.version >= 
> 2.20 && JDK.version >= 8u20 && x86_64).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4206) Add life time value in Application report and CLI

2016-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674602#comment-15674602
 ] 

Hadoop QA commented on YARN-4206:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  4m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 16 new + 368 unchanged - 1 fixed = 384 total (was 369) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m  0s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 34s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 17s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.api.TestPBImplRecords |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
|   | hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor |
|   | hadoop.yarn.client.cli.TestYarnCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-4206 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839398/YARN-4506.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux efd4d6b2bfc0 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 

[jira] [Commented] (YARN-5903) Fix race condition in TestResourceManagerAdministrationProtocolPBClientImpl beforeclass setup method

2016-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674587#comment-15674587
 ] 

Hadoop QA commented on YARN-5903:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m  
7s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5903 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839423/yarn5903.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6afc0406f4e9 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bd37355 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13959/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13959/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix race condition in TestResourceManagerAdministrationProtocolPBClientImpl 
> beforeclass setup method
> 
>
> Key: YARN-5903
> URL: https://issues.apache.org/jira/browse/YARN-5903
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: yarn5903.001.patch
>
>
> This is essentially the same race condition as in 

[jira] [Commented] (YARN-5901) Fix race condition in TestGetGroups beforeclass setup()

2016-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674566#comment-15674566
 ] 

Hadoop QA commented on YARN-5901:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: 
The patch generated 0 new + 1 unchanged - 1 fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
23s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5901 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839420/yarn5901.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e30ebba5489c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bd37355 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13958/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13958/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix race condition in TestGetGroups beforeclass setup()
> ---
>
> Key: YARN-5901
> URL: https://issues.apache.org/jira/browse/YARN-5901
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: unittest
> Attachments: yarn5901.001.patch
>
>
> 

[jira] [Updated] (YARN-5902) yarn.scheduler.increment-allocation-mb and yarn.scheduler.increment-allocation-vcores are undocumented

2016-11-17 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-5902:
---
Summary: yarn.scheduler.increment-allocation-mb and 
yarn.scheduler.increment-allocation-vcores are undocumented  (was: 
yarn.scheduler.increment-allocation-mb and 
yarn.scheduler.increment-allocation-vcores should be documented in 
yarn-default.xml)

> yarn.scheduler.increment-allocation-mb and 
> yarn.scheduler.increment-allocation-vcores are undocumented
> --
>
> Key: YARN-5902
> URL: https://issues.apache.org/jira/browse/YARN-5902
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-5902.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5355) YARN Timeline Service v.2: alpha 2

2016-11-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-5355:
-
Assignee: (was: Haibo Chen)

> YARN Timeline Service v.2: alpha 2
> --
>
> Key: YARN-5355
> URL: https://issues.apache.org/jira/browse/YARN-5355
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: timelineserver
>Reporter: Sangjin Lee
>Priority: Critical
> Attachments: Timeline Service v2_ Ideas for Next Steps.pdf, 
> YARN-5355-branch-2.01.patch
>
>
> This is an umbrella JIRA for the alpha 2 milestone for YARN Timeline Service 
> v.2.
> This is developed on feature branches: {{YARN-5355}} for the trunk-based 
> development and {{YARN-5355-branch-2}} to maintain backports to branch-2. Any 
> subtask work on this JIRA will be committed to those 2 branches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5903) Fix race condition in TestResourceManagerAdministrationProtocolPBClientImpl beforeclass setup method

2016-11-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-5903:
-
Attachment: yarn5903.001.patch

Uploading the same fix as for YARN-5901. If this unreliable check is used often 
in the code base, we could extract it as a util method.

> Fix race condition in TestResourceManagerAdministrationProtocolPBClientImpl 
> beforeclass setup method
> 
>
> Key: YARN-5903
> URL: https://issues.apache.org/jira/browse/YARN-5903
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: yarn5903.001.patch
>
>
> This is essentially the same race condition as in YARN-5901, that is, 
> resourcemanager.getServiceState() == STATE.STARTED does not guarantee 
> resource manager is fully started.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5903) Fix race condition in TestResourceManagerAdministrationProtocolPBClientImpl beforeclass setup method

2016-11-17 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-5903:


 Summary: Fix race condition in 
TestResourceManagerAdministrationProtocolPBClientImpl beforeclass setup method
 Key: YARN-5903
 URL: https://issues.apache.org/jira/browse/YARN-5903
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Affects Versions: 3.0.0-alpha1
Reporter: Haibo Chen
Assignee: Haibo Chen


This is essentially the same race condition as in YARN-5901, that is, 
resourcemanager.getServiceState() == STATE.STARTED does not guarantee resource 
manager is fully started.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5901) Fix race condition in TestGetGroups beforeclass setup()

2016-11-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-5901:
-
Attachment: yarn5901.001.patch

> Fix race condition in TestGetGroups beforeclass setup()
> ---
>
> Key: YARN-5901
> URL: https://issues.apache.org/jira/browse/YARN-5901
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: unittest
> Attachments: yarn5901.001.patch
>
>
> In TestGetGroups, the class-level setup method spins up, in a child thread, a 
> resource manager that YARN clients can talk to. But it checks whether the 
> resource manager is fully started by testing resourcemanager.getServiceState() 
> == STATE.STARTED. This is not reliable, since resourcemanager.start() will 
> first trigger the service state change in the RM and then start up all the 
> services added to the RM. We need to wait for the RM to fully start before 
> YARN clients can send requests. Otherwise, the tests can fail with a 
> "connection refused" exception when the main thread sends client requests to 
> the RM while the RPC server has not yet fired up in the child thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5902) yarn.scheduler.increment-allocation-mb and yarn.scheduler.increment-allocation-vcores should be documented in yarn-default.xml

2016-11-17 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-5902:
---
Attachment: YARN-5902.001.patch

Here's what the change would look like.

> yarn.scheduler.increment-allocation-mb and 
> yarn.scheduler.increment-allocation-vcores should be documented in 
> yarn-default.xml
> --
>
> Key: YARN-5902
> URL: https://issues.apache.org/jira/browse/YARN-5902
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-5902.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5864) Capacity Scheduler preemption for fragmented cluster

2016-11-17 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674481#comment-15674481
 ] 

Wangda Tan commented on YARN-5864:
--

Thanks [~curino] for sharing the Firmament paper. I just read it; it provides a 
lot of insightful ideas. I believe it can work pretty well for a cluster that 
has a homogeneous workload, but it may not be able to solve the mixed-workload 
issues, as the paper itself states:

bq. Firmament shows that a single scheduler can attain scalability, but its 
MCMF optimization does not trivially admit multiple independent schedulers. 

So in my mind, for YARN, we need a Borg-like architecture so that different 
kinds of workloads can be scheduled using different pluggable scheduling 
policies and scorers. Firmament could be one of these scheduling policies. 

I agree with your comment that we should define better semantics for the 
feature; I will think it over and keep you posted.

> Capacity Scheduler preemption for fragmented cluster 
> -
>
> Key: YARN-5864
> URL: https://issues.apache.org/jira/browse/YARN-5864
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5864.poc-0.patch
>
>
> YARN-4390 added preemption for reserved containers. However, we found one case 
> where a large container cannot be allocated even if all queues are under their 
> limits.
> For example, we have:
> {code}
> Two queues, a and b, capacity 50:50 
> Two nodes: n1 and n2, each of them have 50 resource 
> Now queue-a uses 10 on n1 and 10 on n2
> queue-b asks for one single container with resource=45. 
> {code} 
> The container could be reserved on either host, but no preemption will 
> happen because all queues are under their limits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5902) yarn.scheduler.increment-allocation-mb and yarn.scheduler.increment-allocation-vcores should be documented in yarn-default.xml

2016-11-17 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674470#comment-15674470
 ] 

Daniel Templeton commented on YARN-5902:


Since these properties are only in the fair scheduler, is yarn-default.xml the 
right place to document them? If not there, then where?
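
For reference, a sketch of what the proposed yarn-default.xml entries might 
look like (the descriptions and default values here are assumptions based on 
FairScheduler's rounding behavior, not the attached patch):

{code}
<property>
  <description>With FairScheduler, memory requests are rounded up to the
  nearest multiple of this value.</description>
  <name>yarn.scheduler.increment-allocation-mb</name>
  <value>1024</value>
</property>
<property>
  <description>With FairScheduler, vcore requests are rounded up to the
  nearest multiple of this value.</description>
  <name>yarn.scheduler.increment-allocation-vcores</name>
  <value>1</value>
</property>
{code}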

> yarn.scheduler.increment-allocation-mb and 
> yarn.scheduler.increment-allocation-vcores should be documented in 
> yarn-default.xml
> --
>
> Key: YARN-5902
> URL: https://issues.apache.org/jira/browse/YARN-5902
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5902) yarn.scheduler.increment-allocation-mb and yarn.scheduler.increment-allocation-vcores should be documented in yarn-default.xml

2016-11-17 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-5902:
--

 Summary: yarn.scheduler.increment-allocation-mb and 
yarn.scheduler.increment-allocation-vcores should be documented in 
yarn-default.xml
 Key: YARN-5902
 URL: https://issues.apache.org/jira/browse/YARN-5902
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Daniel Templeton
Assignee: Daniel Templeton






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5901) Fix race condition in TestGetGroups beforeclass setup()

2016-11-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-5901:
-
Labels: unittest  (was: )

> Fix race condition in TestGetGroups beforeclass setup()
> ---
>
> Key: YARN-5901
> URL: https://issues.apache.org/jira/browse/YARN-5901
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: unittest
>
> In TestGetGroups, the class-level setup method spins up, in a child thread, a 
> resource manager that YARN clients can talk to. But it checks whether the 
> resource manager is fully started by testing resourcemanager.getServiceState() 
> == STATE.STARTED. This is not reliable, since resourcemanager.start() will 
> first trigger the service state change in the RM and then start up all the 
> services added to the RM. We need to wait for the RM to fully start before 
> YARN clients can send requests. Otherwise, the tests can fail with a 
> "connection refused" exception when the main thread sends client requests to 
> the RM while the RPC server has not yet fired up in the child thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5901) Fix race condition in TestGetGroups beforeclass setup()

2016-11-17 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-5901:


 Summary: Fix race condition in TestGetGroups beforeclass setup()
 Key: YARN-5901
 URL: https://issues.apache.org/jira/browse/YARN-5901
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Affects Versions: 3.0.0-alpha1
Reporter: Haibo Chen
Assignee: Haibo Chen


In TestGetGroups, the class-level setup method spins up, in a child thread, a 
resource manager that YARN clients can talk to. But it checks whether the 
resource manager is fully started by testing resourcemanager.getServiceState() == 
STATE.STARTED. This is not reliable, since resourcemanager.start() will first 
trigger the service state change in the RM and then start up all the services 
added to the RM. We need to wait for the RM to fully start before YARN clients 
can send requests. Otherwise, the tests can fail with a "connection refused" 
exception when the main thread sends client requests to the RM while the RPC 
server has not yet fired up in the child thread.
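
One possible shape of a more reliable readiness check is sketched below 
(illustrative only; the eventual patch may use a different mechanism): instead 
of comparing service state, poll until the RM's client RPC port actually 
accepts connections.

{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

class RmReadinessSketch {
  static void waitForRpc(InetSocketAddress addr, long timeoutMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      try (Socket s = new Socket(addr.getAddress(), addr.getPort())) {
        return;                 // connect succeeded: the RPC server is up
      } catch (IOException notYet) {
        Thread.sleep(100);      // not listening yet; poll again
      }
    }
    throw new IllegalStateException("RM RPC server did not start in time");
  }
}
{code}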



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5280) Allow YARN containers to run with Java Security Manager

2016-11-17 Thread Greg Phillips (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674456#comment-15674456
 ] 

Greg Phillips commented on YARN-5280:
-

That sounds fantastic.  I will add those changes to the next patch.

> Allow YARN containers to run with Java Security Manager
> ---
>
> Key: YARN-5280
> URL: https://issues.apache.org/jira/browse/YARN-5280
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, yarn
>Affects Versions: 2.6.4
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
>  Labels: oct16-medium
> Attachments: YARN-5280.001.patch, YARN-5280.002.patch, 
> YARN-5280.003.patch, YARN-5280.004.patch, YARN-5280.patch, 
> YARNContainerSandbox.pdf
>
>
> YARN applications have the ability to perform privileged actions which have 
> the potential to add instability into the cluster. The Java Security Manager 
> can be used to prevent users from running privileged actions while still 
> allowing their core data processing use cases. 
> Introduce a YARN flag which will allow a Hadoop administrator to enable the 
> Java Security Manager for user code, while still providing complete 
> permissions to core Hadoop libraries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3053) [Security] Review and implement security in ATS v.2

2016-11-17 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674445#comment-15674445
 ] 

Joep Rottinghuis commented on YARN-3053:


For delegation tokens there is a mechanism for renewal. If we create a new 
timeline service token, we might have to consider a renewal mechanism as well.
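
If we do add one, Hadoop's standard hook is a TokenRenewer registered via 
ServiceLoader. A hedged sketch (the token kind string and the renewal call are 
assumptions; only the TokenRenewer base class is real):

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenRenewer;

public class TimelineV2TokenRenewer extends TokenRenewer {
  // Hypothetical kind string for the new timeline service token.
  private static final Text KIND = new Text("TIMELINE_V2_DELEGATION_TOKEN");

  @Override
  public boolean handleKind(Text kind) {
    return KIND.equals(kind);
  }

  @Override
  public boolean isManaged(Token<?> token) {
    return true; // let the RM renew on the application's behalf
  }

  @Override
  public long renew(Token<?> token, Configuration conf)
      throws IOException, InterruptedException {
    // Would contact the timeline collector/reader to extend the token;
    // deliberately unimplemented in this sketch.
    throw new UnsupportedOperationException("sketch only");
  }

  @Override
  public void cancel(Token<?> token, Configuration conf)
      throws IOException, InterruptedException {
    throw new UnsupportedOperationException("sketch only");
  }
}
{code}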

> [Security] Review and implement security in ATS v.2
> ---
>
> Key: YARN-3053
> URL: https://issues.apache.org/jira/browse/YARN-3053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: YARN-5355
> Attachments: ATSv2Authentication(draft).pdf
>
>
> Per design in YARN-2928, we want to evaluate and review the system for 
> security, and ensure proper security in the system.
> This includes proper authentication, token management, access control, and 
> any other relevant security aspects.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3053) [Security] Review and implement security in ATS v.2

2016-11-17 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674433#comment-15674433
 ] 

Joep Rottinghuis commented on YARN-3053:


Once we trust that the connection to the collector is secure, the remaining 
part is to ensure that the connections from the collectors to HBase are secure.
As noted during the status call today, this shouldn't be hard given that the 
collectors run in the NM JVM, so we can configure the NM user (whether it is 
"yarn", "mapred", or something else) to have write access to HBase.
Wrt. YARN-4061, we can make sure that any spooling the HBase client does to 
HDFS is done as the same user as the client ("yarn" or "mapred"), in a 
directory protected by HDFS permissions.
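
For example, the HBase-side grants could look like this (a sketch in the HBase 
shell; the table names follow the ATSv2 defaults, and restricting the NM user 
to 'RW' is an assumption):

{code}
hbase> grant 'yarn', 'RW', 'prod.timelineservice.entity'
hbase> grant 'yarn', 'RW', 'prod.timelineservice.application'
hbase> grant 'yarn', 'RW', 'prod.timelineservice.flowrun'
{code}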

If collectors are going to run in their own containers, we'll have to deal with 
HBase authentication tokens and (HDFS) delegation tokens.

> [Security] Review and implement security in ATS v.2
> ---
>
> Key: YARN-3053
> URL: https://issues.apache.org/jira/browse/YARN-3053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: YARN-5355
> Attachments: ATSv2Authentication(draft).pdf
>
>
> Per design in YARN-2928, we want to evaluate and review the system for 
> security, and ensure proper security in the system.
> This includes proper authentication, token management, access control, and 
> any other relevant security aspects.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5355) YARN Timeline Service v.2: alpha 2

2016-11-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen reassigned YARN-5355:


Assignee: Haibo Chen  (was: Sangjin Lee)

> YARN Timeline Service v.2: alpha 2
> --
>
> Key: YARN-5355
> URL: https://issues.apache.org/jira/browse/YARN-5355
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Haibo Chen
>Priority: Critical
> Attachments: Timeline Service v2_ Ideas for Next Steps.pdf, 
> YARN-5355-branch-2.01.patch
>
>
> This is an umbrella JIRA for the alpha 2 milestone for YARN Timeline Service 
> v.2.
> This is developed on feature branches: {{YARN-5355}} for the trunk-based 
> development and {{YARN-5355-branch-2}} to maintain backports to branch-2. Any 
> subtask work on this JIRA will be committed to those 2 branches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5851) TestContainerManagerSecurity testContainerManager[1] failed

2016-11-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-5851:
-
Labels: unittest  (was: security unittest)

> TestContainerManagerSecurity testContainerManager[1] failed 
> 
>
> Key: YARN-5851
> URL: https://issues.apache.org/jira/browse/YARN-5851
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: unittest
> Attachments: yarn5851.001.patch
>
>
> ---
> Test set: org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> ---
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 21.727 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> testContainerManager[1](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
>   Time elapsed: 0.005 sec  <<< ERROR!
> java.lang.IllegalArgumentException: Can't get Kerberos realm
>   at sun.security.krb5.Config.getDefaultRealm(Config.java:1029)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.security.authentication.util.KerberosUtil.getDefaultRealm(KerberosUtil.java:88)
>   at 
> org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:63)
>   at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:291)
>   at 
> org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:337)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.(TestContainerManagerSecurity.java:151)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3053) [Security] Review and implement security in ATS v.2

2016-11-17 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674398#comment-15674398
 ] 

Joep Rottinghuis commented on YARN-3053:


Two questions:
- Are race conditions possible at the end of the application lifecycle, when 
the AM is done, we cancel the token, and asynchronous communication from 
containers (through NMs?) then arrives?

- How do we deal with AM recovery?
  * What if the AM crashes and has to be restarted: do we restart it with the 
same token, or cancel the token and generate a new one? If the answer is a new 
token, how do we communicate it out to containers / NMs on other hosts?
  * What if the entire machine where the AM and collectors run crashes (or, 
worse, the network partitions)? Do we treat that the same as the previous case?

> [Security] Review and implement security in ATS v.2
> ---
>
> Key: YARN-3053
> URL: https://issues.apache.org/jira/browse/YARN-3053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: YARN-5355
> Attachments: ATSv2Authentication(draft).pdf
>
>
> Per design in YARN-2928, we want to evaluate and review the system for 
> security, and ensure proper security in the system.
> This includes proper authentication, token management, access control, and 
> any other relevant security aspects.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2009) CapacityScheduler: Add intra-queue preemption for app priority support

2016-11-17 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674397#comment-15674397
 ] 

Wangda Tan commented on YARN-2009:
--

[~eepayne], I'm OK with backporting all the required preemption-related 
changes to branch-2.8; please go ahead.

Thanks

> CapacityScheduler: Add intra-queue preemption for app priority support
> --
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
>  Labels: oct16-medium
> Fix For: 2.9.0
>
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, 
> YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, 
> YARN-2009.0006.patch, YARN-2009.0007.patch, YARN-2009.0008.patch, 
> YARN-2009.0009.patch, YARN-2009.0010.patch, YARN-2009.0011.patch, 
> YARN-2009.0012.patch, YARN-2009.0013.patch, YARN-2009.0014.patch, 
> YARN-2009.0015.patch, YARN-2009.0016.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5889) Improve user-limit calculation in capacity scheduler

2016-11-17 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674342#comment-15674342
 ] 

Eric Payne commented on YARN-5889:
--

Thanks [~sunilg] and [~leftnoteasy] for the heads up. I will look at this in 
the next few days.

> Improve user-limit calculation in capacity scheduler
> 
>
> Key: YARN-5889
> URL: https://issues.apache.org/jira/browse/YARN-5889
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-5889.v0.patch
>
>
> Currently the user limit is computed during every heartbeat allocation cycle 
> under a write lock. To improve performance, this ticket focuses on moving the 
> user-limit calculation out of the heartbeat allocation flow.
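
The shape of such a change might be (a self-contained sketch under assumed 
names; not the actual patch):

{code}
import java.util.concurrent.atomic.AtomicLong;

// Recompute the user limit on membership/resource change events, and let
// the heartbeat hot path read a cached value without the queue write lock.
class CachedUserLimit {
  private final AtomicLong userLimitMb = new AtomicLong();

  // Called when apps, users, or nodes change, i.e. off the heartbeat path.
  void recompute(long clusterMb, int activeUsers, float userLimitFactor) {
    long limit = (long) Math.ceil(
        clusterMb * userLimitFactor / Math.max(activeUsers, 1));
    userLimitMb.set(limit);
  }

  // Called from every heartbeat allocation cycle: a lock-free read.
  long get() {
    return userLimitMb.get();
  }
}
{code}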



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2009) CapacityScheduler: Add intra-queue preemption for app priority support

2016-11-17 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674334#comment-15674334
 ] 

Eric Payne commented on YARN-2009:
--

Hey [~leftnoteasy]. The fate of branch-2.8 is still up in the air, but at this 
point the leaning is towards keeping branch-2.8 as its original branch (rather 
than re-branching from the head of branch-2). I would like to go ahead with 
the backport of YARN-4108/YARN-4822 and YARN-4390 to branch-2.8 regardless. 
Any objections?

> CapacityScheduler: Add intra-queue preemption for app priority support
> --
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
>  Labels: oct16-medium
> Fix For: 2.9.0
>
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, 
> YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, 
> YARN-2009.0006.patch, YARN-2009.0007.patch, YARN-2009.0008.patch, 
> YARN-2009.0009.patch, YARN-2009.0010.patch, YARN-2009.0011.patch, 
> YARN-2009.0012.patch, YARN-2009.0013.patch, YARN-2009.0014.patch, 
> YARN-2009.0015.patch, YARN-2009.0016.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-4206) Add life time value in Application report and CLI

2016-11-17 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S reassigned YARN-4206:
---

Assignee: Rohith Sharma K S  (was: nijel)

> Add life time value in Application report and CLI
> -
>
> Key: YARN-4206
> URL: https://issues.apache.org/jira/browse/YARN-4206
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Attachments: YARN-4506.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4206) Add life time value in Application report and CLI

2016-11-17 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-4206:

Attachment: YARN-4506.1.patch

Updated the patch to support the YarnClient API, to have ApplicationCLI print 
the timeout in the status command, and to add a new -updateTimeout option. 
ApplicationReport is also modified to return the list of timeout values.
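
Intended usage might look like this (the exact syntax is an assumption and the 
application ID is illustrative; only the -updateTimeout option name comes from 
this patch):

{code}
# application status now includes the timeout values
yarn application -status application_1479400000000_0001

# hypothetical invocation of the new option (final syntax may differ)
yarn application -appId application_1479400000000_0001 -updateTimeout 3600
{code}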

> Add life time value in Application report and CLI
> -
>
> Key: YARN-4206
> URL: https://issues.apache.org/jira/browse/YARN-4206
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: nijel
> Attachments: YARN-4506.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5859) TestResourceLocalizationService#testParallelDownloadAttemptsForPublicResource sometimes fails

2016-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674046#comment-15674046
 ] 

Hadoop QA commented on YARN-5859:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 2 new + 93 unchanged - 1 fixed = 95 total (was 94) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
38s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5859 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839377/YARN-5859.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 651549df8df4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b2d4b7b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13956/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13956/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13956/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestResourceLocalizationService#testParallelDownloadAttemptsForPublicResource 
> sometimes fails
> 

[jira] [Comment Edited] (YARN-5600) Add a parameter to ContainerLaunchContext to emulate yarn.nodemanager.delete.debug-delay-sec on a per-application basis

2016-11-17 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674060#comment-15674060
 ] 

Varun Vasudev edited comment on YARN-5600 at 11/17/16 3:59 PM:
---

[~miklos.szeg...@cloudera.com] - can you explain when the container and 
application resources will be cleaned up in the following scenarios -
1)
yarn.nodemanager.delete.debug-delay-sec=600
delete.max-per-application-debug-delay-sec=300
container sets DEBUG_DELETE_DELAY to 200

2)
yarn.nodemanager.delete.debug-delay-sec=600
delete.max-per-application-debug-delay-sec=300
container sets DEBUG_DELETE_DELAY to 400

3)
yarn.nodemanager.delete.debug-delay-sec=300
delete.max-per-application-debug-delay-sec=600
container sets DEBUG_DELETE_DELAY to 600

Thanks!


was (Author: vvasudev):
[~miklos.szeg...@cloudera.com] - can you explain when the container and 
application resources will be cleaned up in the following scenarios -
1)
yarn.nodemanager.delete.debug-delay-sec=600
delete.max-per-application-debug-delay-sec=300
container sets DEBUG_DELETE_DELAY to 200

2)
yarn.nodemanager.delete.debug-delay-sec=600
delete.max-per-application-debug-delay-sec=300
container sets DEBUG_DELETE_DELAY to 400

3)
yarn.nodemanager.delete.debug-delay-sec=300
delete.max-per-application-debug-delay-sec=600
container sets DEBUG_DELETE_DELAY to 600

Thanks!

Thanks!

> Add a parameter to ContainerLaunchContext to emulate 
> yarn.nodemanager.delete.debug-delay-sec on a per-application basis
> ---
>
> Key: YARN-5600
> URL: https://issues.apache.org/jira/browse/YARN-5600
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Miklos Szegedi
>  Labels: oct16-medium
> Attachments: YARN-5600.000.patch, YARN-5600.001.patch, 
> YARN-5600.002.patch, YARN-5600.003.patch, YARN-5600.004.patch, 
> YARN-5600.005.patch, YARN-5600.006.patch, YARN-5600.007.patch, 
> YARN-5600.008.patch, YARN-5600.009.patch, YARN-5600.010.patch, 
> YARN-5600.011.patch, YARN-5600.012.patch, YARN-5600.013.patch, 
> YARN-5600.014.patch
>
>
> To make debugging application launch failures simpler, I'd like to add a 
> parameter to the CLC to allow an application owner to request delayed 
> deletion of the application's launch artifacts.
> This JIRA solves largely the same problem as YARN-5599, but for cases where 
> ATS is not in use, e.g. branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5600) Add a parameter to ContainerLaunchContext to emulate yarn.nodemanager.delete.debug-delay-sec on a per-application basis

2016-11-17 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674060#comment-15674060
 ] 

Varun Vasudev commented on YARN-5600:
-

[~miklos.szeg...@cloudera.com] - can you explain when the container and 
application resources will be cleaned up in the following scenarios -
1)
yarn.nodemanager.delete.debug-delay-sec=600
delete.max-per-application-debug-delay-sec=300
container sets DEBUG_DELETE_DELAY to 200

2)
yarn.nodemanager.delete.debug-delay-sec=600
delete.max-per-application-debug-delay-sec=300
container sets DEBUG_DELETE_DELAY to 400

3)
yarn.nodemanager.delete.debug-delay-sec=300
delete.max-per-application-debug-delay-sec=600
container sets DEBUG_DELETE_DELAY to 600

Thanks!

> Add a parameter to ContainerLaunchContext to emulate 
> yarn.nodemanager.delete.debug-delay-sec on a per-application basis
> ---
>
> Key: YARN-5600
> URL: https://issues.apache.org/jira/browse/YARN-5600
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Miklos Szegedi
>  Labels: oct16-medium
> Attachments: YARN-5600.000.patch, YARN-5600.001.patch, 
> YARN-5600.002.patch, YARN-5600.003.patch, YARN-5600.004.patch, 
> YARN-5600.005.patch, YARN-5600.006.patch, YARN-5600.007.patch, 
> YARN-5600.008.patch, YARN-5600.009.patch, YARN-5600.010.patch, 
> YARN-5600.011.patch, YARN-5600.012.patch, YARN-5600.013.patch, 
> YARN-5600.014.patch
>
>
> To make debugging application launch failures simpler, I'd like to add a 
> parameter to the CLC to allow an application owner to request delayed 
> deletion of the application's launch artifacts.
> This JIRA solves largely the same problem as YARN-5599, but for cases where 
> ATS is not in use, e.g. branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5280) Allow YARN containers to run with Java Security Manager

2016-11-17 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15673999#comment-15673999
 ] 

Varun Vasudev commented on YARN-5280:
-

{quote}
The difficulty arises when moving the functionality from prepareContainer to 
launchContainer. In particular I need to modify the actual java run command 
instead of the container launch command. The only way I have found to modify 
the run command found within the launch_container.sh is through the 
LinuxContainerExecutor#writeLaunchEnv. A method which links the 
LinuxContainerExecutor with the ContainerRuntime prior to the environment being 
written seems necessary for this feature. I am very interested in your thoughts 
on this matter.
{quote}

Ah, you're correct; I missed this. How about we add a new method called 
prepareContainer to the ContainerExecutor base class that does nothing by 
default, and override it in LinuxContainerExecutor to call the runtime's 
prepareContainer method? We can invoke it before we call writeLaunchEnv. 
Would that solve your requirement?
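
A minimal sketch of that proposal (class and method shapes are assumptions 
from this thread, not the committed API):

{code}
import java.io.IOException;

// Placeholder for whatever launch state the hook needs (an assumption).
class ContainerPrepareContext { }

interface ContainerRuntime {
  void prepareContainer(ContainerPrepareContext ctx) throws IOException;
}

abstract class ContainerExecutor {
  /** New hook: a no-op by default, invoked just before writeLaunchEnv(). */
  public void prepareContainer(ContainerPrepareContext ctx)
      throws IOException {
    // base class does nothing
  }
}

class LinuxContainerExecutor extends ContainerExecutor {
  private final ContainerRuntime runtime;

  LinuxContainerExecutor(ContainerRuntime runtime) {
    this.runtime = runtime;
  }

  @Override
  public void prepareContainer(ContainerPrepareContext ctx)
      throws IOException {
    // Delegate so the runtime can rewrite the java run command (e.g.
    // inject Java Security Manager options) before the launch
    // environment is written.
    runtime.prepareContainer(ctx);
  }
}
{code}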

> Allow YARN containers to run with Java Security Manager
> ---
>
> Key: YARN-5280
> URL: https://issues.apache.org/jira/browse/YARN-5280
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, yarn
>Affects Versions: 2.6.4
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
>  Labels: oct16-medium
> Attachments: YARN-5280.001.patch, YARN-5280.002.patch, 
> YARN-5280.003.patch, YARN-5280.004.patch, YARN-5280.patch, 
> YARNContainerSandbox.pdf
>
>
> YARN applications have the ability to perform privileged actions which have 
> the potential to add instability into the cluster. The Java Security Manager 
> can be used to prevent users from running privileged actions while still 
> allowing their core data processing use cases. 
> Introduce a YARN flag which will allow a Hadoop administrator to enable the 
> Java Security Manager for user code, while still providing complete 
> permissions to core Hadoop libraries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5877) Allow all nm-whitelist-env to get overridden during launch

2016-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15673988#comment-15673988
 ] 

Hadoop QA commented on YARN-5877:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 15s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5877 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839375/YARN-5877.0001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6bb71e5fa2be 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b2d4b7b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13955/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13955/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13955/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
