[jira] [Updated] (YARN-679) add an entry point that can start any Yarn service

2014-06-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-679:


 Priority: Major  (was: Minor)
Affects Version/s: 2.4.0

> add an entry point that can start any Yarn service
> --
>
> Key: YARN-679
> URL: https://issues.apache.org/jira/browse/YARN-679
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-679-001.patch
>
>
> There's no need to write separate .main classes for every Yarn service, given 
> that the startup mechanism should be identical: create, init, start, wait for 
> stopped, with an interrupt handler to trigger a clean shutdown on a control-c 
> interrupt.
> Provide one that takes any classname, and a list of config files/options.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-679) add an entry point that can start any Yarn service

2014-06-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14017618#comment-14017618
 ] 

Steve Loughran commented on YARN-679:
-

This turns out to be useful both client-side and server-side, as any client 
that directly subclasses {{YarnClientImpl}} or hosts it within its own service 
composite can become a launched service. Similarly, AMs and containers 
are/contain YARN services, and need their own entry points.

Having a single entry point means more effort can be put into that one entry 
point, rather than into per-service ones that are implemented by cut-and-paste 
and may be under-maintained. A single entry point:
# effectively guarantees a well-tested shutdown/interrupt handler
# effectively guarantees more functional testing of failure paths
# with a good factoring of operations it also enables good unit test coverage
# makes it easier to write new YARN services
# provides a standard base set of exit codes
# allows for a single entry point script to directly create and run YARN 
services, as sketched below
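
For illustration, here is a minimal sketch of such an entry point, built only 
on the public {{org.apache.hadoop.service.Service}} lifecycle (create, init, 
start, wait for stop). The class name, usage message, and exit code are 
illustrative, not the ones in the attached patch.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.service.Service;

/**
 * Illustrative generic launcher: instantiate any Service subclass by name,
 * init it with the given config files, start it, and wait for it to stop.
 */
public class ServiceEntryPoint {
  public static void main(String[] args) throws Exception {
    if (args.length < 1) {
      System.err.println("Usage: ServiceEntryPoint <service-class> [conf.xml...]");
      System.exit(-1);   // illustrative "usage" exit code
    }
    Configuration conf = new Configuration();
    for (int i = 1; i < args.length; i++) {
      conf.addResource(new Path(args[i]));  // each extra argument is a config file
    }
    // Reflectively instantiate the service: any Service subclass works.
    final Service service = Class.forName(args[0])
        .asSubclass(Service.class).newInstance();
    // Interrupt handler: control-c triggers a clean service stop.
    Runtime.getRuntime().addShutdownHook(new Thread() {
      @Override
      public void run() {
        service.stop();
      }
    });
    service.init(conf);
    service.start();
    // Block until the service stops itself or the shutdown hook stops it.
    service.waitForServiceToStop(Long.MAX_VALUE);
    System.exit(0);
  }
}
{code}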

> add an entry point that can start any Yarn service
> --
>
> Key: YARN-679
> URL: https://issues.apache.org/jira/browse/YARN-679
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-679-001.patch
>
>
> There's no need to write separate .main classes for every Yarn service, given 
> that the startup mechanism should be identical: create, init, start, wait for 
> stopped, with an interrupt handler to trigger a clean shutdown on a control-c 
> interrupt.
> Provide one that takes any classname, and a list of config files/options.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2022) Preempting an Application Master container can be kept as least priority when multiple applications are marked for preemption by ProportionalCapacityPreemptionPolicy

2014-06-04 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14017633#comment-14017633
 ] 

Sunil G commented on YARN-2022:
---

Thank you very much, Carlo, for the review. Yes, I understood the idea of 
sticking to the existing queue invariants alone for decision making.
The new configuration can be removed here. I will write more unit test cases 
covering all possible corner scenarios, and will also test on a real cluster. 

> Preempting an Application Master container can be kept as least priority when 
> multiple applications are marked for preemption by 
> ProportionalCapacityPreemptionPolicy
> -
>
> Key: YARN-2022
> URL: https://issues.apache.org/jira/browse/YARN-2022
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-2022-DesignDraft.docx, Yarn-2022.1.patch
>
>
> Cluster Size = 16GB [2NM's]
> Queue A Capacity = 50%
> Queue B Capacity = 50%
> Consider there are 3 applications running in Queue A which have taken the full 
> cluster capacity. 
> J1 = 2GB AM + 1GB * 4 Maps
> J2 = 2GB AM + 1GB * 4 Maps
> J3 = 2GB AM + 1GB * 2 Maps
> Another job J4 is submitted in Queue B [J4 needs a 2GB AM + 1GB * 2 Maps].
> Currently in this scenario, job J3 will get killed, including its AM.
> It would be better if the AM could be given the lowest preemption priority 
> among multiple applications. In this same scenario, map tasks from J3 and J2 
> could be preempted instead.
> Later, when the cluster is free, maps can be allocated to these jobs.
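
Working through the arithmetic of the quoted scenario:
{code}
Queue A usage     = J1 (2+4) + J2 (2+4) + J3 (2+2) = 16 GB  (the whole cluster)
Queue B guarantee = 50% of 16 GB = 8 GB; J4 needs 2 GB (AM) + 2 x 1 GB = 4 GB
To preempt        = 4 GB from Queue A
Current behavior  : kill J3 entirely (4 GB), losing its 2 GB AM
Proposed behavior : preempt 4 x 1 GB map containers from J2/J3, keeping all AMs
{code}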



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-2119) DEFAULT_PROXY_ADDRESS should use DEFAULT_PROXY_PORT

2014-06-04 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-2119:
---

Summary: DEFAULT_PROXY_ADDRESS should use DEFAULT_PROXY_PORT  (was: Fix the 
DEFAULT_PROXY_ADDRESS used for getBindAddress)

> DEFAULT_PROXY_ADDRESS should use DEFAULT_PROXY_PORT
> ---
>
> Key: YARN-2119
> URL: https://issues.apache.org/jira/browse/YARN-2119
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
> Attachments: YARN-2119.patch
>
>
> The fix for [YARN-1590|https://issues.apache.org/jira/browse/YARN-1590] 
> introduced a method to get the web proxy bind address with an incorrect 
> default port. Because the method's only user ignores the port, it's not 
> breaking anything yet; fixing it in case someone else uses it in the 
> future. 
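
For context, a sketch of the intended pattern, following the usual 
{{YarnConfiguration}} convention of deriving the default address string from 
the default port constant (the port value here is illustrative):
{code}
// Deriving the address from the port constant keeps the two in sync,
// so the default address can never carry the wrong default port.
public static final int DEFAULT_PROXY_PORT = 9099;
public static final String DEFAULT_PROXY_ADDRESS =
    "0.0.0.0:" + DEFAULT_PROXY_PORT;
{code}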



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-2119) Fix the DEFAULT_PROXY_ADDRESS used for getBindAddress

2014-06-04 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-2119:
---

Summary: Fix the DEFAULT_PROXY_ADDRESS used for getBindAddress  (was: Fix 
the DEFAULT_PROXY_ADDRESS used for getBindAddress to fix 1590)

> Fix the DEFAULT_PROXY_ADDRESS used for getBindAddress
> -
>
> Key: YARN-2119
> URL: https://issues.apache.org/jira/browse/YARN-2119
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
> Attachments: YARN-2119.patch
>
>
> The fix for [YARN-1590|https://issues.apache.org/jira/browse/YARN-1590] 
> introduced a method to get the web proxy bind address with an incorrect 
> default port. Because the method's only user ignores the port, it's not 
> breaking anything yet; fixing it in case someone else uses it in the 
> future. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2061) Revisit logging levels in ZKRMStateStore

2014-06-04 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14018110#comment-14018110
 ] 

Karthik Kambatla commented on YARN-2061:


+1. Committing this.

> Revisit logging levels in ZKRMStateStore 
> -
>
> Key: YARN-2061
> URL: https://issues.apache.org/jira/browse/YARN-2061
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: newbie
> Attachments: YARN2061-01.patch
>
>
> ZKRMStateStore has a few places where it is logging at the INFO level. We 
> should change these to DEBUG or TRACE level messages.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2061) Revisit logging levels in ZKRMStateStore

2014-06-04 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14018226#comment-14018226
 ] 

Karthik Kambatla commented on YARN-2061:


Just committed to trunk and branch-2. Thanks Ray.

> Revisit logging levels in ZKRMStateStore 
> -
>
> Key: YARN-2061
> URL: https://issues.apache.org/jira/browse/YARN-2061
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: newbie
> Fix For: 2.5.0
>
> Attachments: YARN2061-01.patch
>
>
> ZKRMStateStore has a few places where it is logging at the INFO level. We 
> should change these to DEBUG or TRACE level messages.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (YARN-2061) Revisit logging levels in ZKRMStateStore

2014-06-04 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla resolved YARN-2061.


   Resolution: Fixed
Fix Version/s: 2.5.0

> Revisit logging levels in ZKRMStateStore 
> -
>
> Key: YARN-2061
> URL: https://issues.apache.org/jira/browse/YARN-2061
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: newbie
> Fix For: 2.5.0
>
> Attachments: YARN2061-01.patch
>
>
> ZKRMStateStore has a few places where it is logging at the INFO level. We 
> should change these to DEBUG or TRACE level messages.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2061) Revisit logging levels in ZKRMStateStore

2014-06-04 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14018228#comment-14018228
 ] 

Karthik Kambatla commented on YARN-2061:


My bad. Just realized we haven't run Jenkins on this patch.

I ran the ZKRMStateStore tests before committing. I'll keep an eye out for 
anything else this could cause, and fix it up.

> Revisit logging levels in ZKRMStateStore 
> -
>
> Key: YARN-2061
> URL: https://issues.apache.org/jira/browse/YARN-2061
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: newbie
> Fix For: 2.5.0
>
> Attachments: YARN2061-01.patch
>
>
> ZKRMStateStore has a few places where it is logging at the INFO level. We 
> should change these to DEBUG or TRACE level messages.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1424) RMAppAttemptImpl should precompute a zeroed ApplicationResourceUsageReport to return when attempt not active

2014-06-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14018298#comment-14018298
 ] 

Hadoop QA commented on YARN-1424:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645270/YARN1424-01.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/3910//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3910//console

This message is automatically generated.

> RMAppAttemptImpl should precompute a zeroed ApplicationResourceUsageReport to 
> return when attempt not active
> 
>
> Key: YARN-1424
> URL: https://issues.apache.org/jira/browse/YARN-1424
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Sandy Ryza
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: newbie
> Attachments: YARN1424-01.patch
>
>
> RMAppImpl has a DUMMY_APPLICATION_RESOURCE_USAGE_REPORT to return when the 
> caller of createAndGetApplicationReport doesn't have access.
> RMAppAttemptImpl should have something similar for 
> getApplicationResourceUsageReport.
> It also might make sense to put the dummy report into 
> ApplicationResourceUsageReport and allow both to use it.
> A test would also be useful to verify that 
> RMAppAttemptImpl#getApplicationResourceUsageReport doesn't return null if the 
> scheduler doesn't have a report to return.
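
For illustration, one way to precompute such a zeroed report through the 
records factory. This is a sketch under assumptions: the field set and the 
DUMMY_REPORT/createDummy names are illustrative, not taken from the patch.
{code}
import org.apache.hadoop.yarn.api.records.ApplicationResourceUsageReport;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.Records;

// Build one zeroed report up front and return it whenever the attempt is
// not active, instead of allocating a fresh record on every call.
static final ApplicationResourceUsageReport DUMMY_REPORT = createDummy();

static ApplicationResourceUsageReport createDummy() {
  ApplicationResourceUsageReport report =
      Records.newRecord(ApplicationResourceUsageReport.class);
  report.setNumUsedContainers(0);
  report.setNumReservedContainers(0);
  report.setUsedResources(Resource.newInstance(0, 0));
  report.setReservedResources(Resource.newInstance(0, 0));
  report.setNeededResources(Resource.newInstance(0, 0));
  return report;
}
{code}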



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-2091) Add ContainerExitStatus.KILL_EXCEEDED_MEMORY and pass it to app masters

2014-06-04 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated YARN-2091:
-

Attachment: YARN-2091.7.patch

> Add ContainerExitStatus.KILL_EXCEEDED_MEMORY and pass it to app masters
> ---
>
> Key: YARN-2091
> URL: https://issues.apache.org/jira/browse/YARN-2091
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Bikas Saha
>Assignee: Tsuyoshi OZAWA
> Attachments: YARN-2091.1.patch, YARN-2091.2.patch, YARN-2091.3.patch, 
> YARN-2091.4.patch, YARN-2091.5.patch, YARN-2091.6.patch, YARN-2091.7.patch
>
>
> Currently, the AM cannot programmatically determine if the task was killed 
> due to using excessive memory. The NM kills it without passing this 
> information in the container status back to the RM. So the AM cannot take any 
> action here. The jira tracks adding this exit status and passing it from the 
> NM to the RM and then the AM. In general, there may be other such actions 
> taken by YARN that are currently opaque to the AM. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2119) DEFAULT_PROXY_ADDRESS should use DEFAULT_PROXY_PORT

2014-06-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14018307#comment-14018307
 ] 

Hudson commented on YARN-2119:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5650 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5650/])
YARN-2119. DEFAULT_PROXY_ADDRESS should use DEFAULT_PROXY_PORT. (Anubhav Dhoot 
via kasha) (kasha: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1600484)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/TestWebAppProxyServer.java


> DEFAULT_PROXY_ADDRESS should use DEFAULT_PROXY_PORT
> ---
>
> Key: YARN-2119
> URL: https://issues.apache.org/jira/browse/YARN-2119
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
> Fix For: 2.5.0
>
> Attachments: YARN-2119.patch
>
>
> The fix for [YARN-1590|https://issues.apache.org/jira/browse/YARN-1590] 
> introduced a method to get the web proxy bind address with an incorrect 
> default port. Because the method's only user ignores the port, it's not 
> breaking anything yet; fixing it in case someone else uses it in the 
> future. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2061) Revisit logging levels in ZKRMStateStore

2014-06-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14018308#comment-14018308
 ] 

Hudson commented on YARN-2061:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5650 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5650/])
YARN-2061. Revisit logging levels in ZKRMStateStore. (Ray Chiang via kasha) 
(kasha: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1600498)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java


> Revisit logging levels in ZKRMStateStore 
> -
>
> Key: YARN-2061
> URL: https://issues.apache.org/jira/browse/YARN-2061
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: newbie
> Fix For: 2.5.0
>
> Attachments: YARN2061-01.patch
>
>
> ZKRMStateStore has a few places where it is logging at the INFO level. We 
> should change these to DEBUG or TRACE level messages.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1874) Cleanup: Move RMActiveServices out of ResourceManager into its own file

2014-06-04 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14018310#comment-14018310
 ] 

Tsuyoshi OZAWA commented on YARN-1874:
--

[~kkambatl], could you take a look?

> Cleanup: Move RMActiveServices out of ResourceManager into its own file
> ---
>
> Key: YARN-1874
> URL: https://issues.apache.org/jira/browse/YARN-1874
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Karthik Kambatla
>Assignee: Tsuyoshi OZAWA
> Attachments: YARN-1874.1.patch, YARN-1874.2.patch, YARN-1874.3.patch, 
> YARN-1874.4.patch
>
>
> As [~vinodkv] noticed on YARN-1867, ResourceManager is hard to maintain. We 
> should move RMActiveServices out to make it more manageable. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2030) Use StateMachine to simplify handleStoreEvent() in RMStateStore

2014-06-04 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14018335#comment-14018335
 ] 

Junping Du commented on YARN-2030:
--

[~decster], I think Jian He was reviewing your patch. 
[~jianhe], I know you are quite busy with Hadoop Summit recently. Would you 
mind reviewing this patch again after that, or would you like me to review it?

> Use StateMachine to simplify handleStoreEvent() in RMStateStore
> ---
>
> Key: YARN-2030
> URL: https://issues.apache.org/jira/browse/YARN-2030
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Junping Du
>Assignee: Binglin Chang
> Attachments: YARN-2030.v1.patch, YARN-2030.v2.patch
>
>
> Now the logic to handle different store events in handleStoreEvent() is as 
> following:
> {code}
> if (event.getType().equals(RMStateStoreEventType.STORE_APP)
> || event.getType().equals(RMStateStoreEventType.UPDATE_APP)) {
>   ...
>   if (event.getType().equals(RMStateStoreEventType.STORE_APP)) {
> ...
>   } else {
> ...
>   }
>   ...
>   try {
> if (event.getType().equals(RMStateStoreEventType.STORE_APP)) {
>   ...
> } else {
>   ...
> }
>   } 
>   ...
> } else if (event.getType().equals(RMStateStoreEventType.STORE_APP_ATTEMPT)
> || event.getType().equals(RMStateStoreEventType.UPDATE_APP_ATTEMPT)) {
>   ...
>   if (event.getType().equals(RMStateStoreEventType.STORE_APP_ATTEMPT)) {
> ...
>   } else {
> ...
>   }
> ...
> if (event.getType().equals(RMStateStoreEventType.STORE_APP_ATTEMPT)) {
>   ...
> } else {
>   ...
> }
>   }
>   ...
> } else if (event.getType().equals(RMStateStoreEventType.REMOVE_APP)) {
> ...
> } else {
>   ...
> }
> }
> {code}
> This not only confuses people but also easily leads to mistakes. We could 
> leverage a state machine to simplify this, even without state transitions.
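
For illustration, the kind of per-event-type dispatch the quoted description is 
asking for could look like this sketch; the handler class names and the 
EnumMap-based table are assumptions, not the attached patch.
{code}
import java.util.EnumMap;
import java.util.Map;
import org.apache.hadoop.yarn.event.EventHandler;

// One handler per event type replaces the nested if/else chain, so each
// event type is handled in exactly one place.
private final Map<RMStateStoreEventType, EventHandler<RMStateStoreEvent>>
    handlers =
        new EnumMap<RMStateStoreEventType, EventHandler<RMStateStoreEvent>>(
            RMStateStoreEventType.class);

private void initHandlers() {
  // StoreAppHandler etc. are illustrative names for per-type handlers.
  handlers.put(RMStateStoreEventType.STORE_APP, new StoreAppHandler());
  handlers.put(RMStateStoreEventType.UPDATE_APP, new UpdateAppHandler());
  handlers.put(RMStateStoreEventType.STORE_APP_ATTEMPT,
      new StoreAppAttemptHandler());
  handlers.put(RMStateStoreEventType.UPDATE_APP_ATTEMPT,
      new UpdateAppAttemptHandler());
  handlers.put(RMStateStoreEventType.REMOVE_APP, new RemoveAppHandler());
}

protected void handleStoreEvent(RMStateStoreEvent event) {
  EventHandler<RMStateStoreEvent> handler = handlers.get(event.getType());
  if (handler == null) {
    LOG.error("Unknown RMStateStore event type: " + event.getType());
    return;
  }
  handler.handle(event);
}
{code}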



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2091) Add ContainerExitStatus.KILL_EXCEEDED_MEMORY and pass it to app masters

2014-06-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14018336#comment-14018336
 ] 

Hadoop QA commented on YARN-2091:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12648400/YARN-2091.7.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.
See 
https://builds.apache.org/job/PreCommit-YARN-Build/3911//artifact/trunk/patchprocess/diffJavadocWarnings.txt
 for details.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/3911//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3911//console

This message is automatically generated.

> Add ContainerExitStatus.KILL_EXCEEDED_MEMORY and pass it to app masters
> ---
>
> Key: YARN-2091
> URL: https://issues.apache.org/jira/browse/YARN-2091
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Bikas Saha
>Assignee: Tsuyoshi OZAWA
> Attachments: YARN-2091.1.patch, YARN-2091.2.patch, YARN-2091.3.patch, 
> YARN-2091.4.patch, YARN-2091.5.patch, YARN-2091.6.patch, YARN-2091.7.patch
>
>
> Currently, the AM cannot programmatically determine if the task was killed 
> due to using excessive memory. The NM kills it without passing this 
> information in the container status back to the RM. So the AM cannot take any 
> action here. The jira tracks adding this exit status and passing it from the 
> NM to the RM and then the AM. In general, there may be other such actions 
> taken by YARN that are currently opaque to the AM. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-1879) Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol

2014-06-04 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated YARN-1879:
-

Attachment: YARN-1879.7.patch

Rebased on trunk.

> Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol
> ---
>
> Key: YARN-1879
> URL: https://issues.apache.org/jira/browse/YARN-1879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Jian He
>Assignee: Tsuyoshi OZAWA
>Priority: Critical
> Attachments: YARN-1879.1.patch, YARN-1879.1.patch, 
> YARN-1879.2-wip.patch, YARN-1879.2.patch, YARN-1879.3.patch, 
> YARN-1879.4.patch, YARN-1879.5.patch, YARN-1879.6.patch, YARN-1879.7.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2091) Add ContainerExitStatus.KILL_EXCEEDED_MEMORY and pass it to app masters

2014-06-04 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14018394#comment-14018394
 ] 

Tsuyoshi OZAWA commented on YARN-2091:
--

* Added an isDefaultExitCode() method to {{ContainerImpl}} and updated the code 
to use it. {{ContainerImpl#exitCode}} is already initialized to 
ContainerExitStatus.INVALID, so I did not change that.
* Updated the docs/comments like this:
{code}
+  /**
+   * Containers killed by the ApplicationMaster explicitly issuing the
+   * {@link org.apache.hadoop.yarn.api.ContainerManagementProtocol#
+   * stopContainers(org.apache.hadoop.yarn.api.protocolrecords.
+   * StopContainersRequest)} RPC.
+   */
+  public static final int KILL_AM_STOP_CONTAINER = -105;
+
+  /**
+   * Containers killed at the ResourceManager's request, or during a resync
+   * between the ResourceManager and the NodeManager.
+   */
+  public static final int KILL_BY_RESOURCEMANAGER = -106;
+
+  /**
+   * Containers killed by the ResourceManager because the application
+   * has finished.
+   */
+  public static final int KILL_FINISHED_APPMASTER = -107;
{code}
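
For context, a sketch of how an AM could consume these codes once they are 
passed through. KILL_EXCEEDED_MEMORY is the constant this JIRA proposes (its 
final name may differ), and the two helper methods are hypothetical.
{code}
// In an AMRMClientAsync.CallbackHandler: inspect the exit status of each
// completed container and separate deliberate kills from task failures.
@Override
public void onContainersCompleted(List<ContainerStatus> statuses) {
  for (ContainerStatus status : statuses) {
    switch (status.getExitStatus()) {
    case ContainerExitStatus.KILL_EXCEEDED_MEMORY:
      // Retry with a larger container rather than counting a task failure.
      requestLargerContainer(status.getContainerId());  // hypothetical helper
      break;
    case ContainerExitStatus.KILL_BY_RESOURCEMANAGER:
    case ContainerExitStatus.KILL_FINISHED_APPMASTER:
      break;  // expected kills, not task failures
    default:
      handleTaskFailure(status);  // hypothetical helper
    }
  }
}
{code}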

[~bikassaha], please review it.

> Add ContainerExitStatus.KILL_EXCEEDED_MEMORY and pass it to app masters
> ---
>
> Key: YARN-2091
> URL: https://issues.apache.org/jira/browse/YARN-2091
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Bikas Saha
>Assignee: Tsuyoshi OZAWA
> Attachments: YARN-2091.1.patch, YARN-2091.2.patch, YARN-2091.3.patch, 
> YARN-2091.4.patch, YARN-2091.5.patch, YARN-2091.6.patch, YARN-2091.7.patch
>
>
> Currently, the AM cannot programmatically determine if the task was killed 
> due to using excessive memory. The NM kills it without passing this 
> information in the container status back to the RM. So the AM cannot take any 
> action here. The jira tracks adding this exit status and passing it from the 
> NM to the RM and then the AM. In general, there may be other such actions 
> taken by YARN that are currently opaque to the AM. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1879) Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol

2014-06-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14018408#comment-14018408
 ] 

Hadoop QA commented on YARN-1879:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12648409/YARN-1879.7.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.yarn.client.TestRMAdminCLI

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/3912//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3912//console

This message is automatically generated.

> Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol
> ---
>
> Key: YARN-1879
> URL: https://issues.apache.org/jira/browse/YARN-1879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Jian He
>Assignee: Tsuyoshi OZAWA
>Priority: Critical
> Attachments: YARN-1879.1.patch, YARN-1879.1.patch, 
> YARN-1879.2-wip.patch, YARN-1879.2.patch, YARN-1879.3.patch, 
> YARN-1879.4.patch, YARN-1879.5.patch, YARN-1879.6.patch, YARN-1879.7.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1879) Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol

2014-06-04 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14018414#comment-14018414
 ] 

Tsuyoshi OZAWA commented on YARN-1879:
--

The test failure is not related to this patch. It's filed as YARN-2075.

> Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol
> ---
>
> Key: YARN-1879
> URL: https://issues.apache.org/jira/browse/YARN-1879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Jian He
>Assignee: Tsuyoshi OZAWA
>Priority: Critical
> Attachments: YARN-1879.1.patch, YARN-1879.1.patch, 
> YARN-1879.2-wip.patch, YARN-1879.2.patch, YARN-1879.3.patch, 
> YARN-1879.4.patch, YARN-1879.5.patch, YARN-1879.6.patch, YARN-1879.7.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2030) Use StateMachine to simplify handleStoreEvent() in RMStateStore

2014-06-04 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14018478#comment-14018478
 ] 

Jian He commented on YARN-2030:
---

bq.  looks like PBImpl already has ProtoBase as super class, so we can't change 
interface to abstract class
We can merge the stuff from ProtoBase into the PB class and get rid of 
ProtoBase, as was done for other user-facing records.
[~djp], can you help with the review and commit? Thanks.

> Use StateMachine to simplify handleStoreEvent() in RMStateStore
> ---
>
> Key: YARN-2030
> URL: https://issues.apache.org/jira/browse/YARN-2030
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Junping Du
>Assignee: Binglin Chang
> Attachments: YARN-2030.v1.patch, YARN-2030.v2.patch
>
>
> Now the logic to handle different store events in handleStoreEvent() is as 
> following:
> {code}
> if (event.getType().equals(RMStateStoreEventType.STORE_APP)
> || event.getType().equals(RMStateStoreEventType.UPDATE_APP)) {
>   ...
>   if (event.getType().equals(RMStateStoreEventType.STORE_APP)) {
> ...
>   } else {
> ...
>   }
>   ...
>   try {
> if (event.getType().equals(RMStateStoreEventType.STORE_APP)) {
>   ...
> } else {
>   ...
> }
>   } 
>   ...
> } else if (event.getType().equals(RMStateStoreEventType.STORE_APP_ATTEMPT)
> || event.getType().equals(RMStateStoreEventType.UPDATE_APP_ATTEMPT)) {
>   ...
>   if (event.getType().equals(RMStateStoreEventType.STORE_APP_ATTEMPT)) {
> ...
>   } else {
> ...
>   }
> ...
> if (event.getType().equals(RMStateStoreEventType.STORE_APP_ATTEMPT)) {
>   ...
> } else {
>   ...
> }
>   }
>   ...
> } else if (event.getType().equals(RMStateStoreEventType.REMOVE_APP)) {
> ...
> } else {
>   ...
> }
> }
> {code}
> This not only confuses people but also easily leads to mistakes. We could 
> leverage a state machine to simplify this, even without state transitions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)