[jira] [Commented] (YARN-1482) WebApplicationProxy should be always-on w.r.t HA even if it is embedded in the RM

2014-01-06 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863994#comment-13863994
 ] 

Xuan Gong commented on YARN-1482:
-

Merged two test cases, testWebAppProxyInStandAloneMode and 
testEmbeddedWebAppProxy, into TestRMFailover.

> WebApplicationProxy should be always-on w.r.t HA even if it is embedded in 
> the RM
> -
>
> Key: YARN-1482
> URL: https://issues.apache.org/jira/browse/YARN-1482
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Xuan Gong
> Attachments: YARN-1482.1.patch, YARN-1482.2.patch, YARN-1482.3.patch, 
> YARN-1482.4.patch, YARN-1482.4.patch, YARN-1482.5.patch, YARN-1482.5.patch, 
> YARN-1482.6.patch
>
>
> This way, even if an RM goes to standby mode, we can effect a redirect to the 
> active. And more importantly, users will not suddenly see all their links 
> stop working.
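
As a rough illustration of the redirect described above, here is a minimal sketch of how a standby RM's web layer could answer a proxy request by redirecting the client to the active RM. The class, fields, and addresses are hypothetical stand-ins, not the project's actual WebAppProxy code.

{code}
// Minimal sketch, not the actual WebAppProxy code: a standby RM answering a
// web request by redirecting the client to the active RM's address.
public final class StandbyRedirectSketch {

  // Hypothetical view of the HA state this RM is currently in.
  enum HAState { ACTIVE, STANDBY }

  private final HAState localState;
  private final String activeRMWebAddress; // e.g. "http://rm2.example.com:8088"

  StandbyRedirectSketch(HAState localState, String activeRMWebAddress) {
    this.localState = localState;
    this.activeRMWebAddress = activeRMWebAddress;
  }

  /** Returns null to serve the request locally, or an absolute URL to redirect to. */
  String redirectTargetFor(String requestPath) {
    if (localState == HAState.ACTIVE) {
      return null; // the active RM serves the proxy request itself
    }
    // Standby RM: send the client to the same path on the active RM, so saved
    // application links keep working across a failover.
    return activeRMWebAddress + requestPath;
  }

  public static void main(String[] args) {
    StandbyRedirectSketch standby =
        new StandbyRedirectSketch(HAState.STANDBY, "http://rm2.example.com:8088");
    System.out.println(standby.redirectTargetFor("/proxy/application_1388888888888_0001/"));
  }
}
{code}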



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (YARN-1482) WebApplicationProxy should be always-on w.r.t HA even if it is embedded in the RM

2014-01-06 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1482:


Attachment: YARN-1482.6.patch

Created the patch based on the latest trunk.

> WebApplicationProxy should be always-on w.r.t HA even if it is embedded in 
> the RM
> -
>
> Key: YARN-1482
> URL: https://issues.apache.org/jira/browse/YARN-1482
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Xuan Gong
> Attachments: YARN-1482.1.patch, YARN-1482.2.patch, YARN-1482.3.patch, 
> YARN-1482.4.patch, YARN-1482.4.patch, YARN-1482.5.patch, YARN-1482.5.patch, 
> YARN-1482.6.patch
>
>
> This way, even if an RM goes to standby mode, we can effect a redirect to the 
> active. And more importantly, users will not suddenly see all their links 
> stop working.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1531) Update yarn command document

2014-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863956#comment-13863956
 ] 

Hadoop QA commented on YARN-1531:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12621757/YARN-1531.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2810//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2810//console

This message is automatically generated.

> Update yarn command document
> 
>
> Key: YARN-1531
> URL: https://issues.apache.org/jira/browse/YARN-1531
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: documentaion
> Attachments: YARN-1531.patch
>
>
> There are some options that are not documented in the YARN Commands document.
> For example, the "yarn rmadmin" command options are as follows:
> {code}
>  Usage: yarn rmadmin
>-refreshQueues 
>-refreshNodes 
>-refreshSuperUserGroupsConfiguration 
>-refreshUserToGroupsMappings 
>-refreshAdminAcls 
>-refreshServiceAcl 
>-getGroups [username]
>-help [cmd]
>-transitionToActive 
>-transitionToStandby 
>-failover [--forcefence] [--forceactive]  
>-getServiceState 
>-checkHealth 
> {code}
> But some of the new options such as "-getGroups", "-transitionToActive", and 
> "-transitionToStandby" are not documented.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (YARN-1531) Update yarn command document

2014-01-06 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated YARN-1531:


Attachment: YARN-1531.patch

Attaching a patch.

> Update yarn command document
> 
>
> Key: YARN-1531
> URL: https://issues.apache.org/jira/browse/YARN-1531
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: YARN-1531.patch
>
>
> There are some options that are not documented in the YARN Commands document.
> For example, the "yarn rmadmin" command options are as follows:
> {code}
>  Usage: yarn rmadmin
>-refreshQueues 
>-refreshNodes 
>-refreshSuperUserGroupsConfiguration 
>-refreshUserToGroupsMappings 
>-refreshAdminAcls 
>-refreshServiceAcl 
>-getGroups [username]
>-help [cmd]
>-transitionToActive 
>-transitionToStandby 
>-failover [--forcefence] [--forceactive]  
>-getServiceState 
>-checkHealth 
> {code}
> But some of the new options such as "-getGroups", "-transitionToActive", and 
> "-transitionToStandby" are not documented.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1506) Replace set resource change on RMNode/SchedulerNode directly with event notification.

2014-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863924#comment-13863924
 ] 

Hadoop QA commented on YARN-1506:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12621751/YARN-1506-v5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-sls hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2808//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2808//console

This message is automatically generated.

> Replace set resource change on RMNode/SchedulerNode directly with event 
> notification.
> -
>
> Key: YARN-1506
> URL: https://issues.apache.org/jira/browse/YARN-1506
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, scheduler
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Blocker
> Attachments: YARN-1506-v1.patch, YARN-1506-v2.patch, 
> YARN-1506-v3.patch, YARN-1506-v4.patch, YARN-1506-v5.patch
>
>
> According to Vinod's comments on YARN-312 
> (https://issues.apache.org/jira/browse/YARN-312?focusedCommentId=13846087&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13846087),
>  we should replace RMNode.setResourceOption() with some resource change event.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1506) Replace set resource change on RMNode/SchedulerNode directly with event notification.

2014-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863907#comment-13863907
 ] 

Hadoop QA commented on YARN-1506:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12621751/YARN-1506-v5.patch
  against trunk revision .

{color:red}-1 patch{color}.  Trunk compilation may be broken.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2809//console

This message is automatically generated.

> Replace set resource change on RMNode/SchedulerNode directly with event 
> notification.
> -
>
> Key: YARN-1506
> URL: https://issues.apache.org/jira/browse/YARN-1506
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, scheduler
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Blocker
> Attachments: YARN-1506-v1.patch, YARN-1506-v2.patch, 
> YARN-1506-v3.patch, YARN-1506-v4.patch, YARN-1506-v5.patch
>
>
> According to Vinod's comments on YARN-312 
> (https://issues.apache.org/jira/browse/YARN-312?focusedCommentId=13846087&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13846087),
>  we should replace RMNode.setResourceOption() with some resource change event.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (YARN-1506) Replace set resource change on RMNode/SchedulerNode directly with event notification.

2014-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-1506:
-

Attachment: YARN-1506-v5.patch

Addressed Jian's comments in the v5 patch.

> Replace set resource change on RMNode/SchedulerNode directly with event 
> notification.
> -
>
> Key: YARN-1506
> URL: https://issues.apache.org/jira/browse/YARN-1506
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, scheduler
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Blocker
> Attachments: YARN-1506-v1.patch, YARN-1506-v2.patch, 
> YARN-1506-v3.patch, YARN-1506-v4.patch, YARN-1506-v5.patch
>
>
> According to Vinod's comments on YARN-312 
> (https://issues.apache.org/jira/browse/YARN-312?focusedCommentId=13846087&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13846087),
>  we should replace RMNode.setResourceOption() with some resource change event.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1293) TestContainerLaunch.testInvalidEnvSyntaxDiagnostics fails on trunk

2014-01-06 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863903#comment-13863903
 ] 

Akira AJISAKA commented on YARN-1293:
-

+1. I confirmed the test passes in my environment (LANG=ja_JP.UTF-8) with the 
patch.

> TestContainerLaunch.testInvalidEnvSyntaxDiagnostics fails on trunk
> --
>
> Key: YARN-1293
> URL: https://issues.apache.org/jira/browse/YARN-1293
> Project: Hadoop YARN
>  Issue Type: Bug
> Environment: linux
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
> Fix For: 2.3.0
>
> Attachments: YARN-1293.1.patch
>
>
> {quote}
> ---
> Test set: 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
> ---
> Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 12.655 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
> testInvalidEnvSyntaxDiagnostics(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
>   Time elapsed: 0.114 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: null
> at junit.framework.Assert.fail(Assert.java:48)
> at junit.framework.Assert.assertTrue(Assert.java:20)
> at junit.framework.Assert.assertTrue(Assert.java:27)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testInvalidEnvSyntaxDiagnostics(TestContainerLaunch.java:273)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1506) Replace set resource change on RMNode/SchedulerNode directly with event notification.

2014-01-06 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863865#comment-13863865
 ] 

Junping Du commented on YARN-1506:
--

bq. I thought this way before, following common practice. However, given that we 
agreed some of the information (OvercommitTimeout) does not need to go to RMNode 
and be cached there, I think it could be better to have AdminService send 
separate events to RMNode and the Scheduler. Thoughts?
Thinking about it again, a simpler way may be to send the event with 
OvercommitTimeout to RMNode without caching it there, and have the RMNode 
transition send the scheduler event. Will update the patch soon.
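
A minimal, self-contained sketch of the flow proposed above, using simplified stand-in types rather than the actual YARN classes: an AdminService-style caller dispatches a resource-update event carrying the ResourceOption (including the overcommit timeout), and the node reacts in its own handler by updating its capability, without caching the timeout, and forwarding a scheduler event.

{code}
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for the proposed event flow; these are not the actual
// YARN classes, only an illustration of the routing described above.
public final class ResourceUpdateFlowSketch {

  record Resource(int memoryMB, int vcores) {}
  // Carries the new capability plus the overcommit timeout; the timeout is
  // used transiently and never cached on the node.
  record ResourceOption(Resource resource, int overcommitTimeoutSec) {}
  record NodeResourceUpdateEvent(String nodeId, ResourceOption option) {}
  record SchedulerNodeResourceUpdateEvent(String nodeId, Resource newTotal) {}

  interface Scheduler {
    void handle(SchedulerNodeResourceUpdateEvent event);
  }

  // RMNode-like object: it reacts to the update event inside its own handler
  // and forwards a scheduler event, instead of being mutated via a setter.
  static final class Node {
    private final String nodeId;
    private Resource totalCapability;
    private final Scheduler scheduler;

    Node(String nodeId, Resource initial, Scheduler scheduler) {
      this.nodeId = nodeId;
      this.totalCapability = initial;
      this.scheduler = scheduler;
    }

    void handle(NodeResourceUpdateEvent event) {
      // Update the node's view of its total resource and notify the scheduler.
      totalCapability = event.option().resource();
      scheduler.handle(new SchedulerNodeResourceUpdateEvent(nodeId, totalCapability));
    }
  }

  public static void main(String[] args) {
    List<String> log = new ArrayList<>();
    Scheduler scheduler =
        e -> log.add("scheduler saw " + e.newTotal() + " for " + e.nodeId());
    Node node = new Node("host1:45454", new Resource(8192, 8), scheduler);

    // AdminService-style caller: dispatch an event rather than calling a setter.
    node.handle(new NodeResourceUpdateEvent("host1:45454",
        new ResourceOption(new Resource(12288, 12), 30)));
    log.forEach(System.out::println);
  }
}
{code}

The point of the indirection is that nothing outside the node mutates its state directly; both the update and the scheduler notification flow through events.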

> Replace set resource change on RMNode/SchedulerNode directly with event 
> notification.
> -
>
> Key: YARN-1506
> URL: https://issues.apache.org/jira/browse/YARN-1506
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, scheduler
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Blocker
> Attachments: YARN-1506-v1.patch, YARN-1506-v2.patch, 
> YARN-1506-v3.patch, YARN-1506-v4.patch
>
>
> According to Vinod's comments on YARN-312 
> (https://issues.apache.org/jira/browse/YARN-312?focusedCommentId=13846087&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13846087),
>  we should replace RMNode.setResourceOption() with some resource change event.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1506) Replace set resource change on RMNode/SchedulerNode directly with event notification.

2014-01-06 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863854#comment-13863854
 ] 

Junping Du commented on YARN-1506:
--

Thanks Jian for the review and the great comments! Please see my replies:
bq. What about node states other than RUNNING: is a RESOURCE_UPDATE event never 
possible in those states?
Very good point. I think I missed the other cases before. Basically, I now want 
to allow a node in the RUNNING, NEW, or REBOOT state to have its resource 
updated, while a node in an unusable state {UNHEALTHY, DECOMMISSIONED, LOST} 
will log or throw an exception. Does that make sense? 
Also, it looks like we are missing some transitions, i.e.: 
REBOOT -> RUNNING for a rebooted node that comes back as running, to accept 
RECONNECTED/CLEAN_CONTAINER/APP
DECOMMISSIONED -> RUNNING for a decommissioned node that is recommissioned 
LOST -> NEW/UNHEALTHY/DECOMMISSIONED for an expired node that heartbeats again
UNHEALTHY -> RUNNING for an unhealthy node that reports healthy again 
Am I missing anything here?
bq. ResourceOption.build() is not used anywhere?
Nice catch! Will remove it in the next patch.
bq. Maybe send the NODE_RESOURCE_UPDATE event from RMNode instead of from 
AdminService? Inside SchedulerEventType, it actually says the source is 
the node.
I thought this way before, following common practice. However, given that we 
agreed some of the information (OvercommitTimeout) does not need to go to RMNode 
and be cached there, I think it could be better to have AdminService send 
separate events to RMNode and the Scheduler. Thoughts?

> Replace set resource change on RMNode/SchedulerNode directly with event 
> notification.
> -
>
> Key: YARN-1506
> URL: https://issues.apache.org/jira/browse/YARN-1506
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, scheduler
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Blocker
> Attachments: YARN-1506-v1.patch, YARN-1506-v2.patch, 
> YARN-1506-v3.patch, YARN-1506-v4.patch
>
>
> According to Vinod's comments on YARN-312 
> (https://issues.apache.org/jira/browse/YARN-312?focusedCommentId=13846087&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13846087),
>  we should replace RMNode.setResourceOption() with some resource change event.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1029) Allow embedding leader election into the RM

2014-01-06 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863823#comment-13863823
 ] 

Karthik Kambatla commented on YARN-1029:


Awesome. Thanks Bikas, Sandy and Vinod for your reviews.

Created HADOOP-10209 to track the findBugs warnings in ActiveStandbyElector.

> Allow embedding leader election into the RM
> ---
>
> Key: YARN-1029
> URL: https://issues.apache.org/jira/browse/YARN-1029
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bikas Saha
>Assignee: Karthik Kambatla
> Fix For: 2.4.0
>
> Attachments: embedded-zkfc-approach.patch, yarn-1029-0.patch, 
> yarn-1029-0.patch, yarn-1029-1.patch, yarn-1029-10.patch, yarn-1029-2.patch, 
> yarn-1029-3.patch, yarn-1029-4.patch, yarn-1029-5.patch, yarn-1029-6.patch, 
> yarn-1029-7.patch, yarn-1029-7.patch, yarn-1029-8.patch, yarn-1029-9.patch, 
> yarn-1029-approach.patch
>
>
> It should be possible to embed the common ActiveStandbyElector into the RM such 
> that ZooKeeper-based leader election and notification are built in. In 
> conjunction with a ZK state store, this configuration will be a simple 
> deployment option.
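
As a rough sketch of the idea, using simplified stand-in interfaces rather than the actual ActiveStandbyElector or EmbeddedElectorService APIs: the embedded service only needs to translate election callbacks into the RM's HA transitions.

{code}
// Simplified stand-ins, not the actual ActiveStandbyElector/EmbeddedElectorService
// APIs: a sketch of how election callbacks map onto the RM's HA transitions.
public final class EmbeddedElectionSketch {

  // Hypothetical callback surface a ZooKeeper-based elector would invoke.
  interface ElectionCallback {
    void becomeActive();
    void becomeStandby();
    void notifyFatalError(String message);
  }

  // Hypothetical handle on the RM's HA admin operations.
  interface HAAdmin {
    void transitionToActive() throws Exception;
    void transitionToStandby() throws Exception;
  }

  // The embedded elector service simply forwards election outcomes to the RM.
  static final class EmbeddedElector implements ElectionCallback {
    private final HAAdmin rmAdmin;

    EmbeddedElector(HAAdmin rmAdmin) {
      this.rmAdmin = rmAdmin;
    }

    @Override public void becomeActive() {
      try {
        rmAdmin.transitionToActive();
      } catch (Exception e) {
        notifyFatalError("Failed to transition to active: " + e);
      }
    }

    @Override public void becomeStandby() {
      try {
        rmAdmin.transitionToStandby();
      } catch (Exception e) {
        notifyFatalError("Failed to transition to standby: " + e);
      }
    }

    @Override public void notifyFatalError(String message) {
      // In the real RM this would surface as a fatal event; here we just print.
      System.err.println("FATAL: " + message);
    }
  }

  public static void main(String[] args) {
    EmbeddedElector elector = new EmbeddedElector(new HAAdmin() {
      @Override public void transitionToActive() { System.out.println("RM is now active"); }
      @Override public void transitionToStandby() { System.out.println("RM is now standby"); }
    });
    elector.becomeActive();   // elected leader
    elector.becomeStandby();  // lost leadership
  }
}
{code}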



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1029) Allow embedding leader election into the RM

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863811#comment-13863811
 ] 

Hudson commented on YARN-1029:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4966 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4966/])
YARN-1029. Added embedded leader election in the ResourceManager. Contributed 
by Karthik Kambatla. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1556103)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/HAUtil.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/server/yarn_server_resourcemanager_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestRMFailover.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMHAServiceTarget.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/AdminService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/EmbeddedElectorService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMFatalEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMFatalEventType.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStoreOperationFailedEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStoreOperationFailedEventType.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMHA.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStoreZKClientConnections.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/MiniYARNCluster.java


> Allow embedding leader election into the RM
> ---
>
> Key: YARN-1029
> URL: https://issues.apache.org/jira/browse/YARN-1029
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bikas Saha
>Assignee: Karthik Kambatla
> Fix For: 2.4.0
>
> Attachments: embedded-zkfc-approach.patch, yarn-1029-0.patch, 
> yarn-1029-0.patch, yarn-1029-1.patch, yarn-1029-10.patch, yarn-1029-2.patch, 
> yarn-1029-3.patch, yarn-1029-4.patch, yarn-1029-5.patch, yarn-1029-6.patch, 
> yarn-1029-7.patch, yarn-1029-7.patch, yarn-1029-8.patch

[jira] [Commented] (YARN-1029) Allow embedding leader election into the RM

2014-01-06 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863802#comment-13863802
 ] 

Vinod Kumar Vavilapalli commented on YARN-1029:
---

Excellent. Looks good. Checking this in.

Let's make sure the common findBugs warnings are tracked somewhere.

> Allow embedding leader election into the RM
> ---
>
> Key: YARN-1029
> URL: https://issues.apache.org/jira/browse/YARN-1029
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bikas Saha
>Assignee: Karthik Kambatla
> Attachments: embedded-zkfc-approach.patch, yarn-1029-0.patch, 
> yarn-1029-0.patch, yarn-1029-1.patch, yarn-1029-10.patch, yarn-1029-2.patch, 
> yarn-1029-3.patch, yarn-1029-4.patch, yarn-1029-5.patch, yarn-1029-6.patch, 
> yarn-1029-7.patch, yarn-1029-7.patch, yarn-1029-8.patch, yarn-1029-9.patch, 
> yarn-1029-approach.patch
>
>
> It should be possible to embed the common ActiveStandbyElector into the RM such 
> that ZooKeeper-based leader election and notification are built in. In 
> conjunction with a ZK state store, this configuration will be a simple 
> deployment option.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (YARN-1567) In Fair Scheduler, allow empty leaf queues to become parent queues on allocation file reload

2014-01-06 Thread Sandy Ryza (JIRA)
Sandy Ryza created YARN-1567:


 Summary: In Fair Scheduler, allow empty leaf queues to become 
parent queues on allocation file reload
 Key: YARN-1567
 URL: https://issues.apache.org/jira/browse/YARN-1567
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Affects Versions: 2.2.0
Reporter: Sandy Ryza
Assignee: Sandy Ryza






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1496) Protocol additions to allow moving apps between queues

2014-01-06 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863661#comment-13863661
 ] 

Sandy Ryza commented on YARN-1496:
--

Thanks Karthik.  Will commit this to trunk tomorrow to unblock the rest of the 
work that relies on this.  Happy to come back and change names later if others 
have concerns.

> Protocol additions to allow moving apps between queues
> --
>
> Key: YARN-1496
> URL: https://issues.apache.org/jira/browse/YARN-1496
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-1496-1.patch, YARN-1496-2.patch, YARN-1496-3.patch, 
> YARN-1496.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1496) Protocol additions to allow moving apps between queues

2014-01-06 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863658#comment-13863658
 ] 

Karthik Kambatla commented on YARN-1496:


Patch looks good to me. Given that the patch is primarily protocol additions 
without actual functionality, I think we should hold off on merging this to 
branch-2 until said functionality is actually implemented.

+1 to committing this to trunk. 

> Protocol additions to allow moving apps between queues
> --
>
> Key: YARN-1496
> URL: https://issues.apache.org/jira/browse/YARN-1496
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-1496-1.patch, YARN-1496-2.patch, YARN-1496-3.patch, 
> YARN-1496.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1506) Replace set resource change on RMNode/SchedulerNode directly with event notification.

2014-01-06 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863644#comment-13863644
 ] 

Jian He commented on YARN-1506:
---

Wasn't following YARN-291 too closely. Some things I noticed:
- What about node states other than RUNNING: is a RESOURCE_UPDATE event never 
possible in those states?
- ResourceOption.build() is not used anywhere?
- Maybe send the NODE_RESOURCE_UPDATE event from RMNode instead of from 
AdminService? Inside SchedulerEventType, it actually says the source is 
the node.

> Replace set resource change on RMNode/SchedulerNode directly with event 
> notification.
> -
>
> Key: YARN-1506
> URL: https://issues.apache.org/jira/browse/YARN-1506
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, scheduler
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Blocker
> Attachments: YARN-1506-v1.patch, YARN-1506-v2.patch, 
> YARN-1506-v3.patch, YARN-1506-v4.patch
>
>
> According to Vinod's comments on YARN-312 
> (https://issues.apache.org/jira/browse/YARN-312?focusedCommentId=13846087&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13846087),
>  we should replace RMNode.setResourceOption() with some resource change event.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1496) Protocol additions to allow moving apps between queues

2014-01-06 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863579#comment-13863579
 ] 

Sandy Ryza commented on YARN-1496:
--

Remaining javadoc warnings are unrelated to this patch (come from 
MAPREDUCE-3310)

> Protocol additions to allow moving apps between queues
> --
>
> Key: YARN-1496
> URL: https://issues.apache.org/jira/browse/YARN-1496
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-1496-1.patch, YARN-1496-2.patch, YARN-1496-3.patch, 
> YARN-1496.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1496) Protocol additions to allow moving apps between queues

2014-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863562#comment-13863562
 ] 

Hadoop QA commented on YARN-1496:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12621667/YARN-1496-3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
14 warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2807//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2807//console

This message is automatically generated.

> Protocol additions to allow moving apps between queues
> --
>
> Key: YARN-1496
> URL: https://issues.apache.org/jira/browse/YARN-1496
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-1496-1.patch, YARN-1496-2.patch, YARN-1496-3.patch, 
> YARN-1496.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (YARN-1566) Change distributed-shell to retain containers from previous AppAttempt

2014-01-06 Thread Jian He (JIRA)
Jian He created YARN-1566:
-

 Summary: Change distributed-shell to retain containers from 
previous AppAttempt
 Key: YARN-1566
 URL: https://issues.apache.org/jira/browse/YARN-1566
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He


Change distributed-shell to reuse the previous AM's running containers when the 
AM restarts. Whether to enable this feature can also be made configurable.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1326) RM should log using RMStore at startup time

2014-01-06 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863470#comment-13863470
 ] 

Karthik Kambatla commented on YARN-1326:


Is this really required? I believe it is easier for a user to check the 
Configuration being used by the RM than to pull up the logs, which can get 
rolled over.

> RM should log using RMStore at startup time
> ---
>
> Key: YARN-1326
> URL: https://issues.apache.org/jira/browse/YARN-1326
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
> Attachments: YARN-1326.1.patch
>
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> Currently there is no way to know which RMStore the RM uses. It would be useful 
> to log this information at RM startup time.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1197) Support changing resources of an allocated container

2014-01-06 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863469#comment-13863469
 ] 

Sandy Ryza commented on YARN-1197:
--

[~acmurthy], any progress on the branch?  If not, I'd be happy to take care of 
it.

> Support changing resources of an allocated container
> 
>
> Key: YARN-1197
> URL: https://issues.apache.org/jira/browse/YARN-1197
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: api, nodemanager, resourcemanager
>Affects Versions: 2.1.0-beta
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: mapreduce-project.patch.ver.1, 
> tools-project.patch.ver.1, yarn-1197-scheduler-v1.pdf, yarn-1197-v2.pdf, 
> yarn-1197-v3.pdf, yarn-1197-v4.pdf, yarn-1197-v5.pdf, yarn-1197.pdf, 
> yarn-api-protocol.patch.ver.1, yarn-pb-impl.patch.ver.1, 
> yarn-server-common.patch.ver.1, yarn-server-nodemanager.patch.ver.1, 
> yarn-server-resourcemanager.patch.ver.1
>
>
> The current YARN resource management logic assumes the resource allocated to a 
> container is fixed for the lifetime of the container. When users want to change 
> the resource of an allocated container, the only way is to release it and 
> allocate a new container with the expected size.
> Allowing the resources of an allocated container to be changed at run time will 
> give applications better control of their resource usage.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1029) Allow embedding leader election into the RM

2014-01-06 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863409#comment-13863409
 ] 

Karthik Kambatla commented on YARN-1029:


The same javadoc warnings showed up on YARN-1482 as well. I think the latest 
patch is good to go.

> Allow embedding leader election into the RM
> ---
>
> Key: YARN-1029
> URL: https://issues.apache.org/jira/browse/YARN-1029
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bikas Saha
>Assignee: Karthik Kambatla
> Attachments: embedded-zkfc-approach.patch, yarn-1029-0.patch, 
> yarn-1029-0.patch, yarn-1029-1.patch, yarn-1029-10.patch, yarn-1029-2.patch, 
> yarn-1029-3.patch, yarn-1029-4.patch, yarn-1029-5.patch, yarn-1029-6.patch, 
> yarn-1029-7.patch, yarn-1029-7.patch, yarn-1029-8.patch, yarn-1029-9.patch, 
> yarn-1029-approach.patch
>
>
> It should be possible to embed the common ActiveStandbyElector into the RM such 
> that ZooKeeper-based leader election and notification are built in. In 
> conjunction with a ZK state store, this configuration will be a simple 
> deployment option.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1029) Allow embedding leader election into the RM

2014-01-06 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863406#comment-13863406
 ] 

Karthik Kambatla commented on YARN-1029:


Weird. Running javadoc locally doesn't reveal any warnings - I ran mvn 
javadoc:javadoc with and without the patch for the hadoop-yarn-project. The 
findbugs warnings are in common code, and the test failure is unrelated.

> Allow embedding leader election into the RM
> ---
>
> Key: YARN-1029
> URL: https://issues.apache.org/jira/browse/YARN-1029
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bikas Saha
>Assignee: Karthik Kambatla
> Attachments: embedded-zkfc-approach.patch, yarn-1029-0.patch, 
> yarn-1029-0.patch, yarn-1029-1.patch, yarn-1029-10.patch, yarn-1029-2.patch, 
> yarn-1029-3.patch, yarn-1029-4.patch, yarn-1029-5.patch, yarn-1029-6.patch, 
> yarn-1029-7.patch, yarn-1029-7.patch, yarn-1029-8.patch, yarn-1029-9.patch, 
> yarn-1029-approach.patch
>
>
> It should be possible to embed the common ActiveStandbyElector into the RM such 
> that ZooKeeper-based leader election and notification are built in. In 
> conjunction with a ZK state store, this configuration will be a simple 
> deployment option.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1482) WebApplicationProxy should be always-on w.r.t HA even if it is embedded in the RM

2014-01-06 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863407#comment-13863407
 ] 

Xuan Gong commented on YARN-1482:
-

The -1 on javadoc is unrelated.

> WebApplicationProxy should be always-on w.r.t HA even if it is embedded in 
> the RM
> -
>
> Key: YARN-1482
> URL: https://issues.apache.org/jira/browse/YARN-1482
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Xuan Gong
> Attachments: YARN-1482.1.patch, YARN-1482.2.patch, YARN-1482.3.patch, 
> YARN-1482.4.patch, YARN-1482.4.patch, YARN-1482.5.patch, YARN-1482.5.patch
>
>
> This way, even if an RM goes to standby mode, we can effect a redirect to the 
> active. And more importantly, users will not suddenly see all their links 
> stop working.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (YARN-1496) Protocol additions to allow moving apps between queues

2014-01-06 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-1496:
-

Attachment: YARN-1496-3.patch

> Protocol additions to allow moving apps between queues
> --
>
> Key: YARN-1496
> URL: https://issues.apache.org/jira/browse/YARN-1496
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-1496-1.patch, YARN-1496-2.patch, YARN-1496-3.patch, 
> YARN-1496.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1482) WebApplicationProxy should be always-on w.r.t HA even if it is embedded in the RM

2014-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863382#comment-13863382
 ] 

Hadoop QA commented on YARN-1482:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12621657/YARN-1482.5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
14 warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2806//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2806//console

This message is automatically generated.

> WebApplicationProxy should be always-on w.r.t HA even if it is embedded in 
> the RM
> -
>
> Key: YARN-1482
> URL: https://issues.apache.org/jira/browse/YARN-1482
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Xuan Gong
> Attachments: YARN-1482.1.patch, YARN-1482.2.patch, YARN-1482.3.patch, 
> YARN-1482.4.patch, YARN-1482.4.patch, YARN-1482.5.patch, YARN-1482.5.patch
>
>
> This way, even if an RM goes to standby mode, we can effect a redirect to the 
> active. And more importantly, users will not suddenly see all their links 
> stop working.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (YARN-1482) WebApplicationProxy should be always-on w.r.t HA even if it is embedded in the RM

2014-01-06 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1482:


Attachment: YARN-1482.5.patch

> WebApplicationProxy should be always-on w.r.t HA even if it is embedded in 
> the RM
> -
>
> Key: YARN-1482
> URL: https://issues.apache.org/jira/browse/YARN-1482
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Xuan Gong
> Attachments: YARN-1482.1.patch, YARN-1482.2.patch, YARN-1482.3.patch, 
> YARN-1482.4.patch, YARN-1482.4.patch, YARN-1482.5.patch, YARN-1482.5.patch
>
>
> This way, even if an RM goes to standby mode, we can effect a redirect to the 
> active. And more importantly, users will not suddenly see all their links 
> stop working.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1029) Allow embedding leader election into the RM

2014-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863331#comment-13863331
 ] 

Hadoop QA commented on YARN-1029:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12621525/yarn-1029-10.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
14 warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests:

  org.apache.hadoop.yarn.client.api.impl.TestYarnClient

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2804//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/2804//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2804//console

This message is automatically generated.

> Allow embedding leader election into the RM
> ---
>
> Key: YARN-1029
> URL: https://issues.apache.org/jira/browse/YARN-1029
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bikas Saha
>Assignee: Karthik Kambatla
> Attachments: embedded-zkfc-approach.patch, yarn-1029-0.patch, 
> yarn-1029-0.patch, yarn-1029-1.patch, yarn-1029-10.patch, yarn-1029-2.patch, 
> yarn-1029-3.patch, yarn-1029-4.patch, yarn-1029-5.patch, yarn-1029-6.patch, 
> yarn-1029-7.patch, yarn-1029-7.patch, yarn-1029-8.patch, yarn-1029-9.patch, 
> yarn-1029-approach.patch
>
>
> It should be possible to embed the common ActiveStandbyElector into the RM such 
> that ZooKeeper-based leader election and notification are built in. In 
> conjunction with a ZK state store, this configuration will be a simple 
> deployment option.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1496) Protocol additions to allow moving apps between queues

2014-01-06 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863326#comment-13863326
 ] 

Karthik Kambatla commented on YARN-1496:


Patch looks reasonable to me. Can you take care of the javadoc issue and check 
why TestRMRestart seems to be failing?  

> Protocol additions to allow moving apps between queues
> --
>
> Key: YARN-1496
> URL: https://issues.apache.org/jira/browse/YARN-1496
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-1496-1.patch, YARN-1496-2.patch, YARN-1496.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1482) WebApplicationProxy should be always-on w.r.t HA even if it is embedded in the RM

2014-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863320#comment-13863320
 ] 

Hadoop QA commented on YARN-1482:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12621647/YARN-1482.5.patch
  against trunk revision .

{color:red}-1 patch{color}.  Trunk compilation may be broken.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2805//console

This message is automatically generated.

> WebApplicationProxy should be always-on w.r.t HA even if it is embedded in 
> the RM
> -
>
> Key: YARN-1482
> URL: https://issues.apache.org/jira/browse/YARN-1482
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Xuan Gong
> Attachments: YARN-1482.1.patch, YARN-1482.2.patch, YARN-1482.3.patch, 
> YARN-1482.4.patch, YARN-1482.4.patch, YARN-1482.5.patch
>
>
> This way, even if an RM goes to standby mode, we can effect a redirect to the 
> active. And more importantly, users will not suddenly see all their links 
> stop working.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1482) WebApplicationProxy should be always-on w.r.t HA even if it is embedded in the RM

2014-01-06 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863304#comment-13863304
 ] 

Xuan Gong commented on YARN-1482:
-

Created the patch based on the latest trunk.

> WebApplicationProxy should be always-on w.r.t HA even if it is embedded in 
> the RM
> -
>
> Key: YARN-1482
> URL: https://issues.apache.org/jira/browse/YARN-1482
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Xuan Gong
> Attachments: YARN-1482.1.patch, YARN-1482.2.patch, YARN-1482.3.patch, 
> YARN-1482.4.patch, YARN-1482.4.patch, YARN-1482.5.patch
>
>
> This way, even if an RM goes to standby mode, we can effect a redirect to the 
> active. And more importantly, users will not suddenly see all their links 
> stop working.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (YARN-304) RM Tracking Links for purged applications needs a long-term solution

2014-01-06 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen reassigned YARN-304:


Assignee: Zhijie Shen

> RM Tracking Links for purged applications needs a long-term solution
> 
>
> Key: YARN-304
> URL: https://issues.apache.org/jira/browse/YARN-304
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 0.23.5
>Reporter: Derek Dagit
>Assignee: Zhijie Shen
>
> This JIRA is intended to track a proper long-term fix for the issue described 
> in YARN-285.
> The following is from the original description:
> As applications complete, the RM tracks their IDs in a completed list. This 
> list is routinely truncated to limit the total number of applications 
> remembered by the RM.
> When a user clicks the History link for a job, the browser is redirected to 
> the application's tracking link obtained from the stored application 
> instance. But when the application has been purged from the RM, an error is 
> displayed.
> In very busy clusters the rate at which applications complete can cause 
> applications to be purged from the RM's internal list within hours, which 
> breaks the proxy URLs users have saved for their jobs.
> We would like the RM to provide valid tracking links that persist, so that 
> users are not frustrated by broken links.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (YARN-1482) WebApplicationProxy should be always-on w.r.t HA even if it is embedded in the RM

2014-01-06 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1482:


Attachment: YARN-1482.5.patch

> WebApplicationProxy should be always-on w.r.t HA even if it is embedded in 
> the RM
> -
>
> Key: YARN-1482
> URL: https://issues.apache.org/jira/browse/YARN-1482
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Xuan Gong
> Attachments: YARN-1482.1.patch, YARN-1482.2.patch, YARN-1482.3.patch, 
> YARN-1482.4.patch, YARN-1482.4.patch, YARN-1482.5.patch
>
>
> This way, even if an RM goes to standby mode, we can effect a redirect to the 
> active. And more importantly, users will not suddenly see all their links 
> stop working.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-304) RM Tracking Links for purged applications needs a long-term solution

2014-01-06 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863297#comment-13863297
 ] 

Zhijie Shen commented on YARN-304:
--

bq. In very busy clusters the rate at which applications complete can cause 
applications to be purged from the RM's internal list within hours, which 
breaks the proxy URLs users have saved for their jobs.

With the AHS deployed, users are able to track completed applications even 
after they are purged from the RM's in-memory store. The problem is helping 
users seamlessly reach those applications on the AHS when they only have the 
proxy URLs on hand: the RM needs to translate the URLs and redirect the users 
to the AHS.

bq. When a user clicks the History link for a job, the browser is redirected 
to the application's tracking link obtained from the stored application 
instance. But when the application has been purged from the RM, an error is 
displayed.

A related issue is that if an application is killed or fails, or it doesn't 
provide an updated tracking URL when unregistering, users may not have a 
proxy URL to track the application after it completes; the "History" link 
points to the app page in the RM itself. It would be good to set the tracking 
URL to the AHS by default if it is not updated when an application completes.
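
A minimal sketch of the URL translation described above. The proxy and history-server URL layouts here are assumptions for illustration, not the actual WebAppProxy or AHS paths.

{code}
import java.util.Set;

// Sketch only: the proxy and history-server URL layouts below are assumptions,
// not the actual WebAppProxy/AHS paths.
public final class HistoryRedirectSketch {

  private final Set<String> appsKnownToRM;       // apps still in the RM's completed list
  private final String historyServerWebAddress;  // e.g. "http://ahs.example.com:8188"

  HistoryRedirectSketch(Set<String> appsKnownToRM, String historyServerWebAddress) {
    this.appsKnownToRM = appsKnownToRM;
    this.historyServerWebAddress = historyServerWebAddress;
  }

  /**
   * Translates a saved proxy path like "/proxy/application_.../" into a redirect
   * target: null if the RM can still serve it, otherwise the history server's
   * page for that application.
   */
  String redirectTargetFor(String proxyPath) {
    String appId = extractAppId(proxyPath);
    if (appId == null || appsKnownToRM.contains(appId)) {
      return null; // the RM still knows the app; handle it as today
    }
    return historyServerWebAddress + "/applicationhistory/app/" + appId;
  }

  private static String extractAppId(String proxyPath) {
    for (String part : proxyPath.split("/")) {
      if (part.startsWith("application_")) {
        return part;
      }
    }
    return null;
  }

  public static void main(String[] args) {
    HistoryRedirectSketch sketch = new HistoryRedirectSketch(
        Set.of("application_1388888888888_0002"), "http://ahs.example.com:8188");
    // A purged app falls through to the history server.
    System.out.println(sketch.redirectTargetFor("/proxy/application_1388888888888_0001/"));
    // A still-known app keeps the existing RM-side handling.
    System.out.println(sketch.redirectTargetFor("/proxy/application_1388888888888_0002/"));
  }
}
{code}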

> RM Tracking Links for purged applications needs a long-term solution
> 
>
> Key: YARN-304
> URL: https://issues.apache.org/jira/browse/YARN-304
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 0.23.5
>Reporter: Derek Dagit
>
> This JIRA is intended to track a proper long-term fix for the issue described 
> in YARN-285.
> The following is from the original description:
> As applications complete, the RM tracks their IDs in a completed list. This 
> list is routinely truncated to limit the total number of application 
> remembered by the RM.
> When a user clicks the History for a job, either the browser is redirected to 
> the application's tracking link obtained from the stored application 
> instance. But when the application has been purged from the RM, an error is 
> displayed.
> In very busy clusters the rate at which applications complete can cause 
> applications to be purged from the RM's internal list within hours, which 
> breaks the proxy URLs users have saved for their jobs.
> We would like the RM to provide valid tracking links persist so that users 
> are not frustrated by broken links.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1560) TestYarnClient#testAMMRTokens fails with null AMRM token

2014-01-06 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863249#comment-13863249
 ] 

Jian He commented on YARN-1560:
---

Committed to trunk and branch-2. Thanks, Ted!

> TestYarnClient#testAMMRTokens fails with null AMRM token
> 
>
> Key: YARN-1560
> URL: https://issues.apache.org/jira/browse/YARN-1560
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: yarn-1560-v1.txt, yarn-1560-v2.txt
>
>
> The following can be reproduced locally:
> {code}
> testAMMRTokens(org.apache.hadoop.yarn.client.api.impl.TestYarnClient)  Time 
> elapsed: 3.341 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: null
>   at junit.framework.Assert.fail(Assert.java:48)
>   at junit.framework.Assert.assertTrue(Assert.java:20)
>   at junit.framework.Assert.assertNotNull(Assert.java:218)
>   at junit.framework.Assert.assertNotNull(Assert.java:211)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.testAMMRTokens(TestYarnClient.java:382)
> {code}
> This test didn't appear in 
> https://builds.apache.org/job/Hadoop-Yarn-trunk/442/consoleFull



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1560) TestYarnClient#testAMMRTokens fails with null AMRM token

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863253#comment-13863253
 ] 

Hudson commented on YARN-1560:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4960 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4960/])
YARN-1560. Fixed TestYarnClient#testAMMRTokens failure with null AMRM token. 
(Contributed by Ted Yu) (jianhe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1555975)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestYarnClient.java


> TestYarnClient#testAMMRTokens fails with null AMRM token
> 
>
> Key: YARN-1560
> URL: https://issues.apache.org/jira/browse/YARN-1560
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: yarn-1560-v1.txt, yarn-1560-v2.txt
>
>
> The following can be reproduced locally:
> {code}
> testAMMRTokens(org.apache.hadoop.yarn.client.api.impl.TestYarnClient)  Time 
> elapsed: 3.341 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: null
>   at junit.framework.Assert.fail(Assert.java:48)
>   at junit.framework.Assert.assertTrue(Assert.java:20)
>   at junit.framework.Assert.assertNotNull(Assert.java:218)
>   at junit.framework.Assert.assertNotNull(Assert.java:211)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.testAMMRTokens(TestYarnClient.java:382)
> {code}
> This test didn't appear in 
> https://builds.apache.org/job/Hadoop-Yarn-trunk/442/consoleFull



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1559) Race between ServerRMProxy and ClientRMProxy setting RMProxy#INSTANCE

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863222#comment-13863222
 ] 

Hudson commented on YARN-1559:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4959 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4959/])
YARN-1559. Race between ServerRMProxy and ClientRMProxy setting 
RMProxy#INSTANCE. (kasha and vinodkv via kasha) (kasha: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1555970)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/ClientRMProxy.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/ServerRMProxy.java


> Race between ServerRMProxy and ClientRMProxy setting RMProxy#INSTANCE
> -
>
> Key: YARN-1559
> URL: https://issues.apache.org/jira/browse/YARN-1559
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Fix For: 2.4.0
>
> Attachments: YARN-1559-20140105.txt, yarn-1559-1.patch, 
> yarn-1559-2.patch, yarn-1559-3.patch
>
>
> RMProxy#INSTANCE is a non-final static field and both ServerRMProxy and 
> ClientRMProxy set it. This leads to races as witnessed on - YARN-1482.
> Sample trace:
> {noformat}
> java.lang.IllegalArgumentException: RM does not support this client protocol
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at 
> org.apache.hadoop.yarn.client.ClientRMProxy.checkAllowedProtocols(ClientRMProxy.java:119)
> at 
> org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.init(ConfiguredRMFailoverProxyProvider.java:58)
> at 
> org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:158)
> at 
> org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:88)
> at 
> org.apache.hadoop.yarn.server.api.ServerRMProxy.createRMProxy(ServerRMProxy.java:56)
> {noformat}
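
A minimal sketch of the race pattern described above, with made-up class names 
(this is not the actual RMProxy code): both subclasses funnel proxy creation 
through one shared, non-final static field, so the protocol check can end up 
running against the wrong subclass.

{code}
// Illustrative only -- class and method names are invented for the sketch.
abstract class ProxySketch {
  // Shared, non-final static: the root of the race. Both the "client" and
  // "server" subclasses overwrite it on every createProxy() call.
  private static ProxySketch INSTANCE;

  abstract void checkAllowedProtocols(Class<?> protocol);

  static void createProxy(ProxySketch self, Class<?> protocol) {
    INSTANCE = self;                           // thread A installs ClientProxySketch...
    INSTANCE.checkAllowedProtocols(protocol);  // ...but thread B may have swapped in
                                               // ServerProxySketch in between, so the
                                               // wrong subclass validates the protocol
  }
}

class ClientProxySketch extends ProxySketch {
  @Override
  void checkAllowedProtocols(Class<?> protocol) {
    if (!protocol.getSimpleName().contains("Client")) {
      throw new IllegalArgumentException("RM does not support this client protocol");
    }
  }
}

class ServerProxySketch extends ProxySketch {
  @Override
  void checkAllowedProtocols(Class<?> protocol) {
    if (!protocol.getSimpleName().contains("Tracker")) {
      throw new IllegalArgumentException("RM does not support this server protocol");
    }
  }
}
{code}

Removing the shared static (for example, by passing the concrete instance down 
the call chain) closes this window, which appears to be the direction the 
attached patches take.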



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1559) Race between ServerRMProxy and ClientRMProxy setting RMProxy#INSTANCE

2014-01-06 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863203#comment-13863203
 ] 

Xuan Gong commented on YARN-1559:
-

[~kkambatl] Can we commit this now?

> Race between ServerRMProxy and ClientRMProxy setting RMProxy#INSTANCE
> -
>
> Key: YARN-1559
> URL: https://issues.apache.org/jira/browse/YARN-1559
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Attachments: YARN-1559-20140105.txt, yarn-1559-1.patch, 
> yarn-1559-2.patch, yarn-1559-3.patch
>
>
> RMProxy#INSTANCE is a non-final static field and both ServerRMProxy and 
> ClientRMProxy set it. This leads to races as witnessed on - YARN-1482.
> Sample trace:
> {noformat}
> java.lang.IllegalArgumentException: RM does not support this client protocol
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at 
> org.apache.hadoop.yarn.client.ClientRMProxy.checkAllowedProtocols(ClientRMProxy.java:119)
> at 
> org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.init(ConfiguredRMFailoverProxyProvider.java:58)
> at 
> org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:158)
> at 
> org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:88)
> at 
> org.apache.hadoop.yarn.server.api.ServerRMProxy.createRMProxy(ServerRMProxy.java:56)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1559) Race between ServerRMProxy and ClientRMProxy setting RMProxy#INSTANCE

2014-01-06 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863187#comment-13863187
 ] 

Karthik Kambatla commented on YARN-1559:


Thanks Bikas. I am committing this shortly, then.

> Race between ServerRMProxy and ClientRMProxy setting RMProxy#INSTANCE
> -
>
> Key: YARN-1559
> URL: https://issues.apache.org/jira/browse/YARN-1559
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Attachments: YARN-1559-20140105.txt, yarn-1559-1.patch, 
> yarn-1559-2.patch, yarn-1559-3.patch
>
>
> RMProxy#INSTANCE is a non-final static field and both ServerRMProxy and 
> ClientRMProxy set it. This leads to races as witnessed on - YARN-1482.
> Sample trace:
> {noformat}
> java.lang.IllegalArgumentException: RM does not support this client protocol
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at 
> org.apache.hadoop.yarn.client.ClientRMProxy.checkAllowedProtocols(ClientRMProxy.java:119)
> at 
> org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.init(ConfiguredRMFailoverProxyProvider.java:58)
> at 
> org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:158)
> at 
> org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:88)
> at 
> org.apache.hadoop.yarn.server.api.ServerRMProxy.createRMProxy(ServerRMProxy.java:56)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1559) Race between ServerRMProxy and ClientRMProxy setting RMProxy#INSTANCE

2014-01-06 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863178#comment-13863178
 ] 

Bikas Saha commented on YARN-1559:
--

Never mind then. +1. Let's commit this patch and not randomize YARN-1029 any 
further.

> Race between ServerRMProxy and ClientRMProxy setting RMProxy#INSTANCE
> -
>
> Key: YARN-1559
> URL: https://issues.apache.org/jira/browse/YARN-1559
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Attachments: YARN-1559-20140105.txt, yarn-1559-1.patch, 
> yarn-1559-2.patch, yarn-1559-3.patch
>
>
> RMProxy#INSTANCE is a non-final static field and both ServerRMProxy and 
> ClientRMProxy set it. This leads to races as witnessed on - YARN-1482.
> Sample trace:
> {noformat}
> java.lang.IllegalArgumentException: RM does not support this client protocol
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at 
> org.apache.hadoop.yarn.client.ClientRMProxy.checkAllowedProtocols(ClientRMProxy.java:119)
> at 
> org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.init(ConfiguredRMFailoverProxyProvider.java:58)
> at 
> org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:158)
> at 
> org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:88)
> at 
> org.apache.hadoop.yarn.server.api.ServerRMProxy.createRMProxy(ServerRMProxy.java:56)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1559) Race between ServerRMProxy and ClientRMProxy setting RMProxy#INSTANCE

2014-01-06 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863171#comment-13863171
 ] 

Karthik Kambatla commented on YARN-1559:


bq. I am guessing stability of the flaky test is the testing for this patch. 
But I see TestRMFailover reported failed in Jenkins for this patch?
I ran into this issue while adding a second test to TestRMFailover. The testing 
for this fix is the latest patch on YARN-1029, which has two tests, plus 
[~xgong]'s patch on YARN-1482, which adds another test. The TestRMFailover 
failure here is due to the NM timing out while connecting to the RM; YARN-1029 
bumps that timeout up to 20 seconds. I ran the test on YARN-1029 multiple 
times, and it continues to pass. That is not necessarily proof that it is not 
flaky, but at least it does not seem to be suffering from the race fixed here.

Do you think we should handle the race as well in YARN-1029 itself and resolve 
this as a part of that JIRA?

> Race between ServerRMProxy and ClientRMProxy setting RMProxy#INSTANCE
> -
>
> Key: YARN-1559
> URL: https://issues.apache.org/jira/browse/YARN-1559
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Attachments: YARN-1559-20140105.txt, yarn-1559-1.patch, 
> yarn-1559-2.patch, yarn-1559-3.patch
>
>
> RMProxy#INSTANCE is a non-final static field and both ServerRMProxy and 
> ClientRMProxy set it. This leads to races as witnessed on - YARN-1482.
> Sample trace:
> {noformat}
> java.lang.IllegalArgumentException: RM does not support this client protocol
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at 
> org.apache.hadoop.yarn.client.ClientRMProxy.checkAllowedProtocols(ClientRMProxy.java:119)
> at 
> org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.init(ConfiguredRMFailoverProxyProvider.java:58)
> at 
> org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:158)
> at 
> org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:88)
> at 
> org.apache.hadoop.yarn.server.api.ServerRMProxy.createRMProxy(ServerRMProxy.java:56)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (YARN-1521) Mark appropriate protocol methods with the idempotent annotation or AtMostOnce annotation

2014-01-06 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1521:


Summary: Mark appropriate protocol methods with the idempotent annotation 
or AtMostOnce annotation  (was: Mark appropriate protocol methods with the 
idempotent annotation)

> Mark appropriate protocol methods with the idempotent annotation or 
> AtMostOnce annotation
> -
>
> Key: YARN-1521
> URL: https://issues.apache.org/jira/browse/YARN-1521
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>
> After YARN-1028, we added automatic failover to RMProxy. This JIRA is to 
> identify whether we need to add the idempotent annotation and which methods 
> can be marked as idempotent.
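
As a hedged illustration of the kind of marking this JIRA is about (the 
protocol and its methods below are invented for the example, not the eventual 
patch): read-only calls that can safely be replayed against a newly active RM 
get {{@Idempotent}}, while mutating calls that must not be silently re-executed 
get {{@AtMostOnce}}.

{code}
import org.apache.hadoop.io.retry.AtMostOnce;
import org.apache.hadoop.io.retry.Idempotent;

// Hypothetical protocol, for illustration only.
public interface ExampleHAProtocol {

  // Pure read: re-running it after a failover returns the same answer,
  // so the retry/failover machinery may replay it freely.
  @Idempotent
  String getQueueInfo(String queueName);

  // Mutating call whose duplicate execution would be harmful:
  // mark it at-most-once so it is not blindly replayed.
  @AtMostOnce
  void renameQueue(String oldName, String newName);
}
{code}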



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1559) Race between ServerRMProxy and ClientRMProxy setting RMProxy#INSTANCE

2014-01-06 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863163#comment-13863163
 ] 

Bikas Saha commented on YARN-1559:
--

I am guessing stability of the flaky test is the testing for this patch. But I 
see TestRMFailover reported failed in Jenkins for this patch?

> Race between ServerRMProxy and ClientRMProxy setting RMProxy#INSTANCE
> -
>
> Key: YARN-1559
> URL: https://issues.apache.org/jira/browse/YARN-1559
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Attachments: YARN-1559-20140105.txt, yarn-1559-1.patch, 
> yarn-1559-2.patch, yarn-1559-3.patch
>
>
> RMProxy#INSTANCE is a non-final static field and both ServerRMProxy and 
> ClientRMProxy set it. This leads to races as witnessed on - YARN-1482.
> Sample trace:
> {noformat}
> java.lang.IllegalArgumentException: RM does not support this client protocol
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at 
> org.apache.hadoop.yarn.client.ClientRMProxy.checkAllowedProtocols(ClientRMProxy.java:119)
> at 
> org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.init(ConfiguredRMFailoverProxyProvider.java:58)
> at 
> org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:158)
> at 
> org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:88)
> at 
> org.apache.hadoop.yarn.server.api.ServerRMProxy.createRMProxy(ServerRMProxy.java:56)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1559) Race between ServerRMProxy and ClientRMProxy setting RMProxy#INSTANCE

2014-01-06 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863156#comment-13863156
 ] 

Karthik Kambatla commented on YARN-1559:


FailoverUptoMaximumTimePolicy is not used anymore. As discussed in YARN-1028 
(specifically, http://s.apache.org/cNT), we updated the retry policies used 
with HA to tune the number of failover attempts instead of the basePolicy. 
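
A rough sketch of that approach, built on the stock failover-aware retry 
policy; the fallback policy and the attempt count below are placeholders, not 
the values RMProxy actually uses.

{code}
import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;

public class FailoverPolicySketch {
  public static RetryPolicy haRetryPolicy() {
    // Placeholder values for illustration: cap the number of failover
    // attempts instead of failing over until a wall-clock deadline.
    int maxFailoverAttempts = 15;
    RetryPolicy basePolicy = RetryPolicies.TRY_ONCE_THEN_FAIL;
    return RetryPolicies.failoverOnNetworkException(basePolicy, maxFailoverAttempts);
  }
}
{code}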

> Race between ServerRMProxy and ClientRMProxy setting RMProxy#INSTANCE
> -
>
> Key: YARN-1559
> URL: https://issues.apache.org/jira/browse/YARN-1559
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Attachments: YARN-1559-20140105.txt, yarn-1559-1.patch, 
> yarn-1559-2.patch, yarn-1559-3.patch
>
>
> RMProxy#INSTANCE is a non-final static field and both ServerRMProxy and 
> ClientRMProxy set it. This leads to races as witnessed on - YARN-1482.
> Sample trace:
> {noformat}
> java.lang.IllegalArgumentException: RM does not support this client protocol
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at 
> org.apache.hadoop.yarn.client.ClientRMProxy.checkAllowedProtocols(ClientRMProxy.java:119)
> at 
> org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.init(ConfiguredRMFailoverProxyProvider.java:58)
> at 
> org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:158)
> at 
> org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:88)
> at 
> org.apache.hadoop.yarn.server.api.ServerRMProxy.createRMProxy(ServerRMProxy.java:56)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1559) Race between ServerRMProxy and ClientRMProxy setting RMProxy#INSTANCE

2014-01-06 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863145#comment-13863145
 ] 

Bikas Saha commented on YARN-1559:
--


bq. In the latest patch, I don't see any duplication. Am I missing something?
Yeah, the latest patch does not have it. I was looking at the .2 patch from 
yesterday, which was the latest at that time.

bq. Not sure I understand what you are referring to.
{code}
-   * A RetryPolicy to allow failing over upto the specified maximum time.
-   */
-  private static class FailoverUptoMaximumTimePolicy implements RetryPolicy {
-private long maxTime;
-
-FailoverUptoMaximumTimePolicy(long maxTime) {
-  this.maxTime = maxTime;
-}
-
-@Override
-public RetryAction shouldRetry(Exception e, int retries, int failovers,
-boolean isIdempotentOrAtMostOnce) throws Exception {
-  return System.currentTimeMillis() < maxTime
-  ? RetryAction.FAILOVER_AND_RETRY
-  : RetryAction.FAIL;
-}
-  }
{code}

> Race between ServerRMProxy and ClientRMProxy setting RMProxy#INSTANCE
> -
>
> Key: YARN-1559
> URL: https://issues.apache.org/jira/browse/YARN-1559
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Attachments: YARN-1559-20140105.txt, yarn-1559-1.patch, 
> yarn-1559-2.patch, yarn-1559-3.patch
>
>
> RMProxy#INSTANCE is a non-final static field and both ServerRMProxy and 
> ClientRMProxy set it. This leads to races as witnessed on - YARN-1482.
> Sample trace:
> {noformat}
> java.lang.IllegalArgumentException: RM does not support this client protocol
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at 
> org.apache.hadoop.yarn.client.ClientRMProxy.checkAllowedProtocols(ClientRMProxy.java:119)
> at 
> org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.init(ConfiguredRMFailoverProxyProvider.java:58)
> at 
> org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:158)
> at 
> org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:88)
> at 
> org.apache.hadoop.yarn.server.api.ServerRMProxy.createRMProxy(ServerRMProxy.java:56)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (YARN-1565) Add a way for YARN clients to get critical YARN system properties from the RM

2014-01-06 Thread Steve Loughran (JIRA)
Steve Loughran created YARN-1565:


 Summary: Add a way for YARN clients to get critical YARN system 
properties from the RM
 Key: YARN-1565
 URL: https://issues.apache.org/jira/browse/YARN-1565
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.2.0
Reporter: Steve Loughran


If you are trying to build up an AM request, you need to know:
# the limits on memory, cores, etc. for the chosen queue
# the existing YARN classpath
# the path separator for the target platform (so your classpath comes out right)
# the cluster OS, in case you need some OS-specific changes

The classpath can be in yarn-site.xml, but a remote client may not have that 
file, and yarn-site.xml doesn't list queue resource limits, the cluster OS, or 
the path separator.

A way to query the RM for these values would make it easier for YARN clients to 
build up AM submissions with less guesswork and client-side config.
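
For context, this is roughly what a client can do today: read the classpath 
from a local copy of yarn-site.xml via the standard configuration keys. The 
sketch assumes the client has that file, which is exactly the assumption the 
proposal wants to drop; queue limits, cluster OS, and the path separator are 
not available this way at all.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ClasspathFromLocalConf {
  public static String[] yarnClasspath() {
    // Works only when yarn-site.xml is on the client's own classpath.
    Configuration conf = new YarnConfiguration();
    return conf.getStrings(
        YarnConfiguration.YARN_APPLICATION_CLASSPATH,
        YarnConfiguration.DEFAULT_YARN_APPLICATION_CLASSPATH);
  }
}
{code}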




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1564) add some basic workflow YARN services

2014-01-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862988#comment-13862988
 ] 

Steve Loughran commented on YARN-1564:
--

Services are all in 
[[https://github.com/hortonworks/hoya/tree/develop/hoya-core/src/main/java/org/apache/hadoop/hoya/yarn/service]]

* 
[[Parent|https://github.com/hortonworks/hoya/blob/develop/hoya-core/src/main/java/org/apache/hadoop/hoya/yarn/service/Parent.java]]:
 interface that makes {{addService}} public and adds an option to list 
services. This could be retrofitted to {{CompositeService}}
* 
[[CompoundService|https://github.com/hortonworks/hoya/blob/develop/hoya-core/src/main/java/org/apache/hadoop/hoya/yarn/service/CompoundService.java]]: 
subclass of {{CompositeService}} that finishes when all of its children have 
*successfully* completed, or when any of its children fail.
* 
[[SequenceService|https://github.com/hortonworks/hoya/blob/develop/hoya-core/src/main/java/org/apache/hadoop/hoya/yarn/service/SequenceService.java]]: 
subclass of {{CompositeService}} that executes its children in sequence, 
starting one when the previous one successfully finishes. Again, failures are 
propagated up immediately.
* 
[[EventNotifyingService|https://github.com/hortonworks/hoya/blob/develop/hoya-core/src/main/java/org/apache/hadoop/hoya/yarn/service/EventNotifyingService.java]]: 
service which triggers a callback to a supplied interface implementation when 
started, or after a specified delay from the start time.
* 
[[ForkedProcessService|https://github.com/hortonworks/hoya/blob/develop/hoya-core/src/main/java/org/apache/hadoop/hoya/yarn/service/ForkedProcessService.java]]: 
more complex; this forks a potentially long-lived application (via 
[[RunLongLivedApp|https://github.com/hortonworks/hoya/blob/develop/hoya-core/src/main/java/org/apache/hadoop/hoya/exec/RunLongLivedApp.java]]), 
completing the service when that process finishes.

The set allows you to build up sequences of operations, as well as actions to 
run in parallel, and to notify a parent service or other object when they 
complete or fail.

There are tests for the simple ones (sequence, compound, events) in Groovy; 
nothing yet for forked processes.
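
A hedged usage sketch of how these services might compose; the constructors and 
the callback type below are guesses based on the class names and descriptions 
above, not the actual Hoya signatures.

{code}
import org.apache.hadoop.conf.Configuration;

// The service classes used here are the ones linked above, from
// org.apache.hadoop.hoya.yarn.service; their constructor signatures are
// assumed for the sketch, not copied from the Hoya sources.
public class WorkflowSketch {
  public static void buildAndStart(Configuration conf, Runnable notifier) {
    SequenceService sequence = new SequenceService("setup-then-run");
    sequence.addService(new ForkedProcessService("launch-worker"));  // runs first
    sequence.addService(new EventNotifyingService(notifier, 0));     // fires once the fork completes

    CompoundService workflow = new CompoundService("am-workflow");
    workflow.addService(sequence);  // further children would run in parallel with this sequence

    workflow.init(conf);
    workflow.start();  // completes when all children succeed; fails fast if any child fails
  }
}
{code}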


> add some basic workflow YARN services
> -
>
> Key: YARN-1564
> URL: https://issues.apache.org/jira/browse/YARN-1564
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api
>Affects Versions: 2.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> I've been using some alternative composite services to help build workflows 
> of process execution in a YARN AM.
> They and their tests could be moved into YARN for use by others - this would 
> make it easier to build aggregate services in an AM.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1563) HADOOP_CONF_DIR don't support multiple directories

2014-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862982#comment-13862982
 ] 

Hadoop QA commented on YARN-1563:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12621593/YARN-1563.diff
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2803//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2803//console

This message is automatically generated.

> HADOOP_CONF_DIR don't support multiple directories
> --
>
> Key: YARN-1563
> URL: https://issues.apache.org/jira/browse/YARN-1563
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Liyin Liang
> Attachments: YARN-1563.diff
>
>
> If the environment variable $HADOOP_CONF_DIR is set to multiple 
> directories, like this:
> {code}
>  export HADOOP_CONF_DIR=/mypath/conf1:/mypath/conf2
> {code}
> then bin/yarn will fail as follows:
> {code}
> $ bin/yarn application
> No HADOOP_CONF_DIR set.
> Please specify it either in yarn-env.sh or in the environment.
> {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (YARN-1564) add some basic workflow YARN services

2014-01-06 Thread Steve Loughran (JIRA)
Steve Loughran created YARN-1564:


 Summary: add some basic workflow YARN services
 Key: YARN-1564
 URL: https://issues.apache.org/jira/browse/YARN-1564
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: api
Affects Versions: 2.2.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor


I've been using some alternative composite services to help build workflows of 
process execution in a YARN AM.

They and their tests could be moved into YARN for use by others - this would 
make it easier to build aggregate services in an AM.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (YARN-1563) HADOOP_CONF_DIR don't support multiple directories

2014-01-06 Thread Liyin Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liyin Liang reassigned YARN-1563:
-

Assignee: Liyin Liang

> HADOOP_CONF_DIR don't support multiple directories
> --
>
> Key: YARN-1563
> URL: https://issues.apache.org/jira/browse/YARN-1563
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Liyin Liang
>Assignee: Liyin Liang
> Attachments: YARN-1563.diff
>
>
> If the environment variable $HADOOP_CONF_DIR is set to multiple 
> directories, like this:
> {code}
>  export HADOOP_CONF_DIR=/mypath/conf1:/mypath/conf2
> {code}
> then bin/yarn will fail as follows:
> {code}
> $ bin/yarn application
> No HADOOP_CONF_DIR set.
> Please specify it either in yarn-env.sh or in the environment.
> {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (YARN-1563) HADOOP_CONF_DIR don't support multiple directories

2014-01-06 Thread Liyin Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liyin Liang updated YARN-1563:
--

Assignee: (was: Liyin Liang)

> HADOOP_CONF_DIR don't support multiple directories
> --
>
> Key: YARN-1563
> URL: https://issues.apache.org/jira/browse/YARN-1563
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Liyin Liang
> Attachments: YARN-1563.diff
>
>
> If the environment variable $HADOOP_CONF_DIR is set to multiple 
> directories, like this:
> {code}
>  export HADOOP_CONF_DIR=/mypath/conf1:/mypath/conf2
> {code}
> then bin/yarn will fail as follows:
> {code}
> $ bin/yarn application
> No HADOOP_CONF_DIR set.
> Please specify it either in yarn-env.sh or in the environment.
> {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (YARN-1563) HADOOP_CONF_DIR don't support multiple directories

2014-01-06 Thread Liyin Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liyin Liang updated YARN-1563:
--

Attachment: YARN-1563.diff

Attaching a patch to fix this issue.

> HADOOP_CONF_DIR don't support multiple directories
> --
>
> Key: YARN-1563
> URL: https://issues.apache.org/jira/browse/YARN-1563
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Liyin Liang
> Attachments: YARN-1563.diff
>
>
> If the environment variable $HADOOP_CONF_DIR is set to multiple 
> directories, like this:
> {code}
>  export HADOOP_CONF_DIR=/mypath/conf1:/mypath/conf2
> {code}
> then bin/yarn will fail as follows:
> {code}
> $ bin/yarn application
> No HADOOP_CONF_DIR set.
> Please specify it either in yarn-env.sh or in the environment.
> {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (YARN-1563) HADOOP_CONF_DIR don't support multiple directories

2014-01-06 Thread Liyin Liang (JIRA)
Liyin Liang created YARN-1563:
-

 Summary: HADOOP_CONF_DIR don't support multiple directories
 Key: YARN-1563
 URL: https://issues.apache.org/jira/browse/YARN-1563
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Liyin Liang


If the environment variable $HADOOP_CONF_DIR is set to multiple 
directories, like this:
{code}
 export HADOOP_CONF_DIR=/mypath/conf1:/mypath/conf2
{code}
then bin/yarn will fail as follows:
{code}
$ bin/yarn application
No HADOOP_CONF_DIR set.
Please specify it either in yarn-env.sh or in the environment.
{code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)