[jira] [Commented] (YARN-1358) TestYarnCLI fails on Windows due to line endings

2013-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810123#comment-13810123
 ] 

Hudson commented on YARN-1358:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #379 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/379/])
YARN-1358. TestYarnCLI fails on Windows due to line endings. Contributed by 
Chuan Liu. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1537305)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java


 TestYarnCLI fails on Windows due to line endings
 

 Key: YARN-1358
 URL: https://issues.apache.org/jira/browse/YARN-1358
 Project: Hadoop YARN
  Issue Type: Test
  Components: client
Affects Versions: 3.0.0, 2.2.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.2.1

 Attachments: YARN-1358.2.patch, YARN-1358.patch


 The unit test fails on Windows because incorrect line endings are used when 
 comparing the expected output against the command line output. Error messages 
 are as follows.
 {noformat}
 junit.framework.ComparisonFailure: expected:<...argument for options[]
 usage: application
 ...> but was:<...argument for options[
 ]
 usage: application
 ...>
   at junit.framework.Assert.assertEquals(Assert.java:85)
   at junit.framework.Assert.assertEquals(Assert.java:91)
   at 
 org.apache.hadoop.yarn.client.cli.TestYarnCLI.testMissingArguments(TestYarnCLI.java:878)
 {noformat}
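
 Given the line-ending mismatch shown above, one way to make such an assertion platform-independent is to normalize both sides before comparing. The sketch below is illustrative only (JUnit 4, made-up strings rather than the exact CLI output), not the committed patch:
 {code}
 import org.junit.Assert;
 import org.junit.Test;

 public class LineEndingExampleTest {

   // Hypothetical helper: collapse CRLF/CR to LF so the comparison passes
   // on both Windows and Unix.
   private static String normalize(String s) {
     return s.replaceAll("\r\n?", "\n");
   }

   @Test
   public void testUsageMessage() {
     String expected = "Missing argument for options\n" + "usage: application\n";
     // e.g. output captured from a CLI run on Windows
     String actual = "Missing argument for options\r\n" + "usage: application\r\n";
     Assert.assertEquals(normalize(expected), normalize(actual));
   }
 }
 {code}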



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1357) TestContainerLaunch.testContainerEnvVariables fails on Windows

2013-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810122#comment-13810122
 ] 

Hudson commented on YARN-1357:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #379 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/379/])
YARN-1357. TestContainerLaunch.testContainerEnvVariables fails on Windows. 
Contributed by Chuan Liu. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1537293)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java


 TestContainerLaunch.testContainerEnvVariables fails on Windows
 --

 Key: YARN-1357
 URL: https://issues.apache.org/jira/browse/YARN-1357
 Project: Hadoop YARN
  Issue Type: Test
  Components: nodemanager
Affects Versions: 3.0.0, 2.2.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.2.1

 Attachments: YARN-1357.patch


 This test fails on Windows due to incorrect use of a batch script command. 
 Error messages are as follows.
 {noformat}
 junit.framework.AssertionFailedError: expected:<java.nio.HeapByteBuffer[pos=0 
 lim=19 cap=19]> but was:<java.nio.HeapByteBuffer[pos=0 lim=19 cap=19]>
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.failNotEquals(Assert.java:287)
   at junit.framework.Assert.assertEquals(Assert.java:67)
   at junit.framework.Assert.assertEquals(Assert.java:74)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerEnvVariables(TestContainerLaunch.java:508)
 {noformat}
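
 For context, the kind of platform difference involved: the Windows and Unix shells expand environment variables with different syntax, and Windows echo is sensitive to spacing, which can change the bytes a test later compares. The sketch below only illustrates building a platform-appropriate command; the helper name and command shape are made up, not the committed fix:
 {code}
 import java.io.File;
 import java.util.Arrays;
 import java.util.List;

 public class EnvDumpCommand {

   // Illustrative: write the value of an environment variable to a file using
   // the native shell. On Windows, omitting the space before ">" keeps a
   // trailing space out of the file.
   public static List<String> dumpVarCommand(String var, File out) {
     boolean windows = System.getProperty("os.name").startsWith("Windows");
     if (windows) {
       return Arrays.asList("cmd", "/c", "echo %" + var + "%>" + out.getAbsolutePath());
     } else {
       return Arrays.asList("bash", "-c", "echo $" + var + " > " + out.getAbsolutePath());
     }
   }
 }
 {code}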



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1343) NodeManagers additions/restarts are not reported as node updates in AllocateResponse responses to AMs

2013-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810120#comment-13810120
 ] 

Hudson commented on YARN-1343:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #379 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/379/])
YARN-1343. NodeManagers additions/restarts are not reported as node updates in 
AllocateResponse responses to AMs. (tucu) (tucu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1537368)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/NodesListManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMNodeTransitions.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/resourcetracker/TestNMReconnect.java


 NodeManagers additions/restarts are not reported as node updates in 
 AllocateResponse responses to AMs
 -

 Key: YARN-1343
 URL: https://issues.apache.org/jira/browse/YARN-1343
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.2.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.2.1

 Attachments: YARN-1343.patch, YARN-1343.patch, YARN-1343.patch, 
 YARN-1343.patch


 If a NodeManager joins the cluster or gets restarted, running AMs never 
 receive the node update indicating the Node is running.
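
 For reference, node updates are delivered to AMs through the AllocateResponse returned by the allocate() heartbeat. A minimal sketch of the consuming side (assuming the public AllocateResponse#getUpdatedNodes() API); with this bug, a NodeManager that joins or restarts never shows up in this list:
 {code}
 import java.util.List;

 import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
 import org.apache.hadoop.yarn.api.records.NodeReport;

 public class NodeUpdateLogger {

   // Called by an AM after each allocate() heartbeat to inspect node updates.
   static void logNodeUpdates(AllocateResponse response) {
     List<NodeReport> updated = response.getUpdatedNodes();
     for (NodeReport node : updated) {
       System.out.println("Node " + node.getNodeId() + " is now " + node.getNodeState());
     }
   }
 }
 {code}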



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1321) NMTokenCache is a singleton, prevents multiple AMs running in a single JVM to work correctly

2013-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810127#comment-13810127
 ] 

Hudson commented on YARN-1321:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #379 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/379/])
YARN-1321. Changed NMTokenCache to support both singleton and an instance 
usage. Contributed by Alejandro Abdelnur. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1537334)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMTokenCache.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/AMRMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/ContainerManagementProtocolProxy.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/NMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java


 NMTokenCache is a singleton, prevents multiple AMs running in a single JVM to 
 work correctly
 

 Key: YARN-1321
 URL: https://issues.apache.org/jira/browse/YARN-1321
 Project: Hadoop YARN
  Issue Type: Bug
  Components: client
Affects Versions: 2.2.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Blocker
 Fix For: 2.2.1

 Attachments: YARN-1321-20131029.txt, YARN-1321.patch, 
 YARN-1321.patch, YARN-1321.patch, YARN-1321.patch, YARN-1321.patch, 
 YARN-1321.patch


 NMTokenCache is a singleton. Because of this, when multiple AMs run in a 
 single JVM, NMTokens for the same node from different AMs step on each other, 
 and starting containers fails due to mismatched tokens.
 The error observed on the client side is something like:
 {code}
 ERROR org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException as:llama (auth:PROXY) via llama (auth:SIMPLE) 
 cause:org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request 
 to start container. 
 NMToken for application attempt : appattempt_1382038445650_0002_01 was 
 used for starting container with container token issued for application 
 attempt : appattempt_1382038445650_0001_01
 {code}
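
 Based on the commit summary above ("support both singleton and an instance usage") and the files touched, the intended usage is presumably along these lines: give each AM its own NMTokenCache and share it between that AM's AMRMClient and NMClient. A minimal sketch, assuming per-client setNMTokenCache(...) setters are the new instance API:
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.yarn.client.api.AMRMClient;
 import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
 import org.apache.hadoop.yarn.client.api.NMClient;
 import org.apache.hadoop.yarn.client.api.NMTokenCache;

 public class PerAmClients {

   // One NMTokenCache per AM, shared by that AM's two clients, instead of
   // the JVM-wide singleton. (setNMTokenCache is assumed from this change.)
   public static void initClients(Configuration conf) {
     NMTokenCache tokenCache = new NMTokenCache();

     AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
     rmClient.setNMTokenCache(tokenCache);
     rmClient.init(conf);
     rmClient.start();

     NMClient nmClient = NMClient.createNMClient();
     nmClient.setNMTokenCache(tokenCache);
     nmClient.init(conf);
     nmClient.start();
   }
 }
 {code}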



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1031) JQuery UI components reference external css in branch-23

2013-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810156#comment-13810156
 ] 

Hudson commented on YARN-1031:
--

FAILURE: Integrated in Hadoop-Hdfs-0.23-Build #777 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/777/])
YARN-1031. JQuery UI components reference external css in branch-23. 
Contributed by Jonathan Eagles and Jason Lowe (jlowe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1537207)
* /hadoop/common/branches/branch-0.23/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/JQueryUI.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/themes-1.8.16
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/themes-1.8.16/base
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/themes-1.8.16/base/images
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/themes-1.8.16/base/images/ui-bg_flat_0_aa_40x100.png
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/themes-1.8.16/base/images/ui-bg_flat_75_ff_40x100.png
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/themes-1.8.16/base/images/ui-bg_glass_55_fbf9ee_1x400.png
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/themes-1.8.16/base/images/ui-bg_glass_65_ff_1x400.png
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/themes-1.8.16/base/images/ui-bg_glass_75_dadada_1x400.png
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/themes-1.8.16/base/images/ui-bg_glass_75_e6e6e6_1x400.png
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/themes-1.8.16/base/images/ui-bg_glass_95_fef1ec_1x400.png
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/themes-1.8.16/base/images/ui-bg_highlight-soft_75_cc_1x100.png
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/themes-1.8.16/base/images/ui-icons_22_256x240.png
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/themes-1.8.16/base/images/ui-icons_2e83ff_256x240.png
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/themes-1.8.16/base/images/ui-icons_454545_256x240.png
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/themes-1.8.16/base/images/ui-icons_88_256x240.png
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/themes-1.8.16/base/images/ui-icons_cd0a0a_256x240.png
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/themes-1.8.16/base/jquery-ui.css


 JQuery UI components reference external css in branch-23
 

 Key: YARN-1031
 URL: https://issues.apache.org/jira/browse/YARN-1031
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 0.23.9
Reporter: Jonathan Eagles
Assignee: Jonathan Eagles
 Fix For: 0.23.10

 Attachments: YARN-1031-2-branch-0.23.patch, 
 YARN-1031-3-branch-0.23.patch, YARN-1031-branch-0.23.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1321) NMTokenCache is a singleton, prevents multiple AMs running in a single JVM to work correctly

2013-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810164#comment-13810164
 ] 

Hudson commented on YARN-1321:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1569 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1569/])
YARN-1321. Changed NMTokenCache to support both singleton and an instance 
usage. Contributed by Alejandro Abdelnur. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1537334)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMTokenCache.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/AMRMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/ContainerManagementProtocolProxy.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/NMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java


 NMTokenCache is a singleton, prevents multiple AMs running in a single JVM to 
 work correctly
 

 Key: YARN-1321
 URL: https://issues.apache.org/jira/browse/YARN-1321
 Project: Hadoop YARN
  Issue Type: Bug
  Components: client
Affects Versions: 2.2.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Blocker
 Fix For: 2.2.1

 Attachments: YARN-1321-20131029.txt, YARN-1321.patch, 
 YARN-1321.patch, YARN-1321.patch, YARN-1321.patch, YARN-1321.patch, 
 YARN-1321.patch


 NMTokenCache is a singleton. Because of this, when multiple AMs run in a 
 single JVM, NMTokens for the same node from different AMs step on each other, 
 and starting containers fails due to mismatched tokens.
 The error observed on the client side is something like:
 {code}
 ERROR org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException as:llama (auth:PROXY) via llama (auth:SIMPLE) 
 cause:org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request 
 to start container. 
 NMToken for application attempt : appattempt_1382038445650_0002_01 was 
 used for starting container with container token issued for application 
 attempt : appattempt_1382038445650_0001_01
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1357) TestContainerLaunch.testContainerEnvVariables fails on Windows

2013-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810159#comment-13810159
 ] 

Hudson commented on YARN-1357:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1569 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1569/])
YARN-1357. TestContainerLaunch.testContainerEnvVariables fails on Windows. 
Contributed by Chuan Liu. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1537293)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java


 TestContainerLaunch.testContainerEnvVariables fails on Windows
 --

 Key: YARN-1357
 URL: https://issues.apache.org/jira/browse/YARN-1357
 Project: Hadoop YARN
  Issue Type: Test
  Components: nodemanager
Affects Versions: 3.0.0, 2.2.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.2.1

 Attachments: YARN-1357.patch


 This test fails on Windows due to incorrect use of a batch script command. 
 Error messages are as follows.
 {noformat}
 junit.framework.AssertionFailedError: expected:<java.nio.HeapByteBuffer[pos=0 
 lim=19 cap=19]> but was:<java.nio.HeapByteBuffer[pos=0 lim=19 cap=19]>
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.failNotEquals(Assert.java:287)
   at junit.framework.Assert.assertEquals(Assert.java:67)
   at junit.framework.Assert.assertEquals(Assert.java:74)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerEnvVariables(TestContainerLaunch.java:508)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1358) TestYarnCLI fails on Windows due to line endings

2013-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810160#comment-13810160
 ] 

Hudson commented on YARN-1358:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1569 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1569/])
YARN-1358. TestYarnCLI fails on Windows due to line endings. Contributed by 
Chuan Liu. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1537305)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java


 TestYarnCLI fails on Windows due to line endings
 

 Key: YARN-1358
 URL: https://issues.apache.org/jira/browse/YARN-1358
 Project: Hadoop YARN
  Issue Type: Test
  Components: client
Affects Versions: 3.0.0, 2.2.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.2.1

 Attachments: YARN-1358.2.patch, YARN-1358.patch


 The unit test fails on Windows because incorrect line endings are used when 
 comparing the expected output against the command line output. Error messages 
 are as follows.
 {noformat}
 junit.framework.ComparisonFailure: expected:<...argument for options[]
 usage: application
 ...> but was:<...argument for options[
 ]
 usage: application
 ...>
   at junit.framework.Assert.assertEquals(Assert.java:85)
   at junit.framework.Assert.assertEquals(Assert.java:91)
   at 
 org.apache.hadoop.yarn.client.cli.TestYarnCLI.testMissingArguments(TestYarnCLI.java:878)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1343) NodeManagers additions/restarts are not reported as node updates in AllocateResponse responses to AMs

2013-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810157#comment-13810157
 ] 

Hudson commented on YARN-1343:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1569 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1569/])
YARN-1343. NodeManagers additions/restarts are not reported as node updates in 
AllocateResponse responses to AMs. (tucu) (tucu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1537368)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/NodesListManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMNodeTransitions.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/resourcetracker/TestNMReconnect.java


 NodeManagers additions/restarts are not reported as node updates in 
 AllocateResponse responses to AMs
 -

 Key: YARN-1343
 URL: https://issues.apache.org/jira/browse/YARN-1343
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.2.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.2.1

 Attachments: YARN-1343.patch, YARN-1343.patch, YARN-1343.patch, 
 YARN-1343.patch


 If a NodeManager joins the cluster or gets restarted, running AMs never 
 receive the node update indicating the Node is running.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1321) NMTokenCache is a singleton, prevents multiple AMs running in a single JVM to work correctly

2013-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810223#comment-13810223
 ] 

Hudson commented on YARN-1321:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1595 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1595/])
YARN-1321. Changed NMTokenCache to support both singleton and an instance 
usage. Contributed by Alejandro Abdelnur. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1537334)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMTokenCache.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/AMRMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/ContainerManagementProtocolProxy.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/NMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java


 NMTokenCache is a singleton, prevents multiple AMs running in a single JVM to 
 work correctly
 

 Key: YARN-1321
 URL: https://issues.apache.org/jira/browse/YARN-1321
 Project: Hadoop YARN
  Issue Type: Bug
  Components: client
Affects Versions: 2.2.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Blocker
 Fix For: 2.2.1

 Attachments: YARN-1321-20131029.txt, YARN-1321.patch, 
 YARN-1321.patch, YARN-1321.patch, YARN-1321.patch, YARN-1321.patch, 
 YARN-1321.patch


 NMTokenCache is a singleton. Because of this, when multiple AMs run in a 
 single JVM, NMTokens for the same node from different AMs step on each other, 
 and starting containers fails due to mismatched tokens.
 The error observed on the client side is something like:
 {code}
 ERROR org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException as:llama (auth:PROXY) via llama (auth:SIMPLE) 
 cause:org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request 
 to start container. 
 NMToken for application attempt : appattempt_1382038445650_0002_01 was 
 used for starting container with container token issued for application 
 attempt : appattempt_1382038445650_0001_01
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1357) TestContainerLaunch.testContainerEnvVariables fails on Windows

2013-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810218#comment-13810218
 ] 

Hudson commented on YARN-1357:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1595 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1595/])
YARN-1357. TestContainerLaunch.testContainerEnvVariables fails on Windows. 
Contributed by Chuan Liu. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1537293)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java


 TestContainerLaunch.testContainerEnvVariables fails on Windows
 --

 Key: YARN-1357
 URL: https://issues.apache.org/jira/browse/YARN-1357
 Project: Hadoop YARN
  Issue Type: Test
  Components: nodemanager
Affects Versions: 3.0.0, 2.2.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.2.1

 Attachments: YARN-1357.patch


 This test fails on Windows due to incorrect use of a batch script command. 
 Error messages are as follows.
 {noformat}
 junit.framework.AssertionFailedError: expected:<java.nio.HeapByteBuffer[pos=0 
 lim=19 cap=19]> but was:<java.nio.HeapByteBuffer[pos=0 lim=19 cap=19]>
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.failNotEquals(Assert.java:287)
   at junit.framework.Assert.assertEquals(Assert.java:67)
   at junit.framework.Assert.assertEquals(Assert.java:74)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerEnvVariables(TestContainerLaunch.java:508)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1343) NodeManagers additions/restarts are not reported as node updates in AllocateResponse responses to AMs

2013-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810216#comment-13810216
 ] 

Hudson commented on YARN-1343:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1595 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1595/])
YARN-1343. NodeManagers additions/restarts are not reported as node updates in 
AllocateResponse responses to AMs. (tucu) (tucu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1537368)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/NodesListManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMNodeTransitions.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/resourcetracker/TestNMReconnect.java


 NodeManagers additions/restarts are not reported as node updates in 
 AllocateResponse responses to AMs
 -

 Key: YARN-1343
 URL: https://issues.apache.org/jira/browse/YARN-1343
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.2.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.2.1

 Attachments: YARN-1343.patch, YARN-1343.patch, YARN-1343.patch, 
 YARN-1343.patch


 If a NodeManager joins the cluster or gets restarted, running AMs never 
 receive the node update indicating the Node is running.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1358) TestYarnCLI fails on Windows due to line endings

2013-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810219#comment-13810219
 ] 

Hudson commented on YARN-1358:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1595 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1595/])
YARN-1358. TestYarnCLI fails on Windows due to line endings. Contributed by 
Chuan Liu. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1537305)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java


 TestYarnCLI fails on Windows due to line endings
 

 Key: YARN-1358
 URL: https://issues.apache.org/jira/browse/YARN-1358
 Project: Hadoop YARN
  Issue Type: Test
  Components: client
Affects Versions: 3.0.0, 2.2.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.2.1

 Attachments: YARN-1358.2.patch, YARN-1358.patch


 The unit test fails on Windows because incorrect line endings are used when 
 comparing the expected output against the command line output. Error messages 
 are as follows.
 {noformat}
 junit.framework.ComparisonFailure: expected:<...argument for options[]
 usage: application
 ...> but was:<...argument for options[
 ]
 usage: application
 ...>
   at junit.framework.Assert.assertEquals(Assert.java:85)
   at junit.framework.Assert.assertEquals(Assert.java:91)
   at 
 org.apache.hadoop.yarn.client.cli.TestYarnCLI.testMissingArguments(TestYarnCLI.java:878)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1324) NodeManager potentially causes unnecessary operations on all its disks

2013-10-31 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810228#comment-13810228
 ] 

Jason Lowe commented on YARN-1324:
--

+1 for randomizing the list of directories.  Even if we allow an app to ask for 
multiple paths and they do so, it's more likely to spread the load around for 
apps with unsophisticated load balancing algorithms.

 NodeManager potentially causes unnecessary operations on all its disks
 --

 Key: YARN-1324
 URL: https://issues.apache.org/jira/browse/YARN-1324
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Bikas Saha

 Currently, for every container, the NM creates a directory on every disk and 
 expects the container-task to choose 1 of them and load balance the use of 
 the disks across all containers. 
 1) This may have worked fine in the MR world where MR tasks would randomly 
 choose dirs but in general we cannot expect every app/task writer to 
 understand these nuances and randomly pick disks. So we could end up 
 overloading the first disk if most people decide to use the first disk.
 2) This forces a number of NM operations to scan every disk (thus adding 
 random I/O on that disk) to locate the dir which the task has actually chosen 
 to use for its files. This makes all these operations expensive for the NM as 
 well as disruptive for users of disks that do not hold the real task working dirs.
 I propose that NM should up-front decide the disk it is assigning to tasks. 
 It could choose to do so randomly or weighted-randomly by looking at space 
 and load on each disk. So it could do a better job of load balancing. Then, 
 it would associate the chosen working directory with the container context so 
 that subsequent operations on the NM can directly seek to the correct 
 location instead of having to seek on every disk.
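
 An illustrative sketch of the weighted-random selection proposed above (not an actual NM patch; the class and method names are made up), picking one local dir per container with probability proportional to free space:
 {code}
 import java.io.File;
 import java.util.List;
 import java.util.Random;

 public class DiskChooser {

   private final Random random = new Random();

   // Pick one local dir, weighted by usable space, so containers spread
   // across disks without every app having to implement its own balancing.
   public File chooseDir(List<File> localDirs) {
     long totalFree = 0;
     for (File dir : localDirs) {
       totalFree += dir.getUsableSpace();
     }
     long pick = (long) (random.nextDouble() * totalFree);
     for (File dir : localDirs) {
       pick -= dir.getUsableSpace();
       if (pick < 0) {
         return dir;
       }
     }
     // Fallback for rounding or all-zero free space.
     return localDirs.get(localDirs.size() - 1);
   }
 }
 {code}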



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1374) Resource Manager fails to start due to ConcurrentModificationException

2013-10-31 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810364#comment-13810364
 ] 

Steve Loughran commented on YARN-1374:
--

While this patch fixes a symptom, it doesn't fix the underlying problem: you 
get an exception if you add a new service to a composite while it is being 
inited.

We can fix that by making sure the service list is cloned before iterating over 
it in the init/start/stop methods. That way, code can still add a service; it 
just doesn't get processed in the current loop.
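
The clone-before-iterate pattern in a generic form (a stand-in sketch, not the actual CompositeService code):
{code}
import java.util.ArrayList;
import java.util.List;

public class CompositeInitSketch {

  private final List<Runnable> services = new ArrayList<Runnable>();

  public synchronized void addService(Runnable s) {
    services.add(s);
  }

  // Iterate over a snapshot so a service that registers another service
  // during init does not cause a ConcurrentModificationException; the newly
  // added service is simply not part of the current loop.
  public void initAll() {
    List<Runnable> snapshot;
    synchronized (this) {
      snapshot = new ArrayList<Runnable>(services);
    }
    for (Runnable s : snapshot) {
      s.run();
    }
  }
}
{code}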

 Resource Manager fails to start due to ConcurrentModificationException
 --

 Key: YARN-1374
 URL: https://issues.apache.org/jira/browse/YARN-1374
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.3.0
Reporter: Devaraj K
Assignee: Karthik Kambatla
Priority: Blocker
 Attachments: yarn-1374-1.patch, yarn-1374-1.patch


 Resource Manager is failing to start with the below 
 ConcurrentModificationException.
 {code:xml}
 2013-10-30 20:22:42,371 INFO org.apache.hadoop.util.HostsFileReader: 
 Refreshing hosts (include/exclude) list
 2013-10-30 20:22:42,376 INFO org.apache.hadoop.service.AbstractService: 
 Service ResourceManager failed in state INITED; cause: 
 java.util.ConcurrentModificationException
 java.util.ConcurrentModificationException
   at 
 java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372)
   at java.util.AbstractList$Itr.next(AbstractList.java:343)
   at 
 java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1010)
   at 
 org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:187)
   at 
 org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:944)
 2013-10-30 20:22:42,378 INFO 
 org.apache.hadoop.yarn.server.resourcemanager.RMHAProtocolService: 
 Transitioning to standby
 2013-10-30 20:22:42,378 INFO 
 org.apache.hadoop.yarn.server.resourcemanager.RMHAProtocolService: 
 Transitioned to standby
 2013-10-30 20:22:42,378 FATAL 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
 ResourceManager
 java.util.ConcurrentModificationException
   at 
 java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372)
   at java.util.AbstractList$Itr.next(AbstractList.java:343)
   at 
 java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1010)
   at 
 org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:187)
   at 
 org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:944)
 2013-10-30 20:22:42,379 INFO 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: SHUTDOWN_MSG: 
 /
 SHUTDOWN_MSG: Shutting down ResourceManager at HOST-10-18-40-24/10.18.40.24
 /
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-1381) Same relaxLocality appears twice in exception message of AMRMClientImpl#checkLocalityRelaxationConflict()

2013-10-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated YARN-1381:
-

Attachment: yarn-1381.patch

 Same relaxLocality appears twice in exception message of 
 AMRMClientImpl#checkLocalityRelaxationConflict() 
 --

 Key: YARN-1381
 URL: https://issues.apache.org/jira/browse/YARN-1381
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Ted Yu
 Attachments: yarn-1381.patch


 Here is related code:
 {code}
 throw new InvalidContainerRequestException("Cannot submit a "
     + "ContainerRequest asking for location " + location
     + " with locality relaxation " + relaxLocality + " when it has "
     + "already been requested with locality relaxation " + relaxLocality);
 {code}
 The last relaxLocality should be  
 reqs.values().iterator().next().remoteRequest.getRelaxLocality() 
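
 In other words, the corrected message would presumably read along these lines (a sketch of the suggested fix; reqs and remoteRequest come from the surrounding AMRMClientImpl code):
 {code}
 throw new InvalidContainerRequestException("Cannot submit a "
     + "ContainerRequest asking for location " + location
     + " with locality relaxation " + relaxLocality + " when it has "
     + "already been requested with locality relaxation "
     + reqs.values().iterator().next().remoteRequest.getRelaxLocality());
 {code}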



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1374) Resource Manager fails to start due to ConcurrentModificationException

2013-10-31 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810448#comment-13810448
 ] 

Bikas Saha commented on YARN-1374:
--

Wouldn't it be incorrect to add a new service to an initing composite service? 
Depending on the race, the service would or would not work. So IMO we should see 
an error/exception, though a better diagnostic message may help.

 Resource Manager fails to start due to ConcurrentModificationException
 --

 Key: YARN-1374
 URL: https://issues.apache.org/jira/browse/YARN-1374
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.3.0
Reporter: Devaraj K
Assignee: Karthik Kambatla
Priority: Blocker
 Attachments: yarn-1374-1.patch, yarn-1374-1.patch


 Resource Manager is failing to start with the below 
 ConcurrentModificationException.
 {code:xml}
 2013-10-30 20:22:42,371 INFO org.apache.hadoop.util.HostsFileReader: 
 Refreshing hosts (include/exclude) list
 2013-10-30 20:22:42,376 INFO org.apache.hadoop.service.AbstractService: 
 Service ResourceManager failed in state INITED; cause: 
 java.util.ConcurrentModificationException
 java.util.ConcurrentModificationException
   at 
 java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372)
   at java.util.AbstractList$Itr.next(AbstractList.java:343)
   at 
 java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1010)
   at 
 org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:187)
   at 
 org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:944)
 2013-10-30 20:22:42,378 INFO 
 org.apache.hadoop.yarn.server.resourcemanager.RMHAProtocolService: 
 Transitioning to standby
 2013-10-30 20:22:42,378 INFO 
 org.apache.hadoop.yarn.server.resourcemanager.RMHAProtocolService: 
 Transitioned to standby
 2013-10-30 20:22:42,378 FATAL 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
 ResourceManager
 java.util.ConcurrentModificationException
   at 
 java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372)
   at java.util.AbstractList$Itr.next(AbstractList.java:343)
   at 
 java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1010)
   at 
 org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:187)
   at 
 org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:944)
 2013-10-30 20:22:42,379 INFO 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: SHUTDOWN_MSG: 
 /
 SHUTDOWN_MSG: Shutting down ResourceManager at HOST-10-18-40-24/10.18.40.24
 /
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1381) Same relaxLocality appears twice in exception message of AMRMClientImpl#checkLocalityRelaxationConflict()

2013-10-31 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810457#comment-13810457
 ] 

Ted Yu commented on YARN-1381:
--

I use the same Eclipse formatter as for HBase, where lines can be 100 chars long :-) 

 Same relaxLocality appears twice in exception message of 
 AMRMClientImpl#checkLocalityRelaxationConflict() 
 --

 Key: YARN-1381
 URL: https://issues.apache.org/jira/browse/YARN-1381
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Attachments: yarn-1381.patch


 Here is related code:
 {code}
 throw new InvalidContainerRequestException("Cannot submit a "
     + "ContainerRequest asking for location " + location
     + " with locality relaxation " + relaxLocality + " when it has "
     + "already been requested with locality relaxation " + relaxLocality);
 {code}
 The last relaxLocality should be  
 reqs.values().iterator().next().remoteRequest.getRelaxLocality() 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-891) Store completed application information in RM state store

2013-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810469#comment-13810469
 ] 

Hudson commented on YARN-891:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #4681 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4681/])
YARN-891. Modified ResourceManager state-store to remember completed 
applications so that clients can get information about them post RM-restart. 
Contributed by Jian He. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1537560)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/server/yarn_server_resourcemanager_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/MemoryRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/NullRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStoreEventType.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateUpdateAppAttemptEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateUpdateAppEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/ApplicationAttemptStateData.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/ApplicationStateData.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/impl/pb/ApplicationAttemptStateDataPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/impl/pb/ApplicationStateDataPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppEventType.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppNewSavedEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppState.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppStoredEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppUpdateSavedEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptEventType.java
* 

[jira] [Created] (YARN-1382) NodeListManager has a memory leak, unusableRMNodesConcurrentSet is never purged

2013-10-31 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created YARN-1382:


 Summary: NodeListManager has a memory leak, 
unusableRMNodesConcurrentSet is never purged
 Key: YARN-1382
 URL: https://issues.apache.org/jira/browse/YARN-1382
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.2.0
Reporter: Alejandro Abdelnur


If a node is in the unusable nodes set (unusableRMNodesConcurrentSet) and never 
comes back, the node will be there forever.

While the leak is not big, it gets aggravated if the NM addresses are 
configured with ephemeral ports, because when the nodes come back they come 
back as new nodes.

Some related details are in YARN-1343.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-987) Adding History Service to use Store and converting Historydata to Report

2013-10-31 Thread Mayank Bansal (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810557#comment-13810557
 ] 

Mayank Bansal commented on YARN-987:


Thanks [~zjshen] for the review.

bq. Do we really need ApplicationHistoryContext? I guess it may be the analog 
of HistoryContext of JHS. However, the designs of AHS and JHS are not almost 
the same. HistoryContext is necessary because it involves the cache mechanism, 
not only mapping the internal objects to the user-friendly objects. We can add 
the cache mechanism later, but now I think it's more obvious that in the 
implementation of ApplicationHistoryProtocol, we directly map an internal 
object into the user-friendly object.

As we discussed offline, yes, that's similar to the JHS design, and as we decided 
to go for the cache implementation I think it makes sense to have a clear 
separation between these two.

bq. In addition, we've involved the configurations in several patches. 
Probably, we'd like to overview them together, to ensure their names are 
consistent.

Done

{code}
+HashMap<ApplicationId, ApplicationHistoryData> applicationsHistory =
+    (histData instanceof HashMap<?, ?>)
+    ? (HashMap<ApplicationId, ApplicationHistoryData>) histData
+    : new HashMap<ApplicationId, ApplicationHistoryData>(histData);
{code}

I removed this in latest patch.

Thanks,
Mayank

 Adding History Service to use Store and converting Historydata to Report
 

 Key: YARN-987
 URL: https://issues.apache.org/jira/browse/YARN-987
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Mayank Bansal
Assignee: Mayank Bansal
 Attachments: YARN-987-1.patch, YARN-987-2.patch, YARN-987-3.patch, 
 YARN-987-4.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-987) Adding History Service to use Store and converting Historydata to Report

2013-10-31 Thread Mayank Bansal (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayank Bansal updated YARN-987:
---

Attachment: YARN-987-4.patch

Attaching latest patch.

Thanks,
Mayank

 Adding History Service to use Store and converting Historydata to Report
 

 Key: YARN-987
 URL: https://issues.apache.org/jira/browse/YARN-987
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Mayank Bansal
Assignee: Mayank Bansal
 Attachments: YARN-987-1.patch, YARN-987-2.patch, YARN-987-3.patch, 
 YARN-987-4.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-546) mapred.fairscheduler.eventlog.enabled removed from Hadoop 2.0

2013-10-31 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810565#comment-13810565
 ] 

Joep Rottinghuis commented on YARN-546:
---

[~sandyr] didn't you mention that you fixed this as part of a different jira?

 mapred.fairscheduler.eventlog.enabled removed from Hadoop 2.0
 -

 Key: YARN-546
 URL: https://issues.apache.org/jira/browse/YARN-546
 Project: Hadoop YARN
  Issue Type: Bug
  Components: scheduler
Affects Versions: 2.0.3-alpha
Reporter: Lohit Vijayarenu
 Attachments: YARN-546.1.patch


 Hadoop 1.0 supported an option to turn on/off FairScheduler event logging 
 using mapred.fairscheduler.eventlog.enabled. In Hadoop 2.0, it looks like 
 this option has been removed (or not ported?), which causes event logging to 
 be enabled by default with no way to turn it off.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-987) Adding History Service to use Store and converting Historydata to Report

2013-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810575#comment-13810575
 ] 

Hadoop QA commented on YARN-987:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12611452/YARN-987-4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javac{color:red}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2333//console

This message is automatically generated.

 Adding History Service to use Store and converting Historydata to Report
 

 Key: YARN-987
 URL: https://issues.apache.org/jira/browse/YARN-987
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Mayank Bansal
Assignee: Mayank Bansal
 Attachments: YARN-987-1.patch, YARN-987-2.patch, YARN-987-3.patch, 
 YARN-987-4.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1380) Enable NM to automatically reuse failed local dirs after they are available again

2013-10-31 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810586#comment-13810586
 ] 

Vinod Kumar Vavilapalli commented on YARN-1380:
---

Duplicate of YARN-90.

 Enable NM to automatically reuse failed local dirs after they are available 
 again
 -

 Key: YARN-1380
 URL: https://issues.apache.org/jira/browse/YARN-1380
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: nodemanager
Reporter: Hou Song
  Labels: features
   Original Estimate: 48h
  Remaining Estimate: 48h

 Currently NM is able to kick bad directories out when they fail, but not able 
 to reuse them if they are fixed. This is inconvenient in large production 
 clusters. 
 In this jira I propose a patch that I am using in my organization. 
 It also adds a new metric for the number of failed directories so people have 
 a clearer view from outside. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (YARN-1380) Enable NM to automatically reuse failed local dirs after they are available again

2013-10-31 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli resolved YARN-1380.
---

Resolution: Duplicate

 Enable NM to automatically reuse failed local dirs after they are available 
 again
 -

 Key: YARN-1380
 URL: https://issues.apache.org/jira/browse/YARN-1380
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: nodemanager
Reporter: Hou Song
  Labels: features
   Original Estimate: 48h
  Remaining Estimate: 48h

 Currently NM is able to kick bad directories out when they fail, but not able 
 to reuse them if they are fixed. This is inconvenient in large production 
 clusters. 
 In this jira I propose a patch that I am using in my organization. 
 It also adds a new metric for the number of failed directories so people have 
 a clearer view from outside. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1374) Resource Manager fails to start due to ConcurrentModificationException

2013-10-31 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810613#comment-13810613
 ] 

Karthik Kambatla commented on YARN-1374:


Also, the patch fixes another bug. Without the patch, monitors get added to 
ResourceManager and not RMActiveServices. 

 Resource Manager fails to start due to ConcurrentModificationException
 --

 Key: YARN-1374
 URL: https://issues.apache.org/jira/browse/YARN-1374
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.3.0
Reporter: Devaraj K
Assignee: Karthik Kambatla
Priority: Blocker
 Attachments: yarn-1374-1.patch, yarn-1374-1.patch


 Resource Manager is failing to start with the below 
 ConcurrentModificationException.
 {code:xml}
 2013-10-30 20:22:42,371 INFO org.apache.hadoop.util.HostsFileReader: 
 Refreshing hosts (include/exclude) list
 2013-10-30 20:22:42,376 INFO org.apache.hadoop.service.AbstractService: 
 Service ResourceManager failed in state INITED; cause: 
 java.util.ConcurrentModificationException
 java.util.ConcurrentModificationException
   at 
 java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372)
   at java.util.AbstractList$Itr.next(AbstractList.java:343)
   at 
 java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1010)
   at 
 org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:187)
   at 
 org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:944)
 2013-10-30 20:22:42,378 INFO 
 org.apache.hadoop.yarn.server.resourcemanager.RMHAProtocolService: 
 Transitioning to standby
 2013-10-30 20:22:42,378 INFO 
 org.apache.hadoop.yarn.server.resourcemanager.RMHAProtocolService: 
 Transitioned to standby
 2013-10-30 20:22:42,378 FATAL 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
 ResourceManager
 java.util.ConcurrentModificationException
   at 
 java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372)
   at java.util.AbstractList$Itr.next(AbstractList.java:343)
   at 
 java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1010)
   at 
 org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:187)
   at 
 org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:944)
 2013-10-30 20:22:42,379 INFO 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: SHUTDOWN_MSG: 
 /
 SHUTDOWN_MSG: Shutting down ResourceManager at HOST-10-18-40-24/10.18.40.24
 /
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-786) Expose application resource usage in RM REST API

2013-10-31 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-786:


Attachment: YARN-786-1.patch

 Expose application resource usage in RM REST API
 

 Key: YARN-786
 URL: https://issues.apache.org/jira/browse/YARN-786
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-786-1.patch, YARN-786.patch


 It might be good to require users to explicitly ask for this information, as 
 it's a little more expensive to collect than the other fields in AppInfo.
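
As a rough illustration of the opt-in idea only: a client could be required to 
ask for the extra usage fields explicitly, for example via a query parameter on 
the existing /ws/v1/cluster/apps endpoint. The "fields=resourceUsage" parameter 
below is hypothetical, not the committed API.
{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class AppInfoFetch {
  // Pass an application id as the first argument.
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://rm-host:8088/ws/v1/cluster/apps/" + args[0]
        + "?fields=resourceUsage");   // hypothetical opt-in parameter
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/json");
    BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
    String line;
    while ((line = in.readLine()) != null) {
      System.out.println(line); // JSON AppInfo, possibly including the usage fields
    }
    in.close();
  }
}
{code}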



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-786) Expose application resource usage in RM REST API

2013-10-31 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810652#comment-13810652
 ] 

Sandy Ryza commented on YARN-786:
-

Uploaded a new patch with updated documentation.

 Expose application resource usage in RM REST API
 

 Key: YARN-786
 URL: https://issues.apache.org/jira/browse/YARN-786
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-786-1.patch, YARN-786.patch


 It might be good to require users to explicitly ask for this information, as 
 it's a little more expensive to collect than the other fields in AppInfo.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-786) Expose application resource usage in RM REST API

2013-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810655#comment-13810655
 ] 

Hadoop QA commented on YARN-786:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12611470/YARN-786-1.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2334//console

This message is automatically generated.

 Expose application resource usage in RM REST API
 

 Key: YARN-786
 URL: https://issues.apache.org/jira/browse/YARN-786
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-786-1.patch, YARN-786.patch


 It might be good to require users to explicitly ask for this information, as 
 it's a little more expensive to collect than the other fields in AppInfo.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-1320) Custom log4j properties in Distributed shell does not work properly.

2013-10-31 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1320:


Attachment: YARN-1320.4.patch

Add a test case. We set a custom log4j property to output debug-level messages. 
After the DS finishes, we go to the AM container logs and search for the 
keyword DEBUG to find out whether debug-level messages were written.
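
Roughly, that check could look like the following standalone sketch (paths and 
helper names here are illustrative, not the test's actual code):
{code}
import java.io.File;
import java.nio.file.Files;

public class DebugLogCheck {
  /** Returns true if any file under the AM container log dir contains "DEBUG". */
  public static boolean hasDebugOutput(File amContainerLogDir) throws Exception {
    File[] logs = amContainerLogDir.listFiles();
    if (logs == null) {
      return false;
    }
    for (File log : logs) {
      String content = new String(Files.readAllBytes(log.toPath()), "UTF-8");
      if (content.contains("DEBUG")) {
        return true;
      }
    }
    return false;
  }
}
{code}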

 Custom log4j properties in Distributed shell does not work properly.
 

 Key: YARN-1320
 URL: https://issues.apache.org/jira/browse/YARN-1320
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.2.1

 Attachments: YARN-1320.1.patch, YARN-1320.2.patch, YARN-1320.3.patch, 
 YARN-1320.4.patch


 Distributed shell cannot pick up custom log4j properties (specified with 
 -log_properties). It always uses default log4j properties.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-786) Expose application resource usage in RM REST API

2013-10-31 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-786:


Attachment: YARN-786-2.patch

 Expose application resource usage in RM REST API
 

 Key: YARN-786
 URL: https://issues.apache.org/jira/browse/YARN-786
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-786-1.patch, YARN-786-2.patch, YARN-786.patch


 It might be good to require users to explicitly ask for this information, as 
 it's a little more expensive to collect than the other fields in AppInfo.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1320) Custom log4j properties in Distributed shell does not work properly.

2013-10-31 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810667#comment-13810667
 ] 

Xuan Gong commented on YARN-1320:
-

Did a manual test on a single-node cluster. Originally the output level is 
INFO; the custom log4j property then sets it to DEBUG. Checking the AM 
container log, the DEBUG-level messages did show up.

 Custom log4j properties in Distributed shell does not work properly.
 

 Key: YARN-1320
 URL: https://issues.apache.org/jira/browse/YARN-1320
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.2.1

 Attachments: YARN-1320.1.patch, YARN-1320.2.patch, YARN-1320.3.patch, 
 YARN-1320.4.patch


 Distributed shell cannot pick up custom log4j properties (specified with 
 -log_properties). It always uses default log4j properties.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1320) Custom log4j properties in Distributed shell does not work properly.

2013-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810666#comment-13810666
 ] 

Hadoop QA commented on YARN-1320:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12611471/YARN-1320.4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2336//console

This message is automatically generated.

 Custom log4j properties in Distributed shell does not work properly.
 

 Key: YARN-1320
 URL: https://issues.apache.org/jira/browse/YARN-1320
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.2.1

 Attachments: YARN-1320.1.patch, YARN-1320.2.patch, YARN-1320.3.patch, 
 YARN-1320.4.patch


 Distributed shell cannot pick up custom log4j properties (specified with 
 -log_properties). It always uses default log4j properties.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-1320) Custom log4j properties in Distributed shell does not work properly.

2013-10-31 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1320:


Attachment: YARN-1320.4.patch

 Custom log4j properties in Distributed shell does not work properly.
 

 Key: YARN-1320
 URL: https://issues.apache.org/jira/browse/YARN-1320
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.2.1

 Attachments: YARN-1320.1.patch, YARN-1320.2.patch, YARN-1320.3.patch, 
 YARN-1320.4.patch, YARN-1320.4.patch


 Distributed shell cannot pick up custom log4j properties (specified with 
 -log_properties). It always uses default log4j properties.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-786) Expose application resource usage in RM REST API

2013-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810702#comment-13810702
 ] 

Hadoop QA commented on YARN-786:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12611472/YARN-786-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-sls 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2335//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2335//console

This message is automatically generated.

 Expose application resource usage in RM REST API
 

 Key: YARN-786
 URL: https://issues.apache.org/jira/browse/YARN-786
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-786-1.patch, YARN-786-2.patch, YARN-786.patch


 It might be good to require users to explicitly ask for this information, as 
 it's a little more expensive to collect than the other fields in AppInfo.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1381) Same relaxLocality appears twice in exception message of AMRMClientImpl#checkLocalityRelaxationConflict()

2013-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810716#comment-13810716
 ] 

Hudson commented on YARN-1381:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4682 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4682/])
YARN-1381. Same relaxLocality appears twice in exception message of 
AMRMClientImpl#checkLocalityRelaxationConflict() (Ted Yu via Sandy Ryza) 
(sandy: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1537632)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/AMRMClientImpl.java


 Same relaxLocality appears twice in exception message of 
 AMRMClientImpl#checkLocalityRelaxationConflict() 
 --

 Key: YARN-1381
 URL: https://issues.apache.org/jira/browse/YARN-1381
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Fix For: 2.2.1

 Attachments: yarn-1381.patch


 Here is related code:
 {code}
 throw new InvalidContainerRequestException("Cannot submit a "
     + "ContainerRequest asking for location " + location
     + " with locality relaxation " + relaxLocality + " when it has "
     + "already been requested with locality relaxation " + relaxLocality);
 {code}
 The last relaxLocality should be  
 reqs.values().iterator().next().remoteRequest.getRelaxLocality() 
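
Presumably the corrected statement would read along these lines (a sketch only, 
with the string quoting reconstructed and the last argument replaced as 
suggested above):
{code}
throw new InvalidContainerRequestException("Cannot submit a "
    + "ContainerRequest asking for location " + location
    + " with locality relaxation " + relaxLocality + " when it has "
    + "already been requested with locality relaxation "
    + reqs.values().iterator().next().remoteRequest.getRelaxLocality());
{code}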



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1320) Custom log4j properties in Distributed shell does not work properly.

2013-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810786#comment-13810786
 ] 

Hadoop QA commented on YARN-1320:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12611477/YARN-1320.4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common:

  
org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2337//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2337//console

This message is automatically generated.

 Custom log4j properties in Distributed shell does not work properly.
 

 Key: YARN-1320
 URL: https://issues.apache.org/jira/browse/YARN-1320
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.2.1

 Attachments: YARN-1320.1.patch, YARN-1320.2.patch, YARN-1320.3.patch, 
 YARN-1320.4.patch, YARN-1320.4.patch


 Distributed shell cannot pick up custom log4j properties (specified with 
 -log_properties). It always uses default log4j properties.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-786) Expose application resource usage in RM REST API

2013-10-31 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810800#comment-13810800
 ] 

Alejandro Abdelnur commented on YARN-786:
-

+1

 Expose application resource usage in RM REST API
 

 Key: YARN-786
 URL: https://issues.apache.org/jira/browse/YARN-786
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-786-1.patch, YARN-786-2.patch, YARN-786.patch


 It might be good to require users to explicitly ask for this information, as 
 it's a little more expensive to collect than the other fields in AppInfo.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1290) Let continuous scheduling achieve more balanced task assignment

2013-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810810#comment-13810810
 ] 

Hudson commented on YARN-1290:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4683 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4683/])
YARN-1290. Let continuous scheduling achieve more balanced task assignment (Wei 
Yan via Sandy Ryza) (sandy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1537731)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java


 Let continuous scheduling achieve more balanced task assignment
 ---

 Key: YARN-1290
 URL: https://issues.apache.org/jira/browse/YARN-1290
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Wei Yan
Assignee: Wei Yan
 Fix For: 2.3.0

 Attachments: YARN-1290.patch, YARN-1290.patch, YARN-1290.patch, 
 YARN-1290.patch, main.pdf


 Currently, in continuous scheduling (YARN-1010), in each round, the thread 
 iterates over pre-ordered nodes and assigns tasks. This mechanism may 
 overload the first several nodes, while the later nodes get no tasks.
 We should sort all nodes according to available resources. In each round, 
 always assign tasks to the nodes with larger available capacity first, which 
 balances the load distribution among all nodes.
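
A minimal sketch of the sorting idea, using a plain number for available memory 
rather than the scheduler's actual Resource/NodeId types:
{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class NodeSortSketch {
  static class Node {
    final String name;
    final long availableMB;
    Node(String name, long availableMB) { this.name = name; this.availableMB = availableMB; }
  }

  public static void main(String[] args) {
    List<Node> nodes = new ArrayList<Node>();
    nodes.add(new Node("n1", 2048));
    nodes.add(new Node("n2", 8192));
    nodes.add(new Node("n3", 4096));
    // Sort descending by available capacity so each scheduling round starts with
    // the least-loaded nodes instead of always walking the same pre-ordered list.
    Collections.sort(nodes, new Comparator<Node>() {
      public int compare(Node a, Node b) {
        return Long.compare(b.availableMB, a.availableMB);
      }
    });
    for (Node n : nodes) {
      System.out.println(n.name + " availableMB=" + n.availableMB);
    }
  }
}
{code}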



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-1279) Expose a client API to allow clients to figure if log aggregation is complete

2013-10-31 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1279:


Attachment: YARN-1279.1.patch

The patch includes all the changes on the RM side and the client side. All the 
PB changes are included. Added a unit test to test the functionality.
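
The client-facing shape of such an API might be something like the following; 
the names here are purely illustrative and are not taken from the patch:
{code}
// Hypothetical sketch of a client-side check for log aggregation completion.
public interface LogAggregationQuery {
  enum LogAggregationState { NOT_STARTED, RUNNING, SUCCEEDED, FAILED }

  /** Ask the RM whether log aggregation has finished for the given application id. */
  LogAggregationState getLogAggregationState(String applicationId) throws Exception;
}
{code}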

 Expose a client API to allow clients to figure if log aggregation is complete
 -

 Key: YARN-1279
 URL: https://issues.apache.org/jira/browse/YARN-1279
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.2.0
Reporter: Arun C Murthy
Assignee: Xuan Gong
 Attachments: YARN-1279.1.patch


 Expose a client API to allow clients to figure if log aggregation is complete



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (YARN-1383) Remove node updates from the Fair Scheduler event log

2013-10-31 Thread Sandy Ryza (JIRA)
Sandy Ryza created YARN-1383:


 Summary: Remove node updates from the Fair Scheduler event log
 Key: YARN-1383
 URL: https://issues.apache.org/jira/browse/YARN-1383
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Affects Versions: 2.2.0
Reporter: Sandy Ryza
Assignee: Sandy Ryza


Writing out a line whenever a node heartbeats is not useful and produces far too much output.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-1383) Remove node updates from the Fair Scheduler event log

2013-10-31 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-1383:
-

Attachment: YARN-1383.patch

 Remove node updates from the Fair Scheduler event log
 -

 Key: YARN-1383
 URL: https://issues.apache.org/jira/browse/YARN-1383
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Affects Versions: 2.2.0
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-1383.patch


 Writing out a line whenever a node heartbeats is not useful and produces far too much output.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-546) mapred.fairscheduler.eventlog.enabled removed from Hadoop 2.0

2013-10-31 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810828#comment-13810828
 ] 

Sandy Ryza commented on YARN-546:
-

I think the issue still exists.  Just filed YARN-1383 to take these node 
updates out of the event log.

 mapred.fairscheduler.eventlog.enabled removed from Hadoop 2.0
 -

 Key: YARN-546
 URL: https://issues.apache.org/jira/browse/YARN-546
 Project: Hadoop YARN
  Issue Type: Bug
  Components: scheduler
Affects Versions: 2.0.3-alpha
Reporter: Lohit Vijayarenu
 Attachments: YARN-546.1.patch


 Hadoop 1.0 supported an option to turn on/off FairScheduler event logging 
 using mapred.fairscheduler.eventlog.enabled. In Hadoop 2.0, it looks like 
 this option has been removed (or not ported?), which causes event logging to 
 be enabled by default with no way to turn it off.
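
For reference, the Hadoop 1.x behaviour amounted to guarding the event-log 
writes behind a boolean switch; a hedged sketch of such a guard (the property 
name is the one mentioned above, the surrounding class is simplified):
{code}
import org.apache.hadoop.conf.Configuration;

public class EventLogGuard {
  private final boolean eventLogEnabled;

  public EventLogGuard(Configuration conf) {
    // Key taken from the Hadoop 1.x option above; the default value here is illustrative.
    this.eventLogEnabled = conf.getBoolean("mapred.fairscheduler.eventlog.enabled", false);
  }

  public void maybeLog(String line) {
    if (eventLogEnabled) {
      System.out.println(line); // stand-in for the real event-log sink
    }
  }
}
{code}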



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1279) Expose a client API to allow clients to figure if log aggregation is complete

2013-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810830#comment-13810830
 ] 

Hadoop QA commented on YARN-1279:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12611502/YARN-1279.1.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2338//console

This message is automatically generated.

 Expose a client API to allow clients to figure if log aggregation is complete
 -

 Key: YARN-1279
 URL: https://issues.apache.org/jira/browse/YARN-1279
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.2.0
Reporter: Arun C Murthy
Assignee: Xuan Gong
 Attachments: YARN-1279.1.patch


 Expose a client API to allow clients to figure if log aggregation is complete



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-979) [YARN-321] Add more APIs related to ApplicationAttempt and Container in ApplicationHistoryProtocol

2013-10-31 Thread Mayank Bansal (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayank Bansal updated YARN-979:
---

Attachment: YARN-979-4.patch

Attaching the patch for the latest YARN-321 branch.

Thanks,
Mayank

 [YARN-321] Add more APIs related to ApplicationAttempt and Container in 
 ApplicationHistoryProtocol
 --

 Key: YARN-979
 URL: https://issues.apache.org/jira/browse/YARN-979
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Mayank Bansal
Assignee: Mayank Bansal
 Attachments: YARN-979-1.patch, YARN-979-3.patch, YARN-979-4.patch, 
 YARN-979.2.patch


 ApplicationHistoryProtocol should have the following APIs as well:
 * getApplicationAttemptReport
 * getApplicationAttempts
 * getContainerReport
 * getContainers
 The corresponding request and response classes need to be added as well.
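
A rough interface-level sketch of the additions; the request/response class 
names are inferred from the list above (and declared as empty placeholders so 
the sketch compiles standalone), not copied from the patch:
{code}
class GetApplicationAttemptReportRequest {}
class GetApplicationAttemptReportResponse {}
class GetApplicationAttemptsRequest {}
class GetApplicationAttemptsResponse {}
class GetContainerReportRequest {}
class GetContainerReportResponse {}
class GetContainersRequest {}
class GetContainersResponse {}

interface ApplicationHistoryProtocolAdditions {
  GetApplicationAttemptReportResponse getApplicationAttemptReport(
      GetApplicationAttemptReportRequest request);
  GetApplicationAttemptsResponse getApplicationAttempts(
      GetApplicationAttemptsRequest request);
  GetContainerReportResponse getContainerReport(
      GetContainerReportRequest request);
  GetContainersResponse getContainers(GetContainersRequest request);
}
{code}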



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-979) [YARN-321] Add more APIs related to ApplicationAttempt and Container in ApplicationHistoryProtocol

2013-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810846#comment-13810846
 ] 

Hadoop QA commented on YARN-979:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12611519/YARN-979-4.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2339//console

This message is automatically generated.

 [YARN-321] Add more APIs related to ApplicationAttempt and Container in 
 ApplicationHistoryProtocol
 --

 Key: YARN-979
 URL: https://issues.apache.org/jira/browse/YARN-979
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Mayank Bansal
Assignee: Mayank Bansal
 Attachments: YARN-979-1.patch, YARN-979-3.patch, YARN-979-4.patch, 
 YARN-979.2.patch


 ApplicationHistoryProtocol should have the following APIs as well:
 * getApplicationAttemptReport
 * getApplicationAttempts
 * getContainerReport
 * getContainers
 The corresponding request and response classes need to be added as well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-1378) Implement a RMStateStore cleaner for deleting application/attempt info

2013-10-31 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-1378:
--

Attachment: YARN-1378.1.patch

Uploaded a patch:
- Create a separate removeAppDispatcher in RMStateStore for handling app 
removal requests. These requests are processed once the configured retain time 
is met.
- The app removal requests are added to the removeAppDispatcher once the 
application is completed.
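
A simplified sketch of the retain-time check such a cleaner might perform (the 
class, the millisecond-based retain setting and the map of completed apps are 
illustrative, not the patch's actual code):
{code}
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CompletedAppCleaner {
  private final long retainMillis;
  // appId -> completion timestamp
  private final Map<String, Long> completedApps = new ConcurrentHashMap<String, Long>();

  public CompletedAppCleaner(long retainMillis) {
    this.retainMillis = retainMillis;
  }

  public void appCompleted(String appId) {
    completedApps.put(appId, System.currentTimeMillis());
  }

  /** Called periodically: drop state for apps whose retain time has elapsed. */
  public void cleanUp() {
    long now = System.currentTimeMillis();
    Iterator<Map.Entry<String, Long>> it = completedApps.entrySet().iterator();
    while (it.hasNext()) {
      Map.Entry<String, Long> e = it.next();
      if (now - e.getValue() >= retainMillis) {
        // In the real store this is where the removal request would be issued.
        it.remove();
      }
    }
  }
}
{code}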

 Implement a RMStateStore cleaner for deleting application/attempt info
 --

 Key: YARN-1378
 URL: https://issues.apache.org/jira/browse/YARN-1378
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-1378.1.patch


 Now that we are storing the final state of application/attempt instead of 
 removing application/attempt info on application/attempt 
 completion (YARN-891), we need a separate RMStateStore cleaner for cleaning 
 the application/attempt state.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-1279) Expose a client API to allow clients to figure if log aggregation is complete

2013-10-31 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1279:


Attachment: YARN-1279.2.patch

Updated the patch based on the latest trunk.

 Expose a client API to allow clients to figure if log aggregation is complete
 -

 Key: YARN-1279
 URL: https://issues.apache.org/jira/browse/YARN-1279
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.2.0
Reporter: Arun C Murthy
Assignee: Xuan Gong
 Attachments: YARN-1279.1.patch, YARN-1279.2.patch


 Expose a client API to allow clients to figure if log aggregation is complete



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1279) Expose a client API to allow clients to figure if log aggregation is complete

2013-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810890#comment-13810890
 ] 

Hadoop QA commented on YARN-1279:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12611529/YARN-1279.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2341//console

This message is automatically generated.

 Expose a client API to allow clients to figure if log aggregation is complete
 -

 Key: YARN-1279
 URL: https://issues.apache.org/jira/browse/YARN-1279
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.2.0
Reporter: Arun C Murthy
Assignee: Xuan Gong
 Attachments: YARN-1279.1.patch, YARN-1279.2.patch


 Expose a client API to allow clients to figure if log aggregation is complete



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1378) Implement a RMStateStore cleaner for deleting application/attempt info

2013-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810907#comment-13810907
 ] 

Hadoop QA commented on YARN-1378:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12611527/YARN-1378.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2340//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/2340//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2340//console

This message is automatically generated.

 Implement a RMStateStore cleaner for deleting application/attempt info
 --

 Key: YARN-1378
 URL: https://issues.apache.org/jira/browse/YARN-1378
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-1378.1.patch


 Now that we are storing the final state of application/attempt instead of 
 removing application/attempt info on application/attempt 
 completion (YARN-891), we need a separate RMStateStore cleaner for cleaning 
 the application/attempt state.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-674) Slow or failing DelegationToken renewals on submission itself make RM unavailable

2013-10-31 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-674:
---

Attachment: YARN-674.4.patch

 Slow or failing DelegationToken renewals on submission itself make RM 
 unavailable
 -

 Key: YARN-674
 URL: https://issues.apache.org/jira/browse/YARN-674
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Vinod Kumar Vavilapalli
Assignee: Omkar Vinit Joshi
 Attachments: YARN-674.1.patch, YARN-674.2.patch, YARN-674.3.patch, 
 YARN-674.4.patch


 This was caused by YARN-280. A slow or down NameNode will make it look like 
 the RM is unavailable, as it may run out of RPC handlers due to blocked 
 client submissions.
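
The general remedy such a patch points at is taking the renewal off the 
submission RPC path; a bare-bones sketch of handing it to a background pool 
(not the actual DelegationTokenRenewer code, and the pool size is arbitrary):
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncRenewSketch {
  // Small dedicated pool: a slow NameNode blocks these workers, not the RPC handlers.
  private final ExecutorService renewerPool = Executors.newFixedThreadPool(5);

  public void submitApplication(final String appId) {
    // Accept the submission immediately...
    System.out.println("accepted " + appId);
    // ...and renew tokens asynchronously.
    renewerPool.submit(new Runnable() {
      public void run() {
        renewTokensFor(appId); // may be slow or fail without tying up an RPC handler
      }
    });
  }

  private void renewTokensFor(String appId) {
    // Placeholder for the actual renewal call against the NameNode.
  }
}
{code}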



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1210) During RM restart, RM should start a new attempt only when previous attempt exits for real

2013-10-31 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810927#comment-13810927
 ] 

Omkar Vinit Joshi commented on YARN-1210:
-

submitting patch on top of YARN-674.

 During RM restart, RM should start a new attempt only when previous attempt 
 exits for real
 --

 Key: YARN-1210
 URL: https://issues.apache.org/jira/browse/YARN-1210
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Vinod Kumar Vavilapalli
Assignee: Omkar Vinit Joshi
 Attachments: YARN-1210.1.patch


 When the RM recovers, it can wait for existing AMs to contact it back and then 
 kill them forcefully before even starting a new AM. In the worst case, the RM 
 will start a new AppAttempt after waiting for 10 mins (the expiry interval). 
 This way we'll minimize multiple AMs racing with each other, which can help 
 issues with downstream components like Pig, Hive and Oozie during RM restart.
 In the meanwhile, new apps will proceed as usual while existing apps wait for 
 recovery.
 This can continue to be useful after work-preserving restart, so that AMs 
 which can properly sync back up with RM can continue to run and those that 
 don't are guaranteed to be killed before starting a new attempt.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-1206) Container logs link is broken on RM web UI after application finished

2013-10-31 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-1206:
--

Target Version/s: 2.3.0

 Container logs link is broken on RM web UI after application finished
 -

 Key: YARN-1206
 URL: https://issues.apache.org/jira/browse/YARN-1206
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Priority: Blocker

 With log aggregation disabled, the logs link works properly while a container 
 is running, but after the application finishes, the link shows 'Container 
 does not exist.'



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-674) Slow or failing DelegationToken renewals on submission itself make RM unavailable

2013-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810936#comment-13810936
 ] 

Hadoop QA commented on YARN-674:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12611535/YARN-674.4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2342//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2342//console

This message is automatically generated.

 Slow or failing DelegationToken renewals on submission itself make RM 
 unavailable
 -

 Key: YARN-674
 URL: https://issues.apache.org/jira/browse/YARN-674
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Vinod Kumar Vavilapalli
Assignee: Omkar Vinit Joshi
 Attachments: YARN-674.1.patch, YARN-674.2.patch, YARN-674.3.patch, 
 YARN-674.4.patch


 This was caused by YARN-280. A slow or down NameNode will make it look like 
 the RM is unavailable, as it may run out of RPC handlers due to blocked 
 client submissions.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-90) NodeManager should identify failed disks becoming good back again

2013-10-31 Thread Hou Song (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-90?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810976#comment-13810976
 ] 

Hou Song commented on YARN-90:
--

Sorry for the last comment, I meant: 
For unit tests, I added a test to TestLocalDirsHandlerService, mimicking a disk 
failure with chmod 000 failed_dir and a disk repair with chmod 755 failed_dir. 
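
In plain Java terms, the chmod trick described there amounts to something like 
the following Unix-only sketch (a standalone illustration, not the actual 
TestLocalDirsHandlerService code):
{code}
import java.io.File;
import java.io.IOException;

public class DiskFailureMimic {
  /** Mimic a disk failure: chmod 000 removes all access to the directory. */
  static void failDir(File dir) throws IOException, InterruptedException {
    run("chmod", "000", dir.getAbsolutePath());
  }

  /** Mimic the disk being repaired: chmod 755 restores normal access. */
  static void repairDir(File dir) throws IOException, InterruptedException {
    run("chmod", "755", dir.getAbsolutePath());
  }

  private static void run(String... cmd) throws IOException, InterruptedException {
    new ProcessBuilder(cmd).inheritIO().start().waitFor();
  }
}
{code}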

 NodeManager should identify failed disks becoming good back again
 -

 Key: YARN-90
 URL: https://issues.apache.org/jira/browse/YARN-90
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Reporter: Ravi Gummadi
 Attachments: YARN-90.1.patch, YARN-90.patch


 MAPREDUCE-3121 makes the NodeManager identify disk failures. But once a disk 
 goes down, it is marked as failed forever. To reuse that disk (after it 
 becomes good), the NodeManager needs a restart. This JIRA is to improve the 
 NodeManager to reuse good disks (which could have been bad some time back).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-1279) Expose a client API to allow clients to figure if log aggregation is complete

2013-10-31 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1279:


Attachment: YARN-1279.2.patch

 Expose a client API to allow clients to figure if log aggregation is complete
 -

 Key: YARN-1279
 URL: https://issues.apache.org/jira/browse/YARN-1279
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.2.0
Reporter: Arun C Murthy
Assignee: Xuan Gong
 Attachments: YARN-1279.1.patch, YARN-1279.2.patch, YARN-1279.2.patch


 Expose a client API to allow clients to figure if log aggregation is complete



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-1320) Custom log4j properties in Distributed shell does not work properly.

2013-10-31 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1320:


Attachment: YARN-1320.4.patch

 Custom log4j properties in Distributed shell does not work properly.
 

 Key: YARN-1320
 URL: https://issues.apache.org/jira/browse/YARN-1320
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.2.1

 Attachments: YARN-1320.1.patch, YARN-1320.2.patch, YARN-1320.3.patch, 
 YARN-1320.4.patch, YARN-1320.4.patch, YARN-1320.4.patch


 Distributed shell cannot pick up custom log4j properties (specified with 
 -log_properties). It always uses default log4j properties.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-1320) Custom log4j properties in Distributed shell does not work properly.

2013-10-31 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1320:


Attachment: YARN-1320.4.patch

 Custom log4j properties in Distributed shell does not work properly.
 

 Key: YARN-1320
 URL: https://issues.apache.org/jira/browse/YARN-1320
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.2.1

 Attachments: YARN-1320.1.patch, YARN-1320.2.patch, YARN-1320.3.patch, 
 YARN-1320.4.patch, YARN-1320.4.patch, YARN-1320.4.patch, YARN-1320.4.patch


 Distributed shell cannot pick up custom log4j properties (specified with 
 -log_properties). It always uses default log4j properties.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1320) Custom log4j properties in Distributed shell does not work properly.

2013-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13811004#comment-13811004
 ] 

Hadoop QA commented on YARN-1320:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12611544/YARN-1320.4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2344//console

This message is automatically generated.

 Custom log4j properties in Distributed shell does not work properly.
 

 Key: YARN-1320
 URL: https://issues.apache.org/jira/browse/YARN-1320
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.2.1

 Attachments: YARN-1320.1.patch, YARN-1320.2.patch, YARN-1320.3.patch, 
 YARN-1320.4.patch, YARN-1320.4.patch, YARN-1320.4.patch, YARN-1320.4.patch


 Distributed shell cannot pick up custom log4j properties (specified with 
 -log_properties). It always uses default log4j properties.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-1320) Custom log4j properties in Distributed shell does not work properly.

2013-10-31 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1320:


Attachment: YARN-1320.4.patch

 Custom log4j properties in Distributed shell does not work properly.
 

 Key: YARN-1320
 URL: https://issues.apache.org/jira/browse/YARN-1320
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.2.1

 Attachments: YARN-1320.1.patch, YARN-1320.2.patch, YARN-1320.3.patch, 
 YARN-1320.4.patch, YARN-1320.4.patch, YARN-1320.4.patch, YARN-1320.4.patch, 
 YARN-1320.4.patch


 Distributed shell cannot pick up custom log4j properties (specified with 
 -log_properties). It always uses default log4j properties.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1320) Custom log4j properties in Distributed shell does not work properly.

2013-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13811011#comment-13811011
 ] 

Hadoop QA commented on YARN-1320:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12611545/YARN-1320.4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common:

  
org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2345//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2345//console

This message is automatically generated.

 Custom log4j properties in Distributed shell does not work properly.
 

 Key: YARN-1320
 URL: https://issues.apache.org/jira/browse/YARN-1320
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.2.1

 Attachments: YARN-1320.1.patch, YARN-1320.2.patch, YARN-1320.3.patch, 
 YARN-1320.4.patch, YARN-1320.4.patch, YARN-1320.4.patch, YARN-1320.4.patch, 
 YARN-1320.4.patch


 Distributed shell cannot pick up custom log4j properties (specified with 
 -log_properties). It always uses default log4j properties.



--
This message was sent by Atlassian JIRA
(v6.1#6144)