[jira] [Commented] (YARN-981) YARN/MR2/Job-history /logs link does not have correct content

2013-09-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755698#comment-13755698
 ] 

Hudson commented on YARN-981:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #319 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/319/])
YARN-981. Fixed YARN webapp so that /logs servlet works like before. Addendum 
patch to fix bugs in the first patch. Contributed by Jian He. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1519208)
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/MyTestJAXBContextResolver.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/MyTestWebService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/TestWebApp.java


 YARN/MR2/Job-history /logs link does not have correct content
 -

 Key: YARN-981
 URL: https://issues.apache.org/jira/browse/YARN-981
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Jian He
 Fix For: 2.1.1-beta

 Attachments: YARN-981.1.patch, YARN-981.2.patch, YARN-981.3.patch, 
 YARN-981.4.patch, YARN-981.5.patch, YARN-981.6.patch, 
 YARN-981-branch-2.1.txt, YARN-981.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (YARN-1131) $ yarn logs should return a message that log aggregation is in progress if the YARN application is running

2013-09-01 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du reassigned YARN-1131:


Assignee: Junping Du

 $ yarn logs should return a message that log aggregation is in progress if 
 the YARN application is running
 -

 Key: YARN-1131
 URL: https://issues.apache.org/jira/browse/YARN-1131
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya
Assignee: Junping Du
Priority: Minor
 Fix For: 2.1.1-beta


 In the case when log aggregation is enabled, if a user submits a MapReduce job 
 and runs $ yarn logs -applicationId <app ID> while the YARN application is 
 running, the command returns no message and drops the user back to the shell. 
 It would be nice to tell the user that log aggregation is in progress.
 {code}
 -bash-4.1$ /usr/bin/yarn logs -applicationId application_1377900193583_0002
 -bash-4.1$
 {code}
 At the same time, if an invalid application ID is given, the YARN CLI should 
 say that the application ID is incorrect rather than throwing a 
 NoSuchElementException.
 {code}
 $ /usr/bin/yarn logs -applicationId application_0
 Exception in thread "main" java.util.NoSuchElementException
 at com.google.common.base.AbstractIterator.next(AbstractIterator.java:75)
 at 
 org.apache.hadoop.yarn.util.ConverterUtils.toApplicationId(ConverterUtils.java:124)
 at 
 org.apache.hadoop.yarn.util.ConverterUtils.toApplicationId(ConverterUtils.java:119)
 at org.apache.hadoop.yarn.logaggregation.LogDumper.run(LogDumper.java:110)
 at org.apache.hadoop.yarn.logaggregation.LogDumper.main(LogDumper.java:255)
 {code}
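 A minimal sketch of the kind of guard that would avoid the stack trace above, 
 assuming the application_<timestamp>_<id> format; the class and the messages 
 are invented for illustration, not part of any patch here:
 {code}
 // Hypothetical pre-validation for the CLI, so a malformed ID produces a
 // one-line error instead of a NoSuchElementException from the parser.
 public final class AppIdValidator {
   private static final String PREFIX = "application_";

   // Returns an error message, or null if the ID looks well-formed.
   public static String validate(String appIdStr) {
     if (appIdStr == null || !appIdStr.startsWith(PREFIX)) {
       return "Invalid ApplicationId: " + appIdStr;
     }
     String[] parts = appIdStr.substring(PREFIX.length()).split("_");
     if (parts.length != 2) {
       return "Invalid ApplicationId (expected application_<timestamp>_<id>): " + appIdStr;
     }
     try {
       Long.parseLong(parts[0]);     // cluster timestamp
       Integer.parseInt(parts[1]);   // sequence number
     } catch (NumberFormatException e) {
       return "Invalid ApplicationId (non-numeric fields): " + appIdStr;
     }
     return null; // safe to hand to ConverterUtils.toApplicationId
   }
 }
 {code}
 With a guard like this, application_0 would yield a one-line error instead of 
 the stack trace above.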

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-981) YARN/MR2/Job-history /logs link does not have correct content

2013-09-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755722#comment-13755722
 ] 

Hudson commented on YARN-981:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1509 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1509/])
YARN-981. Fixed YARN webapp so that /logs servlet works like before. Addendum 
patch to fix bugs in the first patch. Contributed by Jian He. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1519208)
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/MyTestJAXBContextResolver.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/MyTestWebService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/TestWebApp.java


 YARN/MR2/Job-history /logs link does not have correct content
 -

 Key: YARN-981
 URL: https://issues.apache.org/jira/browse/YARN-981
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Jian He
 Fix For: 2.1.1-beta

 Attachments: YARN-981.1.patch, YARN-981.2.patch, YARN-981.3.patch, 
 YARN-981.4.patch, YARN-981.5.patch, YARN-981.6.patch, 
 YARN-981-branch-2.1.txt, YARN-981.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-981) YARN/MR2/Job-history /logs link does not have correct content

2013-09-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755727#comment-13755727
 ] 

Hudson commented on YARN-981:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1536 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1536/])
YARN-981. Fixed YARN webapp so that /logs servlet works like before. Addendum 
patch to fix bugs in the first patch. Contributed by Jian He. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1519208)
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/MyTestJAXBContextResolver.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/MyTestWebService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/TestWebApp.java


 YARN/MR2/Job-history /logs link does not have correct content
 -

 Key: YARN-981
 URL: https://issues.apache.org/jira/browse/YARN-981
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Jian He
 Fix For: 2.1.1-beta

 Attachments: YARN-981.1.patch, YARN-981.2.patch, YARN-981.3.patch, 
 YARN-981.4.patch, YARN-981.5.patch, YARN-981.6.patch, 
 YARN-981-branch-2.1.txt, YARN-981.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-696) Enable multiple states to be specified in Resource Manager apps REST call

2013-09-01 Thread Trevor Lorimer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Lorimer updated YARN-696:


Attachment: YARN-696.diff

I have applied the suggestion in point 1 and addressed the suggestion in point 
2 by combining the KILLED and ACCEPTED tests into one test. 
Note: the order is not guaranteed for the returned JSON for the KILLED and 
ACCEPTED apps.

 Enable multiple states to be specified in Resource Manager apps REST call
 

 Key: YARN-696
 URL: https://issues.apache.org/jira/browse/YARN-696
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Trevor Lorimer
Assignee: Trevor Lorimer
 Attachments: YARN-696.diff, YARN-696.diff, YARN-696.diff


 Within the YARN Resource Manager REST API, the GET call which returns all 
 Applications can be filtered by a single State query parameter 
 (http://<rm http address:port>/ws/v1/cluster/apps). 
 There are 8 possible states (New, Submitted, Accepted, Running, Finishing, 
 Finished, Failed, Killed). If no state parameter is specified, all states are 
 returned; however, if a sub-set of states is required, then multiple REST 
 calls are needed (max. of 7).
 The proposal is to be able to specify multiple states in a single REST call.
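 As a sketch of what the proposed call might look like from a Java client, 
 assuming a comma-separated states query parameter (the parameter name and the 
 RM address are assumptions until the patch is committed):
 {code}
 import java.io.BufferedReader;
 import java.io.InputStreamReader;
 import java.net.HttpURLConnection;
 import java.net.URL;

 // Sketch: one GET for several states instead of up to 7 separate calls.
 public class MultiStateAppsQuery {
   public static void main(String[] args) throws Exception {
     URL url = new URL(
         "http://rmhost:8088/ws/v1/cluster/apps?states=KILLED,ACCEPTED");
     HttpURLConnection conn = (HttpURLConnection) url.openConnection();
     conn.setRequestProperty("Accept", "application/json");
     try (BufferedReader in = new BufferedReader(
         new InputStreamReader(conn.getInputStream()))) {
       String line;
       while ((line = in.readLine()) != null) {
         System.out.println(line); // JSON app list; ordering is not guaranteed
       }
     }
   }
 }
 {code}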

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-696) Enable multiple states to be specified in Resource Manager apps REST call

2013-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755808#comment-13755808
 ] 

Hadoop QA commented on YARN-696:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12601006/YARN-696.diff
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1816//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1816//console

This message is automatically generated.

 Enable multiple states to be specified in Resource Manager apps REST call
 

 Key: YARN-696
 URL: https://issues.apache.org/jira/browse/YARN-696
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Trevor Lorimer
Assignee: Trevor Lorimer
 Attachments: YARN-696.diff, YARN-696.diff, YARN-696.diff


 Within the YARN Resource Manager REST API, the GET call which returns all 
 Applications can be filtered by a single State query parameter 
 (http://<rm http address:port>/ws/v1/cluster/apps). 
 There are 8 possible states (New, Submitted, Accepted, Running, Finishing, 
 Finished, Failed, Killed). If no state parameter is specified, all states are 
 returned; however, if a sub-set of states is required, then multiple REST 
 calls are needed (max. of 7).
 The proposal is to be able to specify multiple states in a single REST call.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-936) RMWebServices filtering apps by states uses RMAppState instead of YarnApplicationState.

2013-09-01 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-936:
-

Target Version/s: 2.1.1-beta

 RMWebServices filtering apps by states uses RMAppState instead of 
 YarnApplicationState.
 ---

 Key: YARN-936
 URL: https://issues.apache.org/jira/browse/YARN-936
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Vinod Kumar Vavilapalli

 Realized this while reviewing YARN-696. YarnApplicationState is the end-user 
 API and the one that users expect to pass as an argument to the REST API.
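 A sketch of the translation this implies, with both enums simplified and the 
 helper invented for illustration (the internal states are a superset of the 
 public ones, so the REST layer should translate rather than leak them):
 {code}
 // Illustrative only: the real enums live in the RM and in the public API;
 // both are abbreviated here, and StateMapper is not an actual YARN class.
 enum RMAppState { NEW, SUBMITTED, ACCEPTED, RUNNING, FINISHING, FINISHED, FAILED, KILLED }
 enum YarnApplicationState { NEW, SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED }

 class StateMapper {
   static YarnApplicationState toPublicState(RMAppState s) {
     switch (s) {
       case NEW:       return YarnApplicationState.NEW;
       case SUBMITTED: return YarnApplicationState.SUBMITTED;
       case ACCEPTED:  return YarnApplicationState.ACCEPTED;
       case RUNNING:   return YarnApplicationState.RUNNING;
       case FINISHING: // internal-only state collapses to FINISHED
       case FINISHED:  return YarnApplicationState.FINISHED;
       case FAILED:    return YarnApplicationState.FAILED;
       default:        return YarnApplicationState.KILLED;
     }
   }
 }
 {code}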

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-219) NM should aggregate logs when application finishes.

2013-09-01 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-219:
-

Issue Type: Bug  (was: Sub-task)
Parent: (was: YARN-162)

 NM should aggregate logs when application finishes.
 ---

 Key: YARN-219
 URL: https://issues.apache.org/jira/browse/YARN-219
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 0.23.5
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.5

 Attachments: YARN-219.txt, YARN-219.txt, YARN-219.txt


 The NM should only aggregate logs when the application finishes.  This will 
 reduce the load on the NN, especially with respect to lease renewal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-221) NM should provide a way for AM to tell it not to aggregate logs.

2013-09-01 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-221:
-

Issue Type: Sub-task  (was: Bug)
Parent: YARN-431

 NM should provide a way for AM to tell it not to aggregate logs.
 

 Key: YARN-221
 URL: https://issues.apache.org/jira/browse/YARN-221
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 0.23.4
Reporter: Robert Joseph Evans

 The NodeManager should provide a way for an AM to tell it that either the 
 logs should not be aggregated, that they should be aggregated with a high 
 priority, or that they should be aggregated but with a lower priority.  The 
 AM should be able to do this in the ContainerLaunchContext to provide a 
 default value, but should also be able to update the value when the container 
 is released.
 This would allow the NM to not aggregate logs in some cases, and to avoid 
 connecting to the NN at all.
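 Nothing below is YARN API at this point; a hypothetical shape for the 
 proposal, with every name invented:
 {code}
 // Hypothetical API shape only. The AM would set a default policy in the
 // launch context and could revise it when the container is released.
 enum LogAggregationPolicy { DO_NOT_AGGREGATE, AGGREGATE_HIGH_PRIORITY, AGGREGATE_LOW_PRIORITY }

 interface LaunchContextLogHint {
   void setLogAggregationPolicy(LogAggregationPolicy policy);    // default at launch
 }

 interface ContainerReleaseLogHint {
   void updateLogAggregationPolicy(LogAggregationPolicy policy); // revised at release
 }
 {code}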

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-162) nodemanager log aggregation has scaling issues with namenode

2013-09-01 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-162:
-

Issue Type: Sub-task  (was: Bug)
Parent: YARN-431

 nodemanager log aggregation has scaling issues with namenode
 

 Key: YARN-162
 URL: https://issues.apache.org/jira/browse/YARN-162
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 0.23.3
Reporter: Nathan Roberts
Assignee: Siddharth Seth
Priority: Critical
 Attachments: YARN-162.txt, YARN-162_v2.txt, YARN-162_v2.txt, 
 YARN-162_WIP.txt


 Log aggregation causes fd explosion on the namenode. On large clusters this 
 can exhaust FDs to the point where datanodes can't check-in.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-220) NM should limit the number of applications whose logs are being aggregated

2013-09-01 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-220:
-

Issue Type: Bug  (was: Sub-task)
Parent: (was: YARN-162)

 NM should limit the number of applications whose logs are being aggregated
 --

 Key: YARN-220
 URL: https://issues.apache.org/jira/browse/YARN-220
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 0.23.4
Reporter: Robert Joseph Evans

 The NodeManager should limit the number of applications that have their logs 
 being aggregated in parallel.  This will reduce the load on the NN.  We need 
 to ensure that the RM will continue to renew the token while this is 
 happening.  We should also look at whether the NM can delete some of the 
 logs if it starts to fall behind.
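 One straightforward way to bound the parallelism, sketched with a semaphore; 
 the limit and the class are illustrative, not from any patch:
 {code}
 import java.util.concurrent.Semaphore;

 // Sketch: cap how many applications aggregate logs at once, so the NM opens
 // only a bounded number of HDFS streams (and NN leases) in parallel.
 public class BoundedAggregator {
   private final Semaphore slots = new Semaphore(10); // illustrative limit

   public void aggregate(String appId, Runnable uploadLogs)
       throws InterruptedException {
     slots.acquire(); // later apps queue here until a slot frees up
     try {
       uploadLogs.run(); // upload this app's container logs to HDFS
     } finally {
       slots.release();
     }
   }
 }
 {code}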

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-219) NM should aggregate logs when application finishes.

2013-09-01 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-219:
-

Issue Type: Sub-task  (was: Bug)
Parent: YARN-431

 NM should aggregate logs when application finishes.
 ---

 Key: YARN-219
 URL: https://issues.apache.org/jira/browse/YARN-219
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 0.23.5
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.5

 Attachments: YARN-219.txt, YARN-219.txt, YARN-219.txt


 The NM should only aggregate logs when the application finishes.  This will 
 reduce the load on the NN, especially with respect to lease renewal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-221) NM should provide a way for AM to tell it not to aggregate logs.

2013-09-01 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-221:
-

Issue Type: Bug  (was: Sub-task)
Parent: (was: YARN-162)

 NM should provide a way for AM to tell it not to aggregate logs.
 

 Key: YARN-221
 URL: https://issues.apache.org/jira/browse/YARN-221
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 0.23.4
Reporter: Robert Joseph Evans

 The NodeManager should provide a way for an AM to tell it that either the 
 logs should not be aggregated, that they should be aggregated with a high 
 priority, or that they should be aggregated but with a lower priority.  The 
 AM should be able to do this in the ContainerLaunchContext to provide a 
 default value, but should also be able to update the value when the container 
 is released.
 This would allow the NM to not aggregate logs in some cases, and to avoid 
 connecting to the NN at all.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-649) Make container logs available over HTTP in plain text

2013-09-01 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-649:
-

Issue Type: Sub-task  (was: Improvement)
Parent: YARN-431

 Make container logs available over HTTP in plain text
 -

 Key: YARN-649
 URL: https://issues.apache.org/jira/browse/YARN-649
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-649-2.patch, YARN-649-3.patch, YARN-649-4.patch, 
 YARN-649-5.patch, YARN-649-6.patch, YARN-649-7.patch, YARN-649.patch, 
 YARN-752-1.patch


 It would be good to make container logs available over the REST API for 
 MAPREDUCE-4362, and so that they can be accessed programmatically in general.
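 Once this lands, fetching a log is a plain GET against the NM web-services; a 
 sketch, where the /ws/v1/node/containerlogs path is inferred from the 
 NMWebServices changes in the patch and should be treated as an assumption:
 {code}
 import java.io.BufferedReader;
 import java.io.InputStreamReader;
 import java.net.URL;

 // Sketch: read a container log file as plain text from the NM web-service.
 // The host, port, container ID, and exact path are all placeholders.
 public class FetchContainerLog {
   public static void main(String[] args) throws Exception {
     URL url = new URL("http://nmhost:8042/ws/v1/node/containerlogs/"
         + "container_1377900193583_0002_01_000001/stderr");
     try (BufferedReader in = new BufferedReader(
         new InputStreamReader(url.openStream()))) {
       String line;
       while ((line = in.readLine()) != null) {
         System.out.println(line);
       }
     }
   }
 }
 {code}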

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-649) Make container logs available over HTTP in plain text

2013-09-01 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755828#comment-13755828
 ] 

Vinod Kumar Vavilapalli commented on YARN-649:
--

Assuming my previous-self did due diligence on the review, the latest patch 
looks good to me.

Will rekick Jenkins directly and commit it if it says okay.

 Make container logs available over HTTP in plain text
 -

 Key: YARN-649
 URL: https://issues.apache.org/jira/browse/YARN-649
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-649-2.patch, YARN-649-3.patch, YARN-649-4.patch, 
 YARN-649-5.patch, YARN-649-6.patch, YARN-649-7.patch, YARN-649.patch, 
 YARN-752-1.patch


 It would be good to make container logs available over the REST API for 
 MAPREDUCE-4362, and so that they can be accessed programmatically in general.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1111) NM containerlogs servlet can't handle logs of more than a GB

2013-09-01 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-1111:
--

Issue Type: Sub-task  (was: Bug)
Parent: YARN-431

 NM containerlogs servlet can't handle logs of more than a GB
 

 Key: YARN-1111
 URL: https://issues.apache.org/jira/browse/YARN-1111
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 2.1.0-beta
 Environment: Long-lived service generating lots of log data from 
 HBase running at debug level
Reporter: Steve Loughran
Priority: Minor

 If a container is set up to log stdout to a file, the container log servlet 
 will list the file:
 {code}
 err.txt : Total file length is 551 bytes.
 out.txt : Total file length is 1572099246 bytes.
 {code}
 If you actually click on out.txt, the tail logic takes a *very* long time 
 to react. There is also the question of what will happen if the log fills up 
 that volume.
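 The usual fix is a tail that seeks instead of reading from offset zero; a 
 sketch over a plain local file (the servlet's actual plumbing is not shown):
 {code}
 import java.io.IOException;
 import java.io.RandomAccessFile;

 // Sketch: serve the last N bytes of a multi-GB log by seeking directly to
 // the tail, rather than streaming the whole file to find it.
 public class LogTail {
   public static byte[] tail(String path, int lastBytes) throws IOException {
     try (RandomAccessFile f = new RandomAccessFile(path, "r")) {
       long start = Math.max(0, f.length() - lastBytes);
       f.seek(start); // O(1) jump, regardless of file size
       byte[] buf = new byte[(int) (f.length() - start)];
       f.readFully(buf);
       return buf;
     }
   }
 }
 {code}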

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-431) [Umbrella] Complete/Stabilize YARN application log-handling

2013-09-01 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-431:
-

Summary: [Umbrella] Complete/Stabilize YARN application log-handling  (was: 
[Umbrella] Complete/Stabilize YARN application log-aggregation)

 [Umbrella] Complete/Stabilize YARN application log-handling
 --

 Key: YARN-431
 URL: https://issues.apache.org/jira/browse/YARN-431
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Vinod Kumar Vavilapalli



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-649) Make container logs available over HTTP in plain text

2013-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755835#comment-13755835
 ] 

Hadoop QA commented on YARN-649:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600017/YARN-649-7.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1817//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1817//console

This message is automatically generated.

 Make container logs available over HTTP in plain text
 -

 Key: YARN-649
 URL: https://issues.apache.org/jira/browse/YARN-649
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-649-2.patch, YARN-649-3.patch, YARN-649-4.patch, 
 YARN-649-5.patch, YARN-649-6.patch, YARN-649-7.patch, YARN-649.patch, 
 YARN-752-1.patch


 It would be good to make container logs available over the REST API for 
 MAPREDUCE-4362, and so that they can be accessed programmatically in general.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1134) Add support for zipping/unzipping logs while in transit for the NM logs web-service

2013-09-01 Thread Vinod Kumar Vavilapalli (JIRA)
Vinod Kumar Vavilapalli created YARN-1134:
-

 Summary: Add support for zipping/unzipping logs while in transit 
for the NM logs web-service
 Key: YARN-1134
 URL: https://issues.apache.org/jira/browse/YARN-1134
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Vinod Kumar Vavilapalli


As [~zjshen] pointed out at 
[YARN-649|https://issues.apache.org/jira/browse/YARN-649?focusedCommentId=13698415&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13698415],
{quote}
For the long running applications, they may have a big log file, such that it 
will take a long time to download the log file via the RESTful API. 
Consequently, the HTTP connection may time out before a complete log file is 
downloaded. Maybe it is good to zip the log file before sending it, 
and unzip it after receiving it.
{quote}
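A sketch of the server side of such a scheme using standard gzip streaming; the 
streams below stand in for the real servlet plumbing, and the class is invented:
{code}
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.GZIPOutputStream;

// Sketch: gzip a log file while streaming it, so a long download sends far
// fewer bytes and is less likely to hit a connection timeout. A real
// servlet would also set "Content-Encoding: gzip" on the response.
public class GzipLogStreamer {
  public static void stream(String logPath, OutputStream httpOut)
      throws Exception {
    try (InputStream in = new FileInputStream(logPath);
         GZIPOutputStream gz = new GZIPOutputStream(httpOut)) {
      byte[] buf = new byte[64 * 1024];
      int n;
      while ((n = in.read(buf)) != -1) {
        gz.write(buf, 0, n); // compress chunk-by-chunk; no full-file buffering
      }
      gz.finish();
    }
  }
}
{code}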

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-649) Make container logs available over HTTP in plain text

2013-09-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755845#comment-13755845
 ] 

Hudson commented on YARN-649:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #4359 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4359/])
YARN-649. Added a new NM web-service to serve container logs in plain text over 
HTTP. Contributed by Sandy Ryza. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1519326)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/Context.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/ContainerLogsPage.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/ContainerLogsUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/NMWebServices.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestEventFlow.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/TestApplication.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestContainerLogsPage.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServices.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesApps.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesContainers.java


 Make container logs available over HTTP in plain text
 -

 Key: YARN-649
 URL: https://issues.apache.org/jira/browse/YARN-649
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Fix For: 2.3.0

 Attachments: YARN-649-2.patch, YARN-649-3.patch, YARN-649-4.patch, 
 YARN-649-5.patch, YARN-649-6.patch, YARN-649-7.patch, YARN-649.patch, 
 YARN-752-1.patch


 It would be good to make container logs available over the REST API for 
 MAPREDUCE-4362, and so that they can be accessed programmatically in general.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-173) Page navigation support for container logs page and the logs web-service on NMs

2013-09-01 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-173:
-

Summary: Page navigation support for container logs page and the logs 
web-service on NMs  (was: Page navigation support for container logs page)

 Page navigation support for container logs page and the logs web-service on 
 NMs
 ---

 Key: YARN-173
 URL: https://issues.apache.org/jira/browse/YARN-173
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 2.0.2-alpha, 0.23.3
Reporter: Jason Lowe
Assignee: Omkar Vinit Joshi
  Labels: usability

 ContainerLogsPage and AggregatedLogsBlock both support {{start}} and {{end}} 
 parameters which are a big help when trying to sift through a huge log.  
 However it's annoying to have to manually edit the URL to go through a giant 
 log page-by-page.  It would be very handy if the web page also provided page 
 navigation links so flipping to the next/previous/first/last chunk of log is 
 a simple click away.  Bonus points for providing a way to easily change the 
 size of the log chunk shown per page.
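 The navigation links are mostly arithmetic over the existing {{start}} and 
 {{end}} parameters; a sketch, with the URL shape assumed:
 {code}
 // Sketch: compute the next/previous byte window for the start/end
 // parameters. The URL template is an assumption; only the arithmetic matters.
 public class LogPager {
   public static String pageLink(String base, long fileLen, long start,
       long pageSize, boolean next) {
     long newStart = next
         ? Math.min(start + pageSize, Math.max(0, fileLen - pageSize))
         : Math.max(0, start - pageSize);
     long newEnd = Math.min(newStart + pageSize, fileLen);
     return base + "?start=" + newStart + "&end=" + newEnd;
   }
 }
 {code}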

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-173) Page navigation support for container logs page and the logs web-service on NMs

2013-09-01 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755846#comment-13755846
 ] 

Vinod Kumar Vavilapalli commented on YARN-173:
--

We need pagination support in the NM log-webservice added via YARN-649.

 Page navigation support for container logs page and the logs web-service on 
 NMs
 ---

 Key: YARN-173
 URL: https://issues.apache.org/jira/browse/YARN-173
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 2.0.2-alpha, 0.23.3
Reporter: Jason Lowe
Assignee: Omkar Vinit Joshi
  Labels: usability

 ContainerLogsPage and AggregatedLogsBlock both support {{start}} and {{end}} 
 parameters which are a big help when trying to sift through a huge log.  
 However it's annoying to have to manually edit the URL to go through a giant 
 log page-by-page.  It would be very handy if the web page also provided page 
 navigation links so flipping to the next/previous/first/last chunk of log is 
 a simple click away.  Bonus points for providing a way to easily change the 
 size of the log chunk shown per page.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-292) ResourceManager throws ArrayIndexOutOfBoundsException while handling CONTAINER_ALLOCATED for application attempt

2013-09-01 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755851#comment-13755851
 ] 

Vinod Kumar Vavilapalli commented on YARN-292:
--

bq. 3. The application is in FiFoScheduler#applications, but RMAppAttemptImpl 
doesn't get it. First of all, FiFoScheduler#applications is a TreeMap, which is 
not thread safe (FairScheduler#applications is a HashMap while 
CapacityScheduler#applications is a ConcurrentHashMap). Second, the methods 
accessing the map are not consistently synchronized; thus, a read and a write 
on the same map can operate simultaneously. RMAppAttemptImpl on the 
AsyncDispatcher thread will eventually call FiFoScheduler#applications#get in 
AMContainerAllocatedTransition, while FiFoScheduler on the 
SchedulerEventDispatcher thread will use FiFoScheduler#applications#add|remove. 
Therefore, getting null when the application actually exists can happen under a 
large number of concurrent operations.
This doesn't sound right to me. The thing is, the scheduler will be told to 
remove an app only by the RMAppAttempt. Now if the RMAppAttempt is going 
through AMContainerAllocatedTransition, it cannot have told the scheduler to 
remove the app. While the theory of unsafe data-structures seems right, I still 
can't see the case where the original exception can happen. If the app was 
really removed, the RMAppAttempt would have gone into the KILLING state, right? 
If so, why is it now trying to get the AM container?
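On the data-structure half of the quoted analysis, the concurrency point is 
easy to illustrate; the fields below are illustrative, not FifoScheduler's 
actual ones:
{code}
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;

// A TreeMap read by the AsyncDispatcher thread while the scheduler thread
// mutates it is a data race; a ConcurrentHashMap (as in CapacityScheduler)
// gives each get/put/remove well-defined concurrent behavior.
public class SchedulerApps {
  private final Map<String, Object> racyApps = new TreeMap<>();            // unsafe under concurrent access
  private final Map<String, Object> safeApps = new ConcurrentHashMap<>();  // safe for concurrent get/remove

  public Object lookup(String appAttemptId) {
    return safeApps.get(appAttemptId); // null only if the app was truly removed
  }
}
{code}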

 ResourceManager throws ArrayIndexOutOfBoundsException while handling 
 CONTAINER_ALLOCATED for application attempt
 

 Key: YARN-292
 URL: https://issues.apache.org/jira/browse/YARN-292
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 2.0.1-alpha
Reporter: Devaraj K
Assignee: Zhijie Shen
 Attachments: YARN-292.1.patch, YARN-292.2.patch, YARN-292.3.patch


 {code:xml}
 2012-12-26 08:41:15,030 ERROR 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler: 
 Calling allocate on removed or non existant application 
 appattempt_1356385141279_49525_01
 2012-12-26 08:41:15,031 ERROR 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
 handling event type CONTAINER_ALLOCATED for applicationAttempt 
 application_1356385141279_49525
 java.lang.ArrayIndexOutOfBoundsException: 0
   at java.util.Arrays$ArrayList.get(Arrays.java:3381)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMContainerAllocatedTransition.transition(RMAppAttemptImpl.java:655)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMContainerAllocatedTransition.transition(RMAppAttemptImpl.java:644)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:357)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:298)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:490)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:80)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:433)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:414)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
   at java.lang.Thread.run(Thread.java:662)
  {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-389) Infinitely assigning containers when the required resource exceeds the cluster's absolute capacity

2013-09-01 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-389:
-

Target Version/s: 2.1.1-beta

Let's see if we can do something for 2.1.1.

 Infinitely assigning containers when the required resource exceeds the 
 cluster's absolute capacity
 --

 Key: YARN-389
 URL: https://issues.apache.org/jira/browse/YARN-389
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Omkar Vinit Joshi

 I've run the wordcount example on branch-2 and trunk. I've set 
 yarn.nodemanager.resource.memory-mb to 1G and 
 yarn.app.mapreduce.am.resource.mb to 1.5G. Therefore, the resourcemanager 
 tries to assign a 2G AM container (the 1.5G request is normalized up to the 
 next allocation increment). However, the nodemanager doesn't have enough 
 memory to host the container. The problem is that the assignment operation 
 is repeated infinitely if the assignment cannot be accomplished. Logs 
 follow.
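 The guard such a fix needs is conceptually one comparison before retrying; 
 sketched below with invented names:
 {code}
 // Sketch of the missing check: fail fast instead of retrying forever when
 // the normalized request can never fit on any node in the cluster.
 public class AssignmentGuard {
   public static void checkOrFail(long requestedMb, long maxNodeCapabilityMb) {
     if (requestedMb > maxNodeCapabilityMb) {
       throw new IllegalArgumentException("Requested " + requestedMb
           + " MB exceeds the largest node capability of " + maxNodeCapabilityMb
           + " MB; this container can never be allocated");
     }
   }
 }
 {code}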

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1120) Make ApplicationConstants.Environment.USER definition OS neutral

2013-09-01 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755864#comment-13755864
 ] 

Vinod Kumar Vavilapalli commented on YARN-1120:
---

It took a while for me to really understand what this issue was about. Luckily 
I was involved in all of the referenced tickets, and your description, though 
very compact, is perfect :)

The patch looks good to me, +1. Checking this in.

 Make ApplicationConstants.Environment.USER definition OS neutral
 

 Key: YARN-1120
 URL: https://issues.apache.org/jira/browse/YARN-1120
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Attachments: YARN-1120.patch


 In YARN-557, we added some code to make 
 {{ApplicationConstants.Environment.USER}} have an OS-specific definition in 
 order to fix the unit test TestUnmanagedAMLauncher. In YARN-571, the relevant 
 test code was corrected. In YARN-602, we now explicitly set the environment 
 variables for the child containers. With these changes, I think we can revert 
 the YARN-557 change to make {{ApplicationConstants.Environment.USER}} OS 
 neutral. The main benefit is that we can use the same method over the Enum 
 constants. This should also fix the 
 TestContainerLaunch#testContainerEnvVariables failure on Windows.
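 To make the point concrete, an illustrative (not actual) version of the two 
 approaches; the real enum lives in ApplicationConstants.Environment, but the 
 methods below are invented:
 {code}
 // Illustrative only. OS-specific definition: the constant's value differs
 // per platform, so generic code can't treat all constants uniformly.
 enum EnvOsSpecific {
   USER(System.getProperty("os.name").startsWith("Windows") ? "USERNAME" : "USER");
   private final String varName;
   EnvOsSpecific(String varName) { this.varName = varName; }
 }

 // OS-neutral definition: every constant is just its logical name; any
 // OS-specific expansion ($USER vs %USER%) happens where it is consumed.
 enum EnvNeutral {
   USER;
   String expand() {
     boolean win = System.getProperty("os.name").startsWith("Windows");
     return win ? "%" + name() + "%" : "$" + name();
   }
 }
 {code}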

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1120) Make ApplicationConstants.Environment.USER definition OS neutral

2013-09-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755877#comment-13755877
 ] 

Hudson commented on YARN-1120:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4360 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4360/])
YARN-1120. Made ApplicationConstants.Environment.USER definition OS neutral as 
the corresponding value is now set correctly end-to-end. Contributed by Chuan 
Liu. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1519330)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationConstants.java


 Make ApplicationConstants.Environment.USER definition OS neutral
 

 Key: YARN-1120
 URL: https://issues.apache.org/jira/browse/YARN-1120
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 2.1.1-beta

 Attachments: YARN-1120.patch


 In YARN-557, we added some code to make 
 {{ApplicationConstants.Environment.USER}} have an OS-specific definition in 
 order to fix the unit test TestUnmanagedAMLauncher. In YARN-571, the relevant 
 test code was corrected. In YARN-602, we now explicitly set the environment 
 variables for the child containers. With these changes, I think we can revert 
 the YARN-557 change to make {{ApplicationConstants.Environment.USER}} OS 
 neutral. The main benefit is that we can use the same method over the Enum 
 constants. This should also fix the 
 TestContainerLaunch#testContainerEnvVariables failure on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1077) TestContainerLaunch fails on Windows

2013-09-01 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755879#comment-13755879
 ] 

Vinod Kumar Vavilapalli commented on YARN-1077:
---

While committing the original patch that added the command not found message 
(YARN-814), I kind of felt that it'd break Windows. But alas, no pre-commit 
Windows builds.

Assuming things are fine on Windows, +1 for the patch. Checking this in.

 TestContainerLaunch fails on Windows
 

 Key: YARN-1077
 URL: https://issues.apache.org/jira/browse/YARN-1077
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Attachments: YARN-1077.2.patch, YARN-1077.3.patch, YARN-1077.4.patch, 
 YARN-1077.5.patch, YARN-1077.patch


 Several cases in this unit test fail on Windows. (Error log appended at the 
 end.)
 testInvalidEnvSyntaxDiagnostics fails because of the difference between cmd 
 and bash script error handling. If some command fails in a cmd script, cmd 
 will continue executing the rest of the script's commands; error handling 
 needs to be carried out explicitly in the script file, and the error code of 
 the last command is returned as the error code of the whole script. In this 
 test, an error happens in the middle of the cmd script; the test expects an 
 exception and a non-zero error code, but in the cmd script the intermediate 
 errors are ignored, the last command succeeds, and there is no exception.
 testContainerLaunchStdoutAndStderrDiagnostics fails due to wrong cmd commands 
 used by the test.
 testContainerEnvVariables and testDelayedKill fail due to a regression from 
 YARN-906.
 {noformat}
 ---
 Test set: 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
 ---
 Tests run: 7, Failures: 4, Errors: 0, Skipped: 0, Time elapsed: 11.526 sec 
 <<< FAILURE!
 testInvalidEnvSyntaxDiagnostics(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 0.583 sec <<< FAILURE!
 junit.framework.AssertionFailedError: Should catch exception
   at junit.framework.Assert.fail(Assert.java:50)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testInvalidEnvSyntaxDiagnostics(TestContainerLaunch.java:269)
 ...
 testContainerLaunchStdoutAndStderrDiagnostics(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 0.561 sec <<< FAILURE!
 junit.framework.AssertionFailedError: Should catch exception
   at junit.framework.Assert.fail(Assert.java:50)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerLaunchStdoutAndStderrDiagnostics(TestContainerLaunch.java:314)
 ...
 testContainerEnvVariables(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 4.136 sec <<< FAILURE!
 junit.framework.AssertionFailedError: expected:<137> but was:<143>
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.failNotEquals(Assert.java:287)
   at junit.framework.Assert.assertEquals(Assert.java:67)
   at junit.framework.Assert.assertEquals(Assert.java:199)
   at junit.framework.Assert.assertEquals(Assert.java:205)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerEnvVariables(TestContainerLaunch.java:500)
 ...
 testDelayedKill(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 2.744 sec <<< FAILURE!
 junit.framework.AssertionFailedError: expected:<137> but was:<143>
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.failNotEquals(Assert.java:287)
   at junit.framework.Assert.assertEquals(Assert.java:67)
   at junit.framework.Assert.assertEquals(Assert.java:199)
   at junit.framework.Assert.assertEquals(Assert.java:205)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testDelayedKill(TestContainerLaunch.java:601)
 ...
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira