[jira] [Commented] (YARN-1278) New AM does not start after rm restart

2013-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788064#comment-13788064
 ] 

Hudson commented on YARN-1278:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #355 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/355/])
YARN-1278. Fixed NodeManager to not delete local resources for apps on resync 
command from RM - a bug caused by YARN-1149. Contributed by Hitesh Shah. 
(vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1529657)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/CMgrCompletedContainersEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerResync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java
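
As a rough illustration of the behavior change described in the commit message above (all class, enum, and method names below are invented for the sketch; only the intent, keeping per-application local resources on an RM resync, comes from the commit):

{code}
// Sketch only: invented names illustrating the resync behavior described above.
enum CompletedContainersReason { ON_SHUTDOWN, BY_RESOURCEMANAGER_RESYNC }

class NodeManagerResyncSketch {
  void onCompletedContainers(CompletedContainersReason reason) {
    killRunningContainers();
    if (reason == CompletedContainersReason.ON_SHUTDOWN) {
      // Only a real shutdown removes per-application local resources; on a
      // resync they are kept so applications can continue on this node once
      // the NodeManager re-registers with the restarted ResourceManager.
      deleteApplicationLocalResources();
    }
  }

  private void killRunningContainers() {
    // dispatch kill events to the running containers
  }

  private void deleteApplicationLocalResources() {
    // hand the per-application local dirs to the deletion service
  }
}
{code}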


 New AM does not start after rm restart
 --

 Key: YARN-1278
 URL: https://issues.apache.org/jira/browse/YARN-1278
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.1-beta
Reporter: Yesha Vora
Assignee: Hitesh Shah
Priority: Blocker
 Fix For: 2.2.0

 Attachments: YARN-1278.1.patch, YARN-1278.2.patch, 
 YARN-1278.trunk.2.patch


 The new AM fails to start after the RM restarts. No new ApplicationMaster is 
 launched, and the job fails with the error below.
  /usr/bin/mapred job -status job_1380985373054_0001
 13/10/05 15:04:04 INFO client.RMProxy: Connecting to ResourceManager at 
 hostname
 Job: job_1380985373054_0001
 Job File: /user/abc/.staging/job_1380985373054_0001/job.xml
 Job Tracking URL : 
 http://hostname:8088/cluster/app/application_1380985373054_0001
 Uber job : false
 Number of maps: 0
 Number of reduces: 0
 map() completion: 0.0
 reduce() completion: 0.0
 Job state: FAILED
 retired: false
 reason for failure: There are no failed tasks for the job. Job is failed due 
 to some other reason and reason can be found in the logs.
 Counters: 0



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1149) NM throws InvalidStateTransitonException: Invalid event: APPLICATION_LOG_HANDLING_FINISHED at RUNNING

2013-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788070#comment-13788070
 ] 

Hudson commented on YARN-1149:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #355 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/355/])
YARN-1278. Fixed NodeManager to not delete local resources for apps on resync 
command from RM - a bug caused by YARN-1149. Contributed by Hitesh Shah. 
(vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1529657)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/CMgrCompletedContainersEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerResync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java


 NM throws InvalidStateTransitonException: Invalid event: 
 APPLICATION_LOG_HANDLING_FINISHED at RUNNING
 -

 Key: YARN-1149
 URL: https://issues.apache.org/jira/browse/YARN-1149
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Ramya Sunil
Assignee: Xuan Gong
 Fix For: 2.2.0

 Attachments: YARN-1149.1.patch, YARN-1149.2.patch, YARN-1149.3.patch, 
 YARN-1149.4.patch, YARN-1149.5.patch, YARN-1149.6.patch, YARN-1149.7.patch, 
 YARN-1149.8.patch, YARN-1149.9.patch, YARN-1149_branch-2.1-beta.1.patch


 When the NodeManager receives a kill signal after an application has finished 
 execution but before log aggregation has kicked in, an 
 InvalidStateTransitonException (Invalid event: 
 APPLICATION_LOG_HANDLING_FINISHED at RUNNING) is thrown:
 {noformat}
 2013-08-25 20:45:00,875 INFO  logaggregation.AppLogAggregatorImpl 
 (AppLogAggregatorImpl.java:finishLogAggregation(254)) - Application just 
 finished : application_1377459190746_0118
 2013-08-25 20:45:00,876 INFO  logaggregation.AppLogAggregatorImpl 
 (AppLogAggregatorImpl.java:uploadLogsForContainer(105)) - Starting aggregate 
 log-file for app application_1377459190746_0118 at 
 /app-logs/foo/logs/application_1377459190746_0118/host_45454.tmp
 2013-08-25 20:45:00,876 INFO  logaggregation.LogAggregationService 
 (LogAggregationService.java:stopAggregators(151)) - Waiting for aggregation 
 to complete for application_1377459190746_0118
 2013-08-25 20:45:00,891 INFO  logaggregation.AppLogAggregatorImpl 
 (AppLogAggregatorImpl.java:uploadLogsForContainer(122)) - Uploading logs for 
 container container_1377459190746_0118_01_04. Current good log dirs are 
 /tmp/yarn/local
 2013-08-25 20:45:00,915 INFO  logaggregation.AppLogAggregatorImpl 
 (AppLogAggregatorImpl.java:doAppLogAggregation(182)) - Finished aggregate 
 log-file for app application_1377459190746_0118
 2013-08-25 20:45:00,925 WARN  application.Application 
 (ApplicationImpl.java:handle(427)) - Can't handle this event at current state
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 APPLICATION_LOG_HANDLING_FINISHED at RUNNING
 at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
  
 at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
 at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:425)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:59)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:697)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:689)
 

[jira] [Commented] (YARN-1277) Add http policy support for YARN daemons

2013-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788066#comment-13788066
 ] 

Hudson commented on YARN-1277:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #355 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/355/])
YARN-1277. Added a policy based configuration for http/https in common 
HttpServer and using the same in YARN - related
to per project https config support via HADOOP-10022. Contributed by Suresh 
Srinivas and Omkar Vinit Joshi. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1529662)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpConfig.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestSSLHttpServer.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/AppController.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JHAdminConfig.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRWebAppUtil.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRConfig.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/WebAppUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/ProxyUriUtils.java


 Add http policy support for YARN daemons
 

 Key: YARN-1277
 URL: https://issues.apache.org/jira/browse/YARN-1277
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Omkar Vinit Joshi
 Fix For: 2.2.0

 Attachments: YARN-1277.20131005.1.patch, YARN-1277.20131005.2.patch, 
 YARN-1277.20131005.3.patch, YARN-1277.patch


 This is the YARN part of HADOOP-10022.
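
As a rough sketch of how a daemon might consume such a policy switch (the configuration key and values here are assumptions for illustration; the authoritative names live in yarn-default.xml and HttpConfig from this commit):

{code}
import org.apache.hadoop.conf.Configuration;

public class HttpPolicySketch {
  // Assumed key and values, for illustration only.
  private static final String POLICY_KEY = "yarn.http.policy";

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    String policy = conf.get(POLICY_KEY, "HTTP_ONLY");
    // A daemon would pick its web UI scheme based on the configured policy.
    String scheme = "HTTPS_ONLY".equals(policy) ? "https://" : "http://";
    System.out.println("Web UI scheme: " + scheme);
  }
}
{code}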



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-1130) Improve the log flushing for tasks when mapred.userlog.limit.kb is set

2013-10-07 Thread Paul Han (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Han updated YARN-1130:
---

Attachment: YARN-1130.patch

 Improve the log flushing for tasks when mapred.userlog.limit.kb is set
 --

 Key: YARN-1130
 URL: https://issues.apache.org/jira/browse/YARN-1130
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.5-alpha
Reporter: Paul Han
Assignee: Paul Han
 Fix For: 2.0.5-alpha

 Attachments: YARN-1130.patch, YARN-1130.patch, YARN-1130.patch


 When userlog limit is set with something like this:
 {code}
 <property>
   <name>mapred.userlog.limit.kb</name>
   <value>2048</value>
   <description>The maximum size of user-logs of each task in KB. 
   0 disables the cap.</description>
 </property>
 {code}
 log entries are truncated seemingly at random for the jobs, leaving the log 
 size between 1.2 MB and 1.6 MB.
 Since the log is already capped, avoiding this truncation is crucial for 
 users.
 The other issue with the current implementation 
 (org.apache.hadoop.yarn.ContainerLogAppender) is that log entries are not 
 flushed to file until the container shuts down and the log manager closes all 
 appenders, so users cannot see the log while the task is still executing.
 I will propose a patch that adds a flush mechanism and also flushes the log 
 when the task is done.
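
A minimal sketch of the kind of flushing appender the last paragraph proposes, assuming log4j 1.2 (the class below is illustrative only, not the attached patch):

{code}
import org.apache.log4j.FileAppender;

/**
 * Illustrative sketch: a log4j 1.2 appender that flushes each event to disk,
 * so task logs are readable while the container is still running instead of
 * only after the appender is closed at container shutdown.
 */
public class FlushingContainerLogAppender extends FileAppender {
  public FlushingContainerLogAppender() {
    // WriterAppender's immediateFlush flag makes every append() flush the
    // underlying writer.
    setImmediateFlush(true);
  }
}
{code}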



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1149) NM throws InvalidStateTransitonException: Invalid event: APPLICATION_LOG_HANDLING_FINISHED at RUNNING

2013-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788139#comment-13788139
 ] 

Hudson commented on YARN-1149:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1545 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1545/])
YARN-1278. Fixed NodeManager to not delete local resources for apps on resync 
command from RM - a bug caused by YARN-1149. Contributed by Hitesh Shah. 
(vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1529657)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/CMgrCompletedContainersEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerResync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java


 NM throws InvalidStateTransitonException: Invalid event: 
 APPLICATION_LOG_HANDLING_FINISHED at RUNNING
 -

 Key: YARN-1149
 URL: https://issues.apache.org/jira/browse/YARN-1149
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Ramya Sunil
Assignee: Xuan Gong
 Fix For: 2.2.0

 Attachments: YARN-1149.1.patch, YARN-1149.2.patch, YARN-1149.3.patch, 
 YARN-1149.4.patch, YARN-1149.5.patch, YARN-1149.6.patch, YARN-1149.7.patch, 
 YARN-1149.8.patch, YARN-1149.9.patch, YARN-1149_branch-2.1-beta.1.patch


 When the NodeManager receives a kill signal after an application has finished 
 execution but before log aggregation has kicked in, an 
 InvalidStateTransitonException (Invalid event: 
 APPLICATION_LOG_HANDLING_FINISHED at RUNNING) is thrown:
 {noformat}
 2013-08-25 20:45:00,875 INFO  logaggregation.AppLogAggregatorImpl 
 (AppLogAggregatorImpl.java:finishLogAggregation(254)) - Application just 
 finished : application_1377459190746_0118
 2013-08-25 20:45:00,876 INFO  logaggregation.AppLogAggregatorImpl 
 (AppLogAggregatorImpl.java:uploadLogsForContainer(105)) - Starting aggregate 
 log-file for app application_1377459190746_0118 at 
 /app-logs/foo/logs/application_1377459190746_0118/host_45454.tmp
 2013-08-25 20:45:00,876 INFO  logaggregation.LogAggregationService 
 (LogAggregationService.java:stopAggregators(151)) - Waiting for aggregation 
 to complete for application_1377459190746_0118
 2013-08-25 20:45:00,891 INFO  logaggregation.AppLogAggregatorImpl 
 (AppLogAggregatorImpl.java:uploadLogsForContainer(122)) - Uploading logs for 
 container container_1377459190746_0118_01_04. Current good log dirs are 
 /tmp/yarn/local
 2013-08-25 20:45:00,915 INFO  logaggregation.AppLogAggregatorImpl 
 (AppLogAggregatorImpl.java:doAppLogAggregation(182)) - Finished aggregate 
 log-file for app application_1377459190746_0118
 2013-08-25 20:45:00,925 WARN  application.Application 
 (ApplicationImpl.java:handle(427)) - Can't handle this event at current state
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 APPLICATION_LOG_HANDLING_FINISHED at RUNNING
 at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
  
 at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
 at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:425)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:59)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:697)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:689)
   

[jira] [Commented] (YARN-1277) Add http policy support for YARN daemons

2013-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788135#comment-13788135
 ] 

Hudson commented on YARN-1277:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1545 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1545/])
YARN-1277. Added a policy based configuration for http/https in common 
HttpServer and using the same in YARN - related
to per project https config support via HADOOP-10022. Contributed by Suresh 
Srinivas and Omkar Vinit Joshi. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1529662)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpConfig.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestSSLHttpServer.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/AppController.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JHAdminConfig.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRWebAppUtil.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRConfig.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/WebAppUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/ProxyUriUtils.java


 Add http policy support for YARN daemons
 

 Key: YARN-1277
 URL: https://issues.apache.org/jira/browse/YARN-1277
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Omkar Vinit Joshi
 Fix For: 2.2.0

 Attachments: YARN-1277.20131005.1.patch, YARN-1277.20131005.2.patch, 
 YARN-1277.20131005.3.patch, YARN-1277.patch


 This is the YARN part of HADOOP-10022.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1278) New AM does not start after rm restart

2013-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788133#comment-13788133
 ] 

Hudson commented on YARN-1278:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1545 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1545/])
YARN-1278. Fixed NodeManager to not delete local resources for apps on resync 
command from RM - a bug caused by YARN-1149. Contributed by Hitesh Shah. 
(vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1529657)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/CMgrCompletedContainersEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerResync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java


 New AM does not start after rm restart
 --

 Key: YARN-1278
 URL: https://issues.apache.org/jira/browse/YARN-1278
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.1-beta
Reporter: Yesha Vora
Assignee: Hitesh Shah
Priority: Blocker
 Fix For: 2.2.0

 Attachments: YARN-1278.1.patch, YARN-1278.2.patch, 
 YARN-1278.trunk.2.patch


 The new AM fails to start after the RM restarts. No new ApplicationMaster is 
 launched, and the job fails with the error below.
  /usr/bin/mapred job -status job_1380985373054_0001
 13/10/05 15:04:04 INFO client.RMProxy: Connecting to ResourceManager at 
 hostname
 Job: job_1380985373054_0001
 Job File: /user/abc/.staging/job_1380985373054_0001/job.xml
 Job Tracking URL : 
 http://hostname:8088/cluster/app/application_1380985373054_0001
 Uber job : false
 Number of maps: 0
 Number of reduces: 0
 map() completion: 0.0
 reduce() completion: 0.0
 Job state: FAILED
 retired: false
 reason for failure: There are no failed tasks for the job. Job is failed due 
 to some other reason and reason can be found in the logs.
 Counters: 0



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1278) New AM does not start after rm restart

2013-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788173#comment-13788173
 ] 

Hudson commented on YARN-1278:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1571 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1571/])
YARN-1278. Fixed NodeManager to not delete local resources for apps on resync 
command from RM - a bug caused by YARN-1149. Contributed by Hitesh Shah. 
(vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1529657)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/CMgrCompletedContainersEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerResync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java


 New AM does not start after rm restart
 --

 Key: YARN-1278
 URL: https://issues.apache.org/jira/browse/YARN-1278
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.1-beta
Reporter: Yesha Vora
Assignee: Hitesh Shah
Priority: Blocker
 Fix For: 2.2.0

 Attachments: YARN-1278.1.patch, YARN-1278.2.patch, 
 YARN-1278.trunk.2.patch


 The new AM fails to start after the RM restarts. No new ApplicationMaster is 
 launched, and the job fails with the error below.
  /usr/bin/mapred job -status job_1380985373054_0001
 13/10/05 15:04:04 INFO client.RMProxy: Connecting to ResourceManager at 
 hostname
 Job: job_1380985373054_0001
 Job File: /user/abc/.staging/job_1380985373054_0001/job.xml
 Job Tracking URL : 
 http://hostname:8088/cluster/app/application_1380985373054_0001
 Uber job : false
 Number of maps: 0
 Number of reduces: 0
 map() completion: 0.0
 reduce() completion: 0.0
 Job state: FAILED
 retired: false
 reason for failure: There are no failed tasks for the job. Job is failed due 
 to some other reason and reason can be found in the logs.
 Counters: 0



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1149) NM throws InvalidStateTransitonException: Invalid event: APPLICATION_LOG_HANDLING_FINISHED at RUNNING

2013-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788179#comment-13788179
 ] 

Hudson commented on YARN-1149:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1571 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1571/])
YARN-1278. Fixed NodeManager to not delete local resources for apps on resync 
command from RM - a bug caused by YARN-1149. Contributed by Hitesh Shah. 
(vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1529657)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/CMgrCompletedContainersEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerResync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java


 NM throws InvalidStateTransitonException: Invalid event: 
 APPLICATION_LOG_HANDLING_FINISHED at RUNNING
 -

 Key: YARN-1149
 URL: https://issues.apache.org/jira/browse/YARN-1149
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Ramya Sunil
Assignee: Xuan Gong
 Fix For: 2.2.0

 Attachments: YARN-1149.1.patch, YARN-1149.2.patch, YARN-1149.3.patch, 
 YARN-1149.4.patch, YARN-1149.5.patch, YARN-1149.6.patch, YARN-1149.7.patch, 
 YARN-1149.8.patch, YARN-1149.9.patch, YARN-1149_branch-2.1-beta.1.patch


 When the NodeManager receives a kill signal after an application has finished 
 execution but before log aggregation has kicked in, an 
 InvalidStateTransitonException (Invalid event: 
 APPLICATION_LOG_HANDLING_FINISHED at RUNNING) is thrown:
 {noformat}
 2013-08-25 20:45:00,875 INFO  logaggregation.AppLogAggregatorImpl 
 (AppLogAggregatorImpl.java:finishLogAggregation(254)) - Application just 
 finished : application_1377459190746_0118
 2013-08-25 20:45:00,876 INFO  logaggregation.AppLogAggregatorImpl 
 (AppLogAggregatorImpl.java:uploadLogsForContainer(105)) - Starting aggregate 
 log-file for app application_1377459190746_0118 at 
 /app-logs/foo/logs/application_1377459190746_0118/host_45454.tmp
 2013-08-25 20:45:00,876 INFO  logaggregation.LogAggregationService 
 (LogAggregationService.java:stopAggregators(151)) - Waiting for aggregation 
 to complete for application_1377459190746_0118
 2013-08-25 20:45:00,891 INFO  logaggregation.AppLogAggregatorImpl 
 (AppLogAggregatorImpl.java:uploadLogsForContainer(122)) - Uploading logs for 
 container container_1377459190746_0118_01_04. Current good log dirs are 
 /tmp/yarn/local
 2013-08-25 20:45:00,915 INFO  logaggregation.AppLogAggregatorImpl 
 (AppLogAggregatorImpl.java:doAppLogAggregation(182)) - Finished aggregate 
 log-file for app application_1377459190746_0118
 2013-08-25 20:45:00,925 WARN  application.Application 
 (ApplicationImpl.java:handle(427)) - Can't handle this event at current state
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 APPLICATION_LOG_HANDLING_FINISHED at RUNNING
 at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
  
 at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
 at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:425)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:59)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:697)
 at 
 

[jira] [Commented] (YARN-1277) Add http policy support for YARN daemons

2013-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788175#comment-13788175
 ] 

Hudson commented on YARN-1277:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1571 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1571/])
YARN-1277. Added a policy based configuration for http/https in common 
HttpServer and using the same in YARN - related
to per project https config support via HADOOP-10022. Contributed by Suresh 
Srinivas and Omkar Vinit Joshi. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1529662)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpConfig.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestSSLHttpServer.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/AppController.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JHAdminConfig.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRWebAppUtil.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRConfig.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/WebAppUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/ProxyUriUtils.java


 Add http policy support for YARN daemons
 

 Key: YARN-1277
 URL: https://issues.apache.org/jira/browse/YARN-1277
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Omkar Vinit Joshi
 Fix For: 2.2.0

 Attachments: YARN-1277.20131005.1.patch, YARN-1277.20131005.2.patch, 
 YARN-1277.20131005.3.patch, YARN-1277.patch


 This is the YARN part of HADOOP-10022.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1130) Improve the log flushing for tasks when mapred.userlog.limit.kb is set

2013-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788203#comment-13788203
 ] 

Hadoop QA commented on YARN-1130:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12607156/YARN-1130.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common:

  org.apache.hadoop.mapred.TestJobCleanup

  The following test timeouts occurred in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common:

org.apache.hadoop.mapreduce.v2.TestUberAM

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2136//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2136//console

This message is automatically generated.

 Improve the log flushing for tasks when mapred.userlog.limit.kb is set
 --

 Key: YARN-1130
 URL: https://issues.apache.org/jira/browse/YARN-1130
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.5-alpha
Reporter: Paul Han
Assignee: Paul Han
 Fix For: 2.0.5-alpha

 Attachments: YARN-1130.patch, YARN-1130.patch, YARN-1130.patch


 When userlog limit is set with something like this:
 {code}
 <property>
   <name>mapred.userlog.limit.kb</name>
   <value>2048</value>
   <description>The maximum size of user-logs of each task in KB. 
   0 disables the cap.</description>
 </property>
 {code}
 log entries are truncated seemingly at random for the jobs, leaving the log 
 size between 1.2 MB and 1.6 MB.
 Since the log is already capped, avoiding this truncation is crucial for 
 users.
 The other issue with the current implementation 
 (org.apache.hadoop.yarn.ContainerLogAppender) is that log entries are not 
 flushed to file until the container shuts down and the log manager closes all 
 appenders, so users cannot see the log while the task is still executing.
 I will propose a patch that adds a flush mechanism and also flushes the log 
 when the task is done.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-451) Add more metrics to RM page

2013-10-07 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788226#comment-13788226
 ] 

Jason Lowe commented on YARN-451:
-

I've found the number of containers not to be a very useful metric, as it 
doesn't necessarily map closely to the amount of cluster resources: some apps 
run with lots of tiny containers while others run with huge ones. I think 
showing resource utilization in terms of memory and CPU, for both the current 
allocation and the outstanding ask, would be useful. That would show which apps 
are big right now and which are trying to become much bigger.

 Add more metrics to RM page
 ---

 Key: YARN-451
 URL: https://issues.apache.org/jira/browse/YARN-451
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.0.3-alpha
Reporter: Lohit Vijayarenu
Assignee: Sangjin Lee
Priority: Blocker
 Attachments: in_progress_2x.png, yarn-451-trunk-20130916.1.patch


 The ResourceManager web UI shows the list of RUNNING applications, but it does 
 not tell which applications are requesting more resources than others. With a 
 cluster running hundreds of applications at once, it would be useful to have 
 some kind of metric that distinguishes high-resource-usage applications from 
 low-resource-usage ones. At a minimum, showing the number of containers is a 
 good option.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1044) used/min/max resources do not display info in the scheduler page

2013-10-07 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788269#comment-13788269
 ] 

Sangjin Lee commented on YARN-1044:
---

Sure Arun. I'll update the patch.

 used/min/max resources do not display info in the scheduler page
 

 Key: YARN-1044
 URL: https://issues.apache.org/jira/browse/YARN-1044
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, scheduler
Affects Versions: 2.0.5-alpha
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Critical
  Labels: newbie
 Attachments: screenshot.png, yarn-1044-20130815.3.patch, 
 yarn-1044.patch, yarn-1044.patch


 Go to the scheduler page in the RM and click any queue to display its detailed 
 info. You'll find that none of the resource entries (used, min, or max) 
 displays a value.
 This is because the values contain angle brackets (< and >) and are not 
 properly HTML-escaped.
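
A minimal sketch of the kind of fix this description points at, assuming the value is HTML-escaped before it reaches the rendered block (commons-lang's StringEscapeUtils is used here for illustration; the actual patch may escape at a different layer):

{code}
import org.apache.commons.lang.StringEscapeUtils;

public class ResourceCellSketch {
  public static void main(String[] args) {
    // A typical Resource#toString() value contains angle brackets, which a
    // browser otherwise treats as an unknown tag and silently drops.
    String used = "<memory:4096, vCores:4>";
    String cell = StringEscapeUtils.escapeHtml(used);
    System.out.println(cell); // &lt;memory:4096, vCores:4&gt;
  }
}
{code}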



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (YARN-1282) CONTAINER_KILLED_ON_REQUEST is considered invalid event for state CONTAINER_CLEANEDUP_AFTER_KILL

2013-10-07 Thread Ted Yu (JIRA)
Ted Yu created YARN-1282:


 Summary: CONTAINER_KILLED_ON_REQUEST is considered invalid event 
for state CONTAINER_CLEANEDUP_AFTER_KILL
 Key: YARN-1282
 URL: https://issues.apache.org/jira/browse/YARN-1282
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor
 Attachments: TestFreezeThawLiveRegionService.out

When running TestFreezeThawLiveRegionService in HOYA, I observed the following 
exception in the log:
{code}
2013-10-07 09:17:18,493 [AsyncDispatcher event handler] INFO  
nodemanager.NMAuditLogger (NMAuditLogger.java:logSuccess(89)) - USER=tyu  
OPERATION=Container Finished - Killed TARGET=ContainerImpl  RESULT=SUCCESS  
APPID=application_1381162545230_0002  
CONTAINERID=container_1381162545230_0002_01_03
2013-10-07 09:17:18,493 [AsyncDispatcher event handler] INFO  
container.Container (ContainerImpl.java:handle(871)) - Container 
container_1381162545230_0002_01_03 transitioned from 
CONTAINER_CLEANEDUP_AFTER_KILL to DONE
2013-10-07 09:17:18,494 [AsyncDispatcher event handler] INFO  
container.Container (ContainerImpl.java:handle(871)) - Container 
container_1381162545230_0002_01_02 transitioned from KILLING to 
CONTAINER_CLEANEDUP_AFTER_KILL
2013-10-07 09:17:18,494 [AsyncDispatcher event handler] WARN  
container.Container (ContainerImpl.java:handle(867)) - Can't handle this event 
at current state: Current: [CONTAINER_CLEANEDUP_AFTER_KILL], eventType: 
[CONTAINER_KILLED_ON_REQUEST]
org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
CONTAINER_KILLED_ON_REQUEST at CONTAINER_CLEANEDUP_AFTER_KILL
  at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
  at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
  at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
  at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl.handle(ContainerImpl.java:864)
  at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl.handle(ContainerImpl.java:73)
  at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:685)
  at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:678)
  at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:134)
  at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:81)
  at java.lang.Thread.run(Thread.java:680)
{code}
Looking at ContainerImpl.java, CONTAINER_KILLED_ON_REQUEST is not defined as an 
expected event for the CONTAINER_CLEANEDUP_AFTER_KILL state.
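
For illustration, this class of warning is usually addressed by registering the missing transition in the container state machine. A self-contained sketch of that pattern with YARN's StateMachineFactory follows; the enum and class names are invented for the example, and treating the late event as a no-op is an assumption, not the actual resolution:

{code}
import org.apache.hadoop.yarn.state.StateMachine;
import org.apache.hadoop.yarn.state.StateMachineFactory;

public class TransitionSketch {
  enum State { CLEANEDUP_AFTER_KILL, DONE }
  enum EventType { KILLED_ON_REQUEST, CLEANUP_DONE }

  private static final StateMachineFactory<TransitionSketch, State, EventType, Object>
      FACTORY =
        new StateMachineFactory<TransitionSketch, State, EventType, Object>(
            State.CLEANEDUP_AFTER_KILL)
          // Tolerate a late kill acknowledgement by staying in the same state,
          // analogous to accepting CONTAINER_KILLED_ON_REQUEST while in
          // CONTAINER_CLEANEDUP_AFTER_KILL.
          .addTransition(State.CLEANEDUP_AFTER_KILL, State.CLEANEDUP_AFTER_KILL,
              EventType.KILLED_ON_REQUEST)
          .addTransition(State.CLEANEDUP_AFTER_KILL, State.DONE,
              EventType.CLEANUP_DONE)
          .installTopology();

  private final StateMachine<State, EventType, Object> stateMachine =
      FACTORY.make(this);

  public static void main(String[] args) {
    TransitionSketch sketch = new TransitionSketch();
    // Without the first addTransition above, this call would throw
    // InvalidStateTransitonException, just like the log in this report.
    sketch.stateMachine.doTransition(EventType.KILLED_ON_REQUEST, null);
    sketch.stateMachine.doTransition(EventType.CLEANUP_DONE, null);
    System.out.println("Final state: " + sketch.stateMachine.getCurrentState());
  }
}
{code}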



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-1282) CONTAINER_KILLED_ON_REQUEST is considered invalid event for state CONTAINER_CLEANEDUP_AFTER_KILL

2013-10-07 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated YARN-1282:
-

Attachment: TestFreezeThawLiveRegionService.out

Here is the output from the test.

 CONTAINER_KILLED_ON_REQUEST is considered invalid event for state 
 CONTAINER_CLEANEDUP_AFTER_KILL
 

 Key: YARN-1282
 URL: https://issues.apache.org/jira/browse/YARN-1282
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor
 Attachments: TestFreezeThawLiveRegionService.out


 When running TestFreezeThawLiveRegionService in HOYA, I observed the 
 following exception in the log:
 {code}
 2013-10-07 09:17:18,493 [AsyncDispatcher event handler] INFO  
 nodemanager.NMAuditLogger (NMAuditLogger.java:logSuccess(89)) - USER=tyu  
 OPERATION=Container Finished - Killed TARGET=ContainerImpl  RESULT=SUCCESS  
 APPID=application_1381162545230_0002  
 CONTAINERID=container_1381162545230_0002_01_03
 2013-10-07 09:17:18,493 [AsyncDispatcher event handler] INFO  
 container.Container (ContainerImpl.java:handle(871)) - Container 
 container_1381162545230_0002_01_03 transitioned from 
 CONTAINER_CLEANEDUP_AFTER_KILL to DONE
 2013-10-07 09:17:18,494 [AsyncDispatcher event handler] INFO  
 container.Container (ContainerImpl.java:handle(871)) - Container 
 container_1381162545230_0002_01_02 transitioned from KILLING to 
 CONTAINER_CLEANEDUP_AFTER_KILL
 2013-10-07 09:17:18,494 [AsyncDispatcher event handler] WARN  
 container.Container (ContainerImpl.java:handle(867)) - Can't handle this 
 event at current state: Current: [CONTAINER_CLEANEDUP_AFTER_KILL], eventType: 
 [CONTAINER_KILLED_ON_REQUEST]
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 CONTAINER_KILLED_ON_REQUEST at CONTAINER_CLEANEDUP_AFTER_KILL
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl.handle(ContainerImpl.java:864)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl.handle(ContainerImpl.java:73)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:685)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:678)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:134)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:81)
   at java.lang.Thread.run(Thread.java:680)
 {code}
 Looking at ContainerImpl.java, CONTAINER_KILLED_ON_REQUEST is not defined as 
 an expected event for the CONTAINER_CLEANEDUP_AFTER_KILL state.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-1044) used/min/max resources do not display info in the scheduler page

2013-10-07 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-1044:
--

Attachment: yarn-1044-20131007.patch

 used/min/max resources do not display info in the scheduler page
 

 Key: YARN-1044
 URL: https://issues.apache.org/jira/browse/YARN-1044
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, scheduler
Affects Versions: 2.0.5-alpha
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Critical
  Labels: newbie
 Attachments: screenshot.png, yarn-1044-20130815.3.patch, 
 yarn-1044-20131007.patch, yarn-1044.patch, yarn-1044.patch


 Go to the scheduler page in the RM and click any queue to display its detailed 
 info. You'll find that none of the resource entries (used, min, or max) 
 displays a value.
 This is because the values contain angle brackets (< and >) and are not 
 properly HTML-escaped.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1044) used/min/max resources do not display info in the scheduler page

2013-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788307#comment-13788307
 ] 

Hadoop QA commented on YARN-1044:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12607180/yarn-1044-20131007.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2137//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2137//console

This message is automatically generated.

 used/min/max resources do not display info in the scheduler page
 

 Key: YARN-1044
 URL: https://issues.apache.org/jira/browse/YARN-1044
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, scheduler
Affects Versions: 2.0.5-alpha
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Critical
  Labels: newbie
 Attachments: screenshot.png, yarn-1044-20130815.3.patch, 
 yarn-1044-20131007.patch, yarn-1044.patch, yarn-1044.patch


 Go to the scheduler page in the RM and click any queue to display its detailed 
 info. You'll find that none of the resource entries (used, min, or max) 
 displays a value.
 This is because the values contain angle brackets (< and >) and are not 
 properly HTML-escaped.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1282) CONTAINER_KILLED_ON_REQUEST is considered invalid event for state CONTAINER_CLEANEDUP_AFTER_KILL

2013-10-07 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788310#comment-13788310
 ] 

Jason Lowe commented on YARN-1282:
--

This looks like a duplicate of YARN-1070.

 CONTAINER_KILLED_ON_REQUEST is considered invalid event for state 
 CONTAINER_CLEANEDUP_AFTER_KILL
 

 Key: YARN-1282
 URL: https://issues.apache.org/jira/browse/YARN-1282
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor
 Attachments: TestFreezeThawLiveRegionService.out


 When running TestFreezeThawLiveRegionService in HOYA, I observed the 
 following exception in the log:
 {code}
 2013-10-07 09:17:18,493 [AsyncDispatcher event handler] INFO  
 nodemanager.NMAuditLogger (NMAuditLogger.java:logSuccess(89)) - USER=tyu  
 OPERATION=Container Finished - Killed TARGET=ContainerImpl  RESULT=SUCCESS  
 APPID=application_1381162545230_0002  
 CONTAINERID=container_1381162545230_0002_01_03
 2013-10-07 09:17:18,493 [AsyncDispatcher event handler] INFO  
 container.Container (ContainerImpl.java:handle(871)) - Container 
 container_1381162545230_0002_01_03 transitioned from 
 CONTAINER_CLEANEDUP_AFTER_KILL to DONE
 2013-10-07 09:17:18,494 [AsyncDispatcher event handler] INFO  
 container.Container (ContainerImpl.java:handle(871)) - Container 
 container_1381162545230_0002_01_02 transitioned from KILLING to 
 CONTAINER_CLEANEDUP_AFTER_KILL
 2013-10-07 09:17:18,494 [AsyncDispatcher event handler] WARN  
 container.Container (ContainerImpl.java:handle(867)) - Can't handle this 
 event at current state: Current: [CONTAINER_CLEANEDUP_AFTER_KILL], eventType: 
 [CONTAINER_KILLED_ON_REQUEST]
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 CONTAINER_KILLED_ON_REQUEST at CONTAINER_CLEANEDUP_AFTER_KILL
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl.handle(ContainerImpl.java:864)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl.handle(ContainerImpl.java:73)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:685)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:678)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:134)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:81)
   at java.lang.Thread.run(Thread.java:680)
 {code}
 Looking at ContainerImpl.java, CONTAINER_KILLED_ON_REQUEST is not defined as 
 an expected event for the CONTAINER_CLEANEDUP_AFTER_KILL state.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (YARN-1282) CONTAINER_KILLED_ON_REQUEST is considered invalid event for state CONTAINER_CLEANEDUP_AFTER_KILL

2013-10-07 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved YARN-1282.
--

Resolution: Duplicate

 CONTAINER_KILLED_ON_REQUEST is considered invalid event for state 
 CONTAINER_CLEANEDUP_AFTER_KILL
 

 Key: YARN-1282
 URL: https://issues.apache.org/jira/browse/YARN-1282
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor
 Attachments: TestFreezeThawLiveRegionService.out


 When running TestFreezeThawLiveRegionService in HOYA, I observed the 
 following exception in the log:
 {code}
 2013-10-07 09:17:18,493 [AsyncDispatcher event handler] INFO  
 nodemanager.NMAuditLogger (NMAuditLogger.java:logSuccess(89)) - USER=tyu  
 OPERATION=Container Finished - Killed TARGET=ContainerImpl  RESULT=SUCCESS  
 APPID=application_1381162545230_0002  
 CONTAINERID=container_1381162545230_0002_01_03
 2013-10-07 09:17:18,493 [AsyncDispatcher event handler] INFO  
 container.Container (ContainerImpl.java:handle(871)) - Container 
 container_1381162545230_0002_01_03 transitioned from 
 CONTAINER_CLEANEDUP_AFTER_KILL to DONE
 2013-10-07 09:17:18,494 [AsyncDispatcher event handler] INFO  
 container.Container (ContainerImpl.java:handle(871)) - Container 
 container_1381162545230_0002_01_02 transitioned from KILLING to 
 CONTAINER_CLEANEDUP_AFTER_KILL
 2013-10-07 09:17:18,494 [AsyncDispatcher event handler] WARN  
 container.Container (ContainerImpl.java:handle(867)) - Can't handle this 
 event at current state: Current: [CONTAINER_CLEANEDUP_AFTER_KILL], eventType: 
 [CONTAINER_KILLED_ON_REQUEST]
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 CONTAINER_KILLED_ON_REQUEST at CONTAINER_CLEANEDUP_AFTER_KILL
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl.handle(ContainerImpl.java:864)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl.handle(ContainerImpl.java:73)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:685)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:678)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:134)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:81)
   at java.lang.Thread.run(Thread.java:680)
 {code}
 Looking at ContainerImpl.java, CONTAINER_KILLED_ON_REQUEST is not defined as 
 an expected event for the CONTAINER_CLEANEDUP_AFTER_KILL state.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1130) Improve the log flushing for tasks when mapred.userlog.limit.kb is set

2013-10-07 Thread Paul Han (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13788354#comment-13788354
 ] 

Paul Han commented on YARN-1130:


Thanks Arun for picking this up! 

I've made some changes to simplify the fix so that the log flush is done 
synchronously, with a wait. That should get rid of the out-of-memory errors 
caused by the earlier addition of the asynchronous flush thread.

I'll look into the other unit test failures. At first glance, they don't seem 
to be directly related to the changes submitted. :(

 Improve the log flushing for tasks when mapred.userlog.limit.kb is set
 --

 Key: YARN-1130
 URL: https://issues.apache.org/jira/browse/YARN-1130
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.5-alpha
Reporter: Paul Han
Assignee: Paul Han
 Fix For: 2.0.5-alpha

 Attachments: YARN-1130.patch, YARN-1130.patch, YARN-1130.patch


 When userlog limit is set with something like this:
 {code}
 <property>
   <name>mapred.userlog.limit.kb</name>
   <value>2048</value>
   <description>The maximum size of user-logs of each task in KB. 0 disables 
   the cap.</description>
 </property>
 {code}
 the log entries will be truncated randomly for the jobs.
 The log size ends up between 1.2MB and 1.6MB.
 Since the log is already limited, avoiding log truncation is crucial for the 
 user.
 The other issue with the current 
 impl (org.apache.hadoop.yarn.ContainerLogAppender) is that log entries are 
 not flushed to file until the container shuts down and the log manager 
 closes all appenders. If a user wants to see the log during task execution, 
 that is not supported.
 Will propose a patch to add a flush mechanism and also flush the log when the 
 task is done.
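As a rough illustration of the flush mechanism proposed above (hypothetical 
class name; this is not the actual ContainerLogAppender patch), a log4j 1.2 
appender that enforces a byte cap and flushes after every append, so partial 
task logs are visible while the container is still running:
{code}
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;

public class FlushingCappedAppender extends AppenderSkeleton {
  private final PrintWriter out;
  private final long maxBytes;   // e.g. mapred.userlog.limit.kb * 1024
  private long written = 0;

  public FlushingCappedAppender(String file, long maxBytes) throws IOException {
    this.out = new PrintWriter(new FileWriter(file, true));
    this.maxBytes = maxBytes;
  }

  @Override
  protected void append(LoggingEvent event) {
    if (closed || written >= maxBytes) {
      return;                 // cap reached: drop instead of truncating later
    }
    String line = layout.format(event);
    out.print(line);
    written += line.length();
    out.flush();              // make the entry visible immediately
  }

  @Override
  public void close() {
    out.flush();
    out.close();
    closed = true;
  }

  @Override
  public boolean requiresLayout() {
    return true;
  }
}
{code}
Flushing on every append is the simplest form of the idea; a real 
implementation would likely batch or rate-limit flushes, and per the comment 
above the submitted patch does the flush synchronously with a wait.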



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1282) CONTAINER_KILLED_ON_REQUEST is considered invalid event for state CONTAINER_CLEANEDUP_AFTER_KILL

2013-10-07 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13788366#comment-13788366
 ] 

Zhijie Shen commented on YARN-1282:
---

[~yuzhih...@gmail.com], the fix of YARN-1070 is in branch-2.1-beta and 
branch-2.2, but not in branch-2.1.1-beta.

 CONTAINER_KILLED_ON_REQUEST is considered invalid event for state 
 CONTAINER_CLEANEDUP_AFTER_KILL
 

 Key: YARN-1282
 URL: https://issues.apache.org/jira/browse/YARN-1282
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor
 Attachments: TestFreezeThawLiveRegionService.out


 When running TestFreezeThawLiveRegionService in HOYA, I observed the 
 following exception in log:
 {code}
 2013-10-07 09:17:18,493 [AsyncDispatcher event handler] INFO  
 nodemanager.NMAuditLogger (NMAuditLogger.java:logSuccess(89)) - USER=tyu  
 OPERATION=Container Finished - Killed TARGET=ContainerImpl  RESULT=SUCCESS  
 APPID=application_1381162545230_0002  
 CONTAINERID=container_1381162545230_0002_01_03
 2013-10-07 09:17:18,493 [AsyncDispatcher event handler] INFO  
 container.Container (ContainerImpl.java:handle(871)) - Container 
 container_1381162545230_0002_01_03 transitioned from 
 CONTAINER_CLEANEDUP_AFTER_KILL to DONE
 2013-10-07 09:17:18,494 [AsyncDispatcher event handler] INFO  
 container.Container (ContainerImpl.java:handle(871)) - Container 
 container_1381162545230_0002_01_02 transitioned from KILLING to 
 CONTAINER_CLEANEDUP_AFTER_KILL
 2013-10-07 09:17:18,494 [AsyncDispatcher event handler] WARN  
 container.Container (ContainerImpl.java:handle(867)) - Can't handle this 
 event at current state: Current: [CONTAINER_CLEANEDUP_AFTER_KILL], eventType: 
 [CONTAINER_KILLED_ON_REQUEST]
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 CONTAINER_KILLED_ON_REQUEST at CONTAINER_CLEANEDUP_AFTER_KILL
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl.handle(ContainerImpl.java:864)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl.handle(ContainerImpl.java:73)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:685)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:678)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:134)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:81)
   at java.lang.Thread.run(Thread.java:680)
 {code}
 Looking at ContainerImpl.java, CONTAINER_KILLED_ON_REQUEST is not defined as 
 an expected event for the CONTAINER_CLEANEDUP_AFTER_KILL state.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-1283) Invalid 'url of job' mentioned in Job output with yarn.http.policy=HTTPS_ONLY

2013-10-07 Thread Yesha Vora (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yesha Vora updated YARN-1283:
-

Description: 
After setting yarn.http.policy=HTTPS_ONLY, the job output shows an incorrect 
'The url to track the job' value.

Currently, it prints 
http://RM:httpsport/proxy/application_1381162886563_0001/ instead of 
https://RM:httpsport/proxy/application_1381162886563_0001/

http://hostname:8088/proxy/application_1381162886563_0001/ is invalid

hadoop  jar hadoop-mapreduce-client-jobclient-tests.jar sleep -m 1 -r 1 
13/10/07 18:39:39 INFO client.RMProxy: Connecting to ResourceManager at 
hostname/100.00.00.000:8032
13/10/07 18:39:40 INFO mapreduce.JobSubmitter: number of splits:1
13/10/07 18:39:40 INFO Configuration.deprecation: user.name is deprecated. 
Instead, use mapreduce.job.user.name
13/10/07 18:39:40 INFO Configuration.deprecation: mapred.jar is deprecated. 
Instead, use mapreduce.job.jar
13/10/07 18:39:40 INFO Configuration.deprecation: 
mapred.map.tasks.speculative.execution is deprecated. Instead, use 
mapreduce.map.speculative
13/10/07 18:39:40 INFO Configuration.deprecation: mapred.reduce.tasks is 
deprecated. Instead, use mapreduce.job.reduces
13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.partitioner.class 
is deprecated. Instead, use mapreduce.job.partitioner.class
13/10/07 18:39:40 INFO Configuration.deprecation: 
mapred.reduce.tasks.speculative.execution is deprecated. Instead, use 
mapreduce.reduce.speculative
13/10/07 18:39:40 INFO Configuration.deprecation: mapred.mapoutput.value.class 
is deprecated. Instead, use mapreduce.map.output.value.class
13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.map.class is 
deprecated. Instead, use mapreduce.job.map.class
13/10/07 18:39:40 INFO Configuration.deprecation: mapred.job.name is 
deprecated. Instead, use mapreduce.job.name
13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.reduce.class is 
deprecated. Instead, use mapreduce.job.reduce.class
13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.inputformat.class 
is deprecated. Instead, use mapreduce.job.inputformat.class
13/10/07 18:39:40 INFO Configuration.deprecation: mapred.input.dir is 
deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.outputformat.class 
is deprecated. Instead, use mapreduce.job.outputformat.class
13/10/07 18:39:40 INFO Configuration.deprecation: mapred.map.tasks is 
deprecated. Instead, use mapreduce.job.maps
13/10/07 18:39:40 INFO Configuration.deprecation: mapred.mapoutput.key.class is 
deprecated. Instead, use mapreduce.map.output.key.class
13/10/07 18:39:40 INFO Configuration.deprecation: mapred.working.dir is 
deprecated. Instead, use mapreduce.job.working.dir
13/10/07 18:39:40 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1381162886563_0001
13/10/07 18:39:40 INFO impl.YarnClientImpl: Submitted application 
application_1381162886563_0001 to ResourceManager at hostname/100.00.00.000:8032
13/10/07 18:39:40 INFO mapreduce.Job: The url to track the job: 
http://hostname:8088/proxy/application_1381162886563_0001/
13/10/07 18:39:40 INFO mapreduce.Job: Running job: job_1381162886563_0001
13/10/07 18:39:46 INFO mapreduce.Job: Job job_1381162886563_0001 running in 
uber mode : false
13/10/07 18:39:46 INFO mapreduce.Job:  map 0% reduce 0%
13/10/07 18:39:53 INFO mapreduce.Job:  map 100% reduce 0%
13/10/07 18:39:58 INFO mapreduce.Job:  map 100% reduce 100%
13/10/07 18:39:58 INFO mapreduce.Job: Job job_1381162886563_0001 completed 
successfully
13/10/07 18:39:58 INFO mapreduce.Job: Counters: 43
File System Counters
FILE: Number of bytes read=26
FILE: Number of bytes written=177279
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=48
HDFS: Number of bytes written=0
HDFS: Number of read operations=1
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
Job Counters 
Launched map tasks=1
Launched reduce tasks=1
Other local map tasks=1
Total time spent by all maps in occupied slots (ms)=7136
Total time spent by all reduces in occupied slots (ms)=6062
Map-Reduce Framework
Map input records=1
Map output records=1
Map output bytes=4
Map output materialized bytes=22
Input split bytes=48
Combine input records=0
Combine output records=0
Reduce input groups=1
Reduce shuffle bytes=22
Reduce input records=1
Reduce output records=0
 

[jira] [Created] (YARN-1283) Invalid 'url of job' mentioned in Job output with yarn.http.policy=HTTPS_ONLY

2013-10-07 Thread Yesha Vora (JIRA)
Yesha Vora created YARN-1283:


 Summary: Invalid 'url of job' mentioned in Job output with 
yarn.http.policy=HTTPS_ONLY
 Key: YARN-1283
 URL: https://issues.apache.org/jira/browse/YARN-1283
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.1-beta
Reporter: Yesha Vora


After setting yarn.http.policy=HTTPS_ONLY, the job output shows an incorrect 
'The url to track the job' value.

Currently, it prints 
http://RM:httpsport/proxy/application_1381133879292_0006 instead of 
https://RM:httpsport/proxy/application_1381133879292_0006/

http://RM:/proxy/application_1381133879292_0006/ is invalid

hadoop  jar hadoop-mapreduce-client-jobclient-tests.jar sleep -m 1 -r 1 
13/10/07 18:39:39 INFO client.RMProxy: Connecting to ResourceManager at 
hostname/100.00.00.000:8032
13/10/07 18:39:40 INFO mapreduce.JobSubmitter: number of splits:1
13/10/07 18:39:40 INFO Configuration.deprecation: user.name is deprecated. 
Instead, use mapreduce.job.user.name
13/10/07 18:39:40 INFO Configuration.deprecation: mapred.jar is deprecated. 
Instead, use mapreduce.job.jar
13/10/07 18:39:40 INFO Configuration.deprecation: 
mapred.map.tasks.speculative.execution is deprecated. Instead, use 
mapreduce.map.speculative
13/10/07 18:39:40 INFO Configuration.deprecation: mapred.reduce.tasks is 
deprecated. Instead, use mapreduce.job.reduces
13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.partitioner.class 
is deprecated. Instead, use mapreduce.job.partitioner.class
13/10/07 18:39:40 INFO Configuration.deprecation: 
mapred.reduce.tasks.speculative.execution is deprecated. Instead, use 
mapreduce.reduce.speculative
13/10/07 18:39:40 INFO Configuration.deprecation: mapred.mapoutput.value.class 
is deprecated. Instead, use mapreduce.map.output.value.class
13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.map.class is 
deprecated. Instead, use mapreduce.job.map.class
13/10/07 18:39:40 INFO Configuration.deprecation: mapred.job.name is 
deprecated. Instead, use mapreduce.job.name
13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.reduce.class is 
deprecated. Instead, use mapreduce.job.reduce.class
13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.inputformat.class 
is deprecated. Instead, use mapreduce.job.inputformat.class
13/10/07 18:39:40 INFO Configuration.deprecation: mapred.input.dir is 
deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.outputformat.class 
is deprecated. Instead, use mapreduce.job.outputformat.class
13/10/07 18:39:40 INFO Configuration.deprecation: mapred.map.tasks is 
deprecated. Instead, use mapreduce.job.maps
13/10/07 18:39:40 INFO Configuration.deprecation: mapred.mapoutput.key.class is 
deprecated. Instead, use mapreduce.map.output.key.class
13/10/07 18:39:40 INFO Configuration.deprecation: mapred.working.dir is 
deprecated. Instead, use mapreduce.job.working.dir
13/10/07 18:39:40 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1381162886563_0001
13/10/07 18:39:40 INFO impl.YarnClientImpl: Submitted application 
application_1381162886563_0001 to ResourceManager at hostname/100.00.00.000:8032
13/10/07 18:39:40 INFO mapreduce.Job: The url to track the job: 
http://hostname:8088/proxy/application_1381162886563_0001/
13/10/07 18:39:40 INFO mapreduce.Job: Running job: job_1381162886563_0001
13/10/07 18:39:46 INFO mapreduce.Job: Job job_1381162886563_0001 running in 
uber mode : false
13/10/07 18:39:46 INFO mapreduce.Job:  map 0% reduce 0%
13/10/07 18:39:53 INFO mapreduce.Job:  map 100% reduce 0%
13/10/07 18:39:58 INFO mapreduce.Job:  map 100% reduce 100%
13/10/07 18:39:58 INFO mapreduce.Job: Job job_1381162886563_0001 completed 
successfully
13/10/07 18:39:58 INFO mapreduce.Job: Counters: 43
File System Counters
FILE: Number of bytes read=26
FILE: Number of bytes written=177279
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=48
HDFS: Number of bytes written=0
HDFS: Number of read operations=1
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
Job Counters 
Launched map tasks=1
Launched reduce tasks=1
Other local map tasks=1
Total time spent by all maps in occupied slots (ms)=7136
Total time spent by all reduces in occupied slots (ms)=6062
Map-Reduce Framework
Map input records=1
Map output records=1
Map output bytes=4
Map output materialized bytes=22
Input split bytes=48
Combine input records=0
Combine 

[jira] [Updated] (YARN-465) fix coverage org.apache.hadoop.yarn.server.webproxy

2013-10-07 Thread Andrey Klochkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Klochkov updated YARN-465:
-

Attachment: YARN-465-trunk--n5.patch
YARN-465-branch-2--n5.patch

Ravi,
The main method in WebAppProxyServer is blocking, and it installs an 
exception handler that calls System.exit, so it shouldn't be used in tests. 
Yes, it was a mistake that join() was removed without making the 
corresponding change in main() - I fixed that.

Yes, you're correct about the originalPort logging. I got rid of that 
confusing port variable and fixed the logging.

Also, I made the patches for branch-2 and trunk as similar as possible.

Attaching updated patches.
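For readers following the test-design point, a hedged sketch (hypothetical 
names; not the actual WebAppProxyServer code) of the pattern being described: 
keep the System.exit-on-failure handler and the blocking join() confined to 
main(), so tests can drive the service via start()/stop() without risking a 
JVM exit.
{code}
public class ProxyServerSketch {
  private volatile boolean running;

  public void start() { running = true;  /* bind sockets, start threads */ }
  public void stop()  { running = false; /* shut down threads, close sockets */ }

  // Blocks the caller until stop() is invoked; only main() should call this.
  public void join() throws InterruptedException {
    while (running) {
      Thread.sleep(100);
    }
  }

  public static void main(String[] args) {
    // Exit-on-uncaught-exception belongs here, not in the reusable service,
    // so unit tests that call start()/stop() are never killed by System.exit.
    Thread.setDefaultUncaughtExceptionHandler((t, e) -> {
      e.printStackTrace();
      System.exit(1);
    });
    ProxyServerSketch server = new ProxyServerSketch();
    server.start();
    try {
      server.join();
    } catch (InterruptedException ie) {
      server.stop();
    }
  }
}
{code}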

 fix coverage  org.apache.hadoop.yarn.server.webproxy
 

 Key: YARN-465
 URL: https://issues.apache.org/jira/browse/YARN-465
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
Reporter: Aleksey Gorshkov
Assignee: Andrey Klochkov
 Attachments: YARN-465-branch-0.23-a.patch, 
 YARN-465-branch-0.23.patch, YARN-465-branch-2-a.patch, 
 YARN-465-branch-2--n3.patch, YARN-465-branch-2--n4.patch, 
 YARN-465-branch-2--n5.patch, YARN-465-branch-2.patch, YARN-465-trunk-a.patch, 
 YARN-465-trunk--n3.patch, YARN-465-trunk--n4.patch, YARN-465-trunk--n5.patch, 
 YARN-465-trunk.patch


 fix coverage  org.apache.hadoop.yarn.server.webproxy
 patch YARN-465-trunk.patch for trunk
 patch YARN-465-branch-2.patch for branch-2
 patch YARN-465-branch-0.23.patch for branch-0.23
 There is an issue in branch-0.23: the patch does not create the .keep file.
 To fix it, run the following commands:
 mkdir 
 yhadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/proxy
 touch 
 yhadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/proxy/.keep
  



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1168) Cannot run echo \Hello World\

2013-10-07 Thread Tassapol Athiapinya (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13788445#comment-13788445
 ] 

Tassapol Athiapinya commented on YARN-1168:
---

Using a double quote with a following single quote, instead of one double 
quote, will work:

/usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar 
jar -shell_command echo -shell_args 'hello world'

 Cannot run echo \Hello World\
 -

 Key: YARN-1168
 URL: https://issues.apache.org/jira/browse/YARN-1168
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
Priority: Critical
 Fix For: 2.2.0


 Run
 $ ssh localhost echo \Hello World\
 with bash succeeds; Hello World is shown in stdout.
 Running the distributed shell with a similar echo command, that is, either
 $ /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
 -jar /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.*.jar 
 -shell_command echo -shell_args \Hello World\
 or
 $ /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
 -jar /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.*.jar 
 -shell_command echo -shell_args Hello World
 {code:title=yarn logs -- only hello is shown}
 LogType: stdout
 LogLength: 6
 Log Contents:
 hello
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (YARN-1168) Cannot run echo \Hello World\

2013-10-07 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya resolved YARN-1168.
---

Resolution: Not A Problem

 Cannot run echo \Hello World\
 -

 Key: YARN-1168
 URL: https://issues.apache.org/jira/browse/YARN-1168
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
Priority: Critical
 Fix For: 2.2.0


 Run
 $ ssh localhost echo \Hello World\
 with bash succeeds; Hello World is shown in stdout.
 Running the distributed shell with a similar echo command, that is, either
 $ /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
 -jar /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.*.jar 
 -shell_command echo -shell_args \Hello World\
 or
 $ /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
 -jar /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.*.jar 
 -shell_command echo -shell_args Hello World
 {code:title=yarn logs -- only hello is shown}
 LogType: stdout
 LogLength: 6
 Log Contents:
 hello
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1068) Add admin support for HA operations

2013-10-07 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13788460#comment-13788460
 ] 

Bikas Saha commented on YARN-1068:
--

Should this be the RMHAServiceProtocol address? Admins and the ZKFC would be 
connecting on this protocol, right?
{code}
+  public static final String RM_HA_ADMIN_ADDRESS =
+      RM_HA_PREFIX + "admin.address";
...
+    <description>The address of the RM HA admin interface.</description>
+    <name>yarn.resourcemanager.ha.admin.address</name>
+    <value>${yarn.resourcemanager.hostname}:8034</value>
...
+  private AccessControlList adminAcl;
+  private Server haAdminServer;
{code}

instanceof check before creating new conf?
{code}
+  public void setConf(Configuration conf) {
+if (conf != null) {
+  conf = new YarnConfiguration(conf);
+}
+super.setConf(conf);
{code}
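A small sketch of the instanceof check being suggested (hypothetical subclass 
name; the surrounding class and the committed patch may differ), so an 
already-wrapped YarnConfiguration is not re-wrapped:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class HAAdminCLISketch extends Configured {
  @Override
  public void setConf(Configuration conf) {
    // Only wrap when the caller did not already pass a YarnConfiguration.
    if (conf != null && !(conf instanceof YarnConfiguration)) {
      conf = new YarnConfiguration(conf);
    }
    super.setConf(conf);
  }
}
{code}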

It would be clearer if we named the parameter rmId instead of nodeId
{code}
protected HAServiceTarget resolveTarget(String nodeId) {
{code}

The cast is not needed, right?
{code}
+try {
+  return new RMHAServiceTarget((YarnConfiguration)getConf(), nodeId);
{code}

Should this create its own copy of the conf instead of overriding the CLI's 
copy? If it's OK to override, then we should probably do it in the CLI itself 
rather than as a side effect of a constructor.
{code}
public RMHAServiceTarget(YarnConfiguration conf, String nodeId)
+  throws IOException {
+conf.set(YarnConfiguration.RM_HA_ID, nodeId);
{code}

After this, how is someone supposed to get the socket addr? 
conf.getSocketAddr() will do the HAUtil magic instead of directly picking up 
what is set by this code?
{code}
+haAdminServer.start();
+conf.updateConnectAddr(YarnConfiguration.RM_HA_ADMIN_ADDRESS,
+haAdminServer.getListenerAddress());
{code}

Good to refactor. Looks like we should change AdminService to use the same 
method. Also, this method returns a user value that is used by AdminService 
to do audit logging. It would be good to follow that pattern and do audit 
logging in HAService, at least for the state transition operations, and 
probably not for the health check operations.
{code}
+try {
+  RMServerUtils.verifyAccess(adminAcl, method, LOG);
{code}
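As a self-contained sketch of the verify-then-audit pattern being suggested 
(plain Java with hypothetical names, not the actual RMServerUtils or 
AdminService code): verify access first, audit-log the outcome, and return the 
caller so the audited user is available to the operation.
{code}
import java.util.Set;

public class HaAdminAccess {
  private final Set<String> adminAcl;

  public HaAdminAccess(Set<String> adminAcl) {
    this.adminAcl = adminAcl;
  }

  /** Verifies access and audit-logs the outcome; returns the caller on success. */
  public String verifyAccess(String user, String operation) {
    if (!adminAcl.contains(user)) {
      System.err.printf("AUDIT FAILURE user=%s operation=%s target=HAService%n",
          user, operation);
      throw new SecurityException(user + " is not authorized for " + operation);
    }
    System.out.printf("AUDIT SUCCESS user=%s operation=%s target=HAService%n",
        user, operation);
    return user;
  }

  public static void main(String[] args) {
    HaAdminAccess access = new HaAdminAccess(Set.of("yarn"));
    access.verifyAccess("yarn", "transitionToActive");      // audited success
    access.verifyAccess("mallory", "transitionToStandby");  // audited failure, throws
  }
}
{code}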

Any notes on testing? There don't seem to be any new unit tests added.

 Add admin support for HA operations
 ---

 Key: YARN-1068
 URL: https://issues.apache.org/jira/browse/YARN-1068
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
  Labels: ha
 Attachments: yarn-1068-1.patch, yarn-1068-2.patch, yarn-1068-3.patch, 
 yarn-1068-4.patch, yarn-1068-5.patch, yarn-1068-6.patch, yarn-1068-7.patch, 
 yarn-1068-8.patch, yarn-1068-9.patch, yarn-1068-prelim.patch


 Support HA admin operations to facilitate transitioning the RM to Active and 
 Standby states.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-465) fix coverage org.apache.hadoop.yarn.server.webproxy

2013-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13788461#comment-13788461
 ] 

Hadoop QA commented on YARN-465:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12607218/YARN-465-trunk--n5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2138//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2138//console

This message is automatically generated.

 fix coverage  org.apache.hadoop.yarn.server.webproxy
 

 Key: YARN-465
 URL: https://issues.apache.org/jira/browse/YARN-465
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
Reporter: Aleksey Gorshkov
Assignee: Andrey Klochkov
 Attachments: YARN-465-branch-0.23-a.patch, 
 YARN-465-branch-0.23.patch, YARN-465-branch-2-a.patch, 
 YARN-465-branch-2--n3.patch, YARN-465-branch-2--n4.patch, 
 YARN-465-branch-2--n5.patch, YARN-465-branch-2.patch, YARN-465-trunk-a.patch, 
 YARN-465-trunk--n3.patch, YARN-465-trunk--n4.patch, YARN-465-trunk--n5.patch, 
 YARN-465-trunk.patch


 fix coverage  org.apache.hadoop.yarn.server.webproxy
 patch YARN-465-trunk.patch for trunk
 patch YARN-465-branch-2.patch for branch-2
 patch YARN-465-branch-0.23.patch for branch-0.23
 There is an issue in branch-0.23: the patch does not create the .keep file.
 To fix it, run the following commands:
 mkdir 
 yhadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/proxy
 touch 
 yhadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/proxy/.keep
  



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (YARN-1283) Invalid 'url of job' mentioned in Job output with yarn.http.policy=HTTPS_ONLY

2013-10-07 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi reassigned YARN-1283:
---

Assignee: Omkar Vinit Joshi

 Invalid 'url of job' mentioned in Job output with yarn.http.policy=HTTPS_ONLY
 -

 Key: YARN-1283
 URL: https://issues.apache.org/jira/browse/YARN-1283
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.1-beta
Reporter: Yesha Vora
Assignee: Omkar Vinit Joshi

 After setting yarn.http.policy=HTTPS_ONLY, the job output shows an incorrect 
 'The url to track the job' value.
 Currently, it prints 
 http://RM:httpsport/proxy/application_1381162886563_0001/ instead of 
 https://RM:httpsport/proxy/application_1381162886563_0001/
 http://hostname:8088/proxy/application_1381162886563_0001/ is invalid
 hadoop  jar hadoop-mapreduce-client-jobclient-tests.jar sleep -m 1 -r 1 
 13/10/07 18:39:39 INFO client.RMProxy: Connecting to ResourceManager at 
 hostname/100.00.00.000:8032
 13/10/07 18:39:40 INFO mapreduce.JobSubmitter: number of splits:1
 13/10/07 18:39:40 INFO Configuration.deprecation: user.name is deprecated. 
 Instead, use mapreduce.job.user.name
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.jar is deprecated. 
 Instead, use mapreduce.job.jar
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapred.map.tasks.speculative.execution is deprecated. Instead, use 
 mapreduce.map.speculative
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.reduce.tasks is 
 deprecated. Instead, use mapreduce.job.reduces
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.partitioner.class 
 is deprecated. Instead, use mapreduce.job.partitioner.class
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapred.reduce.tasks.speculative.execution is deprecated. Instead, use 
 mapreduce.reduce.speculative
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapred.mapoutput.value.class is deprecated. Instead, use 
 mapreduce.map.output.value.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.map.class is 
 deprecated. Instead, use mapreduce.job.map.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.job.name is 
 deprecated. Instead, use mapreduce.job.name
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.reduce.class is 
 deprecated. Instead, use mapreduce.job.reduce.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.inputformat.class 
 is deprecated. Instead, use mapreduce.job.inputformat.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.input.dir is 
 deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapreduce.outputformat.class is deprecated. Instead, use 
 mapreduce.job.outputformat.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.map.tasks is 
 deprecated. Instead, use mapreduce.job.maps
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.mapoutput.key.class 
 is deprecated. Instead, use mapreduce.map.output.key.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.working.dir is 
 deprecated. Instead, use mapreduce.job.working.dir
 13/10/07 18:39:40 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
 job_1381162886563_0001
 13/10/07 18:39:40 INFO impl.YarnClientImpl: Submitted application 
 application_1381162886563_0001 to ResourceManager at 
 hostname/100.00.00.000:8032
 13/10/07 18:39:40 INFO mapreduce.Job: The url to track the job: 
 http://hostname:8088/proxy/application_1381162886563_0001/
 13/10/07 18:39:40 INFO mapreduce.Job: Running job: job_1381162886563_0001
 13/10/07 18:39:46 INFO mapreduce.Job: Job job_1381162886563_0001 running in 
 uber mode : false
 13/10/07 18:39:46 INFO mapreduce.Job:  map 0% reduce 0%
 13/10/07 18:39:53 INFO mapreduce.Job:  map 100% reduce 0%
 13/10/07 18:39:58 INFO mapreduce.Job:  map 100% reduce 100%
 13/10/07 18:39:58 INFO mapreduce.Job: Job job_1381162886563_0001 completed 
 successfully
 13/10/07 18:39:58 INFO mapreduce.Job: Counters: 43
   File System Counters
   FILE: Number of bytes read=26
   FILE: Number of bytes written=177279
   FILE: Number of read operations=0
   FILE: Number of large read operations=0
   FILE: Number of write operations=0
   HDFS: Number of bytes read=48
   HDFS: Number of bytes written=0
   HDFS: Number of read operations=1
   HDFS: Number of large read operations=0
   HDFS: Number of write operations=0
   Job Counters 
   Launched map tasks=1
   Launched reduce tasks=1
   Other local map tasks=1
   Total time spent by all maps in occupied slots (ms)=7136
   Total time spent by all 

[jira] [Commented] (YARN-465) fix coverage org.apache.hadoop.yarn.server.webproxy

2013-10-07 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13788542#comment-13788542
 ] 

Jason Lowe commented on YARN-465:
-

+1 to the trunk patch, lgtm.

bq. Also, I made patches for branch-2 and trunk as similar as possible.

I diff'd the trunk and branch-2 patches, and the only significant difference I 
could find was that the branch-2 patch was importing 
org.apache.commons.lang.StringUtils while trunk imported 
org.apache.hadoop.util.StringUtils.  The trunk patch applies to branch-2 
cleanly and the unit tests pass, so is there any reason I'm missing not to 
just apply the trunk patch to branch-2 as well?

 fix coverage  org.apache.hadoop.yarn.server.webproxy
 

 Key: YARN-465
 URL: https://issues.apache.org/jira/browse/YARN-465
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
Reporter: Aleksey Gorshkov
Assignee: Andrey Klochkov
 Attachments: YARN-465-branch-0.23-a.patch, 
 YARN-465-branch-0.23.patch, YARN-465-branch-2-a.patch, 
 YARN-465-branch-2--n3.patch, YARN-465-branch-2--n4.patch, 
 YARN-465-branch-2--n5.patch, YARN-465-branch-2.patch, YARN-465-trunk-a.patch, 
 YARN-465-trunk--n3.patch, YARN-465-trunk--n4.patch, YARN-465-trunk--n5.patch, 
 YARN-465-trunk.patch


 fix coverage  org.apache.hadoop.yarn.server.webproxy
 patch YARN-465-trunk.patch for trunk
 patch YARN-465-branch-2.patch for branch-2
 patch YARN-465-branch-0.23.patch for branch-0.23
 There is an issue in branch-0.23: the patch does not create the .keep file.
 To fix it, run the following commands:
 mkdir 
 yhadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/proxy
 touch 
 yhadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/proxy/.keep
  



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-1283) Invalid 'url of job' mentioned in Job output with yarn.http.policy=HTTPS_ONLY

2013-10-07 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-1283:


Issue Type: Sub-task  (was: Bug)
Parent: YARN-1280

 Invalid 'url of job' mentioned in Job output with yarn.http.policy=HTTPS_ONLY
 -

 Key: YARN-1283
 URL: https://issues.apache.org/jira/browse/YARN-1283
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.1-beta
Reporter: Yesha Vora
Assignee: Omkar Vinit Joshi

 After setting yarn.http.policy=HTTPS_ONLY, the job output shows an incorrect 
 'The url to track the job' value.
 Currently, it prints 
 http://RM:httpsport/proxy/application_1381162886563_0001/ instead of 
 https://RM:httpsport/proxy/application_1381162886563_0001/
 http://hostname:8088/proxy/application_1381162886563_0001/ is invalid
 hadoop  jar hadoop-mapreduce-client-jobclient-tests.jar sleep -m 1 -r 1 
 13/10/07 18:39:39 INFO client.RMProxy: Connecting to ResourceManager at 
 hostname/100.00.00.000:8032
 13/10/07 18:39:40 INFO mapreduce.JobSubmitter: number of splits:1
 13/10/07 18:39:40 INFO Configuration.deprecation: user.name is deprecated. 
 Instead, use mapreduce.job.user.name
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.jar is deprecated. 
 Instead, use mapreduce.job.jar
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapred.map.tasks.speculative.execution is deprecated. Instead, use 
 mapreduce.map.speculative
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.reduce.tasks is 
 deprecated. Instead, use mapreduce.job.reduces
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.partitioner.class 
 is deprecated. Instead, use mapreduce.job.partitioner.class
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapred.reduce.tasks.speculative.execution is deprecated. Instead, use 
 mapreduce.reduce.speculative
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapred.mapoutput.value.class is deprecated. Instead, use 
 mapreduce.map.output.value.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.map.class is 
 deprecated. Instead, use mapreduce.job.map.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.job.name is 
 deprecated. Instead, use mapreduce.job.name
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.reduce.class is 
 deprecated. Instead, use mapreduce.job.reduce.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.inputformat.class 
 is deprecated. Instead, use mapreduce.job.inputformat.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.input.dir is 
 deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapreduce.outputformat.class is deprecated. Instead, use 
 mapreduce.job.outputformat.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.map.tasks is 
 deprecated. Instead, use mapreduce.job.maps
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.mapoutput.key.class 
 is deprecated. Instead, use mapreduce.map.output.key.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.working.dir is 
 deprecated. Instead, use mapreduce.job.working.dir
 13/10/07 18:39:40 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
 job_1381162886563_0001
 13/10/07 18:39:40 INFO impl.YarnClientImpl: Submitted application 
 application_1381162886563_0001 to ResourceManager at 
 hostname/100.00.00.000:8032
 13/10/07 18:39:40 INFO mapreduce.Job: The url to track the job: 
 http://hostname:8088/proxy/application_1381162886563_0001/
 13/10/07 18:39:40 INFO mapreduce.Job: Running job: job_1381162886563_0001
 13/10/07 18:39:46 INFO mapreduce.Job: Job job_1381162886563_0001 running in 
 uber mode : false
 13/10/07 18:39:46 INFO mapreduce.Job:  map 0% reduce 0%
 13/10/07 18:39:53 INFO mapreduce.Job:  map 100% reduce 0%
 13/10/07 18:39:58 INFO mapreduce.Job:  map 100% reduce 100%
 13/10/07 18:39:58 INFO mapreduce.Job: Job job_1381162886563_0001 completed 
 successfully
 13/10/07 18:39:58 INFO mapreduce.Job: Counters: 43
   File System Counters
   FILE: Number of bytes read=26
   FILE: Number of bytes written=177279
   FILE: Number of read operations=0
   FILE: Number of large read operations=0
   FILE: Number of write operations=0
   HDFS: Number of bytes read=48
   HDFS: Number of bytes written=0
   HDFS: Number of read operations=1
   HDFS: Number of large read operations=0
   HDFS: Number of write operations=0
   Job Counters 
   Launched map tasks=1
   Launched reduce tasks=1
   Other local map tasks=1
   Total time spent by all maps in occupied slots (ms)=7136
   

[jira] [Commented] (YARN-1283) Invalid 'url of job' mentioned in Job output with yarn.http.policy=HTTPS_ONLY

2013-10-07 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13788543#comment-13788543
 ] 

Omkar Vinit Joshi commented on YARN-1283:
-

Thanks [~yeshavora]. Today, as part of the application report, we don't 
return the scheme, and on the client side we generate the scheme using 
HttpConfig.getSchemePrefix(). Ideally the server should have returned the url 
and the client should have used it as is. Uploading a patch which fixes this 
behavior.
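As a rough illustration only (hypothetical helper, not the attached patch), 
the client-side symptom amounts to hardcoding the scheme instead of deriving 
it from the configured HTTP policy; deriving it would look roughly like this, 
though the fix described above instead has the server return the full 
tracking URL so clients use it verbatim:
{code}
// Hypothetical helper: choose the tracking-URL scheme from the configured
// policy instead of always prefixing "http://".
public final class TrackingUrl {
  private TrackingUrl() {}

  public static String withScheme(String httpPolicy, String hostPortAndPath) {
    String prefix = "HTTPS_ONLY".equals(httpPolicy) ? "https://" : "http://";
    return prefix + hostPortAndPath;
  }

  public static void main(String[] args) {
    // Prints https://hostname:8088/proxy/application_1381162886563_0001/
    System.out.println(withScheme("HTTPS_ONLY",
        "hostname:8088/proxy/application_1381162886563_0001/"));
  }
}
{code}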

 Invalid 'url of job' mentioned in Job output with yarn.http.policy=HTTPS_ONLY
 -

 Key: YARN-1283
 URL: https://issues.apache.org/jira/browse/YARN-1283
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.1-beta
Reporter: Yesha Vora
Assignee: Omkar Vinit Joshi

 After setting yarn.http.policy=HTTPS_ONLY, the job output shows an incorrect 
 'The url to track the job' value.
 Currently, it prints 
 http://RM:httpsport/proxy/application_1381162886563_0001/ instead of 
 https://RM:httpsport/proxy/application_1381162886563_0001/
 http://hostname:8088/proxy/application_1381162886563_0001/ is invalid
 hadoop  jar hadoop-mapreduce-client-jobclient-tests.jar sleep -m 1 -r 1 
 13/10/07 18:39:39 INFO client.RMProxy: Connecting to ResourceManager at 
 hostname/100.00.00.000:8032
 13/10/07 18:39:40 INFO mapreduce.JobSubmitter: number of splits:1
 13/10/07 18:39:40 INFO Configuration.deprecation: user.name is deprecated. 
 Instead, use mapreduce.job.user.name
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.jar is deprecated. 
 Instead, use mapreduce.job.jar
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapred.map.tasks.speculative.execution is deprecated. Instead, use 
 mapreduce.map.speculative
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.reduce.tasks is 
 deprecated. Instead, use mapreduce.job.reduces
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.partitioner.class 
 is deprecated. Instead, use mapreduce.job.partitioner.class
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapred.reduce.tasks.speculative.execution is deprecated. Instead, use 
 mapreduce.reduce.speculative
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapred.mapoutput.value.class is deprecated. Instead, use 
 mapreduce.map.output.value.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.map.class is 
 deprecated. Instead, use mapreduce.job.map.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.job.name is 
 deprecated. Instead, use mapreduce.job.name
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.reduce.class is 
 deprecated. Instead, use mapreduce.job.reduce.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.inputformat.class 
 is deprecated. Instead, use mapreduce.job.inputformat.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.input.dir is 
 deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapreduce.outputformat.class is deprecated. Instead, use 
 mapreduce.job.outputformat.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.map.tasks is 
 deprecated. Instead, use mapreduce.job.maps
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.mapoutput.key.class 
 is deprecated. Instead, use mapreduce.map.output.key.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.working.dir is 
 deprecated. Instead, use mapreduce.job.working.dir
 13/10/07 18:39:40 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
 job_1381162886563_0001
 13/10/07 18:39:40 INFO impl.YarnClientImpl: Submitted application 
 application_1381162886563_0001 to ResourceManager at 
 hostname/100.00.00.000:8032
 13/10/07 18:39:40 INFO mapreduce.Job: The url to track the job: 
 http://hostname:8088/proxy/application_1381162886563_0001/
 13/10/07 18:39:40 INFO mapreduce.Job: Running job: job_1381162886563_0001
 13/10/07 18:39:46 INFO mapreduce.Job: Job job_1381162886563_0001 running in 
 uber mode : false
 13/10/07 18:39:46 INFO mapreduce.Job:  map 0% reduce 0%
 13/10/07 18:39:53 INFO mapreduce.Job:  map 100% reduce 0%
 13/10/07 18:39:58 INFO mapreduce.Job:  map 100% reduce 100%
 13/10/07 18:39:58 INFO mapreduce.Job: Job job_1381162886563_0001 completed 
 successfully
 13/10/07 18:39:58 INFO mapreduce.Job: Counters: 43
   File System Counters
   FILE: Number of bytes read=26
   FILE: Number of bytes written=177279
   FILE: Number of read operations=0
   FILE: Number of large read operations=0
   FILE: Number of write operations=0
   HDFS: Number of bytes read=48
   HDFS: Number of bytes written=0
   HDFS: Number of read operations=1
   HDFS: Number of large 

[jira] [Commented] (YARN-1283) Invalid 'url of job' mentioned in Job output with yarn.http.policy=HTTPS_ONLY

2013-10-07 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13788547#comment-13788547
 ] 

Omkar Vinit Joshi commented on YARN-1283:
-

Earlier it was working mainly because of the hadoop.ssl.enable property. 
After YARN-1277, the behavior has changed.

 Invalid 'url of job' mentioned in Job output with yarn.http.policy=HTTPS_ONLY
 -

 Key: YARN-1283
 URL: https://issues.apache.org/jira/browse/YARN-1283
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.1-beta
Reporter: Yesha Vora
Assignee: Omkar Vinit Joshi

 After setting yarn.http.policy=HTTPS_ONLY, the job output shows an incorrect 
 'The url to track the job' value.
 Currently, it prints 
 http://RM:httpsport/proxy/application_1381162886563_0001/ instead of 
 https://RM:httpsport/proxy/application_1381162886563_0001/
 http://hostname:8088/proxy/application_1381162886563_0001/ is invalid
 hadoop  jar hadoop-mapreduce-client-jobclient-tests.jar sleep -m 1 -r 1 
 13/10/07 18:39:39 INFO client.RMProxy: Connecting to ResourceManager at 
 hostname/100.00.00.000:8032
 13/10/07 18:39:40 INFO mapreduce.JobSubmitter: number of splits:1
 13/10/07 18:39:40 INFO Configuration.deprecation: user.name is deprecated. 
 Instead, use mapreduce.job.user.name
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.jar is deprecated. 
 Instead, use mapreduce.job.jar
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapred.map.tasks.speculative.execution is deprecated. Instead, use 
 mapreduce.map.speculative
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.reduce.tasks is 
 deprecated. Instead, use mapreduce.job.reduces
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.partitioner.class 
 is deprecated. Instead, use mapreduce.job.partitioner.class
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapred.reduce.tasks.speculative.execution is deprecated. Instead, use 
 mapreduce.reduce.speculative
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapred.mapoutput.value.class is deprecated. Instead, use 
 mapreduce.map.output.value.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.map.class is 
 deprecated. Instead, use mapreduce.job.map.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.job.name is 
 deprecated. Instead, use mapreduce.job.name
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.reduce.class is 
 deprecated. Instead, use mapreduce.job.reduce.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.inputformat.class 
 is deprecated. Instead, use mapreduce.job.inputformat.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.input.dir is 
 deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapreduce.outputformat.class is deprecated. Instead, use 
 mapreduce.job.outputformat.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.map.tasks is 
 deprecated. Instead, use mapreduce.job.maps
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.mapoutput.key.class 
 is deprecated. Instead, use mapreduce.map.output.key.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.working.dir is 
 deprecated. Instead, use mapreduce.job.working.dir
 13/10/07 18:39:40 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
 job_1381162886563_0001
 13/10/07 18:39:40 INFO impl.YarnClientImpl: Submitted application 
 application_1381162886563_0001 to ResourceManager at 
 hostname/100.00.00.000:8032
 13/10/07 18:39:40 INFO mapreduce.Job: The url to track the job: 
 http://hostname:8088/proxy/application_1381162886563_0001/
 13/10/07 18:39:40 INFO mapreduce.Job: Running job: job_1381162886563_0001
 13/10/07 18:39:46 INFO mapreduce.Job: Job job_1381162886563_0001 running in 
 uber mode : false
 13/10/07 18:39:46 INFO mapreduce.Job:  map 0% reduce 0%
 13/10/07 18:39:53 INFO mapreduce.Job:  map 100% reduce 0%
 13/10/07 18:39:58 INFO mapreduce.Job:  map 100% reduce 100%
 13/10/07 18:39:58 INFO mapreduce.Job: Job job_1381162886563_0001 completed 
 successfully
 13/10/07 18:39:58 INFO mapreduce.Job: Counters: 43
   File System Counters
   FILE: Number of bytes read=26
   FILE: Number of bytes written=177279
   FILE: Number of read operations=0
   FILE: Number of large read operations=0
   FILE: Number of write operations=0
   HDFS: Number of bytes read=48
   HDFS: Number of bytes written=0
   HDFS: Number of read operations=1
   HDFS: Number of large read operations=0
   HDFS: Number of write operations=0
   Job Counters 
   Launched map tasks=1
   Launched reduce tasks=1
   

[jira] [Updated] (YARN-1283) Invalid 'url of job' mentioned in Job output with yarn.http.policy=HTTPS_ONLY

2013-10-07 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-1283:


Labels: newbie  (was: )

 Invalid 'url of job' mentioned in Job output with yarn.http.policy=HTTPS_ONLY
 -

 Key: YARN-1283
 URL: https://issues.apache.org/jira/browse/YARN-1283
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.1-beta
Reporter: Yesha Vora
Assignee: Omkar Vinit Joshi
  Labels: newbie

 After setting yarn.http.policy=HTTPS_ONLY, the job output shows an incorrect 
 'The url to track the job' value.
 Currently, it prints 
 http://RM:httpsport/proxy/application_1381162886563_0001/ instead of 
 https://RM:httpsport/proxy/application_1381162886563_0001/
 http://hostname:8088/proxy/application_1381162886563_0001/ is invalid
 hadoop  jar hadoop-mapreduce-client-jobclient-tests.jar sleep -m 1 -r 1 
 13/10/07 18:39:39 INFO client.RMProxy: Connecting to ResourceManager at 
 hostname/100.00.00.000:8032
 13/10/07 18:39:40 INFO mapreduce.JobSubmitter: number of splits:1
 13/10/07 18:39:40 INFO Configuration.deprecation: user.name is deprecated. 
 Instead, use mapreduce.job.user.name
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.jar is deprecated. 
 Instead, use mapreduce.job.jar
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapred.map.tasks.speculative.execution is deprecated. Instead, use 
 mapreduce.map.speculative
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.reduce.tasks is 
 deprecated. Instead, use mapreduce.job.reduces
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.partitioner.class 
 is deprecated. Instead, use mapreduce.job.partitioner.class
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapred.reduce.tasks.speculative.execution is deprecated. Instead, use 
 mapreduce.reduce.speculative
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapred.mapoutput.value.class is deprecated. Instead, use 
 mapreduce.map.output.value.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.map.class is 
 deprecated. Instead, use mapreduce.job.map.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.job.name is 
 deprecated. Instead, use mapreduce.job.name
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.reduce.class is 
 deprecated. Instead, use mapreduce.job.reduce.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.inputformat.class 
 is deprecated. Instead, use mapreduce.job.inputformat.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.input.dir is 
 deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapreduce.outputformat.class is deprecated. Instead, use 
 mapreduce.job.outputformat.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.map.tasks is 
 deprecated. Instead, use mapreduce.job.maps
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.mapoutput.key.class 
 is deprecated. Instead, use mapreduce.map.output.key.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.working.dir is 
 deprecated. Instead, use mapreduce.job.working.dir
 13/10/07 18:39:40 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
 job_1381162886563_0001
 13/10/07 18:39:40 INFO impl.YarnClientImpl: Submitted application 
 application_1381162886563_0001 to ResourceManager at 
 hostname/100.00.00.000:8032
 13/10/07 18:39:40 INFO mapreduce.Job: The url to track the job: 
 http://hostname:8088/proxy/application_1381162886563_0001/
 13/10/07 18:39:40 INFO mapreduce.Job: Running job: job_1381162886563_0001
 13/10/07 18:39:46 INFO mapreduce.Job: Job job_1381162886563_0001 running in 
 uber mode : false
 13/10/07 18:39:46 INFO mapreduce.Job:  map 0% reduce 0%
 13/10/07 18:39:53 INFO mapreduce.Job:  map 100% reduce 0%
 13/10/07 18:39:58 INFO mapreduce.Job:  map 100% reduce 100%
 13/10/07 18:39:58 INFO mapreduce.Job: Job job_1381162886563_0001 completed 
 successfully
 13/10/07 18:39:58 INFO mapreduce.Job: Counters: 43
   File System Counters
   FILE: Number of bytes read=26
   FILE: Number of bytes written=177279
   FILE: Number of read operations=0
   FILE: Number of large read operations=0
   FILE: Number of write operations=0
   HDFS: Number of bytes read=48
   HDFS: Number of bytes written=0
   HDFS: Number of read operations=1
   HDFS: Number of large read operations=0
   HDFS: Number of write operations=0
   Job Counters 
   Launched map tasks=1
   Launched reduce tasks=1
   Other local map tasks=1
   Total time spent by all maps in occupied slots (ms)=7136
   

[jira] [Updated] (YARN-1283) Invalid 'url of job' mentioned in Job output with yarn.http.policy=HTTPS_ONLY

2013-10-07 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-1283:


Attachment: YARN-1283.20131007.1.patch

 Invalid 'url of job' mentioned in Job output with yarn.http.policy=HTTPS_ONLY
 -

 Key: YARN-1283
 URL: https://issues.apache.org/jira/browse/YARN-1283
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.1-beta
Reporter: Yesha Vora
Assignee: Omkar Vinit Joshi
  Labels: newbie
 Attachments: YARN-1283.20131007.1.patch


 After setting yarn.http.policy=HTTPS_ONLY, the job output shows an incorrect 
 'The url to track the job' value.
 Currently, it prints 
 http://RM:httpsport/proxy/application_1381162886563_0001/ instead of 
 https://RM:httpsport/proxy/application_1381162886563_0001/
 http://hostname:8088/proxy/application_1381162886563_0001/ is invalid
 hadoop  jar hadoop-mapreduce-client-jobclient-tests.jar sleep -m 1 -r 1 
 13/10/07 18:39:39 INFO client.RMProxy: Connecting to ResourceManager at 
 hostname/100.00.00.000:8032
 13/10/07 18:39:40 INFO mapreduce.JobSubmitter: number of splits:1
 13/10/07 18:39:40 INFO Configuration.deprecation: user.name is deprecated. 
 Instead, use mapreduce.job.user.name
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.jar is deprecated. 
 Instead, use mapreduce.job.jar
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapred.map.tasks.speculative.execution is deprecated. Instead, use 
 mapreduce.map.speculative
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.reduce.tasks is 
 deprecated. Instead, use mapreduce.job.reduces
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.partitioner.class 
 is deprecated. Instead, use mapreduce.job.partitioner.class
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapred.reduce.tasks.speculative.execution is deprecated. Instead, use 
 mapreduce.reduce.speculative
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapred.mapoutput.value.class is deprecated. Instead, use 
 mapreduce.map.output.value.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.map.class is 
 deprecated. Instead, use mapreduce.job.map.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.job.name is 
 deprecated. Instead, use mapreduce.job.name
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.reduce.class is 
 deprecated. Instead, use mapreduce.job.reduce.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapreduce.inputformat.class 
 is deprecated. Instead, use mapreduce.job.inputformat.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.input.dir is 
 deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapreduce.outputformat.class is deprecated. Instead, use 
 mapreduce.job.outputformat.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.map.tasks is 
 deprecated. Instead, use mapreduce.job.maps
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.mapoutput.key.class 
 is deprecated. Instead, use mapreduce.map.output.key.class
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.working.dir is 
 deprecated. Instead, use mapreduce.job.working.dir
 13/10/07 18:39:40 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
 job_1381162886563_0001
 13/10/07 18:39:40 INFO impl.YarnClientImpl: Submitted application 
 application_1381162886563_0001 to ResourceManager at 
 hostname/100.00.00.000:8032
 13/10/07 18:39:40 INFO mapreduce.Job: The url to track the job: 
 http://hostname:8088/proxy/application_1381162886563_0001/
 13/10/07 18:39:40 INFO mapreduce.Job: Running job: job_1381162886563_0001
 13/10/07 18:39:46 INFO mapreduce.Job: Job job_1381162886563_0001 running in 
 uber mode : false
 13/10/07 18:39:46 INFO mapreduce.Job:  map 0% reduce 0%
 13/10/07 18:39:53 INFO mapreduce.Job:  map 100% reduce 0%
 13/10/07 18:39:58 INFO mapreduce.Job:  map 100% reduce 100%
 13/10/07 18:39:58 INFO mapreduce.Job: Job job_1381162886563_0001 completed 
 successfully
 13/10/07 18:39:58 INFO mapreduce.Job: Counters: 43
   File System Counters
   FILE: Number of bytes read=26
   FILE: Number of bytes written=177279
   FILE: Number of read operations=0
   FILE: Number of large read operations=0
   FILE: Number of write operations=0
   HDFS: Number of bytes read=48
   HDFS: Number of bytes written=0
   HDFS: Number of read operations=1
   HDFS: Number of large read operations=0
   HDFS: Number of write operations=0
   Job Counters 
   Launched map tasks=1
   Launched reduce tasks=1
   Other local map tasks=1
   Total 

[jira] [Commented] (YARN-465) fix coverage org.apache.hadoop.yarn.server.webproxy

2013-10-07 Thread Andrey Klochkov (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788623#comment-13788623
 ] 

Andrey Klochkov commented on YARN-465:
--

Actually, the only difference is in how HttpServer is instantiated: in trunk the 
constructor was deprecated in favor of HttpServer.Builder. It seems this 
constructor is deprecated in branch-2 as well, so yes, let's apply the trunk 
patch to both branches. Sorry, I didn't notice that earlier.
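For reference, a self-contained illustration of the migration being discussed: 
constructing through a Builder instead of a deprecated constructor. The 
WebServer stand-in below is hypothetical; the real class is 
org.apache.hadoop.http.HttpServer, and the exact method names of its Builder in 
this branch are not reproduced here.
{code}
// Self-contained illustration of replacing a deprecated constructor with
// builder-style construction. "WebServer" is a stand-in, not the Hadoop API.
public class BuilderMigrationSketch {

  static class WebServer {
    private final String name;
    private final int port;

    // Formerly constructed directly; in Hadoop this direct constructor is the
    // one that has been deprecated in favor of the Builder.
    WebServer(String name, int port) {
      this.name = name;
      this.port = port;
    }

    static class Builder {
      private String name;
      private int port;

      Builder setName(String name) { this.name = name; return this; }
      Builder setPort(int port)    { this.port = port; return this; }
      WebServer build()            { return new WebServer(name, port); }
    }

    @Override
    public String toString() { return name + ":" + port; }
  }

  public static void main(String[] args) {
    // Builder-style construction, analogous to what the trunk patch does.
    WebServer proxy = new WebServer.Builder().setName("proxy").setPort(8089).build();
    System.out.println(proxy);   // proxy:8089
  }
}
{code}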

 fix coverage  org.apache.hadoop.yarn.server.webproxy
 

 Key: YARN-465
 URL: https://issues.apache.org/jira/browse/YARN-465
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
Reporter: Aleksey Gorshkov
Assignee: Andrey Klochkov
 Attachments: YARN-465-branch-0.23-a.patch, 
 YARN-465-branch-0.23.patch, YARN-465-branch-2-a.patch, 
 YARN-465-branch-2--n3.patch, YARN-465-branch-2--n4.patch, 
 YARN-465-branch-2--n5.patch, YARN-465-branch-2.patch, YARN-465-trunk-a.patch, 
 YARN-465-trunk--n3.patch, YARN-465-trunk--n4.patch, YARN-465-trunk--n5.patch, 
 YARN-465-trunk.patch


 fix coverage  org.apache.hadoop.yarn.server.webproxy
 patch YARN-465-trunk.patch for trunk
 patch YARN-465-branch-2.patch for branch-2
 patch YARN-465-branch-0.23.patch for branch-0.23
 There is an issue in branch-0.23: the patch does not create the .keep file.
 To fix it, run these commands:
 mkdir 
 yhadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/proxy
 touch 
 yhadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/proxy/.keep
  



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (YARN-1258) Allow configuring the Fair Scheduler root queue

2013-10-07 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza reassigned YARN-1258:


Assignee: Sandy Ryza

 Allow configuring the Fair Scheduler root queue
 ---

 Key: YARN-1258
 URL: https://issues.apache.org/jira/browse/YARN-1258
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Affects Versions: 2.1.1-beta
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-1258.patch


 This would be useful for acls, maxRunningApps, scheduling modes, etc.
 The allocation file should be able to accept both:
 * An implicit root queue
 * A root queue at the top of the hierarchy with all queues under/inside of it
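 A hedged sketch of what an allocation file with an explicit root queue could 
 look like once this is supported. The element names follow the Fair Scheduler 
 allocation-file format, but the exact set of elements accepted under root is 
 an assumption:
{code}
<?xml version="1.0"?>
<!-- Hypothetical fair-scheduler.xml: an explicit root queue carrying ACLs,
     maxRunningApps, and a scheduling policy, with child queues nested inside. -->
<allocations>
  <queue name="root">
    <aclSubmitApps>alice,bob</aclSubmitApps>
    <maxRunningApps>50</maxRunningApps>
    <schedulingPolicy>fair</schedulingPolicy>
    <queue name="default">
      <maxRunningApps>10</maxRunningApps>
    </queue>
  </queue>
</allocations>
{code}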



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-1258) Allow configuring the Fair Scheduler root queue

2013-10-07 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-1258:
-

Attachment: YARN-1258.patch

 Allow configuring the Fair Scheduler root queue
 ---

 Key: YARN-1258
 URL: https://issues.apache.org/jira/browse/YARN-1258
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Affects Versions: 2.1.1-beta
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-1258.patch


 This would be useful for acls, maxRunningApps, scheduling modes, etc.
 The allocation file should be able to accept both:
 * An implicit root queue
 * A root queue at the top of the hierarchy with all queues under/inside of it



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1283) Invalid 'url of job' mentioned in Job output with yarn.http.policy=HTTPS_ONLY

2013-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788633#comment-13788633
 ] 

Hadoop QA commented on YARN-1283:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12607236/YARN-1283.20131007.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.mapred.TestNetworkedJob
  org.apache.hadoop.mapred.TestClusterMRNotification
  org.apache.hadoop.mapred.TestMiniMRClasspath
  org.apache.hadoop.mapred.TestBlockLimits
  org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers
  org.apache.hadoop.mapred.TestMiniMRChildTask
  org.apache.hadoop.mapred.TestReduceFetch
  org.apache.hadoop.mapred.TestReduceFetchFromPartialMem
  org.apache.hadoop.mapred.TestMerge
  org.apache.hadoop.mapred.TestJobName
  org.apache.hadoop.mapred.TestLazyOutput
  org.apache.hadoop.mapred.TestJobSysDirWithDFS

  The following test timeouts occurred in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

org.apache.hadoop.mapred.TestJobCleanup
org.apache.hadoop.mapred.TestJobCounters
org.apache.hadoop.mapred.TestClusterMapReduceTestCase

  The test build failed in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2139//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2139//console

This message is automatically generated.

 Invalid 'url of job' mentioned in Job output with yarn.http.policy=HTTPS_ONLY
 -

 Key: YARN-1283
 URL: https://issues.apache.org/jira/browse/YARN-1283
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.1-beta
Reporter: Yesha Vora
Assignee: Omkar Vinit Joshi
  Labels: newbie
 Attachments: YARN-1283.20131007.1.patch


 After setting yarn.http.policy=HTTPS_ONLY, the job output shows an incorrect 
 "The url to track the job" value.
 Currently, it prints 
 http://RM:httpsport/proxy/application_1381162886563_0001/ instead of 
 https://RM:httpsport/proxy/application_1381162886563_0001/;
 http://hostname:8088/proxy/application_1381162886563_0001/ is invalid.
 hadoop  jar hadoop-mapreduce-client-jobclient-tests.jar sleep -m 1 -r 1 
 13/10/07 18:39:39 INFO client.RMProxy: Connecting to ResourceManager at 
 hostname/100.00.00.000:8032
 13/10/07 18:39:40 INFO mapreduce.JobSubmitter: number of splits:1
 13/10/07 18:39:40 INFO Configuration.deprecation: user.name is deprecated. 
 Instead, use mapreduce.job.user.name
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.jar is deprecated. 
 Instead, use mapreduce.job.jar
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapred.map.tasks.speculative.execution is deprecated. Instead, use 
 mapreduce.map.speculative
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.reduce.tasks is 
 deprecated. Instead, use mapreduce.job.reduces
 13/10/07 18:39:40 INFO Configuration.deprecation: 

[jira] [Commented] (YARN-465) fix coverage org.apache.hadoop.yarn.server.webproxy

2013-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788707#comment-13788707
 ] 

Hudson commented on YARN-465:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #4561 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4561/])
YARN-465. fix coverage org.apache.hadoop.yarn.server.webproxy. Contributed by 
Aleksey Gorshkov and Andrey Klochkov (jlowe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1530095)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxyServer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/TestWebAppProxyServlet.java


 fix coverage  org.apache.hadoop.yarn.server.webproxy
 

 Key: YARN-465
 URL: https://issues.apache.org/jira/browse/YARN-465
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
Reporter: Aleksey Gorshkov
Assignee: Andrey Klochkov
 Fix For: 2.3.0

 Attachments: YARN-465-branch-0.23-a.patch, 
 YARN-465-branch-0.23.patch, YARN-465-branch-2-a.patch, 
 YARN-465-branch-2--n3.patch, YARN-465-branch-2--n4.patch, 
 YARN-465-branch-2--n5.patch, YARN-465-branch-2.patch, YARN-465-trunk-a.patch, 
 YARN-465-trunk--n3.patch, YARN-465-trunk--n4.patch, YARN-465-trunk--n5.patch, 
 YARN-465-trunk.patch


 fix coverage  org.apache.hadoop.yarn.server.webproxy
 patch YARN-465-trunk.patch for trunk
 patch YARN-465-branch-2.patch for branch-2
 patch YARN-465-branch-0.23.patch for branch-0.23
 There is an issue in branch-0.23: the patch does not create the .keep file.
 To fix it, run these commands:
 mkdir 
 yhadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/proxy
 touch 
 yhadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/proxy/.keep
  



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1258) Allow configuring the Fair Scheduler root queue

2013-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788714#comment-13788714
 ] 

Hadoop QA commented on YARN-1258:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12607255/YARN-1258.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2141//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2141//console

This message is automatically generated.

 Allow configuring the Fair Scheduler root queue
 ---

 Key: YARN-1258
 URL: https://issues.apache.org/jira/browse/YARN-1258
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Affects Versions: 2.1.1-beta
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-1258.patch


 This would be useful for acls, maxRunningApps, scheduling modes, etc.
 The allocation file should be able to accept both:
 * An implicit root queue
 * A root queue at the top of the hierarchy with all queues under/inside of it



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-415) Capture memory utilization at the app-level for chargeback

2013-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788720#comment-13788720
 ] 

Hadoop QA commented on YARN-415:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12606923/YARN-415--n4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 7 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.resourcemanager.TestClientRMService

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2140//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2140//console

This message is automatically generated.

 Capture memory utilization at the app-level for chargeback
 --

 Key: YARN-415
 URL: https://issues.apache.org/jira/browse/YARN-415
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: resourcemanager
Affects Versions: 0.23.6
Reporter: Kendall Thrapp
Assignee: Andrey Klochkov
 Attachments: YARN-415--n2.patch, YARN-415--n3.patch, 
 YARN-415--n4.patch, YARN-415.patch


 For the purpose of chargeback, I'd like to be able to compute the cost of an
 application in terms of cluster resource usage.  To start out, I'd like to 
 get the memory utilization of an application.  The unit should be MB-seconds 
 or something similar and, from a chargeback perspective, the memory amount 
 should be the memory reserved for the application, as even if the app didn't 
 use all that memory, no one else was able to use it.
 (reserved ram for container 1 * lifetime of container 1) + (reserved ram for
 container 2 * lifetime of container 2) + ... + (reserved ram for container n 
 * lifetime of container n)
 It'd be nice to have this at the app level instead of the job level because:
 1. We'd still be able to get memory usage for jobs that crashed (and wouldn't 
 appear on the job history server).
 2. We'd be able to get memory usage for future non-MR jobs (e.g. Storm).
 This new metric should be available both through the RM UI and RM Web 
 Services REST API.
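 A minimal sketch of the MB-seconds accounting described above; the container 
 sizes and lifetimes are made up purely for illustration:
{code}
// Illustrative only: MB-seconds accounting as described in this issue.
public class MemorySecondsSketch {

  // Sum of reservedMb[i] * lifetimeSeconds[i] over all containers of the app.
  static long memoryMbSeconds(long[] reservedMb, long[] lifetimeSeconds) {
    long total = 0;
    for (int i = 0; i < reservedMb.length; i++) {
      total += reservedMb[i] * lifetimeSeconds[i];
    }
    return total;
  }

  public static void main(String[] args) {
    // One 2048 MB AM container alive 300 s, two 1024 MB task containers alive 120 s each.
    long cost = memoryMbSeconds(new long[] {2048, 1024, 1024},
                                new long[] {300, 120, 120});
    System.out.println(cost + " MB-seconds");   // 860160 MB-seconds
  }
}
{code}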



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (YARN-1284) LCE: Race condition leaves dangling cgroups entries for killed containers

2013-10-07 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created YARN-1284:


 Summary: LCE: Race condition leaves dangling cgroups entries for 
killed containers
 Key: YARN-1284
 URL: https://issues.apache.org/jira/browse/YARN-1284
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.2.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Blocker


When LCE and cgroups are enabled and a container is killed (in this case by 
its owning AM, an MRAM), there seems to be a race condition at the OS level 
between sending the SIGTERM/SIGKILL and the OS completing all of the necessary 
cleanup.

The LCE code, after sending the SIGTERM/SIGKILL and getting the exit code, 
immediately attempts to clean up the cgroups entry for the container. But this 
fails with an error like:

{code}
2013-10-07 15:21:24,359 WARN 
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Exit code 
from container container_1381179532433_0016_01_11 is : 143
2013-10-07 15:21:24,359 DEBUG 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
Processing container_1381179532433_0016_01_11 of type UPDATE_DIAGNOSTICS_MSG
2013-10-07 15:21:24,359 DEBUG 
org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
deleteCgroup: 
/run/cgroups/cpu/hadoop-yarn/container_1381179532433_0016_01_11
2013-10-07 15:21:24,359 WARN 
org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
Unable to delete cgroup at: 
/run/cgroups/cpu/hadoop-yarn/container_1381179532433_0016_01_11
{code}


CgroupsLCEResourcesHandler.clearLimits() has logic to wait 500 ms for AM 
containers to avoid this problem; it seems this should be done for all 
containers.

Still, waiting an extra 500 ms seems too expensive.

We should look at doing this more efficiently time-wise, perhaps by spinning 
with a minimal sleep and a timeout while deleteCgroup() cannot yet be 
completed.
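A hedged sketch of the spin-with-timeout idea; this is not the actual 
CgroupsLCEResourcesHandler code, and the path and numbers are illustrative only.
{code}
// Hedged sketch: retry the cgroup delete with a short sleep until it succeeds
// or a timeout expires.
import java.io.File;

public class CgroupDeleteRetrySketch {

  // The rmdir only succeeds once the kernel has finished tearing down the
  // container's tasks, so keep retrying until the deadline passes.
  static boolean deleteCgroupWithRetry(File cgroupDir, long timeoutMs, long sleepMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (cgroupDir.delete()) {
        return true;
      }
      Thread.sleep(sleepMs);   // brief pause before the next attempt
    }
    return false;              // caller logs the "Unable to delete cgroup" warning
  }

  public static void main(String[] args) throws InterruptedException {
    File dir = new File("/run/cgroups/cpu/hadoop-yarn/container_example");
    System.out.println(deleteCgroupWithRetry(dir, 1000L, 20L));
  }
}
{code}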




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-415) Capture memory utilization at the app-level for chargeback

2013-10-07 Thread Andrey Klochkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Klochkov updated YARN-415:
-

Attachment: YARN-415--n5.patch

Fixing the failed test.

 Capture memory utilization at the app-level for chargeback
 --

 Key: YARN-415
 URL: https://issues.apache.org/jira/browse/YARN-415
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: resourcemanager
Affects Versions: 0.23.6
Reporter: Kendall Thrapp
Assignee: Andrey Klochkov
 Attachments: YARN-415--n2.patch, YARN-415--n3.patch, 
 YARN-415--n4.patch, YARN-415--n5.patch, YARN-415.patch


 For the purpose of chargeback, I'd like to be able to compute the cost of an
 application in terms of cluster resource usage.  To start out, I'd like to 
 get the memory utilization of an application.  The unit should be MB-seconds 
 or something similar and, from a chargeback perspective, the memory amount 
 should be the memory reserved for the application, as even if the app didn't 
 use all that memory, no one else was able to use it.
 (reserved ram for container 1 * lifetime of container 1) + (reserved ram for
 container 2 * lifetime of container 2) + ... + (reserved ram for container n 
 * lifetime of container n)
 It'd be nice to have this at the app level instead of the job level because:
 1. We'd still be able to get memory usage for jobs that crashed (and wouldn't 
 appear on the job history server).
 2. We'd be able to get memory usage for future non-MR jobs (e.g. Storm).
 This new metric should be available both through the RM UI and RM Web 
 Services REST API.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-415) Capture memory utilization at the app-level for chargeback

2013-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788794#comment-13788794
 ] 

Hadoop QA commented on YARN-415:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12607275/YARN-415--n5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 7 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2142//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2142//console

This message is automatically generated.

 Capture memory utilization at the app-level for chargeback
 --

 Key: YARN-415
 URL: https://issues.apache.org/jira/browse/YARN-415
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: resourcemanager
Affects Versions: 0.23.6
Reporter: Kendall Thrapp
Assignee: Andrey Klochkov
 Attachments: YARN-415--n2.patch, YARN-415--n3.patch, 
 YARN-415--n4.patch, YARN-415--n5.patch, YARN-415.patch


 For the purpose of chargeback, I'd like to be able to compute the cost of an
 application in terms of cluster resource usage.  To start out, I'd like to 
 get the memory utilization of an application.  The unit should be MB-seconds 
 or something similar and, from a chargeback perspective, the memory amount 
 should be the memory reserved for the application, as even if the app didn't 
 use all that memory, no one else was able to use it.
 (reserved ram for container 1 * lifetime of container 1) + (reserved ram for
 container 2 * lifetime of container 2) + ... + (reserved ram for container n 
 * lifetime of container n)
 It'd be nice to have this at the app level instead of the job level because:
 1. We'd still be able to get memory usage for jobs that crashed (and wouldn't 
 appear on the job history server).
 2. We'd be able to get memory usage for future non-MR jobs (e.g. Storm).
 This new metric should be available both through the RM UI and RM Web 
 Services REST API.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1283) Invalid 'url of job' mentioned in Job output with yarn.http.policy=HTTPS_ONLY

2013-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788830#comment-13788830
 ] 

Hadoop QA commented on YARN-1283:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12607236/YARN-1283.20131007.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.mapred.TestNetworkedJob
  org.apache.hadoop.mapred.TestClusterMRNotification
  org.apache.hadoop.mapred.TestMiniMRClasspath
  org.apache.hadoop.mapred.TestBlockLimits
  org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers
  org.apache.hadoop.mapred.TestMiniMRChildTask
  org.apache.hadoop.mapred.TestJobCleanup
  org.apache.hadoop.mapred.TestReduceFetch
  org.apache.hadoop.mapred.TestReduceFetchFromPartialMem
  org.apache.hadoop.mapred.TestMerge
  org.apache.hadoop.mapred.TestJobName
  org.apache.hadoop.mapred.TestLazyOutput
  org.apache.hadoop.mapred.TestJobSysDirWithDFS

  The following test timeouts occurred in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

org.apache.hadoop.mapred.TestJobCounters
org.apache.hadoop.mapred.TestClusterMapReduceTestCase

  The test build failed in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2143//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2143//console

This message is automatically generated.

 Invalid 'url of job' mentioned in Job output with yarn.http.policy=HTTPS_ONLY
 -

 Key: YARN-1283
 URL: https://issues.apache.org/jira/browse/YARN-1283
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.1-beta
Reporter: Yesha Vora
Assignee: Omkar Vinit Joshi
  Labels: newbie
 Attachments: YARN-1283.20131007.1.patch


 After setting yarn.http.policy=HTTPS_ONLY, the job output shows an incorrect 
 "The url to track the job" value.
 Currently, it prints 
 http://RM:httpsport/proxy/application_1381162886563_0001/ instead of 
 https://RM:httpsport/proxy/application_1381162886563_0001/;
 http://hostname:8088/proxy/application_1381162886563_0001/ is invalid.
 hadoop  jar hadoop-mapreduce-client-jobclient-tests.jar sleep -m 1 -r 1 
 13/10/07 18:39:39 INFO client.RMProxy: Connecting to ResourceManager at 
 hostname/100.00.00.000:8032
 13/10/07 18:39:40 INFO mapreduce.JobSubmitter: number of splits:1
 13/10/07 18:39:40 INFO Configuration.deprecation: user.name is deprecated. 
 Instead, use mapreduce.job.user.name
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.jar is deprecated. 
 Instead, use mapreduce.job.jar
 13/10/07 18:39:40 INFO Configuration.deprecation: 
 mapred.map.tasks.speculative.execution is deprecated. Instead, use 
 mapreduce.map.speculative
 13/10/07 18:39:40 INFO Configuration.deprecation: mapred.reduce.tasks is 
 deprecated. Instead, use mapreduce.job.reduces
 13/10/07 18:39:40 INFO Configuration.deprecation: