[jira] [Updated] (YARN-1890) Too many unnecessary logs are logged while accessing applicationMaster web UI.

2014-04-01 Thread Rohith (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith updated YARN-1890:
-

Attachment: YARN-1890.patch

Simple patch to clean up excessive logging on every refresh. Log priority is 
moved to DEBUG. Please review.
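
For illustration, a minimal sketch of the kind of change implied, assuming the 
commons-logging LOG field already used by WebAppProxyServlet; the message text 
is a placeholder, not the actual log line:
{code}
// Hypothetical sketch: demote the per-request UI logging in
// WebAppProxyServlet.doGet() from INFO to DEBUG, guarded so the message is
// only built when DEBUG logging is enabled.
if (LOG.isDebugEnabled()) {
  LOG.debug("proxying tracking url " + trackingUri + " for user " + remoteUser);
}
{code}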

 Too many unnecessary logs are logged while accessing applicationMaster web UI.
 --

 Key: YARN-1890
 URL: https://issues.apache.org/jira/browse/YARN-1890
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Rohith
Assignee: Rohith
Priority: Minor
 Attachments: YARN-1890.patch


 Accessing the applicationMaster UI, which is redirected from the RM UI, 
 produces too many log entries in the ResourceManager and ProxyServer logs. On 
 every refresh, logging is done in WebAppProxyServlet.doGet(). All my RM and 
 ProxyServer logs are filled with UI information logs that are not really 
 necessary for the user.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1896) For FairScheduler expose MinimumQueueResource of each queue in QueueMetrics

2014-04-01 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956374#comment-13956374
 ] 

Henry Saputra commented on YARN-1896:
-

It would be helpful to expand the description with more information explaining 
why this issue was created.

 For FairScheduler expose MinimumQueueResource of each queue in QueueMetrics
 --

 Key: YARN-1896
 URL: https://issues.apache.org/jira/browse/YARN-1896
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Siqi Li
 Attachments: YARN-1896.v1.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1890) Too many unnecessary logs are logged while accessing applicationMaster web UI.

2014-04-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956387#comment-13956387
 ] 

Hadoop QA commented on YARN-1890:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12638044/YARN-1890.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/3496//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3496//console

This message is automatically generated.

 Too many unnecessary logs are logged while accessing applicationMaster web UI.
 --

 Key: YARN-1890
 URL: https://issues.apache.org/jira/browse/YARN-1890
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Rohith
Assignee: Rohith
Priority: Minor
 Attachments: YARN-1890.patch


 Accessing the applicationMaster UI, which is redirected from the RM UI, 
 produces too many log entries in the ResourceManager and ProxyServer logs. On 
 every refresh, logging is done in WebAppProxyServlet.doGet(). All my RM and 
 ProxyServer logs are filled with UI information logs that are not really 
 necessary for the user.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1889) In Fair Scheduler, avoid creating objects on each call to AppSchedulable comparator

2014-04-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956401#comment-13956401
 ] 

Hudson commented on YARN-1889:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #526 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/526/])
YARN-1889. In Fair Scheduler, avoid creating objects on each call to 
AppSchedulable comparator (Hong Zhiguo via Sandy Ryza) (sandy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1583491)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/ResourceWeights.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AppSchedulable.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java


 In Fair Scheduler, avoid creating objects on each call to AppSchedulable 
 comparator
 ---

 Key: YARN-1889
 URL: https://issues.apache.org/jira/browse/YARN-1889
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Reporter: Hong Zhiguo
Assignee: Hong Zhiguo
Priority: Minor
  Labels: reviewed
 Fix For: 2.5.0

 Attachments: YARN-1889.patch, YARN-1889.patch


 In fair scheduler, in each scheduling attempt, a full sort is
 performed on List of AppSchedulable, which invokes Comparator.compare
 method many times. Both FairShareComparator and DRFComparator call
 AppSchedulable.getWeights, and AppSchedulable.getPriority.
 A new ResourceWeights object is allocated on each call of getWeights,
 and the same for getPriority. This introduces a lot of pressure to
 GC because these methods are called very very frequently.
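 For illustration, a hedged sketch of the general remedy described above: keep 
 one pre-allocated object per AppSchedulable and update it in place instead of 
 allocating on every comparator call (the field and method body here are 
 assumptions, not the actual patch):
 {code}
 // Hypothetical sketch: reuse a single ResourceWeights instance rather than
 // newing one up on every FairShareComparator/DRFComparator invocation.
 private final ResourceWeights cachedWeights = new ResourceWeights(1.0f);

 public ResourceWeights getWeights() {
   // Recompute the weights into the cached instance (details elided) and
   // return it; callers must not retain it across scheduling passes.
   return cachedWeights;
 }
 {code}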
 The test case below shows the improvement in performance and GC behaviour. 
 The results show that the GC pressure during NodeUpdate processing is reduced 
 by half by this patch.
 The code to show the improvement (add it to TestFairScheduler.java):
 {code}
 import java.lang.management.GarbageCollectorMXBean;
 import java.lang.management.ManagementFactory;

 public void printGCStats() {
   long totalGarbageCollections = 0;
   long garbageCollectionTime = 0;
   // Sum collection counts and times across all garbage collectors.
   for (GarbageCollectorMXBean gc :
       ManagementFactory.getGarbageCollectorMXBeans()) {
     long count = gc.getCollectionCount();
     if (count >= 0) {
       totalGarbageCollections += count;
     }
     long time = gc.getCollectionTime();
     if (time >= 0) {
       garbageCollectionTime += time;
     }
   }
   System.out.println("Total Garbage Collections: "
       + totalGarbageCollections);
   System.out.println("Total Garbage Collection Time (ms): "
       + garbageCollectionTime);
 }

 @Test
 public void testImpactOnGC() throws Exception {
   scheduler.reinitialize(conf, resourceManager.getRMContext());
   // Add nodes
   int numNode = 1;
   for (int i = 0; i < numNode; ++i) {
     String host = String.format("192.1.%d.%d", i / 256, i % 256);
     RMNode node =
         MockNodes.newNodeInfo(1, Resources.createResource(1024 * 64), i, host);
     NodeAddedSchedulerEvent nodeEvent = new NodeAddedSchedulerEvent(node);
     scheduler.handle(nodeEvent);
     assertEquals(1024 * 64 * (i + 1),
         scheduler.getClusterCapacity().getMemory());
   }
   assertEquals(numNode, scheduler.getNumClusterNodes());
   assertEquals(1024 * 64 * numNode,
       scheduler.getClusterCapacity().getMemory());
   // add apps, each app has 100 containers.
   int minReqSize =
       FairSchedulerConfiguration.DEFAULT_RM_SCHEDULER_INCREMENT_ALLOCATION_MB;
   int numApp = 8000;
   int priority = 1;
   for (int i = 1; i < numApp + 1; ++i) {
     ApplicationAttemptId attemptId = createAppAttemptId(i, 1);
     AppAddedSchedulerEvent appAddedEvent = new AppAddedSchedulerEvent(
         attemptId.getApplicationId(), "queue1", "user1");
     scheduler.handle(appAddedEvent);
     AppAttemptAddedSchedulerEvent attemptAddedEvent =
         new AppAttemptAddedSchedulerEvent(attemptId, false);
     scheduler.handle(attemptAddedEvent);
     createSchedulingRequestExistingApplication(minReqSize * 2, 1,
         priority, attemptId);
   }
   scheduler.update();
   assertEquals(numApp, scheduler.getQueueManager().getLeafQueue("queue1", true)
       .getRunnableAppSchedulables().size());
   System.out.println("GC stats before NodeUpdate processing:");
 

[jira] [Commented] (YARN-1889) In Fair Scheduler, avoid creating objects on each call to AppSchedulable comparator

2014-04-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956490#comment-13956490
 ] 

Hudson commented on YARN-1889:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1744 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1744/])
YARN-1889. In Fair Scheduler, avoid creating objects on each call to 
AppSchedulable comparator (Hong Zhiguo via Sandy Ryza) (sandy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1583491)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/ResourceWeights.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AppSchedulable.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java


 In Fair Scheduler, avoid creating objects on each call to AppSchedulable 
 comparator
 ---

 Key: YARN-1889
 URL: https://issues.apache.org/jira/browse/YARN-1889
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Reporter: Hong Zhiguo
Assignee: Hong Zhiguo
Priority: Minor
  Labels: reviewed
 Fix For: 2.5.0

 Attachments: YARN-1889.patch, YARN-1889.patch


 In fair scheduler, in each scheduling attempt, a full sort is
 performed on List of AppSchedulable, which invokes Comparator.compare
 method many times. Both FairShareComparator and DRFComparator call
 AppSchedulable.getWeights, and AppSchedulable.getPriority.
 A new ResourceWeights object is allocated on each call of getWeights,
 and the same for getPriority. This introduces a lot of pressure to
 GC because these methods are called very very frequently.
 The test case below shows the improvement in performance and GC behaviour. 
 The results show that the GC pressure during NodeUpdate processing is reduced 
 by half by this patch.
 The code to show the improvement (add it to TestFairScheduler.java):
 {code}
 import java.lang.management.GarbageCollectorMXBean;
 import java.lang.management.ManagementFactory;

 public void printGCStats() {
   long totalGarbageCollections = 0;
   long garbageCollectionTime = 0;
   // Sum collection counts and times across all garbage collectors.
   for (GarbageCollectorMXBean gc :
       ManagementFactory.getGarbageCollectorMXBeans()) {
     long count = gc.getCollectionCount();
     if (count >= 0) {
       totalGarbageCollections += count;
     }
     long time = gc.getCollectionTime();
     if (time >= 0) {
       garbageCollectionTime += time;
     }
   }
   System.out.println("Total Garbage Collections: "
       + totalGarbageCollections);
   System.out.println("Total Garbage Collection Time (ms): "
       + garbageCollectionTime);
 }

 @Test
 public void testImpactOnGC() throws Exception {
   scheduler.reinitialize(conf, resourceManager.getRMContext());
   // Add nodes
   int numNode = 1;
   for (int i = 0; i < numNode; ++i) {
     String host = String.format("192.1.%d.%d", i / 256, i % 256);
     RMNode node =
         MockNodes.newNodeInfo(1, Resources.createResource(1024 * 64), i, host);
     NodeAddedSchedulerEvent nodeEvent = new NodeAddedSchedulerEvent(node);
     scheduler.handle(nodeEvent);
     assertEquals(1024 * 64 * (i + 1),
         scheduler.getClusterCapacity().getMemory());
   }
   assertEquals(numNode, scheduler.getNumClusterNodes());
   assertEquals(1024 * 64 * numNode,
       scheduler.getClusterCapacity().getMemory());
   // add apps, each app has 100 containers.
   int minReqSize =
       FairSchedulerConfiguration.DEFAULT_RM_SCHEDULER_INCREMENT_ALLOCATION_MB;
   int numApp = 8000;
   int priority = 1;
   for (int i = 1; i < numApp + 1; ++i) {
     ApplicationAttemptId attemptId = createAppAttemptId(i, 1);
     AppAddedSchedulerEvent appAddedEvent = new AppAddedSchedulerEvent(
         attemptId.getApplicationId(), "queue1", "user1");
     scheduler.handle(appAddedEvent);
     AppAttemptAddedSchedulerEvent attemptAddedEvent =
         new AppAttemptAddedSchedulerEvent(attemptId, false);
     scheduler.handle(attemptAddedEvent);
     createSchedulingRequestExistingApplication(minReqSize * 2, 1,
         priority, attemptId);
   }
   scheduler.update();
   assertEquals(numApp, scheduler.getQueueManager().getLeafQueue("queue1", true)
       .getRunnableAppSchedulables().size());
   System.out.println("GC stats before NodeUpdate processing:");
   

[jira] [Commented] (YARN-1889) In Fair Scheduler, avoid creating objects on each call to AppSchedulable comparator

2014-04-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956521#comment-13956521
 ] 

Hudson commented on YARN-1889:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1718 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1718/])
YARN-1889. In Fair Scheduler, avoid creating objects on each call to 
AppSchedulable comparator (Hong Zhiguo via Sandy Ryza) (sandy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1583491)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/ResourceWeights.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AppSchedulable.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java


 In Fair Scheduler, avoid creating objects on each call to AppSchedulable 
 comparator
 ---

 Key: YARN-1889
 URL: https://issues.apache.org/jira/browse/YARN-1889
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Reporter: Hong Zhiguo
Assignee: Hong Zhiguo
Priority: Minor
  Labels: reviewed
 Fix For: 2.5.0

 Attachments: YARN-1889.patch, YARN-1889.patch


 In fair scheduler, in each scheduling attempt, a full sort is
 performed on List of AppSchedulable, which invokes Comparator.compare
 method many times. Both FairShareComparator and DRFComparator call
 AppSchedulable.getWeights, and AppSchedulable.getPriority.
 A new ResourceWeights object is allocated on each call of getWeights,
 and the same for getPriority. This introduces a lot of pressure to
 GC because these methods are called very very frequently.
 The test case below shows the improvement in performance and GC behaviour. 
 The results show that the GC pressure during NodeUpdate processing is reduced 
 by half by this patch.
 The code to show the improvement (add it to TestFairScheduler.java):
 {code}
 import java.lang.management.GarbageCollectorMXBean;
 import java.lang.management.ManagementFactory;

 public void printGCStats() {
   long totalGarbageCollections = 0;
   long garbageCollectionTime = 0;
   // Sum collection counts and times across all garbage collectors.
   for (GarbageCollectorMXBean gc :
       ManagementFactory.getGarbageCollectorMXBeans()) {
     long count = gc.getCollectionCount();
     if (count >= 0) {
       totalGarbageCollections += count;
     }
     long time = gc.getCollectionTime();
     if (time >= 0) {
       garbageCollectionTime += time;
     }
   }
   System.out.println("Total Garbage Collections: "
       + totalGarbageCollections);
   System.out.println("Total Garbage Collection Time (ms): "
       + garbageCollectionTime);
 }

 @Test
 public void testImpactOnGC() throws Exception {
   scheduler.reinitialize(conf, resourceManager.getRMContext());
   // Add nodes
   int numNode = 1;
   for (int i = 0; i < numNode; ++i) {
     String host = String.format("192.1.%d.%d", i / 256, i % 256);
     RMNode node =
         MockNodes.newNodeInfo(1, Resources.createResource(1024 * 64), i, host);
     NodeAddedSchedulerEvent nodeEvent = new NodeAddedSchedulerEvent(node);
     scheduler.handle(nodeEvent);
     assertEquals(1024 * 64 * (i + 1),
         scheduler.getClusterCapacity().getMemory());
   }
   assertEquals(numNode, scheduler.getNumClusterNodes());
   assertEquals(1024 * 64 * numNode,
       scheduler.getClusterCapacity().getMemory());
   // add apps, each app has 100 containers.
   int minReqSize =
       FairSchedulerConfiguration.DEFAULT_RM_SCHEDULER_INCREMENT_ALLOCATION_MB;
   int numApp = 8000;
   int priority = 1;
   for (int i = 1; i < numApp + 1; ++i) {
     ApplicationAttemptId attemptId = createAppAttemptId(i, 1);
     AppAddedSchedulerEvent appAddedEvent = new AppAddedSchedulerEvent(
         attemptId.getApplicationId(), "queue1", "user1");
     scheduler.handle(appAddedEvent);
     AppAttemptAddedSchedulerEvent attemptAddedEvent =
         new AppAttemptAddedSchedulerEvent(attemptId, false);
     scheduler.handle(attemptAddedEvent);
     createSchedulingRequestExistingApplication(minReqSize * 2, 1,
         priority, attemptId);
   }
   scheduler.update();
   assertEquals(numApp, scheduler.getQueueManager().getLeafQueue("queue1", true)
       .getRunnableAppSchedulables().size());
   System.out.println("GC stats before NodeUpdate processing:");
 

[jira] [Commented] (YARN-1888) Do not add NodeManager to inactiveRMNodes when rebooting a NodeManager with a different port

2014-04-01 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956690#comment-13956690
 ] 

Jason Lowe commented on YARN-1888:
--

I agree with [~kasha] on this.  A nodemanager coming up on a different port 
isn't necessarily the same nodemanager from a previous instance.  For example, 
the minicluster runs multiple nodes on the same host with different ports, so 
if one of those nodes disappears, it would no longer be reported as lost with 
this patch since others are still running on the same host.

I think the real fix is to run the nodemanager with a non-ephemeral nodemanager 
port specified in yarn-site.xml.  This helps solve a number of issues:

# lost nodes count will be accurate
# a NM that reboots and rejoins the cluster before the RM expires the old 
instance will be correctly recognized as the same NM, and we avoid the RM 
thinking there are really two NMs on the host for up to the NM expiry interval
# attempts to start a subsequent NM on the same host where an NM is already 
running will fail rather than accidentally overcommit the node
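
For illustration, an assumed code equivalent of pinning that port (45454 is 
only an example value; in practice the property goes in yarn-site.xml):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Hypothetical sketch: set yarn.nodemanager.address to a fixed port instead
// of the ephemeral default (port 0), mirroring the yarn-site.xml fix above.
Configuration conf = new YarnConfiguration();
conf.set(YarnConfiguration.NM_ADDRESS, "0.0.0.0:45454");
{code}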

 Do not add NodeManager to inactiveRMNodes when rebooting a NodeManager with a 
 different port
 

 Key: YARN-1888
 URL: https://issues.apache.org/jira/browse/YARN-1888
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.3.0
Reporter: zhaoyunjiong
Priority: Minor
 Attachments: YARN-1888.patch


 When the NodeManager's port is set to 0, rebooting the NodeManager will cause 
 the Lost Nodes count to be inaccurate.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-1820) I can not run mapreduce

2014-04-01 Thread huangxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangxing updated YARN-1820:


Summary: I can not run mapreduce  (was: I can not run mapreduce!!)

 I can not run mapreduce
 ---

 Key: YARN-1820
 URL: https://issues.apache.org/jira/browse/YARN-1820
 Project: Hadoop YARN
  Issue Type: Test
  Components: resourcemanager
Affects Versions: 2.2.0
 Environment: ubuntu 12.04 64-bit
Reporter: huangxing
Priority: Minor

 When I run the mapreduce example, some errors occur and I don't know what 
 produced them. Can someone give me an idea? My resourcemanager and the 
 nodemanagers all work well.
 [bigdata1 /hadoop/hadoop-2.0.0-cdh4.5.0/share/hadoop/mapreduce]#hadoop jar 
 ./hadoop-mapreduce-examples-2.0.0-cdh4.5.0.jar  grep  /input /output 
 'dfs[a-z.]+'
 14/03/10 19:02:34 INFO service.AbstractService: 
 Service:org.apache.hadoop.yarn.client.YarnClientImpl is inited.
 14/03/10 19:02:34 INFO service.AbstractService: 
 Service:org.apache.hadoop.yarn.client.YarnClientImpl is started.
 14/03/10 19:02:34 WARN mapreduce.JobSubmitter: No job jar file set.  User 
 classes may not be found. See Job or Job#setJar(String).
 14/03/10 19:02:34 INFO input.FileInputFormat: Total input paths to process : 7
 14/03/10 19:02:35 INFO mapreduce.JobSubmitter: number of splits:7
 14/03/10 19:02:35 WARN conf.Configuration: mapred.output.value.class is 
 deprecated. Instead, use mapreduce.job.output.value.class
 14/03/10 19:02:35 WARN conf.Configuration: mapreduce.combine.class is 
 deprecated. Instead, use mapreduce.job.combine.class
 14/03/10 19:02:35 WARN conf.Configuration: mapreduce.map.class is deprecated. 
 Instead, use mapreduce.job.map.class
 14/03/10 19:02:35 WARN conf.Configuration: mapred.job.name is deprecated. 
 Instead, use mapreduce.job.name
 14/03/10 19:02:35 WARN conf.Configuration: mapreduce.reduce.class is 
 deprecated. Instead, use mapreduce.job.reduce.class
 14/03/10 19:02:35 WARN conf.Configuration: mapred.input.dir is deprecated. 
 Instead, use mapreduce.input.fileinputformat.inputdir
 14/03/10 19:02:35 WARN conf.Configuration: mapred.output.dir is deprecated. 
 Instead, use mapreduce.output.fileoutputformat.outputdir
 14/03/10 19:02:35 WARN conf.Configuration: mapreduce.outputformat.class is 
 deprecated. Instead, use mapreduce.job.outputformat.class
 14/03/10 19:02:35 WARN conf.Configuration: mapred.map.tasks is deprecated. 
 Instead, use mapreduce.job.maps
 14/03/10 19:02:35 WARN conf.Configuration: mapred.output.key.class is 
 deprecated. Instead, use mapreduce.job.output.key.class
 14/03/10 19:02:35 WARN conf.Configuration: mapred.working.dir is deprecated. 
 Instead, use mapreduce.job.working.dir
 14/03/10 19:02:35 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
 job_1394435601283_0011
 14/03/10 19:02:35 INFO mapred.YARNRunner: Job jar is not present. Not adding 
 any jar to the list of resources.
 14/03/10 19:02:35 INFO client.YarnClientImpl: Submitted application 
 application_1394435601283_0011 to ResourceManager at 
 bigdata1/192.168.65.66:8032
 14/03/10 19:02:35 INFO mapreduce.Job: The url to track the job: 
 http://bigdata1:8088/proxy/application_1394435601283_0011/
 14/03/10 19:02:35 INFO mapreduce.Job: Running job: job_1394435601283_0011
 14/03/10 19:02:41 INFO mapreduce.Job: Job job_1394435601283_0011 running in 
 uber mode : false
 14/03/10 19:02:41 INFO mapreduce.Job:  map 0% reduce 0%
 14/03/10 19:02:41 INFO mapreduce.Job: Job job_1394435601283_0011 failed with 
 state FAILED due to: Application application_1394435601283_0011 failed 1 
 times due to AM Container for appattempt_1394435601283_0011_01 exited 
 with  exitCode: 1 due to: 
 .Failing this attempt.. Failing the application.
 14/03/10 19:02:41 INFO mapreduce.Job: Counters: 0
 14/03/10 19:02:41 INFO service.AbstractService: 
 Service:org.apache.hadoop.yarn.client.YarnClientImpl is inited.
 14/03/10 19:02:41 INFO service.AbstractService: 
 Service:org.apache.hadoop.yarn.client.YarnClientImpl is started.
 14/03/10 19:02:42 WARN mapreduce.JobSubmitter: No job jar file set.  User 
 classes may not be found. See Job or Job#setJar(String).
 14/03/10 19:02:42 INFO mapreduce.JobSubmitter: Cleaning up the staging area 
 /user/root/.staging/job_1394435601283_0012
 14/03/10 19:02:42 ERROR security.UserGroupInformation: 
 PriviledgedActionException as:root (auth:SIMPLE) 
 cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException:
  Input path does not exist: 
 hdfs://bigdata1:8020/user/root/grep-temp-1811237380
 org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does 
 not exist: hdfs://bigdata1:8020/user/root/grep-temp-1811237380
 at 
 

[jira] [Updated] (YARN-1896) For FairScheduler expose MinimumQueueResource of each queue in QueueMetrics

2014-04-01 Thread Siqi Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siqi Li updated YARN-1896:
--

Description: For FairScheduler, it's very useful to expose 
MinimumQueueResource and MaximumQueueResource of each queue in QueueMetrics. 
People can then use monitoring graphs to see their current usage against their 
limits.

 For FairScheduler expose MinimumQueueResource of each queue in QueueMetrics
 --

 Key: YARN-1896
 URL: https://issues.apache.org/jira/browse/YARN-1896
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Siqi Li
 Attachments: YARN-1896.v1.patch


 For FairScheduler, it's very useful to expose MinimumQueueResource and 
 MaximumQueueResource of each queue in QueueMetrics. People can then use 
 monitoring graphs to see their current usage against their limits.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-1726) ResourceSchedulerWrapper failed due to the AbstractYarnScheduler introduced in YARN-1041

2014-04-01 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated YARN-1726:
--

Attachment: YARN-1726.patch

 ResourceSchedulerWrapper failed due to the AbstractYarnScheduler introduced 
 in YARN-1041
 

 Key: YARN-1726
 URL: https://issues.apache.org/jira/browse/YARN-1726
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Minor
 Attachments: YARN-1726.patch, YARN-1726.patch, YARN-1726.patch


 The YARN scheduler simulator failed when running the Fair Scheduler, due to 
 the AbstractYarnScheduler introduced in YARN-1041. The ResourceSchedulerWrapper 
 should inherit AbstractYarnScheduler instead of implementing the 
 ResourceScheduler interface directly.
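 A minimal sketch of that change (class signature only; the interface list is 
 an assumption based on the description, not the actual patch):
 {code}
 // Hypothetical sketch: inherit the base class introduced by YARN-1041 so the
 // wrapper reuses its shared scheduler state instead of re-implementing the
 // ResourceScheduler interface from scratch.
 public class ResourceSchedulerWrapper extends AbstractYarnScheduler
     implements SchedulerWrapper, Configurable {
   // existing delegation to the wrapped scheduler stays as before ...
 }
 {code}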



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (YARN-1898) Standby RM's conf, stack, logLevel and metrics links are redirecting to Active RM

2014-04-01 Thread Yesha Vora (JIRA)
Yesha Vora created YARN-1898:


 Summary: Standby RM's conf, stack, logLevel and metrics links are 
redirecting to Active RM
 Key: YARN-1898
 URL: https://issues.apache.org/jira/browse/YARN-1898
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Yesha Vora


Standby RM links /conf, /stacks, /logLevel, /metrics are redirected to the 
Active RM.

They should not be redirected to the Active RM.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (YARN-1898) Standby RM's conf, stack, logLevel and metrics links are redirecting to Active RM

2014-04-01 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong reassigned YARN-1898:
---

Assignee: Xuan Gong

 Standby RM's conf, stack, logLevel and metrics links are redirecting to 
 Active RM
 -

 Key: YARN-1898
 URL: https://issues.apache.org/jira/browse/YARN-1898
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Yesha Vora
Assignee: Xuan Gong

 Standby RM links /conf, /stacks, /logLevel, /metrics are redirected to the 
 Active RM.
 They should not be redirected to the Active RM.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1726) ResourceSchedulerWrapper failed due to the AbstractYarnScheduler introduced in YARN-1041

2014-04-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956828#comment-13956828
 ] 

Hadoop QA commented on YARN-1726:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12638093/YARN-1726.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-sls.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/3497//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3497//console

This message is automatically generated.

 ResourceSchedulerWrapper failed due to the AbstractYarnScheduler introduced 
 in YARN-1041
 

 Key: YARN-1726
 URL: https://issues.apache.org/jira/browse/YARN-1726
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Minor
 Attachments: YARN-1726.patch, YARN-1726.patch, YARN-1726.patch


 The YARN scheduler simulator failed when running the Fair Scheduler, due to 
 the AbstractYarnScheduler introduced in YARN-1041. The ResourceSchedulerWrapper 
 should inherit AbstractYarnScheduler instead of implementing the 
 ResourceScheduler interface directly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-1898) Standby RM's conf, stack, logLevel and metrics links are redirecting to Active RM

2014-04-01 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1898:


Description: 
Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx are redirected to 
the Active RM.

They should not be redirected to the Active RM.

  was:
Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx, /logs are 
redirected to the Active RM.

They should not be redirected to the Active RM.


 Standby RM's conf, stack, logLevel and metrics links are redirecting to 
 Active RM
 -

 Key: YARN-1898
 URL: https://issues.apache.org/jira/browse/YARN-1898
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Yesha Vora
Assignee: Xuan Gong

 Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx are redirected to 
 the Active RM.
 They should not be redirected to the Active RM.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-1898) Standby RM's conf, stack, logLevel and metrics links are redirecting to Active RM

2014-04-01 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1898:


Description: 
Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx, /logs are 
redirected to the Active RM.

They should not be redirected to the Active RM.

  was:
Standby RM links /conf, /stacks, /logLevel, /metrics are redirected to the 
Active RM.

They should not be redirected to the Active RM.


 Standby RM's conf, stack, logLevel and metrics links are redirecting to 
 Active RM
 -

 Key: YARN-1898
 URL: https://issues.apache.org/jira/browse/YARN-1898
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Yesha Vora
Assignee: Xuan Gong

 Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx, /logs are 
 redirected to the Active RM.
 They should not be redirected to the Active RM.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1898) Standby RM's conf, stack, logLevel and metrics links are redirecting to Active RM

2014-04-01 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956845#comment-13956845
 ] 

Xuan Gong commented on YARN-1898:
-

 Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx, /logs, /static 
should not be redirected to Active RM

 Standby RM's conf, stack, logLevel and metrics links are redirecting to 
 Active RM
 -

 Key: YARN-1898
 URL: https://issues.apache.org/jira/browse/YARN-1898
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Yesha Vora
Assignee: Xuan Gong

 Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx are redirected to 
 the Active RM.
 They should not be redirected to the Active RM.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1898) Standby RM's conf, stack, logLevel and metrics links are redirecting to Active RM

2014-04-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956872#comment-13956872
 ] 

Karthik Kambatla commented on YARN-1898:


Good catch, [~yeshavora]. 



 Standby RM's conf, stack, logLevel and metrics links are redirecting to 
 Active RM
 -

 Key: YARN-1898
 URL: https://issues.apache.org/jira/browse/YARN-1898
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Yesha Vora
Assignee: Xuan Gong

 Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx are redirected to 
 the Active RM.
 They should not be redirected to the Active RM.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-1898) Standby RM's conf, stack, logLevel and metrics links are redirecting to Active RM

2014-04-01 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1898:


Attachment: YARN-1898.1.patch

add /conf, /stacks, /logLevel, /metrics, /jmx, /logs to RMWebAppFilter

 Standby RM's conf, stack, logLevel and metrics links are redirecting to 
 Active RM
 -

 Key: YARN-1898
 URL: https://issues.apache.org/jira/browse/YARN-1898
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Yesha Vora
Assignee: Xuan Gong
 Attachments: YARN-1898.1.patch


 Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx are redirected to 
 the Active RM.
 They should not be redirected to the Active RM.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1898) Standby RM's conf, stack, logLevel and metrics links are redirecting to Active RM

2014-04-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956896#comment-13956896
 ] 

Karthik Kambatla commented on YARN-1898:


[~xgong] - what do you think of maintaining a list of URIs that shouldn't be 
redirected, maybe as a static final Set in RMWebAppFilter, and changing 
shouldRedirect to check against it?
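
A minimal sketch of that suggestion (not the committed patch; the method shape 
is an assumption):
{code}
import java.util.Set;
import com.google.common.collect.Sets;

// Hypothetical sketch: keep the servlet paths every RM serves locally in a
// constant set, and have shouldRedirect consult it before redirecting a
// standby RM request to the active RM.
private static final Set<String> NON_REDIRECTED_URIS =
    Sets.newHashSet("/conf", "/stacks", "/logLevel", "/metrics", "/jmx", "/logs");

private boolean shouldRedirect(String uri) {
  return !NON_REDIRECTED_URIS.contains(uri);
}
{code}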

 Standby RM's conf, stack, logLevel and metrics links are redirecting to 
 Active RM
 -

 Key: YARN-1898
 URL: https://issues.apache.org/jira/browse/YARN-1898
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Yesha Vora
Assignee: Xuan Gong
 Attachments: YARN-1898.1.patch


 Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx are redirected to 
 the Active RM.
 They should not be redirected to the Active RM.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-1726) ResourceSchedulerWrapper failed due to the AbstractYarnScheduler introduced in YARN-1041

2014-04-01 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated YARN-1726:
--

Attachment: YARN-1726.patch

 ResourceSchedulerWrapper failed due to the AbstractYarnScheduler introduced 
 in YARN-1041
 

 Key: YARN-1726
 URL: https://issues.apache.org/jira/browse/YARN-1726
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Minor
 Attachments: YARN-1726.patch, YARN-1726.patch, YARN-1726.patch, 
 YARN-1726.patch


 The YARN scheduler simulator failed when running the Fair Scheduler, due to 
 the AbstractYarnScheduler introduced in YARN-1041. The ResourceSchedulerWrapper 
 should inherit AbstractYarnScheduler instead of implementing the 
 ResourceScheduler interface directly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1898) Standby RM's conf, stack, logLevel and metrics links are redirecting to Active RM

2014-04-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956959#comment-13956959
 ] 

Hadoop QA commented on YARN-1898:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12638108/YARN-1898.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/3498//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3498//console

This message is automatically generated.

 Standby RM's conf, stack, logLevel and metrics links are redirecting to 
 Active RM
 -

 Key: YARN-1898
 URL: https://issues.apache.org/jira/browse/YARN-1898
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Yesha Vora
Assignee: Xuan Gong
 Attachments: YARN-1898.1.patch


 Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx are redirected to 
 the Active RM.
 They should not be redirected to the Active RM.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1898) Standby RM's conf, stack, logLevel and metrics links are redirecting to Active RM

2014-04-01 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956973#comment-13956973
 ] 

Xuan Gong commented on YARN-1898:
-

bq. what do you think of maintaining a list of URIs that shouldn't be 
redirected, maybe as a static final Set in RMWebAppFilter and changing 
shouldRedirect to check against that?

We can do that, but this set can only contain /conf, /stacks, /logLevel, 
/metrics, /jmx, /logs. 
For "/" + rmWebApp.wsName() + "/v1/cluster/info" and "/" + rmWebApp.name() + 
"/cluster", we still need to do the comparison explicitly. 

 Standby RM's conf, stack, logLevel and metrics links are redirecting to 
 Active RM
 -

 Key: YARN-1898
 URL: https://issues.apache.org/jira/browse/YARN-1898
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Yesha Vora
Assignee: Xuan Gong
 Attachments: YARN-1898.1.patch


 Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx are redirected to 
 the Active RM.
 They should not be redirected to the Active RM.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1726) ResourceSchedulerWrapper failed due to the AbstractYarnScheduler introduced in YARN-1041

2014-04-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956978#comment-13956978
 ] 

Hadoop QA commented on YARN-1726:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12638118/YARN-1726.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-sls.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/3499//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3499//console

This message is automatically generated.

 ResourceSchedulerWrapper failed due to the AbstractYarnScheduler introduced 
 in YARN-1041
 

 Key: YARN-1726
 URL: https://issues.apache.org/jira/browse/YARN-1726
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Minor
 Attachments: YARN-1726.patch, YARN-1726.patch, YARN-1726.patch, 
 YARN-1726.patch


 The YARN scheduler simulator failed when running the Fair Scheduler, due to 
 the AbstractYarnScheduler introduced in YARN-1041. The ResourceSchedulerWrapper 
 should inherit AbstractYarnScheduler instead of implementing the 
 ResourceScheduler interface directly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-1898) Standby RM's conf, stack, logLevel and metrics links are redirecting to Active RM

2014-04-01 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1898:


Attachment: YARN-1898.2.patch

 Standby RM's conf, stack, logLevel and metrics links are redirecting to 
 Active RM
 -

 Key: YARN-1898
 URL: https://issues.apache.org/jira/browse/YARN-1898
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Yesha Vora
Assignee: Xuan Gong
 Attachments: YARN-1898.1.patch, YARN-1898.2.patch


 Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx are redirected to 
 the Active RM.
 They should not be redirected to the Active RM.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1898) Standby RM's conf, stack, logLevel and metrics links are redirecting to Active RM

2014-04-01 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956979#comment-13956979
 ] 

Xuan Gong commented on YARN-1898:
-

Addressed the latest comments.

 Standby RM's conf, stack, logLevel and metrics links are redirecting to 
 Active RM
 -

 Key: YARN-1898
 URL: https://issues.apache.org/jira/browse/YARN-1898
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Yesha Vora
Assignee: Xuan Gong
 Attachments: YARN-1898.1.patch, YARN-1898.2.patch


 Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx are redirected to 
 the Active RM.
 They should not be redirected to the Active RM.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (YARN-1862) Changing mapreduce.jobhistory.done-dir via command-line arg does not seem to work

2014-04-01 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved YARN-1862.
--

Resolution: Invalid

This is a question best asked on the [user@ mailing 
list|http://hadoop.apache.org/mailing_lists.html#User].  JIRA is for reporting 
bugs in the code and tracking new features and is not a user support channel.

mapreduce.jobhistory.done-dir is not a per-job property but rather a property 
used by the job history server.  Attempting to override this property in a job 
will have no effect.  There currently isn't a way to specify a per-job final 
destination for job history files.  This would require some additional support 
in the job history server as it would need a way to locate the corresponding 
job history files for a given job ID if they could be in arbitrary places for 
each job.
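
For illustration, a hedged sketch of where the property is read (assuming the 
standard JHAdminConfig constant; the history server reads its own 
configuration, not the submitted job's):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.v2.jobhistory.JHAdminConfig;

// Hypothetical illustration: mapreduce.jobhistory.done-dir is resolved from
// the job history server's configuration on its own host, so a per-job
// override submitted with the job never reaches it.
Configuration jhsConf = new Configuration();
String doneDir = jhsConf.get(JHAdminConfig.MR_HISTORY_DONE_DIR);
{code}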

 Changing mapreduce.jobhistory.done-dir via command-line arg does not seem to work
 --

 Key: YARN-1862
 URL: https://issues.apache.org/jira/browse/YARN-1862
 Project: Hadoop YARN
  Issue Type: Bug
 Environment: Hadoop 2.2.0-cdh5.0.0-beta-2
Reporter: Ruirui
Priority: Minor

 Basically my requirement is to store the job history files in a customized 
 directory in YARN, perhaps with history files for job1 in dir1 and job2 in 
 dir2. Although I passed mapreduce.jobhistory.done-dir as a command-line arg 
 and ran as the mapred user to eliminate permission concerns, the history 
 files are still generated in the default directory rather than in the 
 directory I specified. 
 However, if I specify mapreduce.jobhistory.intermediate-done-dir on the 
 command line, it places files in the expected directory, but they are deleted 
 shortly after job completion, so it is not possible for me to get data from 
 the files.
 I searched the Internet and couldn't find any suggestions about this. I am 
 not sure if it comes from a misconfiguration, and I'm not sure if this is 
 the right way to ask for help. Would you please give me a hand? Thank you.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1898) Standby RM's conf, stack, logLevel and metrics links are redirecting to Active RM

2014-04-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957030#comment-13957030
 ] 

Hadoop QA commented on YARN-1898:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12638123/YARN-1898.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/3500//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3500//console

This message is automatically generated.

 Standby RM's conf, stack, logLevel and metrics links are redirecting to 
 Active RM
 -

 Key: YARN-1898
 URL: https://issues.apache.org/jira/browse/YARN-1898
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Yesha Vora
Assignee: Xuan Gong
 Attachments: YARN-1898.1.patch, YARN-1898.2.patch


 Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx are redirected to 
 the Active RM.
 They should not be redirected to the Active RM.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1898) Standby RM's conf, stack, logLevel and metrics links are redirecting to Active RM

2014-04-01 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957054#comment-13957054
 ] 

Zhijie Shen commented on YARN-1898:
---

LGTM. It should cover all the default apps/servlets in HttpServer2. One small 
nit:
{code}
+  private static final Set<String> nonRedirectedURIs = Sets.newHashSet("/conf",
+      "/stacks", "/logLevel", "/metrics", "/jmx", "/logs");
{code}
The constant should be NON_REDIRECTED_URIS by convention?

 Standby RM's conf, stack, logLevel and metrics links are redirecting to 
 Active RM
 -

 Key: YARN-1898
 URL: https://issues.apache.org/jira/browse/YARN-1898
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Yesha Vora
Assignee: Xuan Gong
 Attachments: YARN-1898.1.patch, YARN-1898.2.patch


 Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx are redirected to 
 the Active RM.
 They should not be redirected to the Active RM.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-1898) Standby RM's conf, stack, logLevel and metrics links are redirecting to Active RM

2014-04-01 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1898:


Attachment: YARN-1898.3.patch

 Standby RM's conf, stack, logLevel and metrics links are redirecting to 
 Active RM
 -

 Key: YARN-1898
 URL: https://issues.apache.org/jira/browse/YARN-1898
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Yesha Vora
Assignee: Xuan Gong
 Attachments: YARN-1898.1.patch, YARN-1898.2.patch, YARN-1898.3.patch


 Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx are redirected to 
 the Active RM.
 They should not be redirected to the Active RM.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1898) Standby RM's conf, stack, logLevel and metrics links are redirecting to Active RM

2014-04-01 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957059#comment-13957059
 ] 

Xuan Gong commented on YARN-1898:
-

bq. The constant should be NON_REDIRECTED_URIS by convention?

DONE

 Standby RM's conf, stack, logLevel and metrics links are redirecting to 
 Active RM
 -

 Key: YARN-1898
 URL: https://issues.apache.org/jira/browse/YARN-1898
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Yesha Vora
Assignee: Xuan Gong
 Attachments: YARN-1898.1.patch, YARN-1898.2.patch, YARN-1898.3.patch


 Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx are redirected to 
 the Active RM.
 They should not be redirected to the Active RM.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1898) Standby RM's conf, stack, logLevel and metrics links are redirecting to Active RM

2014-04-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957114#comment-13957114
 ] 

Hadoop QA commented on YARN-1898:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12638139/YARN-1898.3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/3501//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3501//console

This message is automatically generated.

 Standby RM's conf, stack, logLevel and metrics links are redirecting to 
 Active RM
 -

 Key: YARN-1898
 URL: https://issues.apache.org/jira/browse/YARN-1898
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Yesha Vora
Assignee: Xuan Gong
 Attachments: YARN-1898.1.patch, YARN-1898.2.patch, YARN-1898.3.patch


 Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx are redirected to 
 the Active RM.
 They should not be redirected to the Active RM.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1898) Standby RM's conf, stack, logLevel and metrics links are redirecting to Active RM

2014-04-01 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957122#comment-13957122
 ] 

Xuan Gong commented on YARN-1898:
-

The failure of testcase 
org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testQueueMetricsOnRMRestart
 is unrelated.

 Standby RM's conf, stack, logLevel and metrics links are redirecting to 
 Active RM
 -

 Key: YARN-1898
 URL: https://issues.apache.org/jira/browse/YARN-1898
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Yesha Vora
Assignee: Xuan Gong
 Attachments: YARN-1898.1.patch, YARN-1898.2.patch, YARN-1898.3.patch


 Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx are redirected to 
 the Active RM.
 They should not be redirected to the Active RM.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1890) Too many unnecessary logs are logged while accessing applicationMaster web UI.

2014-04-01 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957128#comment-13957128
 ] 

Jian He commented on YARN-1890:
---

Agree with moving the verbose access logging to debug level. Maybe also have 
some place print a single INFO-level log line about the accessing user, which 
would be good for debugging?

 Too many unnecessary logs are logged while accessing applicationMaster web UI.
 --

 Key: YARN-1890
 URL: https://issues.apache.org/jira/browse/YARN-1890
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Rohith
Assignee: Rohith
Priority: Minor
 Attachments: YARN-1890.patch


 Accessing the applicationMaster UI, which is redirected from the RM UI, 
 produces too many log entries in the ResourceManager and ProxyServer logs. On 
 every refresh, logging is done in WebAppProxyServlet.doGet(). All my RM and 
 ProxyServer logs are filled with UI information logs that are not really 
 necessary for the user.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-1879) Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol

2014-04-01 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated YARN-1879:
-

Attachment: YARN-1879.2-wip.patch

Attached a WIP patch. 

1. Added RetryCache support to 
registerApplicationMaster()/finishApplicationMaster().
2. Renamed TestApplicationMasterServiceOnHA to 
TestApplicationMasterProtocolOnHA, because it checks whether the APIs have the 
annotations. [~xgong] mentioned the scope of the tests on YARN-1521, so this 
patch includes that description.
3. Added tests for the annotations in TestApplicationMasterProtocolOnHA.

Additionally, I'm now planning to add tests like TestNamenodeRetryCache. If we 
add tests with LossyInvocationHandler, we need to support 
LossyInvocationHandler in RMProxy. We can do this and add the tests in another 
JIRA.
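
For context, a hedged sketch of what the annotated protocol could look like, 
assuming the org.apache.hadoop.io.retry annotations; which annotation each 
method finally gets is exactly what this patch decides:

{code}
import java.io.IOException;

import org.apache.hadoop.io.retry.AtMostOnce;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterRequest;
import org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterResponse;
import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterRequest;
import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse;
import org.apache.hadoop.yarn.exceptions.YarnException;

public interface ApplicationMasterProtocolSketch {

  // With a server-side RetryCache, a retry after RM failover returns the
  // cached response instead of re-executing the registration.
  @AtMostOnce
  RegisterApplicationMasterResponse registerApplicationMaster(
      RegisterApplicationMasterRequest request)
      throws YarnException, IOException;

  @AtMostOnce
  FinishApplicationMasterResponse finishApplicationMaster(
      FinishApplicationMasterRequest request)
      throws YarnException, IOException;

  // allocate() already carries a responseId, which lets the RM detect
  // duplicate calls.
  @AtMostOnce
  AllocateResponse allocate(AllocateRequest request)
      throws YarnException, IOException;
}
{code}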

 Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol
 ---

 Key: YARN-1879
 URL: https://issues.apache.org/jira/browse/YARN-1879
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Tsuyoshi OZAWA
Priority: Critical
 Attachments: YARN-1879.1.patch, YARN-1879.1.patch, 
 YARN-1879.2-wip.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1898) Standby RM's conf, stack, logLevel and metrics links are redirecting to Active RM

2014-04-01 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13957160#comment-13957160
 ] 

Zhijie Shen commented on YARN-1898:
---

The test failure should be unrelated here; it's the same one reported in 
YARN-1830. Otherwise, the patch looks good to me. Will commit it.

 Standby RM's conf, stack, logLevel and metrics links are redirecting to 
 Active RM
 -

 Key: YARN-1898
 URL: https://issues.apache.org/jira/browse/YARN-1898
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Yesha Vora
Assignee: Xuan Gong
 Attachments: YARN-1898.1.patch, YARN-1898.2.patch, YARN-1898.3.patch


 Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx are redirected to 
 the Active RM.
 They should not be redirected to the Active RM.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Reopened] (YARN-1830) TestRMRestart.testQueueMetricsOnRMRestart failure

2014-04-01 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen reopened YARN-1830:
---


https://builds.apache.org/job/PreCommit-YARN-Build/3501//testReport/org.apache.hadoop.yarn.server.resourcemanager/TestRMRestart/testQueueMetricsOnRMRestart/

The same failure occurred again on Jenkins.

 TestRMRestart.testQueueMetricsOnRMRestart failure
 -

 Key: YARN-1830
 URL: https://issues.apache.org/jira/browse/YARN-1830
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.4.0
Reporter: Karthik Kambatla
Assignee: Zhijie Shen
 Fix For: 2.4.0

 Attachments: YARN-1830.1.patch


 TestRMRestart.testQueueMetricsOnRMRestart fails intermittently as follows 
 (reported on YARN-1815):
 {noformat}
 java.lang.AssertionError: expected:<37> but was:<38>
 ...
   at 
 org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.assertQueueMetrics(TestRMRestart.java:1728)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testQueueMetricsOnRMRestart(TestRMRestart.java:1682)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-1898) Standby RM's conf, stacks, logLevel, metrics, jmx and logs links are redirecting to Active RM

2014-04-01 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen updated YARN-1898:
--

Summary: Standby RM's conf, stacks, logLevel, metrics, jmx and logs links 
are redirecting to Active RM  (was: Standby RM's conf, stack, logLevel and 
metrics links are redirecting to Active RM)

 Standby RM's conf, stacks, logLevel, metrics, jmx and logs links are 
 redirecting to Active RM
 -

 Key: YARN-1898
 URL: https://issues.apache.org/jira/browse/YARN-1898
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Yesha Vora
Assignee: Xuan Gong
 Attachments: YARN-1898.1.patch, YARN-1898.2.patch, YARN-1898.3.patch


 Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx are redirected to 
 the Active RM.
 They should not be redirected to the Active RM.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1898) Standby RM's conf, stacks, logLevel, metrics, jmx and logs links are redirecting to Active RM

2014-04-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13957190#comment-13957190
 ] 

Hudson commented on YARN-1898:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5443 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5443/])
YARN-1898. Made Standby RM links conf, stacks, logLevel, metrics, jmx, logs and 
static not be redirected to Active RM. Contributed by Xuan Gong. (zjshen: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1583833)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestRMFailover.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebAppFilter.java
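
For readers following along, a hedged sketch of the idea the commit message 
describes; the class name, the isStandby flag, and the redirect mechanics here 
are illustrative, not the committed RMWebAppFilter code:

{code}
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class StandbyRedirectSketch {
  // Endpoints that must be served locally even on the standby RM.
  private static final Set<String> NON_REDIRECTED_URIS =
      new HashSet<String>(Arrays.asList("/conf", "/stacks", "/logLevel",
          "/metrics", "/jmx", "/logs", "/static"));

  void doFilter(HttpServletRequest req, HttpServletResponse resp,
      FilterChain chain, boolean isStandby, String activeRMWebUrl)
      throws IOException, ServletException {
    String uri = req.getRequestURI();
    boolean serveLocally = false;
    for (String prefix : NON_REDIRECTED_URIS) {
      if (uri.startsWith(prefix)) {
        serveLocally = true;
        break;
      }
    }
    if (isStandby && !serveLocally) {
      // Anything else on the standby is redirected to the active RM.
      resp.sendRedirect(activeRMWebUrl + uri);
    } else {
      chain.doFilter(req, resp);
    }
  }
}
{code}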


 Standby RM's conf, stacks, logLevel, metrics, jmx and logs links are 
 redirecting to Active RM
 -

 Key: YARN-1898
 URL: https://issues.apache.org/jira/browse/YARN-1898
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Yesha Vora
Assignee: Xuan Gong
 Fix For: 2.4.1

 Attachments: YARN-1898.1.patch, YARN-1898.2.patch, YARN-1898.3.patch


 Standby RM links /conf, /stacks, /logLevel, /metrics, /jmx are redirected to 
 the Active RM.
 They should not be redirected to the Active RM.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1879) Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol

2014-04-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13957196#comment-13957196
 ] 

Hadoop QA commented on YARN-1879:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12638160/YARN-1879.2-wip.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/3502//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3502//console

This message is automatically generated.

 Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol
 ---

 Key: YARN-1879
 URL: https://issues.apache.org/jira/browse/YARN-1879
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Tsuyoshi OZAWA
Priority: Critical
 Attachments: YARN-1879.1.patch, YARN-1879.1.patch, 
 YARN-1879.2-wip.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (YARN-1888) Not add NodeManager to inactiveRMNodes when reboot NodeManager which have different port

2014-04-01 Thread zhaoyunjiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyunjiong resolved YARN-1888.


Resolution: Not a Problem

Thank you Jason Lowe and Karthik Kambatla for your time.
I agree with you now; closing this as Not a Problem.

 Not add NodeManager to inactiveRMNodes when reboot NodeManager which have 
 different port
 

 Key: YARN-1888
 URL: https://issues.apache.org/jira/browse/YARN-1888
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.3.0
Reporter: zhaoyunjiong
Priority: Minor
 Attachments: YARN-1888.patch


 When the NodeManager's port is set to 0, rebooting the NodeManager will make 
 the Lost Nodes count inaccurate.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-1892) Excessive logging in RM

2014-04-01 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-1892:
--

Attachment: YARN-1892.2.patch

Added a side fix to move the sending container statuses... log in 
NodeStatusUpdater to INFO level.
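
As a general illustration of the technique (not a copy of the attached patch), 
per-container scheduler messages like the ones quoted in the description below 
are usually demoted to DEBUG and guarded; the class and message here are 
hypothetical:

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class SchedulerLogSketch {
  private static final Log LOG = LogFactory.getLog(SchedulerLogSketch.class);

  void logReservation(String appId, String containerId, String node) {
    // Demote the per-container message to DEBUG and guard it, so the
    // string concatenation below is skipped entirely on a busy cluster.
    if (LOG.isDebugEnabled()) {
      LOG.debug("Application " + appId + " reserved container "
          + containerId + " on node " + node);
    }
  }
}
{code}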

 Excessive logging in RM
 ---

 Key: YARN-1892
 URL: https://issues.apache.org/jira/browse/YARN-1892
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Siddharth Seth
Assignee: Jian He
Priority: Minor
 Attachments: YARN-1892.1.patch, YARN-1892.2.patch


 Mostly in the CapacityScheduler (CS), I believe:
 {code}
  INFO 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
  Application application_1395435468498_0011 reserved container 
 container_1395435468498_0011_01_000213 on node host:  #containers=5 
 available=4096 used=20960, currently has 1 at priority 4; currentReservation 
 4096
 {code}
 {code}
 INFO 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
 hive2 usedResources: <memory:20480, vCores:5> clusterResources: 
 <memory:81920, vCores:16> currentCapacity 0.25 required <memory:4096, 
 vCores:1> potentialNewCapacity: 0.255 (  max-capacity: 0.25)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1892) Excessive logging in RM

2014-04-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13957320#comment-13957320
 ] 

Hadoop QA commented on YARN-1892:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12638189/YARN-1892.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/3503//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3503//console

This message is automatically generated.

 Excessive logging in RM
 ---

 Key: YARN-1892
 URL: https://issues.apache.org/jira/browse/YARN-1892
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Siddharth Seth
Assignee: Jian He
Priority: Minor
 Attachments: YARN-1892.1.patch, YARN-1892.2.patch


 Mostly in the CapacityScheduler (CS), I believe:
 {code}
  INFO 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
  Application application_1395435468498_0011 reserved container 
 container_1395435468498_0011_01_000213 on node host:  #containers=5 
 available=4096 used=20960, currently has 1 at priority 4; currentReservation 
 4096
 {code}
 {code}
 INFO 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
 hive2 usedResources: <memory:20480, vCores:5> clusterResources: 
 <memory:81920, vCores:16> currentCapacity 0.25 required <memory:4096, 
 vCores:1> potentialNewCapacity: 0.255 (  max-capacity: 0.25)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1890) Too many unnecessary logs are logged while accessing applicationMaster web UI.

2014-04-01 Thread Rohith (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13957340#comment-13957340
 ] 

Rohith commented on YARN-1890:
--

Since WebAppProxyServlet.doGet() is called for every data fetch, it is very 
hard to log the accessing user only a single time. :-( 
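
One possible workaround, sketched under the assumption that a small amount of 
per-servlet state is acceptable; this is a hypothetical helper, not part of 
the attached patch:

{code}
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class OncePerUserAccessLogSketch {
  private static final Log LOG =
      LogFactory.getLog(OncePerUserAccessLogSketch.class);

  // Thread-safe set of users already logged; servlets handle requests
  // concurrently. Note this grows unbounded in a long-running proxy, so a
  // real fix might cap or expire entries.
  private final Set<String> seenUsers =
      Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

  void logAccess(String user, String uri) {
    if (user != null && seenUsers.add(user)) {
      // First request seen from this user: emit a single INFO line.
      LOG.info("User " + user + " started accessing the web proxy");
    }
    if (LOG.isDebugEnabled()) {
      LOG.debug("User " + user + " is accessing " + uri);
    }
  }
}
{code}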

 Too many unnecessary logs are logged while accessing applicationMaster web UI.
 --

 Key: YARN-1890
 URL: https://issues.apache.org/jira/browse/YARN-1890
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Rohith
Assignee: Rohith
Priority: Minor
 Attachments: YARN-1890.patch


 Accessing the ApplicationMaster UI, which is redirected from the RM UI, 
 writes too many entries to the ResourceManager and ProxyServer logs. On every 
 refresh, logging is done in WebAppProxyServlet.doGet(). All my RM and 
 ProxyServer logs are filled with UI information logs that are not really 
 necessary for the user.



--
This message was sent by Atlassian JIRA
(v6.2#6252)