[jira] [Commented] (MAPREDUCE-3490) RMContainerAllocator counts failed maps towards Reduce ramp up

2012-01-03 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13178657#comment-13178657
 ] 

Hadoop QA commented on MAPREDUCE-3490:
--

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12509178/MAPREDUCE-3490.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 9 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1524//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1524//console

This message is automatically generated.

 RMContainerAllocator counts failed maps towards Reduce ramp up
 --

 Key: MAPREDUCE-3490
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3490
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mr-am, mrv2
Affects Versions: 0.23.0
Reporter: Siddharth Seth
Assignee: Sharad Agarwal
Priority: Blocker
 Attachments: MAPREDUCE-3490.patch, MAPREDUCE-3490.patch, 
 MAPREDUCE-3490.patch, MAPREDUCE-3490.patch, MAPREDUCE-3490.patch, 
 MAPREDUCE-3490.patch, MR-3490-alternate.patch, MR-3490-alternate1.patch


 The RMContainerAllocator does not differentiate between failed and successful 
 maps while calculating whether reduce tasks are ready to launch. Failed tasks 
 are also counted towards total completed tasks. 
 Example: 4 failed maps, 10 total maps; map % complete = 4/14 * 100 instead of 
 being 0.
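
As a rough sketch of the intended behavior (hypothetical class, method, and field names, not the actual RMContainerAllocator code), only successfully completed maps should count toward the ramp-up fraction:

{code:title=Sketch of the intended ramp-up fraction (hypothetical names)}
// Illustrative only: the fraction that gates reduce ramp-up should count only
// successfully completed maps; failed attempts must not inflate it.
class RampUpSketch {
  static float mapProgressForRampUp(int succeededMaps, int failedMaps, int totalMaps) {
    // failedMaps is a parameter only to stress that it plays no part in the fraction.
    return totalMaps == 0 ? 0.0f : (float) succeededMaps / totalMaps;
  }
}
{code}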

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3610) Some parts in MR use old property dfs.block.size

2012-01-03 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13178682#comment-13178682
 ] 

Harsh J commented on MAPREDUCE-3610:


Changes look good, but can you please add back that FIXME note? It's still 
relevant after your changes, and we don't want to lose it.

Beyond that, great stuff; ship it! Thanks Sho!

 Some parts in MR use old property dfs.block.size
 

 Key: MAPREDUCE-3610
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3610
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Reporter: Sho Shimauchi
Assignee: Sho Shimauchi
Priority: Minor
 Attachments: MAPREDUCE-3610.patch


 Some parts in MR use old property dfs.block.size.
 dfs.blocksize should be used instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-3612) Task.TaskReporter.done method blocked for some time when task is finishing

2012-01-03 Thread Binglin Chang (Created) (JIRA)
Task.TaskReporter.done method blocked for some time when task is finishing
--

 Key: MAPREDUCE-3612
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3612
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Binglin Chang


We recently ran some tests to evaluate the performance of different Hadoop 
versions (1.0, 0.23, and a Baidu internal version) and found some odd results. One 
of them is that in 1.0, Task.TaskReporter.done() takes too much time, about 2s, 
which is bad for small tasks. After reviewing the source code and adding some 
logging, we found that the following code blocks Task.TaskReporter.done:

src/mapred/org/apache/hadoop/mapred/Task.java

 658   try {
 659 Thread.sleep(PROGRESS_INTERVAL);
 660   }


 723 public void stopCommunicationThread() throws InterruptedException {
 724   // Updating resources specified in ResourceCalculatorPlugin
 725   if (pingThread != null) {
 726 synchronized(lock) {
 727   while(!done) {
 728 lock.wait();
 729   }
 730 }
 731 pingThread.interrupt();
 732 pingThread.join();
 733   }
 734 }

Originally lines 724-730 did not exist, and I don't know why they were added. If they 
are needed, we can replace Thread.sleep with Object.wait(timeout) and Object.notify 
instead, so it won't block.
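
If those lines really are needed, a minimal standalone sketch of the wait/notify idea (illustrative only, assuming a shared lock and a done flag like the ones above; not the actual Task.java patch) could look like this:

{code:title=Sketch of the wait/notify variant (not the actual Task.java patch)}
// Illustrative only: the reporter loop waits on the shared lock with a timeout
// instead of sleeping, so shutdown does not have to sit out a full PROGRESS_INTERVAL.
class ReporterSketch {
  private static final long PROGRESS_INTERVAL = 3000;
  private final Object lock = new Object();
  private boolean done = false;

  void progressLoop() throws InterruptedException {
    while (true) {
      synchronized (lock) {
        if (done) {
          return;                        // woken up by stopCommunicationThread()
        }
        lock.wait(PROGRESS_INTERVAL);    // returns early when notified
      }
      // ... send the progress/ping update outside the lock ...
    }
  }

  void stopCommunicationThread() {
    synchronized (lock) {
      done = true;
      lock.notifyAll();                  // wakes progressLoop() immediately
    }
  }
}
{code}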




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3612) Task.TaskReporter.done method blocked for some time when task is finishing

2012-01-03 Thread Binglin Chang (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated MAPREDUCE-3612:
-

 Description: 
We recently ran some tests to evaluate the performance of different Hadoop 
versions (1.0, 0.23, and a Baidu internal version) and found some odd results. One 
of them is that in 1.0, Task.TaskReporter.done() takes too much time, about 2s, 
which is bad for small tasks. After reviewing the source code and adding some 
logging, we found that the following code blocks Task.TaskReporter.done:


{code:title=src/mapred/org/apache/hadoop/mapred/Task.java}
 658   try {
 659 Thread.sleep(PROGRESS_INTERVAL);
 660   }


 723 public void stopCommunicationThread() throws InterruptedException {
 724   // Updating resources specified in ResourceCalculatorPlugin
 725   if (pingThread != null) {
 726 synchronized(lock) {
 727   while(!done) {
 728 lock.wait();
 729   }
 730 }
 731 pingThread.interrupt();
 732 pingThread.join();
 733   }
 734 }
{code}
Originally lines 724-730 did not exist, and I don't know why they were added. If they 
are needed, we can replace Thread.sleep with Object.wait(timeout) and Object.notify 
instead, so it won't block.




  was:
We recently ran some tests to evaluate the performance of different Hadoop 
versions (1.0, 0.23, and a Baidu internal version) and found some odd results. One 
of them is that in 1.0, Task.TaskReporter.done() takes too much time, about 2s, 
which is bad for small tasks. After reviewing the source code and adding some 
logging, we found that the following code blocks Task.TaskReporter.done

src/mapred/org/apache/hadoop/mapred/Task.java

 658   try {
 659 Thread.sleep(PROGRESS_INTERVAL);
 660   }


 723 public void stopCommunicationThread() throws InterruptedException {
 724   // Updating resources specified in ResourceCalculatorPlugin
 725   if (pingThread != null) {
 726 synchronized(lock) {
 727   while(!done) {
 728 lock.wait();
 729   }
 730 }
 731 pingThread.interrupt();
 732 pingThread.join();
 733   }
 734 }

Originally lines 724-730 did not exist, and I don't know why they were added. If they 
are needed, we can replace Thread.sleep with Object.wait(timeout) and Object.notify 
instead, so it won't block.




Target Version/s: 1.0.0, 0.20.205.0  (was: 0.20.205.0, 1.0.0)

 Task.TaskReporter.done method blocked for some time when task is finishing
 --

 Key: MAPREDUCE-3612
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3612
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Binglin Chang

 We recently ran some tests to evaluate the performance of different Hadoop 
 versions (1.0, 0.23, and a Baidu internal version) and found some odd results. 
 One of them is that in 1.0, Task.TaskReporter.done() takes too much time, about 
 2s, which is bad for small tasks. After reviewing the source code and adding 
 some logging, we found that the following code blocks Task.TaskReporter.done:
 {code:title=src/mapred/org/apache/hadoop/mapred/Task.java}
  658   try {
  659 Thread.sleep(PROGRESS_INTERVAL);
  660   }
  723 public void stopCommunicationThread() throws InterruptedException {
  724   // Updating resources specified in ResourceCalculatorPlugin
  725   if (pingThread != null) {
  726 synchronized(lock) {
  727   while(!done) {
  728 lock.wait();
  729   }
  730 }
  731 pingThread.interrupt();
  732 pingThread.join();
  733   }
  734 }
 {code}
 Originally lines 724-730 did not exist, and I don't know why they were added. If 
 they are needed, we can replace Thread.sleep with Object.wait(timeout) and 
 Object.notify instead, so it won't block.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3612) Task.TaskReporter.done method blocked for some time when task is finishing

2012-01-03 Thread Binglin Chang (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated MAPREDUCE-3612:
-

Attachment: MAPREDUCE-3612.patch

 Task.TaskReporter.done method blocked for some time when task is finishing
 --

 Key: MAPREDUCE-3612
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3612
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Binglin Chang
 Attachments: MAPREDUCE-3612.patch


 We recently ran some tests to evaluate the performance of different Hadoop 
 versions (1.0, 0.23, and a Baidu internal version) and found some odd results. 
 One of them is that in 1.0, Task.TaskReporter.done() takes too much time, about 
 2s, which is bad for small tasks. After reviewing the source code and adding 
 some logging, we found that the following code blocks Task.TaskReporter.done:
 {code:title=src/mapred/org/apache/hadoop/mapred/Task.java}
  658   try {
  659 Thread.sleep(PROGRESS_INTERVAL);
  660   }
  723 public void stopCommunicationThread() throws InterruptedException {
  724   // Updating resources specified in ResourceCalculatorPlugin
  725   if (pingThread != null) {
  726 synchronized(lock) {
  727   while(!done) {
  728 lock.wait();
  729   }
  730 }
  731 pingThread.interrupt();
  732 pingThread.join();
  733   }
  734 }
 {code}
 Originally lines 724-730 did not exist, and I don't know why they were added. If 
 they are needed, we can replace Thread.sleep with Object.wait(timeout) and 
 Object.notify instead, so it won't block.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-1928) Dynamic information fed into Hadoop for controlling execution of a submitted job

2012-01-03 Thread Mariappan Asokan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13178735#comment-13178735
 ] 

Mariappan Asokan commented on MAPREDUCE-1928:
-

Please take a look at MAPREDUCE-2454 for another approach to solve the problem 
addressed in this Jira.  The problem in the current Hadoop MR framework is that 
when the number of reducers is greater than 0, a sort is always performed.  
Sorting requires reading the entire input data.  As of now, there is no way to 
bypass the sort.

MAPREDUCE-2454 makes the sort pluggable and refactors the current sort code so 
that it is the default plugin.  An external sort plugin called NullSortPlugin 
is in the works.  It will bypass the sort and just copy the <KEY, VALUE> pairs 
from the Mapper to the Reducer.  This will enable one to stop a job after a 
certain number of records are processed, without reading the entire input.
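
As a purely hypothetical sketch of the idea (these interface and class names are illustrative and are not the MAPREDUCE-2454 API), a null sort plugin would simply forward pairs without ordering them:

{code:title=Hypothetical sketch, not the MAPREDUCE-2454 API}
// Illustrative only: a pluggable "sort" stage where the null implementation
// forwards <KEY, VALUE> pairs unchanged instead of sorting them.
interface MapOutputProcessor<K, V> {
  void process(K key, V value, OutputSink<K, V> sink) throws java.io.IOException;
}

interface OutputSink<K, V> {
  void emit(K key, V value) throws java.io.IOException;
}

// Bypasses sorting entirely: pairs go straight from the mapper side to the reducer side.
class NullSortProcessor<K, V> implements MapOutputProcessor<K, V> {
  @Override
  public void process(K key, V value, OutputSink<K, V> sink) throws java.io.IOException {
    sink.emit(key, value);
  }
}
{code}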

 Dynamic information fed into Hadoop for controlling execution of a submitted 
 job
 

 Key: MAPREDUCE-1928
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1928
 Project: Hadoop Map/Reduce
  Issue Type: New Feature
  Components: job submission, jobtracker, tasktracker
Affects Versions: 0.20.3
Reporter: Raman Grover
   Original Estimate: 2,016h
  Remaining Estimate: 2,016h

 Currently the job submission protocol requires the job provider to put every 
 bit of information inside an instance of JobConf. The submitted information 
 includes the input data (hdfs path) , suspected resource requirement, number 
 of reducers etc.  This information is read by JobTracker as part of job 
 initialization. Once initialized, job is moved into a running state. From 
 this point, there is no mechanism for any additional information to be fed 
 into Hadoop infrastructure for controlling the job execution. 
The execution pattern for the job looks very much 
 static from this point. Using the size of the input data and a few settings 
 inside JobConf, the number of mappers is computed. Hadoop attempts to read the 
 whole of the data in parallel by launching parallel map tasks. Once the map phase 
 is over, a known number of reduce tasks (supplied as part of JobConf) are 
 started. 
 Parameters that control the job execution were set in JobConf prior to 
 reading the input data. As the map phase progresses, useful information based 
 upon the content of the input data surfaces and can be used in controlling 
 the further execution of the job. Let us walk through some of the examples 
 where additional information can be fed to Hadoop subsequent to job 
 submission for optimal execution of the job. 
 I) Process a part of the input; based upon the results, decide if reading 
 more input is required.
 In a huge data set, the user is interested in finding 'k' records that 
 satisfy a predicate, essentially sampling the data. In the current 
 implementation, as the data is huge, a large number of mappers would be launched, 
 consuming a significant fraction of the available map slots in the cluster. 
 Each map task would attempt to emit a maximum of 'k' records. With N 
 mappers, we get N*k records, out of which one can pick any k to form the final 
 result. 
This is not optimal as:
1) A larger number of map slots get occupied initially, affecting other 
 jobs in the queue. 
2) If the selectivity of the input data is very low, we essentially did not 
 need to scan the whole of the data to form our result. 
 We could have finished by reading a fraction of the input data, 
 monitoring the cardinality of the map output and determining if 
more input needs to be processed.  

Optimal way: If reading the whole of the input requires N mappers, launch only 
 'M' initially. Allow them to complete. Based upon the statistics collected, 
 decide the additional number of mappers to be launched next, and so on, until the 
 whole of the input has been processed or enough records have been collected to 
 form the results, whichever is earlier. 
  
  
 II) Here is some data; the remaining is yet to arrive, but you may start 
 with it and receive more input later.
 Consider a chain of 2 M-R jobs such that the latter 
 reads the output of the former. The second MR job cannot be started until the 
 first has finished completely. This is essentially because Hadoop needs to be 
 told the complete information about the input before beginning the job. 
 The first M-R has produced enough data (not finished yet) that can be 
 processed by another MR job, and hence the other MR need not wait to grab the 
 whole of the input before beginning.  Input splits could be supplied later, but 
 of course before the copy/shuffle phase.
  
 III)   Input data has undergone 

[jira] [Commented] (MAPREDUCE-3210) Support delay scheduling for node locality in MR2's capacity scheduler

2012-01-03 Thread Robert Joseph Evans (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13178780#comment-13178780
 ] 

Robert Joseph Evans commented on MAPREDUCE-3210:


Your concern #1 is already happening.  With MRV2 right now, all the requests 
(global, rack local, and node specific) are made at once.  This results in the 
possibility that on an underused cluster all of them might be fulfilled and 
returned to the AM.  If the AM can make use of one of the containers it will; 
otherwise it will release it.

Perhaps the better way to do this is to have the AM be responsible for making 
the requests at different times.  So, for example, on the first heartbeat after a 
container is needed, only the node-local request is made.  If it is not fulfilled 
after a specific timeout (1 heartbeat by default), then a rack-local request is 
added, and finally the global request is added after another timeout.

It would be nice to have it be more generic so that somehow the requests are 
tied together, but that would require an API change and may not be simple to do 
in the short term.
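
A rough sketch of that escalation policy (hypothetical names and structure; the real AM and scheduler code is considerably more involved):

{code:title=Sketch of heartbeat-based locality escalation (hypothetical)}
// Illustrative only: widen the locality of an outstanding request one level per
// timeout, starting node-local, then rack-local, then off-switch ("*").
enum Locality { NODE, RACK, ANY }

class PendingRequest {
  final String host;
  final String rack;
  int heartbeatsWaited = 0;
  Locality locality = Locality.NODE;

  PendingRequest(String host, String rack) {
    this.host = host;
    this.rack = rack;
  }

  /** Called once per AM heartbeat; returns the resource name to ask for. */
  String nextAsk(int nodeTimeout, int rackTimeout) {
    heartbeatsWaited++;
    if (locality == Locality.NODE && heartbeatsWaited > nodeTimeout) {
      locality = Locality.RACK;                       // rack-local request added
    }
    if (locality == Locality.RACK && heartbeatsWaited > nodeTimeout + rackTimeout) {
      locality = Locality.ANY;                        // finally, the global request
    }
    switch (locality) {
      case NODE: return host;
      case RACK: return rack;
      default:   return "*";
    }
  }
}
{code}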

 Support delay scheduling for node locality in MR2's capacity scheduler
 --

 Key: MAPREDUCE-3210
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3210
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Todd Lipcon

 The capacity scheduler in MR2 doesn't support delay scheduling for achieving 
 node-level locality. So, jobs exhibit poor data locality even if they have 
 good rack locality. Especially on clusters where disk throughput is much 
 better than network capacity, this hurts overall job performance. We should 
 optionally support node-level delay scheduling heuristics similar to what the 
 fair scheduler implements in MR1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3299) Add AMInfo table to the AM job page

2012-01-03 Thread Jonathan Eagles (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated MAPREDUCE-3299:
---

Target Version/s: 0.23.1, 0.24.0  (was: 0.24.0, 0.23.1)
  Status: Open  (was: Patch Available)

 Add AMInfo table to the AM job page
 ---

 Key: MAPREDUCE-3299
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3299
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv2
Affects Versions: 0.23.0, 0.23.1, 0.24.0
Reporter: Siddharth Seth
Assignee: Jonathan Eagles
Priority: Minor
 Attachments: MAPREDUCE-3299.patch


 JobHistory has a table to list all AMs. A similar table can be added to the 
 AM for info on past failed AMs and the current running one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3299) Add AMInfo table to the AM job page

2012-01-03 Thread Jonathan Eagles (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated MAPREDUCE-3299:
---

Target Version/s: 0.23.1, 0.24.0  (was: 0.24.0, 0.23.1)
  Status: Patch Available  (was: Open)

 Add AMInfo table to the AM job page
 ---

 Key: MAPREDUCE-3299
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3299
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv2
Affects Versions: 0.23.0, 0.23.1, 0.24.0
Reporter: Siddharth Seth
Assignee: Jonathan Eagles
Priority: Minor
 Attachments: MAPREDUCE-3299.patch


 JobHistory has a table to list all AMs. A similar table can be added to the 
 AM for info on past failed AMs and the current running one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3354) JobHistoryServer should be started by bin/mapred and not by bin/yarn

2012-01-03 Thread Jonathan Eagles (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated MAPREDUCE-3354:
---

Attachment: MAPREDUCE-3354.patch

 JobHistoryServer should be started by bin/mapred and not by bin/yarn
 

 Key: MAPREDUCE-3354
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3354
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobhistoryserver, mrv2
Affects Versions: 0.23.1, 0.24.0
Reporter: Vinod Kumar Vavilapalli
Assignee: Jonathan Eagles
Priority: Blocker
 Attachments: MAPREDUCE-3354.patch, MAPREDUCE-3354.patch, 
 MAPREDUCE-3354.patch, MAPREDUCE-3354.patch, MAPREDUCE-3354.patch


 JobHistoryServer belongs to mapreduce land.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3354) JobHistoryServer should be started by bin/mapred and not by bin/yarn

2012-01-03 Thread Jonathan Eagles (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated MAPREDUCE-3354:
---

Attachment: (was: MAPREDUCE-3354.patch)

 JobHistoryServer should be started by bin/mapred and not by bin/yarn
 

 Key: MAPREDUCE-3354
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3354
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobhistoryserver, mrv2
Affects Versions: 0.23.1, 0.24.0
Reporter: Vinod Kumar Vavilapalli
Assignee: Jonathan Eagles
Priority: Blocker
 Attachments: MAPREDUCE-3354.patch, MAPREDUCE-3354.patch, 
 MAPREDUCE-3354.patch, MAPREDUCE-3354.patch


 JobHistoryServer belongs to mapreduce land.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3354) JobHistoryServer should be started by bin/mapred and not by bin/yarn

2012-01-03 Thread Jonathan Eagles (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated MAPREDUCE-3354:
---

Attachment: MAPREDUCE-3354.patch

Addressing @Mahadev's comments

 JobHistoryServer should be started by bin/mapred and not by bin/yarn
 

 Key: MAPREDUCE-3354
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3354
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobhistoryserver, mrv2
Affects Versions: 0.23.1, 0.24.0
Reporter: Vinod Kumar Vavilapalli
Assignee: Jonathan Eagles
Priority: Blocker
 Attachments: MAPREDUCE-3354.patch, MAPREDUCE-3354.patch, 
 MAPREDUCE-3354.patch, MAPREDUCE-3354.patch


 JobHistoryServer belongs to mapreduce land.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3354) JobHistoryServer should be started by bin/mapred and not by bin/yarn

2012-01-03 Thread Jonathan Eagles (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated MAPREDUCE-3354:
---

Attachment: (was: MAPREDUCE-3354.patch)

 JobHistoryServer should be started by bin/mapred and not by bin/yarn
 

 Key: MAPREDUCE-3354
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3354
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobhistoryserver, mrv2
Affects Versions: 0.23.1, 0.24.0
Reporter: Vinod Kumar Vavilapalli
Assignee: Jonathan Eagles
Priority: Blocker
 Attachments: MAPREDUCE-3354.patch, MAPREDUCE-3354.patch, 
 MAPREDUCE-3354.patch, MAPREDUCE-3354.patch


 JobHistoryServer belongs to mapreduce land.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3548) write unit tests for web services for mapreduce app master and job history server

2012-01-03 Thread Thomas Graves (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13178869#comment-13178869
 ] 

Thomas Graves commented on MAPREDUCE-3548:
--

Review comment from 3547 to be done here - move the NM mock objects into webapp 
package since they aren't used by any other tests and many other tests already 
have their own mock objects.

 write unit tests for web services for mapreduce app master and job history 
 server
 -

 Key: MAPREDUCE-3548
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3548
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical

 write more unit tests for mapreduce application master and job history server 
 web services added in MAPREDUCE-2863

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3354) JobHistoryServer should be started by bin/mapred and not by bin/yarn

2012-01-03 Thread Jonathan Eagles (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated MAPREDUCE-3354:
---

Target Version/s: 0.23.1, 0.24.0  (was: 0.24.0, 0.23.1)
  Status: Patch Available  (was: Open)

 JobHistoryServer should be started by bin/mapred and not by bin/yarn
 

 Key: MAPREDUCE-3354
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3354
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobhistoryserver, mrv2
Affects Versions: 0.23.1, 0.24.0
Reporter: Vinod Kumar Vavilapalli
Assignee: Jonathan Eagles
Priority: Blocker
 Attachments: MAPREDUCE-3354.patch, MAPREDUCE-3354.patch, 
 MAPREDUCE-3354.patch, MAPREDUCE-3354.patch, MAPREDUCE-3354.patch


 JobHistoryServer belongs to mapreduce land.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3354) JobHistoryServer should be started by bin/mapred and not by bin/yarn

2012-01-03 Thread Jonathan Eagles (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated MAPREDUCE-3354:
---

Attachment: MAPREDUCE-3354.patch

 JobHistoryServer should be started by bin/mapred and not by bin/yarn
 

 Key: MAPREDUCE-3354
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3354
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobhistoryserver, mrv2
Affects Versions: 0.23.1, 0.24.0
Reporter: Vinod Kumar Vavilapalli
Assignee: Jonathan Eagles
Priority: Blocker
 Attachments: MAPREDUCE-3354.patch, MAPREDUCE-3354.patch, 
 MAPREDUCE-3354.patch, MAPREDUCE-3354.patch, MAPREDUCE-3354.patch


 JobHistoryServer belongs to mapreduce land.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3354) JobHistoryServer should be started by bin/mapred and not by bin/yarn

2012-01-03 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13178875#comment-13178875
 ] 

Hadoop QA commented on MAPREDUCE-3354:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12509318/MAPREDUCE-3354.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1525//console

This message is automatically generated.

 JobHistoryServer should be started by bin/mapred and not by bin/yarn
 

 Key: MAPREDUCE-3354
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3354
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobhistoryserver, mrv2
Affects Versions: 0.23.1, 0.24.0
Reporter: Vinod Kumar Vavilapalli
Assignee: Jonathan Eagles
Priority: Blocker
 Attachments: MAPREDUCE-3354.patch, MAPREDUCE-3354.patch, 
 MAPREDUCE-3354.patch, MAPREDUCE-3354.patch, MAPREDUCE-3354.patch


 JobHistoryServer belongs to mapreduce land.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3566) MR AM slows down due to repeatedly constructing ContainerLaunchContext

2012-01-03 Thread Vinod Kumar Vavilapalli (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated MAPREDUCE-3566:
---

Status: Patch Available  (was: Open)

 MR AM slows down due to repeatedly constructing ContainerLaunchContext
 --

 Key: MAPREDUCE-3566
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3566
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: mr-am, mrv2
Affects Versions: 0.23.0
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
Priority: Critical
 Fix For: 0.23.1

 Attachments: MAPREDUCE-3566-20111215.txt, 
 MAPREDUCE-3566-20111220.txt, MAPREDUCE-3566-20120103.txt


 The construction of the context is expensive; it includes per-task trips to the 
 NameNode to obtain information about job.jar, job splits, etc., which is 
 redundant across all tasks.
 We should have a common job-level context and a task-specific context 
 inheriting from the common job-level context.
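
One way to picture the proposed split (hypothetical names only, not the actual patch): do the job-wide lookups once and reuse them for every task's launch context.

{code:title=Sketch: reuse job-level launch information across tasks (hypothetical)}
// Illustrative only: the expensive, job-wide lookups (job.jar location, split metadata,
// common environment) are performed once; each task only adds its task-specific bits.
class JobLevelContext {
  final java.util.Map<String, String> commonEnv;
  final java.util.Map<String, String> commonResources; // e.g. job.jar, job.xml locations

  JobLevelContext(java.util.Map<String, String> env,
                  java.util.Map<String, String> resources) {
    this.commonEnv = env;             // built once per job, after the NameNode trips
    this.commonResources = resources;
  }

  java.util.Map<String, String> resourcesForTask(String taskSplitInfo) {
    java.util.Map<String, String> r =
        new java.util.HashMap<String, String>(commonResources);
    r.put("task.split", taskSplitInfo); // only the per-task delta is added here
    return r;
  }
}
{code}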

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3566) MR AM slows down due to repeatedly constructing ContainerLaunchContext

2012-01-03 Thread Vinod Kumar Vavilapalli (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated MAPREDUCE-3566:
---

Attachment: MAPREDUCE-3566-20120103.txt

Final clean patch with tests.

 MR AM slows down due to repeatedly constructing ContainerLaunchContext
 --

 Key: MAPREDUCE-3566
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3566
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: mr-am, mrv2
Affects Versions: 0.23.0
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
Priority: Critical
 Fix For: 0.23.1

 Attachments: MAPREDUCE-3566-20111215.txt, 
 MAPREDUCE-3566-20111220.txt, MAPREDUCE-3566-20120103.txt


 The construction of the context is expensive; it includes per-task trips to the 
 NameNode to obtain information about job.jar, job splits, etc., which is 
 redundant across all tasks.
 We should have a common job-level context and a task-specific context 
 inheriting from the common job-level context.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-3613) web service calls header contains 2 content types

2012-01-03 Thread Thomas Graves (Created) (JIRA)
web service calls header contains 2 content types
-

 Key: MAPREDUCE-3613
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3613
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Blocker


When requesting info from the web services REST API, curl shows that the response 
contains a content-type of both text and json or xml:

 Accept: application/xml

 HTTP/1.1 200 OK
 Content-Type: text/plain; charset=utf-8
 Content-Type: application/xml

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3613) web service calls header contains 2 content types

2012-01-03 Thread Thomas Graves (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated MAPREDUCE-3613:
-

Issue Type: Sub-task  (was: Bug)
Parent: MAPREDUCE-2863

 web service calls header contains 2 content types
 -

 Key: MAPREDUCE-3613
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3613
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Blocker

 When requesting info from the web services REST API, curl shows that the 
 response contains a content-type of both text and json or xml:
  Accept: application/xml
 
  HTTP/1.1 200 OK
  Content-Type: text/plain; charset=utf-8
  Content-Type: application/xml

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3548) write unit tests for web services for mapreduce app master and job history server

2012-01-03 Thread Thomas Graves (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13178886#comment-13178886
 ] 

Thomas Graves commented on MAPREDUCE-3548:
--

Note that the issue with it returning 2 content types will be addressed in 
MAPREDUCE-3613.

 write unit tests for web services for mapreduce app master and job history 
 server
 -

 Key: MAPREDUCE-3548
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3548
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical

 write more unit tests for mapreduce application master and job history server 
 web services added in MAPREDUCE-2863

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3613) web service calls header contains 2 content types

2012-01-03 Thread Thomas Graves (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13178890#comment-13178890
 ] 

Thomas Graves commented on MAPREDUCE-3613:
--

This appears to be because the common HttpServer class sets the content-type 
in the doFilter routine first, and later Jersey sets it to what it actually 
is.  Instead of overwriting it, it appears to append it.  Need to investigate 
a fix.
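
For reference, a simplified illustration of how two Content-Type lines can end up in one response (this is not the actual HttpServer filter code): a filter that adds the header, rather than setting/overwriting it, leaves room for the framework's own value to appear as a second line, depending on the container.

{code:title=Simplified illustration, not the actual HttpServer filter}
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

public class ContentTypeFilter implements Filter {
  public void init(FilterConfig conf) {}
  public void destroy() {}

  public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
      throws IOException, ServletException {
    HttpServletResponse httpRes = (HttpServletResponse) res;
    // addHeader appends rather than replaces; depending on the container, the real
    // content type written later by the framework may then show up as a second
    // Content-Type header instead of overwriting this one.
    httpRes.addHeader("Content-Type", "text/plain; charset=utf-8");
    chain.doFilter(req, res);
  }
}
{code}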

 web service calls header contains 2 content types
 -

 Key: MAPREDUCE-3613
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3613
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Blocker

 When requesting info from the web services REST API, curl shows that the 
 response contains a content-type of both text and json or xml:
  Accept: application/xml
 
  HTTP/1.1 200 OK
  Content-Type: text/plain; charset=utf-8
  Content-Type: application/xml

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (MAPREDUCE-3612) Task.TaskReporter.done method blocked for some time when task is finishing

2012-01-03 Thread Arun C Murthy (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy reassigned MAPREDUCE-3612:


Assignee: Binglin Chang

 Task.TaskReporter.done method blocked for some time when task is finishing
 --

 Key: MAPREDUCE-3612
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3612
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Binglin Chang
Assignee: Binglin Chang
 Attachments: MAPREDUCE-3612.patch


 We recently ran some tests to evaluate the performance of different Hadoop 
 versions (1.0, 0.23, and a Baidu internal version) and found some odd results. 
 One of them is that in 1.0, Task.TaskReporter.done() takes too much time, about 
 2s, which is bad for small tasks. After reviewing the source code and adding 
 some logging, we found that the following code blocks Task.TaskReporter.done:
 {code:title=src/mapred/org/apache/hadoop/mapred/Task.java}
  658   try {
  659 Thread.sleep(PROGRESS_INTERVAL);
  660   }
  723 public void stopCommunicationThread() throws InterruptedException {
  724   // Updating resources specified in ResourceCalculatorPlugin
  725   if (pingThread != null) {
  726 synchronized(lock) {
  727   while(!done) {
  728 lock.wait();
  729   }
  730 }
  731 pingThread.interrupt();
  732 pingThread.join();
  733   }
  734 }
 {code}
 Originally lines 724-730 did not exist, and I don't know why they were added. If 
 they are needed, we can replace Thread.sleep with Object.wait(timeout) and 
 Object.notify instead, so it won't block.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3583) ProcfsBasedProcessTree#constructProcessInfo() may throw NumberFormatException

2012-01-03 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13178893#comment-13178893
 ] 

Zhihong Yu commented on MAPREDUCE-3583:
---

See 
https://issues.apache.org/jira/browse/HBASE-5064?focusedCommentId=13176830page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13176830
 for one scenario where NumberFormatException should have been fixed so that 
other exceptions can be more easily uncovered.

 ProcfsBasedProcessTree#constructProcessInfo() may throw NumberFormatException
 -

 Key: MAPREDUCE-3583
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3583
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.20.205.0
 Environment: 64-bit Linux:
 asf011.sp2.ygridcore.net
 Linux asf011.sp2.ygridcore.net 2.6.32-33-server #71-Ubuntu SMP Wed Jul 20 
 17:42:25 UTC 2011 x86_64 GNU/Linux
Reporter: Zhihong Yu
 Attachments: mapreduce-3583.txt


 HBase PreCommit builds frequently gave us NumberFormatException.
 From 
 https://builds.apache.org/job/PreCommit-HBASE-Build/553//testReport/org.apache.hadoop.hbase.mapreduce/TestHFileOutputFormat/testMRIncrementalLoad/:
 {code}
 2011-12-20 01:44:01,180 WARN  [main] mapred.JobClient(784): No job jar file 
 set.  User classes may not be found. See JobConf(Class) or 
 JobConf#setJar(String).
 java.lang.NumberFormatException: For input string: 18446743988060683582
   at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
   at java.lang.Long.parseLong(Long.java:422)
   at java.lang.Long.parseLong(Long.java:468)
   at 
 org.apache.hadoop.util.ProcfsBasedProcessTree.constructProcessInfo(ProcfsBasedProcessTree.java:413)
   at 
 org.apache.hadoop.util.ProcfsBasedProcessTree.getProcessTree(ProcfsBasedProcessTree.java:148)
   at 
 org.apache.hadoop.util.LinuxResourceCalculatorPlugin.getProcResourceValues(LinuxResourceCalculatorPlugin.java:401)
   at org.apache.hadoop.mapred.Task.initialize(Task.java:536)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:353)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
   at org.apache.hadoop.mapred.Child.main(Child.java:249)
 {code}
 From the hadoop 0.20.205 source code, it looks like the ppid was 
 18446743988060683582, causing the NFE:
 {code}
 // Set (name) (ppid) (pgrpId) (session) (utime) (stime) (vsize) (rss)
  pinfo.updateProcessInfo(m.group(2), Integer.parseInt(m.group(3)),
 {code}
 You can find information on the OS at the beginning of 
 https://builds.apache.org/job/PreCommit-HBASE-Build/553/console:
 {code}
 asf011.sp2.ygridcore.net
 Linux asf011.sp2.ygridcore.net 2.6.32-33-server #71-Ubuntu SMP Wed Jul 20 
 17:42:25 UTC 2011 x86_64 GNU/Linux
 core file size  (blocks, -c) 0
 data seg size   (kbytes, -d) unlimited
 scheduling priority (-e) 20
 file size   (blocks, -f) unlimited
 pending signals (-i) 16382
 max locked memory   (kbytes, -l) 64
 max memory size (kbytes, -m) unlimited
 open files  (-n) 6
 pipe size(512 bytes, -p) 8
 POSIX message queues (bytes, -q) 819200
 real-time priority  (-r) 0
 stack size  (kbytes, -s) 8192
 cpu time   (seconds, -t) unlimited
 max user processes  (-u) 2048
 virtual memory  (kbytes, -v) unlimited
 file locks  (-x) unlimited
 6
 Running in Jenkins mode
 {code}
 From Nicolas Sze:
 {noformat}
 It looks like that the ppid is a 64-bit positive integer but Java long is 
 signed and so only works with 63-bit positive integers.  In your case,
   2^64 > 18446743988060683582 > 2^63.
 Therefore, there is a NFE. 
 {noformat}
 I propose changing allProcessInfo to Map<String, ProcessInfo> so that we 
 avoid this problem by not parsing the large integer.
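
For illustration, a standalone sketch (not the patch itself; ProcessInfo is replaced by a String placeholder here): the reported ppid only fails because it is pushed through Long.parseLong, and keeping it as a String key sidesteps the problem.

{code:title=Standalone sketch of the failure and the proposed direction}
import java.util.HashMap;
import java.util.Map;

public class PpidParseSketch {
  public static void main(String[] args) {
    String ppid = "18446743988060683582";   // greater than 2^63 - 1, so it cannot fit in a signed long
    try {
      Long.parseLong(ppid);                  // this is what throws in ProcfsBasedProcessTree
    } catch (NumberFormatException e) {
      System.out.println("NFE as in the report: " + e.getMessage());
    }
    // Proposed direction: key process info by the pid/ppid String instead of parsing it.
    Map<String, String> allProcessInfo = new HashMap<String, String>();
    allProcessInfo.put(ppid, "process info placeholder");
    System.out.println("Stored without parsing: " + allProcessInfo.containsKey(ppid));
  }
}
{code}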

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (MAPREDUCE-3596) Sort benchmark got hang after completion of 99% map phase

2012-01-03 Thread Arun C Murthy (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy reassigned MAPREDUCE-3596:


Assignee: Vinod Kumar Vavilapalli

 Sort benchmark got hang after completion of 99% map phase
 -

 Key: MAPREDUCE-3596
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3596
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: applicationmaster, mrv2
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Assignee: Vinod Kumar Vavilapalli
Priority: Blocker
 Attachments: logs.tar.bz2


 Courtesy [~vinaythota]
 {quote}
 Ran the sort benchmark a couple of times, and every time the job hung after 
 completing 99% of the map phase. Some map tasks failed. Also, some of the 
 pending map tasks were not scheduled.
 Cluster size is 350 nodes.
 Build Details:
 ==
 Compiled:   Fri Dec 9 16:25:27 PST 2011 by someone from 
 branches/branch-0.23/hadoop-common-project/hadoop-common 
 ResourceManager version:revision 1212681 by someone source checksum 
 on Fri Dec 9 16:52:07 PST 2011
 Hadoop version: revision 1212592 by someone Fri Dec 9 16:25:27 PST 
 2011
 {quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3566) MR AM slows down due to repeatedly constructing ContainerLaunchContext

2012-01-03 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13178907#comment-13178907
 ] 

Hadoop QA commented on MAPREDUCE-3566:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12509321/MAPREDUCE-3566-20120103.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 11 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 1 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1526//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1526//artifact/trunk/hadoop-mapreduce-project/patchprocess/newPatchFindbugsWarningshadoop-mapreduce-client-app.html
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1526//console

This message is automatically generated.

 MR AM slows down due to repeatedly constructing ContainerLaunchContext
 --

 Key: MAPREDUCE-3566
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3566
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: mr-am, mrv2
Affects Versions: 0.23.0
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
Priority: Critical
 Fix For: 0.23.1

 Attachments: MAPREDUCE-3566-20111215.txt, 
 MAPREDUCE-3566-20111220.txt, MAPREDUCE-3566-20120103.txt


 The construction of the context is expensive; it includes per-task trips to the 
 NameNode to obtain information about job.jar, job splits, etc., which is 
 redundant across all tasks.
 We should have a common job-level context and a task-specific context 
 inheriting from the common job-level context.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-1744) DistributedCache creates its own FileSytem instance when adding a file/archive to the path

2012-01-03 Thread Alejandro Abdelnur (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated MAPREDUCE-1744:
--

Attachment: MAPREDUCE-1744-0.23-trunk.patch

Patch ported to trunk and 0.23.

The patch cannot use the original testcase as it no longer exists. Piggybacking on an 
existing testcase.

 DistributedCache creates its own FileSytem instance when adding a 
 file/archive to the path
 --

 Key: MAPREDUCE-1744
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1744
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Dick King
Assignee: Alejandro Abdelnur
 Attachments: BZ-3503564--2010-05-06.patch, 
 MAPREDUCE-1744-0.23-trunk.patch, MAPREDUCE-1744.patch, h1744.patch, 
 mapred-1744-1.patch, mapred-1744-2.patch, mapred-1744-3.patch, 
 mapred-1744.patch


 According to the contract of {{UserGroupInformation.doAs()}}, the only 
 required operations within the {{doAs()}} block are the creation of a 
 {{JobClient}} or getting a {{FileSystem}}.
 The {{DistributedCache.add(File/Archive)ToClasspath()}} methods create a 
 {{FileSystem}} instance outside of the {{doAs()}} block; this {{FileSystem}} 
 instance is not in the scope of the proxy user but of the superuser, and 
 permissions may make the method fail.
 One option is to overload the methods above to receive a filesystem.
 Another option is to obtain the {{FileSystem}} within a {{doAs()}} block; for 
 this it would be required to have the proxy user set in the passed 
 configuration.
 The second option seems nicer, but I don't know if the proxy user is available 
 as a property in the jobconf.
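
A minimal sketch of the second option, assuming the proxy user's UserGroupInformation is available to the caller (class and method names here are illustrative, not the patch itself):

{code:title=Sketch: obtaining the FileSystem inside doAs (illustrative)}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class DoAsFileSystemSketch {
  // Get the FileSystem as the proxy user, so later DistributedCache calls that reuse
  // it carry the proxy user's permissions rather than the superuser's.
  static FileSystem fileSystemAs(UserGroupInformation proxyUser, final Configuration conf)
      throws Exception {
    return proxyUser.doAs(new PrivilegedExceptionAction<FileSystem>() {
      public FileSystem run() throws Exception {
        return FileSystem.get(conf);
      }
    });
  }
}
{code}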

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3490) RMContainerAllocator counts failed maps towards Reduce ramp up

2012-01-03 Thread Vinod Kumar Vavilapalli (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13178910#comment-13178910
 ] 

Vinod Kumar Vavilapalli commented on MAPREDUCE-3490:


bq. I am attaching a patch which drastically simplify this, without the need to 
add new events. Also I have removed the completedMaps and completedReduces 
counts in RMContainerAllocator. Arun/Vinod - see if this make sense ?
Thanks for the explanation, Sharad. Makes sense. Obviously we missed the big 
picture here :) At any rate, this code definitely needs some cleanup, way too 
complicated for my simple mind to track all of it ;)

Thanks for the update, Arun. Looking at the patch now.

 RMContainerAllocator counts failed maps towards Reduce ramp up
 --

 Key: MAPREDUCE-3490
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3490
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mr-am, mrv2
Affects Versions: 0.23.0
Reporter: Siddharth Seth
Assignee: Sharad Agarwal
Priority: Blocker
 Attachments: MAPREDUCE-3490.patch, MAPREDUCE-3490.patch, 
 MAPREDUCE-3490.patch, MAPREDUCE-3490.patch, MAPREDUCE-3490.patch, 
 MAPREDUCE-3490.patch, MR-3490-alternate.patch, MR-3490-alternate1.patch


 The RMContainerAllocator does not differentiate between failed and successful 
 maps while calculating whether reduce tasks are ready to launch. Failed tasks 
 are also counted towards total completed tasks. 
 Example: 4 failed maps, 10 total maps; map % complete = 4/14 * 100 instead of 
 being 0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3131) Docs and Scripts for setting up single node MRV2 cluster.

2012-01-03 Thread Arun C Murthy (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated MAPREDUCE-3131:
-

Status: Open  (was: Patch Available)

Prashanth, this is great progress. Sorry for being late on this one.

Thinking a bit more... 

# It would be really nice to not require a new 'run.sh' script. Can we just get 
bin/hadoop-daemon.sh and bin/yarn-daemon.sh to start/stop multiple DNs/NMs on 
the same node? Then we can just document this in our site-docs.
# Can we add the -debug functionality to all daemons?

Basically, I'm trying to avoid maintaining another set of scripts (run.sh) and put 
everything in the main scripts/configs...

 Docs and Scripts for setting up single node MRV2 cluster. 
 --

 Key: MAPREDUCE-3131
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3131
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: documentation, mrv2
Affects Versions: 0.24.0
Reporter: Prashant Sharma
Assignee: Prashant Sharma
Priority: Trivial
  Labels: documentation, hadoop
 Fix For: 0.24.0

 Attachments: MAPREDUCE-3131.patch, MAPREDUCE-3131.patch, 
 MAPREDUCE-3131.patch, MAPREDUCE-3131.patch

   Original Estimate: 168h
  Time Spent: 96h
  Remaining Estimate: 72h

 Scripts to run a single node cluster with a default configuration. Takes care 
 of running all the daemons including hdfs and yarn. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3529) TokenCache does not cache viewfs credentials correctly

2012-01-03 Thread Siddharth Seth (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated MAPREDUCE-3529:
--

Status: Patch Available  (was: Open)

 TokenCache does not cache viewfs credentials correctly
 --

 Key: MAPREDUCE-3529
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3529
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Siddharth Seth
Assignee: Siddharth Seth
Priority: Critical
 Attachments: MR3529_v1.txt, MR3529_v2.txt


 viewfs returns a list of delegation tokens for the actual namenodes. 
 TokenCache caches these based on the actual service name - subsequent calls 
 to TokenCache end up trying to get a new set of tokens.
 Tasks which happen to access TokenCache fail when using viewfs - since they 
 end up trying to get a new set of tokens even though the tokens are already 
 available.
 {noformat}
 Error: java.io.IOException: Delegation Token can be issued only with kerberos 
 or web authentication
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:4027)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDelegationToken(NameNodeRpcServer.java:281)
 at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:365)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1490)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1486)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1484)
 at org.apache.hadoop.ipc.Client.call(Client.java:1085)
 at 
 org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:193)
 at $Proxy8.getDelegationToken(Unknown Source)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:100)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:65)
 at $Proxy8.getDelegationToken(Unknown Source)
 at 
 org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:456)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:812)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationTokens(DistributedFileSystem.java:839)
 at 
 org.apache.hadoop.fs.viewfs.ChRootedFileSystem.getDelegationTokens(ChRootedFileSystem.java:311)
 at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem.getDelegationTokens(ViewFileSystem.java:490)
 at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:144)
 at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:91)
 at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:84)
 {noformat}
 This will likely require some changes in viewfs/hdfs - will open a Jira with 
 details.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-1744) DistributedCache creates its own FileSytem instance when adding a file/archive to the path

2012-01-03 Thread Alejandro Abdelnur (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated MAPREDUCE-1744:
--

Status: Patch Available  (was: Reopened)

 DistributedCache creates its own FileSytem instance when adding a 
 file/archive to the path
 --

 Key: MAPREDUCE-1744
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1744
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Dick King
Assignee: Alejandro Abdelnur
 Attachments: BZ-3503564--2010-05-06.patch, 
 MAPREDUCE-1744-0.23-trunk.patch, MAPREDUCE-1744.patch, h1744.patch, 
 mapred-1744-1.patch, mapred-1744-2.patch, mapred-1744-3.patch, 
 mapred-1744.patch


 According to the contract of {{UserGroupInformation.doAs()}} the only 
 required operations within the {{doAs()}} block are the
 creation of a {{JobClient}} or getting a {{FileSystem}} .
 The {{DistributedCache.add(File/Archive)ToClasspath()}} methods create a 
 {{FileSystem}} instance outside of the {{doAs()}} block,
 this {{FileSystem}} instance is not in the scope of the proxy user but of the 
 superuser and permissions may make the method
 fail.
 One option is to overload the methods above to receive a filesystem.
 Another option is to obtain the {{FileSystem}} within a {{doAs()}} block; 
 for this it would be required to have the proxy
 user set in the passed configuration.
 The second option seems nicer, but I don't know if the proxy user is set as a 
 property in the jobconf.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3529) TokenCache does not cache viewfs credentials correctly

2012-01-03 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13178962#comment-13178962
 ] 

Hadoop QA commented on MAPREDUCE-3529:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12508124/MR3529_v2.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

-1 javac.  The patch appears to cause tar ant target to fail.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to cause Findbugs (version 1.3.9) to fail.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed the unit tests build

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1527//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1527//console

This message is automatically generated.

 TokenCache does not cache viewfs credentials correctly
 --

 Key: MAPREDUCE-3529
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3529
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Siddharth Seth
Assignee: Siddharth Seth
Priority: Critical
 Attachments: MR3529_v1.txt, MR3529_v2.txt


 viewfs returns a list of delegation tokens for the actual namenodes. 
 TokenCache caches these based on the actual service name - subsequent calls 
 to TokenCache end up trying to get a new set of tokens.
 Tasks which happen to access TokenCache fail when using viewfs - since they 
 end up trying to get a new set of tokens even though the tokens are already 
 available.
 {noformat}
 Error: java.io.IOException: Delegation Token can be issued only with kerberos 
 or web authentication
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:4027)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDelegationToken(NameNodeRpcServer.java:281)
 at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:365)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1490)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1486)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1484)
 at org.apache.hadoop.ipc.Client.call(Client.java:1085)
 at 
 org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:193)
 at $Proxy8.getDelegationToken(Unknown Source)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:100)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:65)
 at $Proxy8.getDelegationToken(Unknown Source)
 at 
 org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:456)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:812)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationTokens(DistributedFileSystem.java:839)
 at 
 org.apache.hadoop.fs.viewfs.ChRootedFileSystem.getDelegationTokens(ChRootedFileSystem.java:311)
 at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem.getDelegationTokens(ViewFileSystem.java:490)
 at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:144)
 at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:91)
 at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:84)
 {noformat}
 This will likely require some changes in viewfs/hdfs - will open a Jira with 
 details.

[jira] [Updated] (MAPREDUCE-3490) RMContainerAllocator counts failed maps towards Reduce ramp up

2012-01-03 Thread Vinod Kumar Vavilapalli (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated MAPREDUCE-3490:
---

Status: Open  (was: Patch Available)

Patch looks better now, except that we should continue to track completed tasks 
and not just successfulTasks, to accommodate jobs for which task-failures are 
permissible.
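
As a side note, here is a minimal, self-contained sketch (hypothetical names, not the 
actual RMContainerAllocator code) of why the choice of counter matters for reduce 
ramp-up. The 4/14 figure from the report is reproduced under the assumption that 
failed attempts are re-scheduled and so inflate the attempt count:

{code}
// Illustration only: hypothetical names, not the MR AM implementation.
public class RampUpSketch {
  public static void main(String[] args) {
    int totalMaps = 10;     // maps that must eventually succeed
    int succeededMaps = 0;  // attempts that finished successfully
    int failedMaps = 4;     // attempts that failed and will be retried

    // Treating every finished attempt as progress (the reported behaviour)
    // signals roughly 29% map completion although no map output exists yet.
    float attemptBased = (float) (succeededMaps + failedMaps)
        / (totalMaps + failedMaps);                        // 4/14, about 0.29

    // Counting only successful maps gives the signal the reduces actually need.
    float successBased = (float) succeededMaps / totalMaps; // 0.0

    System.out.printf("attempt-based ramp-up signal: %.2f%n", attemptBased);
    System.out.printf("success-based ramp-up signal: %.2f%n", successBased);
  }
}
{code}

Per the comment above, the completed count (successful plus failed) is still worth 
tracking separately for jobs that tolerate a percentage of map failures.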

 RMContainerAllocator counts failed maps towards Reduce ramp up
 --

 Key: MAPREDUCE-3490
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3490
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mr-am, mrv2
Affects Versions: 0.23.0
Reporter: Siddharth Seth
Assignee: Sharad Agarwal
Priority: Blocker
 Attachments: MAPREDUCE-3490.patch, MAPREDUCE-3490.patch, 
 MAPREDUCE-3490.patch, MAPREDUCE-3490.patch, MAPREDUCE-3490.patch, 
 MAPREDUCE-3490.patch, MR-3490-alternate.patch, MR-3490-alternate1.patch


 The RMContainerAllocator does not differentiate between failed and successful 
 maps while calculating whether reduce tasks are ready to launch. Failed tasks 
 are also counted towards total completed tasks. 
 Example. 4 failed maps, 10 total maps. Map%complete = 4/14 * 100 instead of 
 being 0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-1744) DistributedCache creates its own FileSytem instance when adding a file/archive to the path

2012-01-03 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13178983#comment-13178983
 ] 

Hadoop QA commented on MAPREDUCE-1744:
--

+1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12509328/MAPREDUCE-1744-0.23-trunk.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1528//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1528//console

This message is automatically generated.

 DistributedCache creates its own FileSytem instance when adding a 
 file/archive to the path
 --

 Key: MAPREDUCE-1744
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1744
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Dick King
Assignee: Alejandro Abdelnur
 Attachments: BZ-3503564--2010-05-06.patch, 
 MAPREDUCE-1744-0.23-trunk.patch, MAPREDUCE-1744.patch, h1744.patch, 
 mapred-1744-1.patch, mapred-1744-2.patch, mapred-1744-3.patch, 
 mapred-1744.patch


 According to the contract of {{UserGroupInformation.doAs()}} the only 
 required operations within the {{doAs()}} block are the
 creation of a {{JobClient}} or getting a {{FileSystem}} .
 The {{DistributedCache.add(File/Archive)ToClasspath()}} methods create a 
 {{FileSystem}} instance outside of the {{doAs()}} block,
 this {{FileSystem}} instance is not in the scope of the proxy user but of the 
 superuser and permissions may make the method
 fail.
 One option is to overload the methods above to receive a filesystem.
 Another option is to obtain the {{FileSystem}} within a {{doAs()}} block; 
 for this it would be required to have the proxy
 user set in the passed configuration.
 The second option seems nicer, but I don't know if the proxy user is set as a 
 property in the jobconf.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3529) TokenCache does not cache viewfs credentials correctly

2012-01-03 Thread Siddharth Seth (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated MAPREDUCE-3529:
--

Status: Open  (was: Patch Available)

Cancelling - Jenkins doesn't seem to have picked up HADOOP-7933.

Thanks for taking a look, Daryn.

bq. If fs.getDelegationTokens(delegTokenRenewer, credentials) returns null 
because all tokens are already acquired, isn't this going to cause 
fs.getDelegationTokens(delegTokenRenewer) to be unnecessarily called?
fs.getDelegationTokens(renewer, creds) is supposed to return the full list of 
credentials (not an incremental list). If there are no tokens, the assumption is 
that the API may not have been implemented, hence the fallback to 
getDelegationToken.

bq. credentials.addToken(fsNameText, token) really should be 
credentials.addToken(token.getService(), token).
Will upload a new patch with this change.
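
To spell out the lookup-then-fallback semantics described here, a minimal sketch 
that assumes the HADOOP-7933 getDelegationTokens(renewer, credentials) API is 
available; it is an illustration, not the actual TokenCache change:

{code}
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;

// Illustration only: not the actual TokenCache patch.
public class TokenFallbackSketch {

  static void obtainTokens(FileSystem fs, String renewer, Credentials creds)
      throws IOException {
    // Ask for the full token list (viewfs returns the tokens of all
    // underlying namenodes); tokens already held in creds are skipped.
    List<Token<?>> tokens = fs.getDelegationTokens(renewer, creds);

    if (tokens == null || tokens.isEmpty()) {
      // The list API may not be implemented by this filesystem;
      // fall back to the single-token call.
      Token<?> token = fs.getDelegationToken(renewer);
      if (token != null) {
        // Key the cache on the token's own service rather than the
        // filesystem name, so later lookups find it (the review point above).
        creds.addToken(token.getService(), token);
      }
    } else {
      for (Token<?> t : tokens) {
        creds.addToken(t.getService(), t);
      }
    }
  }
}
{code}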

 TokenCache does not cache viewfs credentials correctly
 --

 Key: MAPREDUCE-3529
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3529
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Siddharth Seth
Assignee: Siddharth Seth
Priority: Critical
 Attachments: MR3529_v1.txt, MR3529_v2.txt


 viewfs returns a list of delegation tokens for the actual namenodes. 
 TokenCache caches these based on the actual service name - subsequent calls 
 to TokenCache end up trying to get a new set of tokens.
 Tasks which happen to access TokenCache fail when using viewfs - since they 
 end up trying to get a new set of tokens even though the tokens are already 
 available.
 {noformat}
 Error: java.io.IOException: Delegation Token can be issued only with kerberos 
 or web authentication
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:4027)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDelegationToken(NameNodeRpcServer.java:281)
 at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:365)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1490)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1486)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1484)
 at org.apache.hadoop.ipc.Client.call(Client.java:1085)
 at 
 org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:193)
 at $Proxy8.getDelegationToken(Unknown Source)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:100)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:65)
 at $Proxy8.getDelegationToken(Unknown Source)
 at 
 org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:456)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:812)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationTokens(DistributedFileSystem.java:839)
 at 
 org.apache.hadoop.fs.viewfs.ChRootedFileSystem.getDelegationTokens(ChRootedFileSystem.java:311)
 at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem.getDelegationTokens(ViewFileSystem.java:490)
 at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:144)
 at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:91)
 at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:84)
 {noformat}
 This will likely require some changes in viewfs/hdfs - will open a Jira with 
 details.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Updated] (MAPREDUCE-3566) MR AM slows down due to repeatedly constructing ContainerLaunchContext

2012-01-03 Thread Vinod Kumar Vavilapalli (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated MAPREDUCE-3566:
---

Status: Open  (was: Patch Available)

 MR AM slows down due to repeatedly constructing ContainerLaunchContext
 --

 Key: MAPREDUCE-3566
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3566
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: mr-am, mrv2
Affects Versions: 0.23.0
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
Priority: Critical
 Fix For: 0.23.1

 Attachments: MAPREDUCE-3566-20111215.txt, 
 MAPREDUCE-3566-20111220.txt, MAPREDUCE-3566-20120103.1.txt, 
 MAPREDUCE-3566-20120103.txt


 The construction of the context is expensive: it includes per-task trips to 
 the NameNode to obtain information about job.jar, job splits, etc., which is 
 redundant across all tasks.
 We should have a common job-level context and a task-specific context 
 inheriting from the common job-level context.
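
As an illustration of the proposed job-level/task-level split, here is a toy sketch 
using stand-in classes rather than the real ContainerLaunchContext API; the resource 
names and paths are made up:

{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustration only: stand-in classes, not the real ContainerLaunchContext API.
public class LaunchContextSketch {

  /** Stands in for the job-wide part of a launch context, built exactly once. */
  static class JobLevelContext {
    final Map<String, String> commonEnv = new HashMap<String, String>();
    final Map<String, String> commonResources = new HashMap<String, String>();
  }

  /** Stands in for a per-task context derived from the shared one. */
  static class TaskContext {
    final Map<String, String> env;
    final Map<String, String> resources;
    final List<String> commands;

    TaskContext(JobLevelContext shared, String taskId) {
      // Copy the shared pieces (resolved once, e.g. a single NameNode trip)...
      env = new HashMap<String, String>(shared.commonEnv);
      resources = new HashMap<String, String>(shared.commonResources);
      // ...and add only what actually differs per task.
      commands = new ArrayList<String>();
      commands.add("run-task " + taskId);
    }
  }

  public static void main(String[] args) {
    JobLevelContext shared = new JobLevelContext();
    shared.commonResources.put("job.jar", "hdfs://nn/jobs/job_1/job.jar"); // made-up path
    shared.commonEnv.put("CLASSPATH", "$PWD/*");

    for (int i = 0; i < 3; i++) {
      System.out.println(new TaskContext(shared, "attempt_" + i).commands);
    }
  }
}
{code}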

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3566) MR AM slows down due to repeatedly constructing ContainerLaunchContext

2012-01-03 Thread Vinod Kumar Vavilapalli (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated MAPREDUCE-3566:
---

Attachment: MAPREDUCE-3566-20120103.1.txt

Fixing findBugs warnings.

 MR AM slows down due to repeatedly constructing ContainerLaunchContext
 --

 Key: MAPREDUCE-3566
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3566
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: mr-am, mrv2
Affects Versions: 0.23.0
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
Priority: Critical
 Fix For: 0.23.1

 Attachments: MAPREDUCE-3566-20111215.txt, 
 MAPREDUCE-3566-20111220.txt, MAPREDUCE-3566-20120103.1.txt, 
 MAPREDUCE-3566-20120103.txt


 The construction of the context is expensive: it includes per-task trips to 
 the NameNode to obtain information about job.jar, job splits, etc., which is 
 redundant across all tasks.
 We should have a common job-level context and a task-specific context 
 inheriting from the common job-level context.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3566) MR AM slows down due to repeatedly constructing ContainerLaunchContext

2012-01-03 Thread Vinod Kumar Vavilapalli (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated MAPREDUCE-3566:
---

Status: Patch Available  (was: Open)

 MR AM slows down due to repeatedly constructing ContainerLaunchContext
 --

 Key: MAPREDUCE-3566
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3566
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: mr-am, mrv2
Affects Versions: 0.23.0
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
Priority: Critical
 Fix For: 0.23.1

 Attachments: MAPREDUCE-3566-20111215.txt, 
 MAPREDUCE-3566-20111220.txt, MAPREDUCE-3566-20120103.1.txt, 
 MAPREDUCE-3566-20120103.txt


 The construction of the context is expensive: it includes per-task trips to 
 the NameNode to obtain information about job.jar, job splits, etc., which is 
 redundant across all tasks.
 We should have a common job-level context and a task-specific context 
 inheriting from the common job-level context.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3529) TokenCache does not cache viewfs credentials correctly

2012-01-03 Thread Siddharth Seth (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated MAPREDUCE-3529:
--

Attachment: MR3529_v3.txt

Updated patch based on review feedback.

 TokenCache does not cache viewfs credentials correctly
 --

 Key: MAPREDUCE-3529
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3529
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Siddharth Seth
Assignee: Siddharth Seth
Priority: Critical
 Attachments: MR3529_v1.txt, MR3529_v2.txt, MR3529_v3.txt


 viewfs returns a list of delegation tokens for the actual namenodes. 
 TokenCache caches these based on the actual service name - subsequent calls 
 to TokenCache end up trying to get a new set of tokens.
 Tasks which happen to access TokenCache fail when using viewfs - since they 
 end up trying to get a new set of tokens even though the tokens are already 
 available.
 {noformat}
 Error: java.io.IOException: Delegation Token can be issued only with kerberos 
 or web authentication
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:4027)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDelegationToken(NameNodeRpcServer.java:281)
 at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:365)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1490)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1486)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1484)
 at org.apache.hadoop.ipc.Client.call(Client.java:1085)
 at 
 org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:193)
 at $Proxy8.getDelegationToken(Unknown Source)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:100)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:65)
 at $Proxy8.getDelegationToken(Unknown Source)
 at 
 org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:456)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:812)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationTokens(DistributedFileSystem.java:839)
 at 
 org.apache.hadoop.fs.viewfs.ChRootedFileSystem.getDelegationTokens(ChRootedFileSystem.java:311)
 at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem.getDelegationTokens(ViewFileSystem.java:490)
 at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:144)
 at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:91)
 at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:84)
 {noformat}
 This will likely require some changes in viewfs/hdfs - will open a Jira with 
 details.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3566) MR AM slows down due to repeatedly constructing ContainerLaunchContext

2012-01-03 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13179007#comment-13179007
 ] 

Hadoop QA commented on MAPREDUCE-3566:
--

+1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12509334/MAPREDUCE-3566-20120103.1.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 11 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1529//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1529//console

This message is automatically generated.

 MR AM slows down due to repeatedly constructing ContainerLaunchContext
 --

 Key: MAPREDUCE-3566
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3566
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: mr-am, mrv2
Affects Versions: 0.23.0
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
Priority: Critical
 Fix For: 0.23.1

 Attachments: MAPREDUCE-3566-20111215.txt, 
 MAPREDUCE-3566-20111220.txt, MAPREDUCE-3566-20120103.1.txt, 
 MAPREDUCE-3566-20120103.txt


 The construction of the context is expensive: it includes per-task trips to 
 the NameNode to obtain information about job.jar, job splits, etc., which is 
 redundant across all tasks.
 We should have a common job-level context and a task-specific context 
 inheriting from the common job-level context.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3529) TokenCache does not cache viewfs credentials correctly

2012-01-03 Thread Siddharth Seth (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated MAPREDUCE-3529:
--

Attachment: MR3529_v4.txt

Added the Apache license header to the unit test.

 TokenCache does not cache viewfs credentials correctly
 --

 Key: MAPREDUCE-3529
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3529
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Siddharth Seth
Assignee: Siddharth Seth
Priority: Critical
 Attachments: MR3529_v1.txt, MR3529_v2.txt, MR3529_v3.txt, 
 MR3529_v4.txt


 viewfs returns a list of delegation tokens for the actual namenodes. 
 TokenCache caches these based on the actual service name - subsequent calls 
 to TokenCache end up trying to get a new set of tokens.
 Tasks which happen to access TokenCache fail when using viewfs - since they 
 end up trying to get a new set of tokens even though the tokens are already 
 available.
 {noformat}
 Error: java.io.IOException: Delegation Token can be issued only with kerberos 
 or web authentication
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:4027)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDelegationToken(NameNodeRpcServer.java:281)
 at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:365)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1490)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1486)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1484)
 at org.apache.hadoop.ipc.Client.call(Client.java:1085)
 at 
 org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:193)
 at $Proxy8.getDelegationToken(Unknown Source)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:100)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:65)
 at $Proxy8.getDelegationToken(Unknown Source)
 at 
 org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:456)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:812)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationTokens(DistributedFileSystem.java:839)
 at 
 org.apache.hadoop.fs.viewfs.ChRootedFileSystem.getDelegationTokens(ChRootedFileSystem.java:311)
 at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem.getDelegationTokens(ViewFileSystem.java:490)
 at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:144)
 at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:91)
 at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:84)
 {noformat}
 This will likely require some changes in viewfs/hdfs - will open a Jira with 
 details.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3299) Add AMInfo table to the AM job page

2012-01-03 Thread Siddharth Seth (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated MAPREDUCE-3299:
--

Target Version/s: 0.23.1, 0.24.0  (was: 0.24.0, 0.23.1)
  Status: Patch Available  (was: Open)

 Add AMInfo table to the AM job page
 ---

 Key: MAPREDUCE-3299
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3299
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv2
Affects Versions: 0.23.0, 0.23.1, 0.24.0
Reporter: Siddharth Seth
Assignee: Jonathan Eagles
Priority: Minor
 Attachments: MAPREDUCE-3299.patch, MAPREDUCE-3299.patch


 JobHistory has a table to list all AMs. A similar table can be added to the 
 AM for info on past failed AMs and the current running one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3299) Add AMInfo table to the AM job page

2012-01-03 Thread Siddharth Seth (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated MAPREDUCE-3299:
--

Attachment: MAPREDUCE-3299.patch

Re-uploading the same patch for Jenkins.

 Add AMInfo table to the AM job page
 ---

 Key: MAPREDUCE-3299
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3299
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv2
Affects Versions: 0.23.0, 0.23.1, 0.24.0
Reporter: Siddharth Seth
Assignee: Jonathan Eagles
Priority: Minor
 Attachments: MAPREDUCE-3299.patch, MAPREDUCE-3299.patch


 JobHistory has a table to list all AMs. A similar table can be added to the 
 AM for info on past failed AMs and the current running one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3299) Add AMInfo table to the AM job page

2012-01-03 Thread Siddharth Seth (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated MAPREDUCE-3299:
--

Target Version/s: 0.23.1, 0.24.0  (was: 0.24.0, 0.23.1)
  Status: Open  (was: Patch Available)

 Add AMInfo table to the AM job page
 ---

 Key: MAPREDUCE-3299
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3299
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv2
Affects Versions: 0.23.0, 0.23.1, 0.24.0
Reporter: Siddharth Seth
Assignee: Jonathan Eagles
Priority: Minor
 Attachments: MAPREDUCE-3299.patch, MAPREDUCE-3299.patch


 JobHistory has a table to list all AMs. A similar table can be added to the 
 AM for info on past failed AMs and the current running one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3436) jobhistory link may be broken depending on the interface it is listening on

2012-01-03 Thread Siddharth Seth (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated MAPREDUCE-3436:
--

Status: Open  (was: Patch Available)

Canceling the patch until there's additional information on whether it fixes this 
issue.

 jobhistory link may be broken depending on the interface it is listening on
 ---

 Key: MAPREDUCE-3436
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3436
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2, webapps
Affects Versions: 0.23.0, 0.23.1
Reporter: Bruno Mahé
Assignee: Ahmed Radwan
  Labels: bigtop
 Attachments: MAPREDUCE-3436.patch, MAPREDUCE-3436_rev2.patch


 On the following page: http://RESOURCE_MANAGER:8088/cluster/apps
 There are links to the history for each application. None of them can be 
 reached since they all point to the IP 0.0.0.0. For instance:
 http://0.0.0.0:8088/proxy/application_1321658790349_0002/jobhistory/job/job_1321658790349_2_2
 Am I missing something?
 [root@bigtop-fedora-15 ~]# jps
 9968 ResourceManager
 1495 NameNode
 1645 DataNode
 12935 Jps
 11140 -- process information unavailable
 5309 JobHistoryServer
 10237 NodeManager
 [root@bigtop-fedora-15 ~]# netstat -tlpn | grep 8088
 tcp        0      0 :::8088                 :::*                    LISTEN      9968/java
 For reference, here is my configuration:
 [root@bigtop-fedora-15 ~]# cat /etc/yarn/conf/yarn-site.xml
 <?xml version="1.0"?>
 <configuration>
   <!-- Site specific YARN configuration properties -->
   <property>
     <name>yarn.nodemanager.aux-services</name>
     <value>mapreduce.shuffle</value>
   </property>
   <property>
     <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
     <value>org.apache.hadoop.mapred.ShuffleHandler</value>
   </property>
   <property>
     <name>mapreduce.admin.user.env</name>
     <value>CLASSPATH=/etc/hadoop/conf/*:/usr/lib/hadoop/*:/usr/lib/hadoop/lib/*</value>
   </property>
 </configuration>
 [root@bigtop-fedora-15 ~]# cat /etc/hadoop/conf/hdfs-site.xml
 <?xml version="1.0"?>
 <configuration>
   <property>
     <name>dfs.replication</name>
     <value>1</value>
   </property>
   <property>
     <name>dfs.permissions</name>
     <value>false</value>
   </property>
   <property>
     <!-- specify this so that running 'hadoop namenode -format' formats the right dir -->
     <name>dfs.name.dir</name>
     <value>/var/lib/hadoop/cache/hadoop/dfs/name</value>
   </property>
 </configuration>
 [root@bigtop-fedora-15 ~]# cat /etc/hadoop/conf/core-site.xml
 <?xml version="1.0"?>
 <configuration>
   <property>
     <name>fs.default.name</name>
     <value>hdfs://localhost:8020</value>
   </property>
   <property>
     <name>hadoop.tmp.dir</name>
     <value>/var/lib/hadoop/cache/${user.name}</value>
   </property>
   <!-- OOZIE proxy user setting -->
   <property>
     <name>hadoop.proxyuser.oozie.hosts</name>
     <value>*</value>
   </property>
   <property>
     <name>hadoop.proxyuser.oozie.groups</name>
     <value>*</value>
   </property>
 </configuration>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3354) JobHistoryServer should be started by bin/mapred and not by bin/yarn

2012-01-03 Thread Jonathan Eagles (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13179021#comment-13179021
 ] 

Jonathan Eagles commented on MAPREDUCE-3354:


The patch modifies hadoop-assemblies, so the patch-apply failure is expected.

 JobHistoryServer should be started by bin/mapred and not by bin/yarn
 

 Key: MAPREDUCE-3354
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3354
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobhistoryserver, mrv2
Affects Versions: 0.23.1, 0.24.0
Reporter: Vinod Kumar Vavilapalli
Assignee: Jonathan Eagles
Priority: Blocker
 Attachments: MAPREDUCE-3354.patch, MAPREDUCE-3354.patch, 
 MAPREDUCE-3354.patch, MAPREDUCE-3354.patch, MAPREDUCE-3354.patch


 JobHistoryServer belongs to mapreduce land.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3360) Provide information about lost nodes in the UI.

2012-01-03 Thread Jason Lowe (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13179027#comment-13179027
 ] 

Jason Lowe commented on MAPREDUCE-3360:
---

Thanks for the updates.  A couple of things about the handling of UNHEALTHY 
nodes:

- They are not removed from the list of nodes being tracked in the context ( 
{{rmNode.context.getRMNodes()}} ), so I don't think we want to add them to the 
list of inactive nodes.  Otherwise the node would be in both node lists 
simultaneously, and that's probably not desirable.  Specifically, we'd want to 
remove this code insertion from the patch:

{code}
@@ -394,6 +411,8 @@ public class RMNodeImpl implements RMNode, 
EventHandlerRMNodeEvent {
 // Inform the scheduler
 rmNode.context.getDispatcher().getEventHandler().handle(
 new NodeRemovedSchedulerEvent(rmNode));
+rmNode.context.getInactiveRMNodes()
+.put(rmNode.nodeId.getHost(), rmNode);
 ClusterMetrics.getMetrics().incrNumUnhealthyNMs();
 return RMNodeState.UNHEALTHY;
   }
{code}

- A node that's marked UNHEALTHY could still have a working nodemanager web 
page, so we don't want to remove the link to it on the status page.  Since the 
UNHEALTHY nodes are tracked in the normal node list, it's simplest to remove 
the UNHEALTHY case from the switch statement in NodesPages.java.


At some point unit tests need to be added/updated for this change (e.g.: 
updating TestNodesPage.java to verify nodes that transition into the LOST state 
appear on the LOST page, etc.)
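
To make the bookkeeping point concrete, here is a toy sketch (hypothetical types, 
not RMNodeImpl) in which a node lives in exactly one of the two maps at a time, and 
an UNHEALTHY transition leaves it in the active map:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustration only: hypothetical types, not RMNodeImpl.
public class NodeListsSketch {
  enum State { RUNNING, UNHEALTHY, LOST }

  static final Map<String, State> active = new ConcurrentHashMap<String, State>();
  static final Map<String, State> inactive = new ConcurrentHashMap<String, State>();

  static void onUnhealthy(String host) {
    // An unhealthy node stays in the active map (its NM web page may still
    // work); only its state changes, so the scheduler stops using it.
    active.put(host, State.UNHEALTHY);
  }

  static void onLost(String host) {
    // A lost node moves from one map to the other, never living in both.
    active.remove(host);
    inactive.put(host, State.LOST);
  }

  public static void main(String[] args) {
    active.put("node-1", State.RUNNING);
    onUnhealthy("node-1");
    System.out.println("active=" + active + " inactive=" + inactive);
    onLost("node-1");
    System.out.println("active=" + active + " inactive=" + inactive);
  }
}
{code}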


 Provide information about lost nodes in the UI.
 ---

 Key: MAPREDUCE-3360
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3360
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv2
Affects Versions: 0.23.0
 Environment: NA
Reporter: Bhallamudi Venkata Siva Kamesh
Priority: Critical
 Attachments: LostNodes.png, MAPREDUCE-3360-1.patch, 
 MAPREDUCE-3360-2.patch, MAPREDUCE-3360.patch, lostNodes.png


 Currently there is no information provided about *lost nodes*. Provide 
 information in the UI. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (MAPREDUCE-3360) Provide information about lost nodes in the UI.

2012-01-03 Thread Jason Lowe (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reassigned MAPREDUCE-3360:
-

Assignee: Bhallamudi Venkata Siva Kamesh

 Provide information about lost nodes in the UI.
 ---

 Key: MAPREDUCE-3360
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3360
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv2
Affects Versions: 0.23.0
 Environment: NA
Reporter: Bhallamudi Venkata Siva Kamesh
Assignee: Bhallamudi Venkata Siva Kamesh
Priority: Critical
 Attachments: LostNodes.png, MAPREDUCE-3360-1.patch, 
 MAPREDUCE-3360-2.patch, MAPREDUCE-3360.patch, lostNodes.png


 Currently there is no information provided about *lost nodes*. Provide 
 information in the UI. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3299) Add AMInfo table to the AM job page

2012-01-03 Thread Siddharth Seth (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated MAPREDUCE-3299:
--

Target Version/s: 0.23.1, 0.24.0  (was: 0.24.0, 0.23.1)
  Status: Open  (was: Patch Available)

Changes look good, except the log link on the AM table needs to be fixed. It 
should point to the nodemanager running the AM instead of to the AM itself. 
(There's an AM logs link on the nav bar) 

 Add AMInfo table to the AM job page
 ---

 Key: MAPREDUCE-3299
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3299
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv2
Affects Versions: 0.23.0, 0.23.1, 0.24.0
Reporter: Siddharth Seth
Assignee: Jonathan Eagles
Priority: Minor
 Attachments: MAPREDUCE-3299.patch, MAPREDUCE-3299.patch


 JobHistory has a table to list all AMs. A similar table can be added to the 
 AM for info on past failed AMs and the current running one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-1744) DistributedCache creates its own FileSytem instance when adding a file/archive to the path

2012-01-03 Thread Tom White (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13179113#comment-13179113
 ] 

Tom White commented on MAPREDUCE-1744:
--

+1 for the trunk/0.23 patch.

 DistributedCache creates its own FileSytem instance when adding a 
 file/archive to the path
 --

 Key: MAPREDUCE-1744
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1744
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Dick King
Assignee: Alejandro Abdelnur
 Attachments: BZ-3503564--2010-05-06.patch, 
 MAPREDUCE-1744-0.23-trunk.patch, MAPREDUCE-1744.patch, h1744.patch, 
 mapred-1744-1.patch, mapred-1744-2.patch, mapred-1744-3.patch, 
 mapred-1744.patch


 According to the contract of {{UserGroupInformation.doAs()}} the only 
 required operations within the {{doAs()}} block are the
 creation of a {{JobClient}} or getting a {{FileSystem}} .
 The {{DistributedCache.add(File/Archive)ToClasspath()}} methods create a 
 {{FileSystem}} instance outside of the {{doAs()}} block,
 this {{FileSystem}} instance is not in the scope of the proxy user but of the 
 superuser and permissions may make the method
 fail.
 One option is to overload the methods above to receive a filesystem.
 Another option is to obtain the {{FileSystem}} within a {{doAs()}} block; 
 for this it would be required to have the proxy
 user set in the passed configuration.
 The second option seems nicer, but I don't know if the proxy user is set as a 
 property in the jobconf.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-3614) finalState UNDEFINED if AM is killed by hand

2012-01-03 Thread Ravi Prakash (Created) (JIRA)
finalState UNDEFINED if AM is killed by hand


 Key: MAPREDUCE-3614
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3614
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash


Courtesy [~dcapwell]

{quote}
If the AM is running and you kill the process (sudo kill #pid), the State in 
Yarn would be FINISHED and FinalStatus is UNDEFINED.  The Tracking UI would say 
History and point to the proxy url (which will redirect to the history 
server).

The state should make it clear that the job failed, and the tracking URL 
shouldn't point to the history server.
{quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-1744) DistributedCache creates its own FileSytem instance when adding a file/archive to the path

2012-01-03 Thread Alejandro Abdelnur (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated MAPREDUCE-1744:
--

Attachment: MAPREDUCE-1744-0.23-trunk.patch

Updated patch removing the duplicate @deprecated tag and using 
file/archive.getFileSystem(conf) instead of FileSystem.get(conf).
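
For illustration, a minimal sketch of the two ideas in play: resolving the 
FileSystem from the path's own URI via getFileSystem(conf), and doing so inside a 
doAs() block so the instance is scoped to the proxy user. The user name and path 
below are hypothetical, and this is not the committed patch:

{code}
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

// Illustration only: hypothetical user and path, not the committed patch.
public class ProxyUserFsSketch {
  public static void main(String[] args) throws Exception {
    final Configuration conf = new Configuration();
    final Path cacheFile = new Path("hdfs://namenode:8020/tmp/lib/some.jar");

    UserGroupInformation proxyUgi = UserGroupInformation.createProxyUser(
        "alice", UserGroupInformation.getLoginUser());

    FileSystem fs = proxyUgi.doAs(new PrivilegedExceptionAction<FileSystem>() {
      public FileSystem run() throws Exception {
        // cacheFile.getFileSystem(conf) honours the path's scheme/authority,
        // and running it inside doAs() scopes the instance to the proxy user;
        // FileSystem.get(conf) would use the default filesystem instead.
        return cacheFile.getFileSystem(conf);
      }
    });

    System.out.println("Got " + fs.getUri() + " as " + proxyUgi.getShortUserName());
  }
}
{code}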

 DistributedCache creates its own FileSytem instance when adding a 
 file/archive to the path
 --

 Key: MAPREDUCE-1744
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1744
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Dick King
Assignee: Alejandro Abdelnur
 Attachments: BZ-3503564--2010-05-06.patch, 
 MAPREDUCE-1744-0.23-trunk.patch, MAPREDUCE-1744-0.23-trunk.patch, 
 MAPREDUCE-1744.patch, h1744.patch, mapred-1744-1.patch, mapred-1744-2.patch, 
 mapred-1744-3.patch, mapred-1744.patch


 According to the contract of {{UserGroupInformation.doAs()}} the only 
 required operations within the {{doAs()}} block are the
 creation of a {{JobClient}} or getting a {{FileSystem}} .
 The {{DistributedCache.add(File/Archive)ToClasspath()}} methods create a 
 {{FileSystem}} instance outside of the {{doAs()}} block,
 this {{FileSystem}} instance is not in the scope of the proxy user but of the 
 superuser and permissions may make the method
 fail.
 One option is to overload the methods above to receive a filesystem.
 Another option is to obtain the {{FileSystem}} within a {{doAs()}} block; 
 for this it would be required to have the proxy
 user set in the passed configuration.
 The second option seems nicer, but I don't know if the proxy user is set as a 
 property in the jobconf.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-2765) DistCp Rewrite

2012-01-03 Thread Mithun Radhakrishnan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-2765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13179163#comment-13179163
 ] 

Mithun Radhakrishnan commented on MAPREDUCE-2765:
-

Amareshwari, would it be possible for this to be committed?

 DistCp Rewrite
 --

 Key: MAPREDUCE-2765
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2765
 Project: Hadoop Map/Reduce
  Issue Type: New Feature
  Components: distcp, mrv2
Affects Versions: 0.20.203.0
Reporter: Mithun Radhakrishnan
Assignee: Mithun Radhakrishnan
 Attachments: distcpv2.20.203.patch, distcpv2_hadoop-0.23.1.patch, 
 distcpv2_hadoop-0.23.1.patch, distcpv2_hadoop-trunk.patch, 
 distcpv2_patch_0.23.1-SNAPSHOT_tucu_reviewed.patch, 
 distcpv2_patch_hadoop-trunk_tucu_reviewed.patch, distcpv2_trunk.patch, 
 distcpv2_trunk_post_review_1.patch


 This is a slightly modified version of the DistCp rewrite that Yahoo uses in 
 production today. The rewrite was ground-up, with specific focus on:
 1. improved startup time (postponing as much work as possible to the MR job)
 2. support for multiple copy-strategies
 3. new features (e.g. -atomic, -async, -bandwidth.)
 4. improved programmatic use
 Some effort has gone into refactoring what used to be achieved by a single 
 large (1.7 KLOC) source file, into a design that (hopefully) reads better too.
 The proposed DistCpV2 preserves command-line-compatibility with the old 
 version, and should be a drop-in replacement.
 New to v2:
 1. Copy-strategies and the DynamicInputFormat:
   A copy-strategy determines the policy by which source-file-paths are 
 distributed between map-tasks. (These boil down to the choice of the 
 input-format.) 
   If no strategy is explicitly specified on the command-line, the policy 
 chosen is uniform size, where v2 behaves identically to old-DistCp. (The 
 number of bytes transferred by each map-task is roughly equal, at a per-file 
 granularity.) 
   Alternatively, v2 ships with a dynamic copy-strategy (in the 
 DynamicInputFormat). This policy acknowledges that 
   (a)  dividing files based only on file-size might not be an 
 even distribution (E.g. if some datanodes are slower than others, or if some 
 files are skipped.)
   (b) a static association of a source-path to a map increases 
 the likelihood of long-tails during copy.
   The dynamic strategy divides the list-of-source-paths into a number 
 ( nMaps) of smaller parts. When each map completes its current list of 
 paths, it picks up a new list to process, if available. So if a map-task is 
 stuck on a slow (and not necessarily large) file, other maps can pick up the 
 slack. The thinner the file-list is sliced, the greater the parallelism (and 
 the lower the chances of long-tails). Within reason, of course: the number of 
 these short-lived list-files is capped at an overridable maximum.
   Internal benchmarks against source/target clusters with some slow(ish) 
 datanodes have indicated significant performance gains when using the 
 dynamic-strategy. Gains are most pronounced when nFiles greatly exceeds nMaps.
   Please note that the DynamicInputFormat might prove useful outside of 
 DistCp. It is hence available as a mapred/lib, unfettered to DistCpV2. Also 
 note that the copy-strategies have no bearing on the CopyMapper.map() 
 implementation.
   
 2. Improved startup-time and programmatic use:
   When the old-DistCp runs with -update, and creates the 
 list-of-source-paths, it attempts to filter out files that might be skipped 
 (by comparing file-sizes, checksums, etc.) This significantly increases the 
 startup time (or the time spent in serial processing till the MR job is 
 launched), blocking the calling-thread. This becomes pronounced as nFiles 
 increases. (Internal benchmarks have seen situations where more time is spent 
 setting up the job than on the actual transfer.)
   DistCpV2 postpones as much work as possible to the MR job. The 
 file-listing isn't filtered until the map-task runs (at which time, identical 
 files are skipped). DistCpV2 can now be run asynchronously. The program 
 quits at job-launch, logging the job-id for tracking. Programmatically, the 
 DistCp.execute() returns a Job instance for progress-tracking.
   
 3. New features:
   (a)   -async: As described in #2.
   (b)   -atomic: Data is copied to a (user-specifiable) tmp-location, and 
 then moved atomically to destination.
   (c)   -bandwidth: Enforces a limit on the bandwidth consumed per map.
   (d)   -strategy: As above.
   
 A more comprehensive description the newer features, how the dynamic-strategy 
 works, etc. is available in src/site/xdoc/, and in the pdf that's generated 
 therefrom, during the build.
 High on the list 

[jira] [Updated] (MAPREDUCE-3490) RMContainerAllocator counts failed maps towards Reduce ramp up

2012-01-03 Thread Arun C Murthy (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated MAPREDUCE-3490:
-

Status: Patch Available  (was: Open)

 RMContainerAllocator counts failed maps towards Reduce ramp up
 --

 Key: MAPREDUCE-3490
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3490
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mr-am, mrv2
Affects Versions: 0.23.0
Reporter: Siddharth Seth
Assignee: Sharad Agarwal
Priority: Blocker
 Attachments: MAPREDUCE-3490.patch, MAPREDUCE-3490.patch, 
 MAPREDUCE-3490.patch, MAPREDUCE-3490.patch, MAPREDUCE-3490.patch, 
 MAPREDUCE-3490.patch, MAPREDUCE-3490.patch, MR-3490-alternate.patch, 
 MR-3490-alternate1.patch


 The RMContainerAllocator does not differentiate between failed and successful 
 maps while calculating whether reduce tasks are ready to launch. Failed tasks 
 are also counted towards total completed tasks. 
 Example. 4 failed maps, 10 total maps. Map%complete = 4/14 * 100 instead of 
 being 0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3490) RMContainerAllocator counts failed maps towards Reduce ramp up

2012-01-03 Thread Arun C Murthy (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated MAPREDUCE-3490:
-

Attachment: MAPREDUCE-3490.patch

Thanks Vinod, missed that.

I've updated the patch to use Job.completedMaps rather than Job.successfulMaps.
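
For reference, the arithmetic from the issue description (4 failed maps out of 
10 total giving 4/14 instead of 0) can be sketched as below; the method names 
are mine and this is only an illustration, not the patch itself.
{noformat}
// Hypothetical illustration of the ramp-up arithmetic, not the actual RMContainerAllocator code.
public class RampUpMath {
  // Counting failed attempts as "completed" inflates the fraction:
  // 4 failures, 0 successes, 10 maps -> 4 / (10 + 4) = ~29% instead of 0%.
  static float completionCountingFailures(int succeeded, int failed, int totalMaps) {
    return (float) (succeeded + failed) / (totalMaps + failed);
  }

  // Basing the fraction on successful maps alone keeps it at 0 until
  // real map output exists for reduces to fetch.
  static float completionFromSuccesses(int succeeded, int totalMaps) {
    return (float) succeeded / totalMaps;
  }
}
{noformat}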

 RMContainerAllocator counts failed maps towards Reduce ramp up
 --

 Key: MAPREDUCE-3490
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3490
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mr-am, mrv2
Affects Versions: 0.23.0
Reporter: Siddharth Seth
Assignee: Sharad Agarwal
Priority: Blocker
 Attachments: MAPREDUCE-3490.patch, MAPREDUCE-3490.patch, 
 MAPREDUCE-3490.patch, MAPREDUCE-3490.patch, MAPREDUCE-3490.patch, 
 MAPREDUCE-3490.patch, MAPREDUCE-3490.patch, MR-3490-alternate.patch, 
 MR-3490-alternate1.patch


 The RMContainerAllocator does not differentiate between failed and successful 
 maps while calculating whether reduce tasks are ready to launch. Failed tasks 
 are also counted towards total completed tasks. 
 Example. 4 failed maps, 10 total maps. Map%complete = 4/14 * 100 instead of 
 being 0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3601) Add Delay Scheduling to MR2 Fair Scheduler

2012-01-03 Thread Patrick Wendell (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Wendell updated MAPREDUCE-3601:
---

Attachment: MAPREDUCE-3601.v1.patch

Attached is a patch to implement delay scheduling in MR2, similar to how delay 
scheduling works in the MR1 Fair Scheduler. 

This patch allows schedulers to relax locality levels over time, from 
node-local, to rack-local, to off-switch requests. In the Fair Scheduler, users 
can configure a threshold to dictate when this relaxation happens.

The Capacity Scheduler is left untouched (it currently uses a separate 
heuristic to fail over to off-switch scheduling).

This patch assumes the patch attached to MAPREDUCE-3600 has already been 
applied.
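
For readers unfamiliar with delay scheduling, the following is an 
illustrative-only sketch of the relaxation heuristic described above; the enum 
and threshold names are hypothetical and do not reflect the patch's actual 
classes.
{noformat}
// Illustrative-only sketch of the delay-scheduling heuristic; the enum and
// threshold names are hypothetical, not the Fair Scheduler's actual classes.
enum Locality { NODE_LOCAL, RACK_LOCAL, OFF_SWITCH }

class DelaySchedulingPolicy {
  private final long nodeLocalDelayMs;  // how long to hold out for node-local
  private final long rackLocalDelayMs;  // then how long to hold out for rack-local
  private long waitStartMs;

  DelaySchedulingPolicy(long nodeLocalDelayMs, long rackLocalDelayMs, long nowMs) {
    this.nodeLocalDelayMs = nodeLocalDelayMs;
    this.rackLocalDelayMs = rackLocalDelayMs;
    this.waitStartMs = nowMs;
  }

  /** Locality level this application is currently allowed to accept. */
  Locality allowedLevel(long nowMs) {
    long waited = nowMs - waitStartMs;
    if (waited < nodeLocalDelayMs) {
      return Locality.NODE_LOCAL;
    } else if (waited < nodeLocalDelayMs + rackLocalDelayMs) {
      return Locality.RACK_LOCAL;
    }
    return Locality.OFF_SWITCH;
  }

  /** Reset the wait clock once a container is assigned at the desired level. */
  void onContainerAssigned(long nowMs) {
    waitStartMs = nowMs;
  }
}
{noformat}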




 Add Delay Scheduling to MR2 Fair Scheduler
 --

 Key: MAPREDUCE-3601
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3601
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: scheduler
Reporter: Patrick Wendell
Assignee: Patrick Wendell
 Attachments: MAPREDUCE-3601.v1.patch


 JIRA for delay scheduling component.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-1744) DistributedCache creates its own FileSystem instance when adding a file/archive to the path

2012-01-03 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13179194#comment-13179194
 ] 

Hadoop QA commented on MAPREDUCE-1744:
--

+1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12509361/MAPREDUCE-1744-0.23-trunk.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1531//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1531//console

This message is automatically generated.

 DistributedCache creates its own FileSystem instance when adding a 
 file/archive to the path
 --

 Key: MAPREDUCE-1744
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1744
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Dick King
Assignee: Alejandro Abdelnur
 Attachments: BZ-3503564--2010-05-06.patch, 
 MAPREDUCE-1744-0.23-trunk.patch, MAPREDUCE-1744-0.23-trunk.patch, 
 MAPREDUCE-1744.patch, h1744.patch, mapred-1744-1.patch, mapred-1744-2.patch, 
 mapred-1744-3.patch, mapred-1744.patch


 According to the contract of {{UserGroupInformation.doAs()}}, the only 
 required operations within the {{doAs()}} block are the creation of a 
 {{JobClient}} or getting a {{FileSystem}}.
 The {{DistributedCache.add(File/Archive)ToClasspath()}} methods create a 
 {{FileSystem}} instance outside of the {{doAs()}} block; this {{FileSystem}} 
 instance is in the scope of the superuser rather than the proxy user, and 
 permissions may make the method fail.
 One option is to overload the methods above to receive a filesystem.
 Another option is to obtain the {{FileSystem}} within a {{doAs()}} block; for 
 this it would be required to have the proxy user set in the passed 
 configuration.
 The second option seems nicer, but I don't know if the proxy user is set as a 
 property in the jobconf.
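 A minimal sketch of the second option (obtaining the {{FileSystem}} inside a 
 {{doAs()}} block) might look like the following; the way the proxy-user name 
 is read from the configuration is an assumption for illustration only.
{noformat}
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUserFileSystem {
  static FileSystem getAsProxyUser(final Configuration conf) throws Exception {
    // "hypothetical.proxy.user" is an assumed property name, for illustration only.
    String proxyUser = conf.get("hypothetical.proxy.user");
    UserGroupInformation ugi =
        UserGroupInformation.createProxyUser(proxyUser, UserGroupInformation.getLoginUser());
    // FileSystem.get() now runs under the proxy user, not the superuser.
    return ugi.doAs(new PrivilegedExceptionAction<FileSystem>() {
      @Override
      public FileSystem run() throws Exception {
        return FileSystem.get(conf);
      }
    });
  }
}
{noformat}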

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3490) RMContainerAllocator counts failed maps towards Reduce ramp up

2012-01-03 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13179207#comment-13179207
 ] 

Hadoop QA commented on MAPREDUCE-3490:
--

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12509362/MAPREDUCE-3490.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1532//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1532//console

This message is automatically generated.

 RMContainerAllocator counts failed maps towards Reduce ramp up
 --

 Key: MAPREDUCE-3490
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3490
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mr-am, mrv2
Affects Versions: 0.23.0
Reporter: Siddharth Seth
Assignee: Sharad Agarwal
Priority: Blocker
 Attachments: MAPREDUCE-3490.patch, MAPREDUCE-3490.patch, 
 MAPREDUCE-3490.patch, MAPREDUCE-3490.patch, MAPREDUCE-3490.patch, 
 MAPREDUCE-3490.patch, MAPREDUCE-3490.patch, MR-3490-alternate.patch, 
 MR-3490-alternate1.patch


 The RMContainerAllocator does not differentiate between failed and successful 
 maps while calculating whether reduce tasks are ready to launch. Failed tasks 
 are also counted towards total completed tasks. 
 Example. 4 failed maps, 10 total maps. Map%complete = 4/14 * 100 instead of 
 being 0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3287) The Watch link on the main page goes to a private video

2012-01-03 Thread Ronald Petty (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13179242#comment-13179242
 ] 

Ronald Petty commented on MAPREDUCE-3287:
-

I recommend removing the link if it is not going to be fixed.  I personally 
think it is a nice feature to have though.

 The Watch link on the main page goes to a private video
 ---

 Key: MAPREDUCE-3287
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3287
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: documentation
 Environment: http://hadoop.apache.org/mapreduce/index.html
Reporter: Jason Mattax
  Labels: documentation
   Original Estimate: 1h
  Remaining Estimate: 1h

 Link 3 under the Getting Started header takes me to a private video. Due to 
 Vimeo's setup, I can't even determine which user is hosting the video in order 
 to contact them personally.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3476) Optimize YARN API calls

2012-01-03 Thread Amar Kamat (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13179256#comment-13179256
 ] 

Amar Kamat commented on MAPREDUCE-3476:
---

Vinod,
I see some sub-tickets being opened for optimizing YARN. Can you kindly link 
them to this JIRA?

 Optimize YARN API calls
 ---

 Key: MAPREDUCE-3476
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3476
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Assignee: Vinod Kumar Vavilapalli
Priority: Critical

 Several YARN API calls are taking inordinately long. This might be a 
 performance blocker.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3462) Job submission failing in JUnit tests

2012-01-03 Thread Amar Kamat (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13179257#comment-13179257
 ] 

Amar Kamat commented on MAPREDUCE-3462:
---

Fixing contrib tests to respect {{src/test/mapred-site.xml}} can be addressed 
later. I will commit this patch for now.

 Job submission failing in JUnit tests
 -

 Key: MAPREDUCE-3462
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3462
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2, test
Affects Versions: 0.23.0
Reporter: Amar Kamat
Assignee: Ravi Prakash
Priority: Blocker
  Labels: junit, test
 Attachments: 3462.trunk.patch, MAPREDUCE-3462.branch-0.23.patch


 When I run JUnit tests (e.g. TestDistCacheEmulation, TestSleepJob and 
 TestCompressionEmulationUtils), I see job submission failing with the 
 following error:
 {noformat}
 java.lang.IllegalStateException: Variable substitution depth too large: 20 
 ${fs.default.name}
 at 
 org.apache.hadoop.conf.Configuration.substituteVars(Configuration.java:551)
 at org.apache.hadoop.conf.Configuration.get(Configuration.java:569)
 at 
 org.apache.hadoop.conf.Configuration.getStrings(Configuration.java:1020)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.populateTokenCache(JobSubmitter.java:564)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:353)
 at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1159)
 at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1156)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:1156)
 at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1176)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.launchGridmixJob(Gridmix.java:190)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.writeInputData(Gridmix.java:150)
 at org.apache.hadoop.mapred.gridmix.Gridmix.start(Gridmix.java:425)
 at org.apache.hadoop.mapred.gridmix.Gridmix.runJob(Gridmix.java:380)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.access$000(Gridmix.java:56)
 at org.apache.hadoop.mapred.gridmix.Gridmix$1.run(Gridmix.java:313)
 at org.apache.hadoop.mapred.gridmix.Gridmix$1.run(Gridmix.java:311)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.mapred.gridmix.Gridmix.run(Gridmix.java:311)
 {noformat}
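 For context on the stack trace above: this is how {{Configuration}} reports a 
 substitution cycle. A value that refers to itself, as in the hypothetical 
 snippet below, never resolves, so {{get()}} gives up after its substitution 
 limit (20 in these versions). This only illustrates the failure mode, not the 
 fix committed here.
{noformat}
import org.apache.hadoop.conf.Configuration;

public class SelfReferentialConfig {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // A value that expands to itself can never be fully substituted.
    conf.set("fs.default.name", "${fs.default.name}");
    // Throws IllegalStateException: "Variable substitution depth too large: 20 ..."
    conf.get("fs.default.name");
  }
}
{noformat}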

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3462) Job submission failing in JUnit tests

2012-01-03 Thread Amar Kamat (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amar Kamat updated MAPREDUCE-3462:
--

   Resolution: Fixed
Fix Version/s: 0.24.0
   0.23.1
 Release Note: Fixed failing JUnit tests in Gridmix.
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I just committed this to trunk and branch-0.23. Thanks Ravi Prakash and Ravi 
Gummadi!

 Job submission failing in JUnit tests
 -

 Key: MAPREDUCE-3462
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3462
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2, test
Affects Versions: 0.23.0
Reporter: Amar Kamat
Assignee: Ravi Prakash
Priority: Blocker
  Labels: junit, test
 Fix For: 0.23.1, 0.24.0

 Attachments: 3462.trunk.patch, MAPREDUCE-3462.branch-0.23.patch


 When I run JUnit tests (e.g. TestDistCacheEmulation, TestSleepJob and 
 TestCompressionEmulationUtils), I see job submission failing with the 
 following error:
 {noformat}
 java.lang.IllegalStateException: Variable substitution depth too large: 20 
 ${fs.default.name}
 at 
 org.apache.hadoop.conf.Configuration.substituteVars(Configuration.java:551)
 at org.apache.hadoop.conf.Configuration.get(Configuration.java:569)
 at 
 org.apache.hadoop.conf.Configuration.getStrings(Configuration.java:1020)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.populateTokenCache(JobSubmitter.java:564)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:353)
 at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1159)
 at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1156)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:1156)
 at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1176)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.launchGridmixJob(Gridmix.java:190)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.writeInputData(Gridmix.java:150)
 at org.apache.hadoop.mapred.gridmix.Gridmix.start(Gridmix.java:425)
 at org.apache.hadoop.mapred.gridmix.Gridmix.runJob(Gridmix.java:380)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.access$000(Gridmix.java:56)
 at org.apache.hadoop.mapred.gridmix.Gridmix$1.run(Gridmix.java:313)
 at org.apache.hadoop.mapred.gridmix.Gridmix$1.run(Gridmix.java:311)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.mapred.gridmix.Gridmix.run(Gridmix.java:311)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3462) Job submission failing in JUnit tests

2012-01-03 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13179293#comment-13179293
 ] 

Hudson commented on MAPREDUCE-3462:
---

Integrated in Hadoop-Common-trunk-Commit #1488 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1488/])
MAPREDUCE-3462. Fix Gridmix JUnit testcase failures. (Ravi Prakash and Ravi 
Gummadi via amarrk)

amarrk : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1227051
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestCompressionEmulationUtils.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestDistCacheEmulation.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestGridmixSubmission.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestSleepJob.java


 Job submission failing in JUnit tests
 -

 Key: MAPREDUCE-3462
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3462
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2, test
Affects Versions: 0.23.0
Reporter: Amar Kamat
Assignee: Ravi Prakash
Priority: Blocker
  Labels: junit, test
 Fix For: 0.23.1, 0.24.0

 Attachments: 3462.trunk.patch, MAPREDUCE-3462.branch-0.23.patch


 When I run JUnit tests (e.g. TestDistCacheEmulation, TestSleepJob and 
 TestCompressionEmulationUtils), I see job submission failing with the 
 following error:
 {noformat}
 java.lang.IllegalStateException: Variable substitution depth too large: 20 
 ${fs.default.name}
 at 
 org.apache.hadoop.conf.Configuration.substituteVars(Configuration.java:551)
 at org.apache.hadoop.conf.Configuration.get(Configuration.java:569)
 at 
 org.apache.hadoop.conf.Configuration.getStrings(Configuration.java:1020)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.populateTokenCache(JobSubmitter.java:564)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:353)
 at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1159)
 at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1156)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:1156)
 at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1176)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.launchGridmixJob(Gridmix.java:190)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.writeInputData(Gridmix.java:150)
 at org.apache.hadoop.mapred.gridmix.Gridmix.start(Gridmix.java:425)
 at org.apache.hadoop.mapred.gridmix.Gridmix.runJob(Gridmix.java:380)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.access$000(Gridmix.java:56)
 at org.apache.hadoop.mapred.gridmix.Gridmix$1.run(Gridmix.java:313)
 at org.apache.hadoop.mapred.gridmix.Gridmix$1.run(Gridmix.java:311)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.mapred.gridmix.Gridmix.run(Gridmix.java:311)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3462) Job submission failing in JUnit tests

2012-01-03 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13179301#comment-13179301
 ] 

Hudson commented on MAPREDUCE-3462:
---

Integrated in Hadoop-Common-0.23-Commit #333 (See 
[https://builds.apache.org/job/Hadoop-Common-0.23-Commit/333/])
MAPREDUCE-3462. Fix Gridmix JUnit testcase failures. (Ravi Prakash and Ravi 
Gummadi via amarrk)

amarrk : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1227052
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestCompressionEmulationUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestDistCacheEmulation.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestGridmixSubmission.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestSleepJob.java


 Job submission failing in JUnit tests
 -

 Key: MAPREDUCE-3462
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3462
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2, test
Affects Versions: 0.23.0
Reporter: Amar Kamat
Assignee: Ravi Prakash
Priority: Blocker
  Labels: junit, test
 Fix For: 0.23.1, 0.24.0

 Attachments: 3462.trunk.patch, MAPREDUCE-3462.branch-0.23.patch


 When I run JUnit tests (e.g. TestDistCacheEmulation, TestSleepJob and 
 TestCompressionEmulationUtils), I see job submission failing with the 
 following error:
 {noformat}
 java.lang.IllegalStateException: Variable substitution depth too large: 20 
 ${fs.default.name}
 at 
 org.apache.hadoop.conf.Configuration.substituteVars(Configuration.java:551)
 at org.apache.hadoop.conf.Configuration.get(Configuration.java:569)
 at 
 org.apache.hadoop.conf.Configuration.getStrings(Configuration.java:1020)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.populateTokenCache(JobSubmitter.java:564)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:353)
 at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1159)
 at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1156)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:1156)
 at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1176)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.launchGridmixJob(Gridmix.java:190)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.writeInputData(Gridmix.java:150)
 at org.apache.hadoop.mapred.gridmix.Gridmix.start(Gridmix.java:425)
 at org.apache.hadoop.mapred.gridmix.Gridmix.runJob(Gridmix.java:380)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.access$000(Gridmix.java:56)
 at org.apache.hadoop.mapred.gridmix.Gridmix$1.run(Gridmix.java:313)
 at org.apache.hadoop.mapred.gridmix.Gridmix$1.run(Gridmix.java:311)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.mapred.gridmix.Gridmix.run(Gridmix.java:311)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3462) Job submission failing in JUnit tests

2012-01-03 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13179306#comment-13179306
 ] 

Hudson commented on MAPREDUCE-3462:
---

Integrated in Hadoop-Hdfs-trunk-Commit #1560 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1560/])
MAPREDUCE-3462. Fix Gridmix JUnit testcase failures. (Ravi Prakash and Ravi 
Gummadi via amarrk)

amarrk : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1227051
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestCompressionEmulationUtils.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestDistCacheEmulation.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestGridmixSubmission.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestSleepJob.java


 Job submission failing in JUnit tests
 -

 Key: MAPREDUCE-3462
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3462
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2, test
Affects Versions: 0.23.0
Reporter: Amar Kamat
Assignee: Ravi Prakash
Priority: Blocker
  Labels: junit, test
 Fix For: 0.23.1, 0.24.0

 Attachments: 3462.trunk.patch, MAPREDUCE-3462.branch-0.23.patch


 When I run JUnit tests (e.g. TestDistCacheEmulation, TestSleepJob and 
 TestCompressionEmulationUtils), I see job submission failing with the 
 following error:
 {noformat}
 java.lang.IllegalStateException: Variable substitution depth too large: 20 
 ${fs.default.name}
 at 
 org.apache.hadoop.conf.Configuration.substituteVars(Configuration.java:551)
 at org.apache.hadoop.conf.Configuration.get(Configuration.java:569)
 at 
 org.apache.hadoop.conf.Configuration.getStrings(Configuration.java:1020)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.populateTokenCache(JobSubmitter.java:564)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:353)
 at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1159)
 at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1156)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:1156)
 at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1176)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.launchGridmixJob(Gridmix.java:190)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.writeInputData(Gridmix.java:150)
 at org.apache.hadoop.mapred.gridmix.Gridmix.start(Gridmix.java:425)
 at org.apache.hadoop.mapred.gridmix.Gridmix.runJob(Gridmix.java:380)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.access$000(Gridmix.java:56)
 at org.apache.hadoop.mapred.gridmix.Gridmix$1.run(Gridmix.java:313)
 at org.apache.hadoop.mapred.gridmix.Gridmix$1.run(Gridmix.java:311)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.mapred.gridmix.Gridmix.run(Gridmix.java:311)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3462) Job submission failing in JUnit tests

2012-01-03 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13179305#comment-13179305
 ] 

Hudson commented on MAPREDUCE-3462:
---

Integrated in Hadoop-Hdfs-0.23-Commit #322 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/322/])
MAPREDUCE-3462. Fix Gridmix JUnit testcase failures. (Ravi Prakash and Ravi 
Gummadi via amarrk)

amarrk : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1227052
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestCompressionEmulationUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestDistCacheEmulation.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestGridmixSubmission.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestSleepJob.java


 Job submission failing in JUnit tests
 -

 Key: MAPREDUCE-3462
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3462
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2, test
Affects Versions: 0.23.0
Reporter: Amar Kamat
Assignee: Ravi Prakash
Priority: Blocker
  Labels: junit, test
 Fix For: 0.23.1, 0.24.0

 Attachments: 3462.trunk.patch, MAPREDUCE-3462.branch-0.23.patch


 When I run JUnit tests (e.g. TestDistCacheEmulation, TestSleepJob and 
 TestCompressionEmulationUtils), I see job submission failing with the 
 following error:
 {noformat}
 java.lang.IllegalStateException: Variable substitution depth too large: 20 
 ${fs.default.name}
 at 
 org.apache.hadoop.conf.Configuration.substituteVars(Configuration.java:551)
 at org.apache.hadoop.conf.Configuration.get(Configuration.java:569)
 at 
 org.apache.hadoop.conf.Configuration.getStrings(Configuration.java:1020)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.populateTokenCache(JobSubmitter.java:564)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:353)
 at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1159)
 at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1156)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:1156)
 at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1176)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.launchGridmixJob(Gridmix.java:190)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.writeInputData(Gridmix.java:150)
 at org.apache.hadoop.mapred.gridmix.Gridmix.start(Gridmix.java:425)
 at org.apache.hadoop.mapred.gridmix.Gridmix.runJob(Gridmix.java:380)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.access$000(Gridmix.java:56)
 at org.apache.hadoop.mapred.gridmix.Gridmix$1.run(Gridmix.java:313)
 at org.apache.hadoop.mapred.gridmix.Gridmix$1.run(Gridmix.java:311)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.mapred.gridmix.Gridmix.run(Gridmix.java:311)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3597) Provide a way to access other info of history file from Rumentool

2012-01-03 Thread Amar Kamat (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13179309#comment-13179309
 ] 

Amar Kamat commented on MAPREDUCE-3597:
---

Ravi, is it possible to port this to branch-1?

 Provide a way to access other info of history file from Rumentool
 -

 Key: MAPREDUCE-3597
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3597
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: tools/rumen
Affects Versions: 0.24.0
Reporter: Ravi Gummadi
Assignee: Ravi Gummadi
 Fix For: 0.24.0

 Attachments: 3597.v0.patch, 3597.v1.patch


 As the trace file generated by the Rumen TraceBuilder skips some of the 
 information, like job counters, task counters, etc., we need a way to access 
 the other information available in the history file that is not dumped to the 
 trace file. This is useful for components that want to parse history files and 
 extract information. These components can directly leverage Rumen's parsing of 
 history files across Hadoop releases and get history information in a 
 consistent way for further analysis/processing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3462) Job submission failing in JUnit tests

2012-01-03 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13179310#comment-13179310
 ] 

Hudson commented on MAPREDUCE-3462:
---

Integrated in Hadoop-Mapreduce-trunk-Commit #1509 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1509/])
MAPREDUCE-3462. Fix Gridmix JUnit testcase failures. (Ravi Prakash and Ravi 
Gummadi via amarrk)

amarrk : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1227051
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestCompressionEmulationUtils.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestDistCacheEmulation.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestGridmixSubmission.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestSleepJob.java


 Job submission failing in JUnit tests
 -

 Key: MAPREDUCE-3462
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3462
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2, test
Affects Versions: 0.23.0
Reporter: Amar Kamat
Assignee: Ravi Prakash
Priority: Blocker
  Labels: junit, test
 Fix For: 0.23.1, 0.24.0

 Attachments: 3462.trunk.patch, MAPREDUCE-3462.branch-0.23.patch


 When I run JUnit tests (e.g. TestDistCacheEmulation, TestSleepJob and 
 TestCompressionEmulationUtils), I see job submission failing with the 
 following error:
 {noformat}
 java.lang.IllegalStateException: Variable substitution depth too large: 20 
 ${fs.default.name}
 at 
 org.apache.hadoop.conf.Configuration.substituteVars(Configuration.java:551)
 at org.apache.hadoop.conf.Configuration.get(Configuration.java:569)
 at 
 org.apache.hadoop.conf.Configuration.getStrings(Configuration.java:1020)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.populateTokenCache(JobSubmitter.java:564)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:353)
 at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1159)
 at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1156)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:1156)
 at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1176)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.launchGridmixJob(Gridmix.java:190)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.writeInputData(Gridmix.java:150)
 at org.apache.hadoop.mapred.gridmix.Gridmix.start(Gridmix.java:425)
 at org.apache.hadoop.mapred.gridmix.Gridmix.runJob(Gridmix.java:380)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.access$000(Gridmix.java:56)
 at org.apache.hadoop.mapred.gridmix.Gridmix$1.run(Gridmix.java:313)
 at org.apache.hadoop.mapred.gridmix.Gridmix$1.run(Gridmix.java:311)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.mapred.gridmix.Gridmix.run(Gridmix.java:311)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3462) Job submission failing in JUnit tests

2012-01-03 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13179317#comment-13179317
 ] 

Hudson commented on MAPREDUCE-3462:
---

Integrated in Hadoop-Mapreduce-0.23-Commit #344 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/344/])
MAPREDUCE-3462. Fix Gridmix JUnit testcase failures. (Ravi Prakash and Ravi 
Gummadi via amarrk)

amarrk : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1227052
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestCompressionEmulationUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestDistCacheEmulation.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestGridmixSubmission.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestSleepJob.java


 Job submission failing in JUnit tests
 -

 Key: MAPREDUCE-3462
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3462
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2, test
Affects Versions: 0.23.0
Reporter: Amar Kamat
Assignee: Ravi Prakash
Priority: Blocker
  Labels: junit, test
 Fix For: 0.23.1, 0.24.0

 Attachments: 3462.trunk.patch, MAPREDUCE-3462.branch-0.23.patch


 When I run JUnit tests (e.g. TestDistCacheEmulation, TestSleepJob and 
 TestCompressionEmulationUtils), I see job submission failing with the 
 following error:
 {noformat}
 java.lang.IllegalStateException: Variable substitution depth too large: 20 
 ${fs.default.name}
 at 
 org.apache.hadoop.conf.Configuration.substituteVars(Configuration.java:551)
 at org.apache.hadoop.conf.Configuration.get(Configuration.java:569)
 at 
 org.apache.hadoop.conf.Configuration.getStrings(Configuration.java:1020)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.populateTokenCache(JobSubmitter.java:564)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:353)
 at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1159)
 at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1156)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:1156)
 at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1176)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.launchGridmixJob(Gridmix.java:190)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.writeInputData(Gridmix.java:150)
 at org.apache.hadoop.mapred.gridmix.Gridmix.start(Gridmix.java:425)
 at org.apache.hadoop.mapred.gridmix.Gridmix.runJob(Gridmix.java:380)
 at 
 org.apache.hadoop.mapred.gridmix.Gridmix.access$000(Gridmix.java:56)
 at org.apache.hadoop.mapred.gridmix.Gridmix$1.run(Gridmix.java:313)
 at org.apache.hadoop.mapred.gridmix.Gridmix$1.run(Gridmix.java:311)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.mapred.gridmix.Gridmix.run(Gridmix.java:311)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3576) Hadoop Eclipse Plugin doesn't work in OS X Lion.

2012-01-03 Thread Will L (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Will L updated MAPREDUCE-3576:
--

Attachment: Screen Shot 2012-01-03 at 11.17.47 PM.png

 Hadoop Eclipse Plugin doesn't work in OS X Lion.
 

 Key: MAPREDUCE-3576
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3576
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/eclipse-plugin
Affects Versions: 0.20.203.0
 Environment: Mac OS X Lion
Reporter: Will L
Priority: Critical
 Fix For: 0.20.203.0

 Attachments: Screen Shot 2012-01-03 at 11.17.47 PM.png


 The Hadoop Eclipse plugin (versions 0.20.203 and 0.20.205) works on Mac OS X 
 Snow Leopard with Eclipse 3.7.1.
 On Lion, it gives an error when trying to connect to the DFS, saying "Failed 
 to login."
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3576) Hadoop Eclipse Plugin doesn't work in OS X Lion.

2012-01-03 Thread Will L (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13179324#comment-13179324
 ] 

Will L commented on MAPREDUCE-3576:
---

I have tried this under OS X Lion with 
Eclipse 3.7.1, using the following Hadoop versions:
* 1.0.0
* 0.20.203.0
* 0.20.205.0
* 0.21.0
* 0.23.0

I still get the same error. I am able to SSH into localhost without supplying a 
password.

 Hadoop Eclipse Plugin doesn't work in OS X Lion.
 

 Key: MAPREDUCE-3576
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3576
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/eclipse-plugin
Affects Versions: 0.20.203.0
 Environment: Mac OS X Lion
Reporter: Will L
Priority: Critical
 Fix For: 0.20.203.0

 Attachments: Screen Shot 2012-01-03 at 11.17.47 PM.png


 The Hadoop Eclipse plugin (versions 0.20.203 and 0.20.205) works on Mac OS X 
 Snow Leopard with Eclipse 3.7.1.
 On Lion, it gives an error when trying to connect to the DFS, saying "Failed 
 to login."
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira