[jira] Commented: (MAPREDUCE-1731) Process tree clean up suspended task tests.

2010-04-29 Thread Balaji Rajagopalan (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862564#action_12862564
 ] 

Balaji Rajagopalan commented on MAPREDUCE-1731:
---

If that is the case, please call out the dependencies. 

> Process tree clean up suspended task tests.
> ---
>
> Key: MAPREDUCE-1731
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1731
> Project: Hadoop Map/Reduce
>  Issue Type: Task
>  Components: test
>Reporter: Vinay Kumar Thota
>Assignee: Vinay Kumar Thota
> Attachments: suspendtask_1731.patch
>
>
> 1. Verify the process tree cleanup of suspended task and task should be 
> terminated after timeout.
> 2. Verify the process tree cleanup of suspended task and resume the task 
> before task timeout.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (MAPREDUCE-1747) Remove documentation for the 'unstable' job-acls feature

2010-04-29 Thread Vinod K V (JIRA)
Remove documentation for the 'unstable' job-acls feature


 Key: MAPREDUCE-1747
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1747
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.21.0
Reporter: Vinod K V
Assignee: Vinod K V
Priority: Blocker
 Fix For: 0.21.0


As discussed 
[here|https://issues.apache.org/jira/browse/MAPREDUCE-1604?focusedCommentId=12862151&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12862151]
 and 
[here|https://issues.apache.org/jira/browse/MAPREDUCE-1604?focusedCommentId=12860916&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12860916]
at MAPREDUCE-1604, the job-acls feature is currently unstable. Without 
MAPREDUCE-1664, job-acls are practically useless because of their problematic 
interactions with queue-acls. Removing them for 0.21 will relieve us both of 
these problems and of the burden of supporting backwards compatibility for the 
configuration options and for the soon-to-change semantics of the feature. 
This JIRA is about removing the documentation from 0.21 so that the completed 
feature can be added in 0.22 with ease.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1079) Split commands_manual.xml into common, mapreduce and hdfs parts

2010-04-29 Thread Vinod K V (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod K V updated MAPREDUCE-1079:
-

Priority: Major  (was: Blocker)

bq. But can you do that in another JIRA so that we can use this one for the 
split effort in future?
I created HADOOP-6740 for this. Tom, I took the liberty of assigning that issue 
to you. I could have done it myself but for the fact that I don't have commit 
access to common.

Downgrading this issue from Blocker to Major.

> Split commands_manual.xml into common, mapreduce and hdfs parts
> ---
>
> Key: MAPREDUCE-1079
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1079
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Vinod K V
> Fix For: 0.21.0
>
>


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1079) Split commands_manual.xml into common, mapreduce and hdfs parts

2010-04-29 Thread Vinod K V (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862556#action_12862556
 ] 

Vinod K V commented on MAPREDUCE-1079:
--

Tom, MAPREDUCE-1404 didn't talk about commands_manual.xml at all. But for the 
sake of getting the 0.21 release going, I guess we can move it to common for 
now. Long term, I still feel we should split things up.

So +1 for moving it from mapred into common for now. But can you do that in 
another JIRA so that we can use this one for the split effort in future? Thanks!

> Split commands_manual.xml into common, mapreduce and hdfs parts
> ---
>
> Key: MAPREDUCE-1079
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1079
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Vinod K V
>Priority: Blocker
> Fix For: 0.21.0
>
>


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1741) Automate the test scenario of job related files are moved from history directory to done directory

2010-04-29 Thread Vinay Kumar Thota (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862553#action_12862553
 ] 

Vinay Kumar Thota commented on MAPREDUCE-1741:
--

{noformat}
Test names are a bit confusing to me. Can you please change them to more 
appropriate names?
 Here is my suggestion.
testRetiredJobsCompletedLocation() - > testRetiredJobsHistoryLocation()
testRetiredMultipleJobsCompletedLocation() -> 
testRetiredMultipleJobsHistoryLocation()
{noformat}

{noformat}
String hadoopLogDirString = jconf.get("hadoop.log.dir");
jobHistoryDonePathString =
    jconf.get("mapred.job.tracker.history.completed.location");
String jobHistoryPathString = jconf.get("hadoop.job.history.location");

Can you please check the values returned by the above three statements with 
assert conditions? Otherwise, if any of these attributes is not available in 
the configuration, the test fails with an NPE and no message. So please verify 
the above three values before proceeding to further steps.
{noformat}
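
A minimal sketch of the suggested guard (reusing the variable names quoted 
above and JUnit's Assert) could look like this:

{code}
// Fail fast with a clear message if any of the three settings is missing,
// instead of letting a later dereference fail with an unexplained NPE.
String hadoopLogDirString = jconf.get("hadoop.log.dir");
String jobHistoryDonePathString =
    jconf.get("mapred.job.tracker.history.completed.location");
String jobHistoryPathString = jconf.get("hadoop.job.history.location");

Assert.assertNotNull("hadoop.log.dir is not set", hadoopLogDirString);
Assert.assertNotNull("mapred.job.tracker.history.completed.location is not set",
    jobHistoryDonePathString);
Assert.assertNotNull("hadoop.job.history.location is not set",
    jobHistoryPathString);
{code}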


{noformat}
Assert.assertTrue("jobFileFound is false", jobFileFound);

You are checking the above condition while the job is still running, right? In 
that case the above statement would be wrong, because jobFileFound should be 
false and the assertion should look like the one below.

Assert.assertFalse("Job history files available in history location for a 
running job.", jobFileFound);
{noformat}


How do you confirm whether the job has moved to the retired state after it 
completes? I don't see any specific condition in the code for job retirement. 
Is there any default interval available in the configuration for moving jobs 
to the retired state? 

{noformat}
Put an assert statement like the one below after completion of the job, so 
that the user will have clear information whenever the test fails.
Assert.assertTrue("Job history files are not available in history location 
after job retired.", jobFileFound);
{noformat}

I have the same question that Balaji asked regarding creating 100 files 
in the done location. Can you please elaborate more on this in your comments?

Can you please put the @Test annotation after the Javadoc comments instead of 
above them?
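
For example, with the test name suggested above, the requested ordering would be:

{code}
/**
 * Verifies that job history files are moved to the done location after
 * the job retires.
 */
@Test
public void testRetiredJobsHistoryLocation() throws Exception {
  // test body
}
{code}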




> Automate the test scenario of  job related files are moved from history 
> directory to done directory
> ---
>
> Key: MAPREDUCE-1741
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1741
> Project: Hadoop Map/Reduce
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Iyappan Srinivasan
> Fix For: 0.22.0
>
> Attachments: TestJobHistoryLocation.patch
>
>
> Job related files are moved from history directory to done directory, when
> 1) Job succeeds
> 2) Job is killed
> 3) When 100 files are put in the done directory
> 4) When multiple jobs are completed at the same time, some successful, some 
> failed.
> Also, two files, conf.xml and job files should be present in the done 
> directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1707) TaskRunner can get NPE in getting ugi from TaskTracker

2010-04-29 Thread Amareshwari Sriramadasu (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862547#action_12862547
 ] 

Amareshwari Sriramadasu commented on MAPREDUCE-1707:


The patch skips localizing the distributed cache and continues further if 
getRunningJob() returns null. I think it should not do any more processing; it 
should just return from there.

bq. Passing UGI all the way down to TaskRunner both looked weird and resulted 
in ugly code changes.
I still feel TaskRunner should not make a back call to TaskTracker. Can we pass 
the UGI as part of the Task object to make things simpler?
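
A rough sketch of that early return (method names follow the snippet quoted in 
this issue; they are illustrative, not verified against the current code):

{code}
// Hypothetical sketch only: bail out if the job was already purged,
// e.g. because a KillJobAction raced the task launch.
RunningJob rjob = tracker.getRunningJob(t.getJobID());
if (rjob == null) {
  LOG.info("Job " + t.getJobID() + " is no longer in runningJobs; skipping task launch");
  return;
}
UserGroupInformation ugi = rjob.getUGI();
{code}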

> TaskRunner can get NPE in getting ugi from TaskTracker
> --
>
> Key: MAPREDUCE-1707
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1707
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: tasktracker
>Affects Versions: 0.22.0
>Reporter: Amareshwari Sriramadasu
>Assignee: Vinod K V
> Fix For: 0.22.0
>
> Attachments: MAPREDUCE-1707-20100429.txt
>
>
> The following code in TaskRunner can get NPE in the scenario described below.
> {code}
>   UserGroupInformation ugi = 
> tracker.getRunningJob(t.getJobID()).getUGI();
> {code}
> The scenario:
> Tracker got a LaunchTaskAction; Task is localized and TaskRunner is started.
> Then Tracker got a KillJobAction; This would issue a kill for the task. But, 
> kill will be a no-op because the task did not actually start; The job is 
> removed from runningJobs. 
> Then if TaskRunner calls tracker.getRunningJob(t.getJobID()), it will be null.
> Instead of TaskRunner making a back call to the tasktracker to get the ugi 
> via tracker.getRunningJob(t.getJobID()).getUGI(), the ugi should be passed as 
> a parameter in the constructor of TaskRunner. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1742) Job.setNumReduceTasks doesn't work

2010-04-29 Thread Amareshwari Sriramadasu (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862536#action_12862536
 ] 

Amareshwari Sriramadasu commented on MAPREDUCE-1742:


I just ran some examples and I see the number of reduce tasks being picked up 
properly.

> Job.setNumReduceTasks doesn't work
> --
>
> Key: MAPREDUCE-1742
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1742
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: job submission
>Affects Versions: 0.22.0
> Environment: Hadoop 0.22.0-SNAPSHOT, latest version from trunk.
>Reporter: Danny Leshem
>Priority: Blocker
> Fix For: 0.21.0
>
>
> Calling Job.setNumReduceTasks(0) doesn't seem to work with the latest trunk, 
> and the job still goes through a reduction phase.
> Also, Job.setNumReduceTasks(1) doesn't seem to work either, and several 
> reducers are spawned.
> It seems that something about Job.setNumReduceTasks got broken recently.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1730) Automate test scenario for successful/killed jobs' memory is properly removed from jobtracker after these jobs retire.

2010-04-29 Thread Vinay Kumar Thota (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862534#action_12862534
 ] 

Vinay Kumar Thota commented on MAPREDUCE-1730:
--

Also, I don't see any assert conditions specific to the retired job in the code.

> Automate test scenario for successful/killed jobs' memory is properly removed 
> from jobtracker after these jobs retire.
> --
>
> Key: MAPREDUCE-1730
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1730
> Project: Hadoop Map/Reduce
>  Issue Type: Test
>Affects Versions: 0.22.0
>Reporter: Iyappan Srinivasan
> Fix For: 0.22.0
>
> Attachments: TestRetiredJobs.patch
>
>
> Automate, using the Herriot framework, the test scenario that successful/killed 
> jobs' memory is properly removed from the jobtracker after these jobs retire.
> This should test that when successful and failed jobs are retired, their 
> jobInProgress objects are removed properly.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1730) Automate test scenario for successful/killed jobs' memory is properly removed from jobtracker after these jobs retire.

2010-04-29 Thread Vinay Kumar Thota (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862533#action_12862533
 ] 

Vinay Kumar Thota commented on MAPREDUCE-1730:
--

{noformat}
  int count = 0;
  while (jInfo.getStatus().getRunState() != JobStatus.RUNNING) {
    UtilsForTests.waitFor(1);
    count++;
    jInfo = wovenClient.getJobInfo(jobID);
    // If the count goes more than 100 seconds, then fail; this is to
    // avoid an infinite loop
    if (count > 10) {
      Assert.fail("Since the job has not started even after" +
          " 100 seconds, failing at this point");
    }
  }

For the above functionality you can use the statement below.

cluster.isJobStarted(jobID);  -> this method is available in MRCluster.
{noformat} 

{noformat} 
  JobInfo jobInfo = cluster.getJTClient().getProxy()
      .getJobInfo(jobID);

  Assert.assertNotNull("The Job information is not present ", jobInfo);

Why are you checking the jobInfo a second time? Is there any reason behind it?

You are already checking at the top whether the jobInfo is null or not. In that 
case, why do you need to check it a second time?
{noformat} 

{noformat} 
Assert.assertNull("The Job information is not present ", jobInfo);

The message in the Assert statement might be wrong: after completion of the job 
the information should be null, right? So the assertNull condition you are 
checking is correct, but the message is not.

If the jobInfo is still available after completion of the job, the assertion 
should fail and show a message saying 'Job information is still available 
after completion of job'.
{noformat} 
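
In other words, something along these lines, using the message suggested above:

{code}
Assert.assertNull("Job information is still available after completion of job.",
    jobInfo);
{code}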

> Automate test scenario for successful/killed jobs' memory is properly removed 
> from jobtracker after these jobs retire.
> --
>
> Key: MAPREDUCE-1730
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1730
> Project: Hadoop Map/Reduce
>  Issue Type: Test
>Affects Versions: 0.22.0
>Reporter: Iyappan Srinivasan
> Fix For: 0.22.0
>
> Attachments: TestRetiredJobs.patch
>
>
> Automate, using the Herriot framework, the test scenario that successful/killed 
> jobs' memory is properly removed from the jobtracker after these jobs retire.
> This should test that when successful and failed jobs are retired, their 
> jobInProgress objects are removed properly.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1604) Job acls should be documented in forrest.

2010-04-29 Thread Hemanth Yamijala (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862526#action_12862526
 ] 

Hemanth Yamijala commented on MAPREDUCE-1604:
-

No, I don't object. If you feel that's the right call and that it will 
simplify life in the future, please go ahead.

> Job acls should be documented in forrest.
> -
>
> Key: MAPREDUCE-1604
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1604
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: documentation, security
>Affects Versions: 0.22.0
>Reporter: Amareshwari Sriramadasu
>Assignee: Amareshwari Sriramadasu
> Fix For: 0.22.0
>
> Attachments: patch-1604-1.txt, patch-1604-ydist.txt, patch-1604.txt
>
>
> Job acls introduced in MAPREDUCE-1307 should be documented in forrest.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1742) Job.setNumReduceTasks doesn't work

2010-04-29 Thread Amareshwari Sriramadasu (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862525#action_12862525
 ] 

Amareshwari Sriramadasu commented on MAPREDUCE-1742:


Danny, can you give more details on how you are hitting this? 
I strongly feel this is not broken, because there are many unit tests and 
examples calling this API and none of them fail.
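
For reference, the call under discussion is simply the following (a fragment 
using the new org.apache.hadoop.mapreduce.Job API; job setup and exception 
handling omitted):

{code}
// A map-only job: with zero reduce tasks there should be no reduce phase at all.
Job job = new Job(new Configuration(), "map-only-example");
job.setNumReduceTasks(0);
{code}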

> Job.setNumReduceTasks doesn't work
> --
>
> Key: MAPREDUCE-1742
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1742
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: job submission
>Affects Versions: 0.22.0
> Environment: Hadoop 0.22.0-SNAPSHOT, latest version from trunk.
>Reporter: Danny Leshem
>Priority: Blocker
> Fix For: 0.21.0
>
>
> Calling Job.setNumReduceTasks(0) doesn't seem to work with the latest trunk, 
> and the job still goes through a reduction phase.
> Also, Job.setNumReduceTasks(1) doesn't seem to work either, and several 
> reducers are spawned.
> It seems that something about Job.setNumReduceTasks got broken recently.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1680) Add a metrics to track the number of heartbeats processed

2010-04-29 Thread Amareshwari Sriramadasu (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862524#action_12862524
 ] 

Amareshwari Sriramadasu commented on MAPREDUCE-1680:


+1 patch looks fine to me.

> Add a metrics to track the number of heartbeats processed
> -
>
> Key: MAPREDUCE-1680
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1680
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>Reporter: Hong Tang
>Assignee: Dick King
> Fix For: 0.22.0
>
> Attachments: mapreduce-1680--2010-04-07.patch, 
> mapreduce-1680--2010-04-08-for-trunk.patch, mapreduce-1680--2010-04-08.patch, 
> mapreduce-1680--2010-04-29.patch
>
>
> It would be nice to add a metrics that tracks the number of heartbeats 
> processed by JT.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1746) DistributedCache add(File|Archive)ToClassPath should only use FileSystem if path is not fully qualified

2010-04-29 Thread Amareshwari Sriramadasu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amareshwari Sriramadasu updated MAPREDUCE-1746:
---

Component/s: distributed-cache

> DistributedCache add(File|Archive)ToClassPath should only use FileSystem if 
> path is not fully qualified
> ---
>
> Key: MAPREDUCE-1746
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1746
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: distributed-cache
>Reporter: Alejandro Abdelnur
>
> Currently setting a file/archive in the DistributedCache classpath creates a 
> FileSystem instance to fully qualify the path, even if the path given is 
> already fully qualified.
> This forces a connection to the cluster.
> The methods should check if the path is already fully qualified and if so it 
> should not create a FileSystem instance and try to qualify the path.
> This would allow creating a jobconf in disconnected mode until submission 
> time.
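
Illustratively, the check could look something like this (a sketch only, not 
the actual DistributedCache code; pathString and conf are assumed to be in 
scope):

{code}
// Only touch a FileSystem (and hence the cluster) when the path still needs
// qualification, i.e. its URI has no scheme.
Path path = new Path(pathString);
if (path.toUri().getScheme() == null) {
  FileSystem fs = path.getFileSystem(conf);   // may contact the cluster
  path = fs.makeQualified(path);
}
{code}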

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Moved: (MAPREDUCE-1746) DistributedCache add(File|Archive)ToClassPath should only use FileSystem if path is not fully qualified

2010-04-29 Thread Amareshwari Sriramadasu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amareshwari Sriramadasu moved HADOOP-6739 to MAPREDUCE-1746:


Project: Hadoop Map/Reduce  (was: Hadoop Common)
Key: MAPREDUCE-1746  (was: HADOOP-6739)

> DistributedCache add(File|Archive)ToClassPath should only use FileSystem if 
> path is not fully qualified
> ---
>
> Key: MAPREDUCE-1746
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1746
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Alejandro Abdelnur
>
> Currently setting a file/archive in the DistributedCache classpath creates a 
> FileSystem instance to fully qualify the path, even if the path given is 
> already fully qualified.
> This forces a connection to the cluster.
> The methods should check if the path is already fully qualified and if so it 
> should not create a FileSystem instance and try to qualify the path.
> This would allow creating a jobconf in disconnected mode until submission 
> time.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1743) conf.get("map.input.file") returns null when using MultipleInputs in Hadoop 0.20

2010-04-29 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862485#action_12862485
 ] 

Tom White commented on MAPREDUCE-1743:
--

> One solution is to let TaggedInputSplit extend FileSplit.

Except that MultipleInputs works with any InputFormat, not just 
FileInputFormat. Another way of doing this would be to invert the logic so that 
the InputSplit updates properties on Configuration (this would need a new 
method on InputSplit).
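
Purely as an illustration of that inversion (no such method exists on 
InputSplit today; the class and field names here are hypothetical):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.InputSplit;

// Hypothetical sketch: the split publishes whatever properties it knows about
// onto the conf, so a wrapping split can delegate to the split it wraps.
class TaggedInputSplitSketch {
  private InputSplit inputSplit;  // the wrapped split

  public void updateConfiguration(Configuration conf) {
    if (inputSplit instanceof FileSplit) {
      conf.set("map.input.file", ((FileSplit) inputSplit).getPath().toString());
    }
  }
}
{code}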

> conf.get("map.input.file") returns null when using MultipleInputs in Hadoop 
> 0.20
> 
>
> Key: MAPREDUCE-1743
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1743
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 0.20.2
>Reporter: Yuanyuan Tian
>
> There is a problem in getting the input file name in the mapper when using 
> MultipleInputs in Hadoop 0.20. I need to use MultipleInputs to support 
> different formats for my inputs to my MapReduce job. And inside each 
> mapper, I also need to know the exact input file that the mapper is 
> processing. However, conf.get("map.input.file") returns null. Can anybody 
> help me solve this problem? Thanks in advance.
> public class Test extends Configured implements Tool {
>   static class InnerMapper extends MapReduceBase implements Mapper {
>
>     public void configure(JobConf conf) {
>       String inputName = conf.get("map.input.file");
>       ...
>     }
>   }
>
>   public int run(String[] arg0) throws Exception {
>     JobConf job;
>     job = new JobConf(Test.class);
>     ...
>
>     MultipleInputs.addInputPath(job, new Path("A"), TextInputFormat.class);
>     MultipleInputs.addInputPath(job, new Path("B"), SequenceFileInputFormat.class);
>     ...
>   }
> }

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1067) Default state of queues is undefined when unspecified

2010-04-29 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862474#action_12862474
 ] 

Tom White commented on MAPREDUCE-1067:
--

I suspect this may not be a blocker.

> Default state of queues is undefined when unspecified
> -
>
> Key: MAPREDUCE-1067
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1067
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobtracker
>Affects Versions: 0.21.0
>Reporter: V.V.Chaitanya Krishna
>Assignee: V.V.Chaitanya Krishna
>Priority: Blocker
> Fix For: 0.21.0
>
> Attachments: MAPREDUCE-1067-1.patch, MAPREDUCE-1067-2.patch, 
> MAPREDUCE-1067-3.patch, MAPREDUCE-1067-4.patch, MAPREDUCE-1067-5.patch, 
> MAPREDUCE-1067-6.patch
>
>
> Currently, if the state of a queue is not specified, it is being set to 
> "undefined" state instead of running state.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1079) Split commands_manual.xml into common, mapreduce and hdfs parts

2010-04-29 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862468#action_12862468
 ] 

Tom White commented on MAPREDUCE-1079:
--

Following the suggestion in MAPREDUCE-1404, we might just move 
commands_manual.xml into common.

> Split commands_manual.xml into common, mapreduce and hdfs parts
> ---
>
> Key: MAPREDUCE-1079
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1079
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Vinod K V
>Priority: Blocker
> Fix For: 0.21.0
>
>


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (MAPREDUCE-1745) FileInputFormat.setInputPaths() has sideffects on the passed conf besides setting the input.dir path

2010-04-29 Thread Dick King (JIRA)
FileInputFormat.setInputPaths() has sideffects on the passed conf besides 
setting the input.dir path


 Key: MAPREDUCE-1745
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1745
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Dick King


It sets other properties, like {{mapred.working.dir}}, and it sets this using 
the current user (which may be the superuser); the job submission for a proxy 
user then fails because the {{mapred.working.dir}} is not the right one for 
the proxy user.

This is observed when using relative directories: they are resolved to the 
home directory of the superuser instead of the proxy user.

I did not check, but I suspect {{FileOutputFormat.setOutputPath()}} may 
have similar side effects.

There is a workaround: set {{mapred.input.dir}} and {{mapred.output.dir}} 
directly by hand in the conf instead of using the methods above.
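
A sketch of that workaround (inputDir and outputDir are placeholders):

{code}
// Set the input/output directories by hand so the rest of the conf
// (e.g. mapred.working.dir) is left untouched.
conf.set("mapred.input.dir", inputDir.toString());
conf.set("mapred.output.dir", outputDir.toString());
{code}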

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1568) TrackerDistributedCacheManager should clean up cache in a background thread

2010-04-29 Thread Zheng Shao (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Shao updated MAPREDUCE-1568:
--

  Status: Resolved  (was: Patch Available)
Hadoop Flags: [Reviewed]
Release Note: MAPREDUCE-1568. TrackerDistributedCacheManager should clean 
up cache in a background thread. (Scott Chen via zshao)
  Resolution: Fixed

Committed. Thanks Scott!

> TrackerDistributedCacheManager should clean up cache in a background thread
> ---
>
> Key: MAPREDUCE-1568
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1568
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Affects Versions: 0.22.0
>Reporter: Scott Chen
>Assignee: Scott Chen
> Fix For: 0.22.0
>
> Attachments: MAPREDUCE-1568-v2.1.txt, MAPREDUCE-1568-v2.txt, 
> MAPREDUCE-1568-v3.1.txt, MAPREDUCE-1568-v3.txt, MAPREDUCE-1568.txt
>
>
> Right now the TrackerDistributedCacheManager does the clean up with the 
> following code path:
> {code}
> TaskRunner.run() -> 
> TrackerDistributedCacheManager.setup() ->
> TrackerDistributedCacheManager.getLocalCache() -> 
> TrackerDistributedCacheManager.deleteCache()
> {code}
> The deletion of the cache files can take a long time and it should not be 
> done by a task. We suggest that there should be a separate thread checking 
> and cleaning up the cache files.
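
A generic illustration of the suggested approach (not the committed patch; the 
interval and cleanup hook are placeholders):

{code}
// A daemon thread periodically deletes stale cache entries, so TaskRunner.run()
// no longer blocks on deleteCache().
Thread cleanupThread = new Thread(new Runnable() {
  public void run() {
    while (!Thread.currentThread().isInterrupted()) {
      try {
        Thread.sleep(cleanUpCheckPeriod);    // placeholder interval
        deleteStaleCacheEntries();           // placeholder cleanup hook
      } catch (InterruptedException ie) {
        return;
      }
    }
  }
}, "distributed-cache-cleanup");
cleanupThread.setDaemon(true);
cleanupThread.start();
{code}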

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1535) Replace usage of FileStatus#isDir()

2010-04-29 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated MAPREDUCE-1535:
---

Fix Version/s: 0.21.0
Affects Version/s: 0.21.0
 Priority: Blocker  (was: Major)

> Replace usage of FileStatus#isDir()
> ---
>
> Key: MAPREDUCE-1535
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1535
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Affects Versions: 0.21.0, 0.22.0
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Blocker
> Fix For: 0.21.0, 0.22.0
>
> Attachments: mapreduce-1535-1.patch
>
>
> HADOOP-6585 will deprecate FileStatus#isDir(). This jira is for replacing all 
> uses of isDir() in MR with checks of isDirectory() or isFile() as needed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (MAPREDUCE-1744) DistributedCache creates its own FileSytem instance when adding a file/archive to the path

2010-04-29 Thread Dick King (JIRA)
DistributedCache creates its own FileSytem instance when adding a file/archive 
to the path
--

 Key: MAPREDUCE-1744
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1744
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Dick King


According to the contract of {{UserGroupInformation.doAs()}}, the only required 
operations within the {{doAs()}} block are the creation of a {{JobClient}} or 
getting a {{FileSystem}}.

The {{DistributedCache.add(File/Archive)ToClasspath()}} methods create a 
{{FileSystem}} instance outside of the {{doAs()}} block; this {{FileSystem}} 
instance is in the scope of the superuser rather than the proxy user, and 
permissions may make the method fail.

One option is to overload the methods above to receive a FileSystem.

Another option is to obtain the {{FileSystem}} within a {{doAs()}} block; for 
this, the proxy user would need to be set in the passed configuration.

The second option seems nicer, but I don't know whether the proxy user is 
available as a property in the jobconf.
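
A rough sketch of the second option (imports and exception handling omitted; 
the property name used to look up the proxy user is hypothetical):

{code}
// Obtain the FileSystem as the proxy user, so later permission checks are
// done against the proxy user rather than the superuser.
String proxyUser = conf.get("hypothetical.proxy.user.property");
UserGroupInformation proxyUgi = UserGroupInformation.createProxyUser(
    proxyUser, UserGroupInformation.getLoginUser());
FileSystem fs = proxyUgi.doAs(new PrivilegedExceptionAction<FileSystem>() {
  public FileSystem run() throws IOException {
    return FileSystem.get(conf);    // conf must be final or effectively final here
  }
});
{code}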

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-220) Collecting cpu and memory usage for MapReduce tasks

2010-04-29 Thread Scott Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Chen updated MAPREDUCE-220:
-

Status: Patch Available  (was: Open)

> Collecting cpu and memory usage for MapReduce tasks
> ---
>
> Key: MAPREDUCE-220
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-220
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>Reporter: Hong Tang
>Assignee: Scott Chen
> Attachments: MAPREDUCE-220-v1.txt, MAPREDUCE-220.txt
>
>
> It would be nice for TaskTracker to collect cpu and memory usage for 
> individual Map or Reduce tasks over time.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-220) Collecting cpu and memory usage for MapReduce tasks

2010-04-29 Thread Scott Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862438#action_12862438
 ] 

Scott Chen commented on MAPREDUCE-220:
--

I reran the failed contrib test TestSimulatorDeterministicReplay. It succeeded 
on my dev box.

{code}
 b/c/m/t/TEST-org.apache.hadoop.mapred.TestSimulatorDeterministicReplay.txt 

Testsuite: org.apache.hadoop.mapred.TestSimulatorDeterministicReplay
Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 31.101 sec
- Standard Output ---
Job job_200904211745_0002 is submitted at 103010
Job job_200904211745_0002 completed at 141990 with status: SUCCEEDED runtime: 
38980
Job job_200904211745_0003 is submitted at 984078
Job job_200904211745_0004 is submitted at 993516
Job job_200904211745_0003 completed at 1011051 with status: SUCCEEDED runtime: 
26973
Job job_200904211745_0005 is submitted at 1033963
Done, total events processed: 595469
Job job_200904211745_0002 is submitted at 103010
Job job_200904211745_0002 completed at 141990 with status: SUCCEEDED runtime: 
38980
Job job_200904211745_0003 is submitted at 984078
Job job_200904211745_0004 is submitted at 993516
Job job_200904211745_0003 completed at 1011051 with status: SUCCEEDED runtime: 
26973
Job job_200904211745_0005 is submitted at 1033963
Done, total events processed: 595469
-  ---
{code}


> Collecting cpu and memory usage for MapReduce tasks
> ---
>
> Key: MAPREDUCE-220
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-220
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>Reporter: Hong Tang
>Assignee: Scott Chen
> Attachments: MAPREDUCE-220-v1.txt, MAPREDUCE-220.txt
>
>
> It would be nice for TaskTracker to collect cpu and memory usage for 
> individual Map or Reduce tasks over time.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1680) Add a metrics to track the number of heartbeats processed

2010-04-29 Thread Dick King (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dick King updated MAPREDUCE-1680:
-

Status: Patch Available  (was: Open)

> Add a metrics to track the number of heartbeats processed
> -
>
> Key: MAPREDUCE-1680
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1680
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>Reporter: Hong Tang
>Assignee: Dick King
> Fix For: 0.22.0
>
> Attachments: mapreduce-1680--2010-04-07.patch, 
> mapreduce-1680--2010-04-08-for-trunk.patch, mapreduce-1680--2010-04-08.patch, 
> mapreduce-1680--2010-04-29.patch
>
>
> It would be nice to add a metrics that tracks the number of heartbeats 
> processed by JT.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1680) Add a metrics to track the number of heartbeats processed

2010-04-29 Thread Dick King (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dick King updated MAPREDUCE-1680:
-

Attachment: mapreduce-1680--2010-04-29.patch

OK, this adds a clause to one of the test cases in 
{{TestJobTrackerInstrumentation}}, for heartbeats.


> Add a metrics to track the number of heartbeats processed
> -
>
> Key: MAPREDUCE-1680
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1680
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>Reporter: Hong Tang
>Assignee: Dick King
> Fix For: 0.22.0
>
> Attachments: mapreduce-1680--2010-04-07.patch, 
> mapreduce-1680--2010-04-08-for-trunk.patch, mapreduce-1680--2010-04-08.patch, 
> mapreduce-1680--2010-04-29.patch
>
>
> It would be nice to add a metrics that tracks the number of heartbeats 
> processed by JT.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1680) Add a metrics to track the number of heartbeats processed

2010-04-29 Thread Dick King (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dick King updated MAPREDUCE-1680:
-

Status: Open  (was: Patch Available)

> Add a metrics to track the number of heartbeats processed
> -
>
> Key: MAPREDUCE-1680
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1680
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>Reporter: Hong Tang
>Assignee: Dick King
> Fix For: 0.22.0
>
> Attachments: mapreduce-1680--2010-04-07.patch, 
> mapreduce-1680--2010-04-08-for-trunk.patch, mapreduce-1680--2010-04-08.patch
>
>
> It would be nice to add a metrics that tracks the number of heartbeats 
> processed by JT.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-220) Collecting cpu and memory usage for MapReduce tasks

2010-04-29 Thread Scott Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Chen updated MAPREDUCE-220:
-

Attachment: MAPREDUCE-220-v1.txt

The problem found by FindBugs is because I made the method resourceUpdate() 
synchronized. This is unnecessary, so I have removed it.

> Collecting cpu and memory usage for MapReduce tasks
> ---
>
> Key: MAPREDUCE-220
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-220
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>Reporter: Hong Tang
>Assignee: Scott Chen
> Attachments: MAPREDUCE-220-v1.txt, MAPREDUCE-220.txt
>
>
> It would be nice for TaskTracker to collect cpu and memory usage for 
> individual Map or Reduce tasks over time.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1743) conf.get("map.input.file") returns null when using MultipleInputs in Hadoop 0.20

2010-04-29 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862408#action_12862408
 ] 

Ted Yu commented on MAPREDUCE-1743:
---

There is this check in updateJobWithSplit():
if (inputSplit instanceof FileSplit) {

Since TaggedInputSplit isn't an instance of FileSplit, "map.input.file" isn't 
set by updateJobWithSplit().

According to the usage in DelegatingInputFormat.getSplits():
 splits.add(new TaggedInputSplit(pathSplit, conf, format.getClass(),
 mapperClass));
One solution is to let TaggedInputSplit extend FileSplit.


> conf.get("map.input.file") returns null when using MultipleInputs in Hadoop 
> 0.20
> 
>
> Key: MAPREDUCE-1743
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1743
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 0.20.2
>Reporter: Yuanyuan Tian
>
> There is a problem in getting the input file name in the mapper when using 
> MultipleInputs in Hadoop 0.20. I need to use MultipleInputs to support 
> different formats for my inputs to my MapReduce job. And inside each 
> mapper, I also need to know the exact input file that the mapper is 
> processing. However, conf.get("map.input.file") returns null. Can anybody 
> help me solve this problem? Thanks in advance.
> public class Test extends Configured implements Tool {
>   static class InnerMapper extends MapReduceBase implements Mapper {
>
>     public void configure(JobConf conf) {
>       String inputName = conf.get("map.input.file");
>       ...
>     }
>   }
>
>   public int run(String[] arg0) throws Exception {
>     JobConf job;
>     job = new JobConf(Test.class);
>     ...
>
>     MultipleInputs.addInputPath(job, new Path("A"), TextInputFormat.class);
>     MultipleInputs.addInputPath(job, new Path("B"), SequenceFileInputFormat.class);
>     ...
>   }
> }

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-220) Collecting cpu and memory usage for MapReduce tasks

2010-04-29 Thread Scott Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Chen updated MAPREDUCE-220:
-

Status: Open  (was: Patch Available)

> Collecting cpu and memory usage for MapReduce tasks
> ---
>
> Key: MAPREDUCE-220
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-220
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>Reporter: Hong Tang
>Assignee: Scott Chen
> Attachments: MAPREDUCE-220.txt
>
>
> It would be nice for TaskTracker to collect cpu and memory usage for 
> individual Map or Reduce tasks over time.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1568) TrackerDistributedCacheManager should clean up cache in a background thread

2010-04-29 Thread Scott Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862323#action_12862323
 ] 

Scott Chen commented on MAPREDUCE-1568:
---

And thank you very much for helping me on the patch, Amareshwari.
I think the quality of the patch has improved a lot compared to the 
original one.

> TrackerDistributedCacheManager should clean up cache in a background thread
> ---
>
> Key: MAPREDUCE-1568
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1568
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Affects Versions: 0.22.0
>Reporter: Scott Chen
>Assignee: Scott Chen
> Fix For: 0.22.0
>
> Attachments: MAPREDUCE-1568-v2.1.txt, MAPREDUCE-1568-v2.txt, 
> MAPREDUCE-1568-v3.1.txt, MAPREDUCE-1568-v3.txt, MAPREDUCE-1568.txt
>
>
> Right now the TrackerDistributedCacheManager does the clean up with the 
> following code path:
> {code}
> TaskRunner.run() -> 
> TrackerDistributedCacheManager.setup() ->
> TrackerDistributedCacheManager.getLocalCache() -> 
> TrackerDistributedCacheManager.deleteCache()
> {code}
> The deletion of the cache files can take a long time and it should not be 
> done by a task. We suggest that there should be a separate thread checking 
> and cleaning up the cache files.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1568) TrackerDistributedCacheManager should clean up cache in a background thread

2010-04-29 Thread Scott Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862318#action_12862318
 ] 

Scott Chen commented on MAPREDUCE-1568:
---

{quote}
Sorry, I meant all Linux task controller tests here and by passing 
taskcontroller-path and taskcontroller-ugi. I understand that running these 
tests is difficult until MAPREDUCE-1429 is fixed. 
I ran all linux task controller tests with the patch (both as tt user and some 
other user), all tests passed.
{quote}
Sorry, I did not know about this. Thanks for the help.

I will ask Zheng to see if he can commit this patch.

> TrackerDistributedCacheManager should clean up cache in a background thread
> ---
>
> Key: MAPREDUCE-1568
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1568
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Affects Versions: 0.22.0
>Reporter: Scott Chen
>Assignee: Scott Chen
> Fix For: 0.22.0
>
> Attachments: MAPREDUCE-1568-v2.1.txt, MAPREDUCE-1568-v2.txt, 
> MAPREDUCE-1568-v3.1.txt, MAPREDUCE-1568-v3.txt, MAPREDUCE-1568.txt
>
>
> Right now the TrackerDistributedCacheManager does the clean up with the 
> following code path:
> {code}
> TaskRunner.run() -> 
> TrackerDistributedCacheManager.setup() ->
> TrackerDistributedCacheManager.getLocalCache() -> 
> TrackerDistributedCacheManager.deleteCache()
> {code}
> The deletion of the cache files can take a long time and it should not be 
> done by a task. We suggest that there should be a separate thread checking 
> and cleaning up the cache files.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1731) Process tree clean up suspended task tests.

2010-04-29 Thread Vinay Kumar Thota (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862313#action_12862313
 ] 

Vinay Kumar Thota commented on MAPREDUCE-1731:
--

I am maintaining a separate ticket for that functionality.

> Process tree clean up suspended task tests.
> ---
>
> Key: MAPREDUCE-1731
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1731
> Project: Hadoop Map/Reduce
>  Issue Type: Task
>  Components: test
>Reporter: Vinay Kumar Thota
>Assignee: Vinay Kumar Thota
> Attachments: suspendtask_1731.patch
>
>
> 1. Verify the process tree cleanup of suspended task and task should be 
> terminated after timeout.
> 2. Verify the process tree cleanup of suspended task and resume the task 
> before task timeout.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1546) Jobtracker JSP pages should automatically redirect to the corresponding history page if not in memory

2010-04-29 Thread Scott Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862304#action_12862304
 ] 

Scott Chen commented on MAPREDUCE-1546:
---

Thanks for the information, Ravi. I will work on TestJobRetire and also take a 
look at TestWebUIAuthorization.

> Jobtracker JSP pages should automatically redirect to the corresponding 
> history page if not in memory
> -
>
> Key: MAPREDUCE-1546
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1546
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Affects Versions: 0.22.0
>Reporter: Scott Chen
>Assignee: Scott Chen
>Priority: Minor
> Fix For: 0.22.0
>
> Attachments: MAPREDUCE-1546-v2.txt, MAPREDUCE-1546.txt
>
>
> MAPREDUCE-1185 redirects jobdetails.jsp to it's corresponding history page.
> For convenience, we should also redirect the following JSP pages to the 
> corresponding history pages:
> jobconf.jsp
> jobtasks.jsp
> taskdetails.jsp
> taskstats.jsp

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1546) Jobtracker JSP pages should automatically redirect to the corresponding history page if not in memory

2010-04-29 Thread Ravi Gummadi (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862299#action_12862299
 ] 

Ravi Gummadi commented on MAPREDUCE-1546:
-

TestWebUIAuthorization does have tests for almost all JSPs, but mostly 
concentrates on whether authorization works as expected. Anyway, TestJobRetire 
seems to be a good enough starting point for the tests for this JIRA.

> Jobtracker JSP pages should automatically redirect to the corresponding 
> history page if not in memory
> -
>
> Key: MAPREDUCE-1546
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1546
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Affects Versions: 0.22.0
>Reporter: Scott Chen
>Assignee: Scott Chen
>Priority: Minor
> Fix For: 0.22.0
>
> Attachments: MAPREDUCE-1546-v2.txt, MAPREDUCE-1546.txt
>
>
> MAPREDUCE-1185 redirects jobdetails.jsp to it's corresponding history page.
> For convenience, we should also redirect the following JSP pages to the 
> corresponding history pages:
> jobconf.jsp
> jobtasks.jsp
> taskdetails.jsp
> taskstats.jsp

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1546) Jobtracker JSP pages should automatically redirect to the corresponding history page if not in memory

2010-04-29 Thread Scott Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862285#action_12862285
 ] 

Scott Chen commented on MAPREDUCE-1546:
---

bq. Can you add tests for redirection of urls? You can look at TestJobRetire, 
it already has test for jobdetails.jsp
I see. I thought there was no way to test JSPs. I will look at TestJobRetire 
and add some tests there.

> Jobtracker JSP pages should automatically redirect to the corresponding 
> history page if not in memory
> -
>
> Key: MAPREDUCE-1546
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1546
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Affects Versions: 0.22.0
>Reporter: Scott Chen
>Assignee: Scott Chen
>Priority: Minor
> Fix For: 0.22.0
>
> Attachments: MAPREDUCE-1546-v2.txt, MAPREDUCE-1546.txt
>
>
> MAPREDUCE-1185 redirects jobdetails.jsp to it's corresponding history page.
> For convenience, we should also redirect the following JSP pages to the 
> corresponding history pages:
> jobconf.jsp
> jobtasks.jsp
> taskdetails.jsp
> taskstats.jsp

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (MAPREDUCE-1743) conf.get("map.input.file") returns null when using MultipleInputs in Hadoop 0.20

2010-04-29 Thread Yuanyuan Tian (JIRA)
conf.get("map.input.file") returns null when using MultipleInputs in Hadoop 0.20


 Key: MAPREDUCE-1743
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1743
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.20.2
Reporter: Yuanyuan Tian


There is a problem in getting the input file name in the mapper when using 
MultipleInputs in Hadoop 0.20. I need to use MultipleInputs to support 
different formats for my inputs to my MapReduce job. And inside each 
mapper, I also need to know the exact input file that the mapper is 
processing. However, conf.get("map.input.file") returns null. Can anybody 
help me solve this problem? Thanks in advance.

public class Test extends Configured implements Tool {
  static class InnerMapper extends MapReduceBase implements Mapper {

    public void configure(JobConf conf) {
      String inputName = conf.get("map.input.file");
      ...
    }
  }

  public int run(String[] arg0) throws Exception {
    JobConf job;
    job = new JobConf(Test.class);
    ...

    MultipleInputs.addInputPath(job, new Path("A"), TextInputFormat.class);
    MultipleInputs.addInputPath(job, new Path("B"), SequenceFileInputFormat.class);
    ...
  }
}


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1710) Process tree clean up of exceeding memory limit tasks.

2010-04-29 Thread Vinay Kumar Thota (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay Kumar Thota updated MAPREDUCE-1710:
-

Attachment: memorylimittask_1710.patch

New patch for the tests due to pushconfig utility wrapper changes.

> Process tree clean up of exceeding memory limit tasks.
> --
>
> Key: MAPREDUCE-1710
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1710
> Project: Hadoop Map/Reduce
>  Issue Type: Task
>  Components: test
>Reporter: Vinay Kumar Thota
>Assignee: Vinay Kumar Thota
> Attachments: memorylimittask_1710.patch, memorylimittask_1710.patch, 
> memorylimittask_1710.patch, memorylimittask_1710.patch, 
> memorylimittask_1710.patch
>
>
> 1. Submit a job which would spawn child processes and each of the child 
> processes exceeds the memory limits. Let the job complete . Check if all the 
> child processes are killed, the overall job should fail.
> 2. Submit a job which would spawn child processes and each of the child 
> processes exceeds the memory limits. Kill/fail the job while in progress. 
> Check if all the child processes are killed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1693) Process tree clean up of either a failed task or killed task tests.

2010-04-29 Thread Vinay Kumar Thota (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay Kumar Thota updated MAPREDUCE-1693:
-

Attachment: taskchildskilling_1693.patch

New patch for the tests due to changes in pushconfig utility wrapper.

> Process tree clean up of either a failed task or killed task tests.
> ---
>
> Key: MAPREDUCE-1693
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1693
> Project: Hadoop Map/Reduce
>  Issue Type: Task
>  Components: test
>Reporter: Vinay Kumar Thota
>Assignee: Vinay Kumar Thota
> Attachments: taskchildskilling_1693.diff, 
> taskchildskilling_1693.diff, taskchildskilling_1693.patch, 
> taskchildskilling_1693.patch, taskchildskilling_1693.patch, 
> taskchildskilling_1693.patch, taskchildskilling_1693.patch, 
> taskchildskilling_1693.patch
>
>
> The following scenarios covered in the test.
> 1. Run a job which spawns subshells in the tasks. Kill one of the task. All 
> the child process of the killed task must be killed.
> 2. Run a job which spawns subshells in tasks. Fail one of the task. All the 
> child process of the killed task must be killed along with the task after its 
> failure.
> 3. Check process tree cleanup on a particular task-tracker when we use 
> -kill-task and -fail-task with both map and reduce.
> 4. Submit a job which would spawn child processes and each of the child 
> processes exceeds the memory limits. Let the job complete . Check if all the 
> child processes are killed, the overall job should fail.
> 5. Submit a job which would spawn child processes and each of the child 
> processes exceeds the memory limits. Kill/fail the job while in progress. 
> Check if all the child processes are killed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1742) Job.setNumReduceTasks doesn't work

2010-04-29 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated MAPREDUCE-1742:
-

Fix Version/s: 0.21.0
 Priority: Blocker  (was: Major)

I think this is a blocker.

> Job.setNumReduceTasks doesn't work
> --
>
> Key: MAPREDUCE-1742
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1742
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: job submission
>Affects Versions: 0.22.0
> Environment: Hadoop 0.22.0-SNAPSHOT, latest version from trunk.
>Reporter: Danny Leshem
>Priority: Blocker
> Fix For: 0.21.0
>
>
> Calling Job.setNumReduceTasks(0) doesn't seem to work with the latest trunk, 
> and the job still goes through a reduction phase.
> Also, Job.setNumReduceTasks(1) doesn't seem to work either, and several 
> reducers are spawned.
> It seems that something about Job.setNumReduceTasks got broken recently.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (MAPREDUCE-1742) Job.setNumReduceTasks doesn't work

2010-04-29 Thread Danny Leshem (JIRA)
Job.setNumReduceTasks doesn't work
--

 Key: MAPREDUCE-1742
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1742
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: job submission
Affects Versions: 0.22.0
 Environment: Hadoop 0.22.0-SNAPSHOT, latest version from trunk.
Reporter: Danny Leshem


Calling Job.setNumReduceTasks(0) doesn't seem to work with the latest trunk, 
and the job still goes through a reduction phase.
Also, Job.setNumReduceTasks(1) doesn't seem to work either, and several 
reducers are spawned.

It seems that something about Job.setNumReduceTasks got broken recently.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1731) Process tree clean up suspended task tests.

2010-04-29 Thread Balaji Rajagopalan (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862241#action_12862241
 ] 

Balaji Rajagopalan commented on MAPREDUCE-1731:
---

Mostly looks good, but I do not find the server-side code for suspending and 
resuming a task in this patch. 

> Process tree clean up suspended task tests.
> ---
>
> Key: MAPREDUCE-1731
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1731
> Project: Hadoop Map/Reduce
>  Issue Type: Task
>  Components: test
>Reporter: Vinay Kumar Thota
>Assignee: Vinay Kumar Thota
> Attachments: suspendtask_1731.patch
>
>
> 1. Verify the process tree cleanup of suspended task and task should be 
> terminated after timeout.
> 2. Verify the process tree cleanup of suspended task and resume the task 
> before task timeout.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1741) Automate the test scenario of job related files are moved from history directory to done directory

2010-04-29 Thread Balaji Rajagopalan (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862234#action_12862234
 ] 

Balaji Rajagopalan commented on MAPREDUCE-1741:
---

1. What is the reason we manually create 100 files? Please provide 
documentation; looking at the code it is not evident to me. 

2. In the multiple-job scenario, are the jobs launched simultaneously or 
sequentially, one after the other?

> Automate the test scenario of  job related files are moved from history 
> directory to done directory
> ---
>
> Key: MAPREDUCE-1741
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1741
> Project: Hadoop Map/Reduce
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Iyappan Srinivasan
> Fix For: 0.22.0
>
> Attachments: TestJobHistoryLocation.patch
>
>
> Job related files are moved from history directory to done directory, when
> 1) Job succeeds
> 2) Job is killed
> 3) When 100 files are put in the done directory
> 4) When multiple jobs are completed at the same time, some successful, some 
> failed.
> Also, two files, conf.xml and job files should be present in the done 
> directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1730) Automate test scenario for successful/killed jobs' memory is properly removed from jobtracker after these jobs retire.

2010-04-29 Thread Balaji Rajagopalan (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862228#action_12862228
 ] 

Balaji Rajagopalan commented on MAPREDUCE-1730:
---

I don't see where the test actually verifies that the JobInProgress object is 
removed. 
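
The kind of check I would expect, sketched with placeholder names (jobTrackerProxy and getJobInProgress stand in for whatever handle the Herriot test already has to the JobTracker):

{code}
// Placeholder names only: after the job retires, the JobTracker should no
// longer hand back a JobInProgress for it.
Assert.assertNotNull("Job was not tracked while it was running",
    jobTrackerProxy.getJobInProgress(jobId));   // before retirement
// ... wait for the job to retire ...
Assert.assertNull("JobInProgress for " + jobId + " still present after retirement",
    jobTrackerProxy.getJobInProgress(jobId));   // after retirement
{code}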

> Automate test scenario for successful/killed jobs' memory is properly removed 
> from jobtracker after these jobs retire.
> --
>
> Key: MAPREDUCE-1730
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1730
> Project: Hadoop Map/Reduce
>  Issue Type: Test
>Affects Versions: 0.22.0
>Reporter: Iyappan Srinivasan
> Fix For: 0.22.0
>
> Attachments: TestRetiredJobs.patch
>
>
> Automate using herriot framework,  test scenario for successful/killed jobs' 
> memory is properly removed from jobtracker after these jobs retire.
> This should test when successful and failed jobs are retired,  their 
> jobInProgress object are removed properly.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1607) Task controller may not set permissions for a task cleanup attempt's log directory

2010-04-29 Thread Vinod K V (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862168#action_12862168
 ] 

Vinod K V commented on MAPREDUCE-1607:
--

I started looking at this patch.

> Task controller may not set permissions for a task cleanup attempt's log 
> directory
> --
>
> Key: MAPREDUCE-1607
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1607
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: task-controller
>Affects Versions: 0.21.0
>Reporter: Hemanth Yamijala
>Assignee: Amareshwari Sriramadasu
> Fix For: 0.21.0
>
> Attachments: patch-1607-1.txt, patch-1607-ydist.txt, patch-1607.txt
>
>
> Task controller uses the INITIALIZE_TASK command to initialize task attempt 
> and task log directories. For cleanup tasks, task attempt directories are 
> named as task-attempt-id.cleanup. But log directories do not have the 
> .cleanup suffix. The task controller is not aware of this distinction and 
> tries to set permissions for log directories named task-attempt-id.cleanup. 
> This is a NO-OP. Typically the task cleanup runs on the same node that ran 
> the original task attempt as well. So, the task log directories are already 
> properly initialized. However, the task cleanup can run on a node that has 
> not run the original task attempt. In that case, the initialization would not 
> happen and this could result in the cleanup task failing.
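
In other words, when initializing the log directory the ".cleanup" suffix has to be stripped so that permissions are set on the directory the attempt will actually log to. A rough illustration (not the actual task-controller code; the attempt id is just an example):

{code}
// Illustration only: the log directory is keyed on the plain attempt id, so
// any ".cleanup" suffix must be dropped before setting permissions on it.
String attemptDir = "attempt_201004291234_0001_m_000000_0.cleanup";  // example
String logDir = attemptDir.endsWith(".cleanup")
    ? attemptDir.substring(0, attemptDir.length() - ".cleanup".length())
    : attemptDir;
// set ownership/permissions on <hadoop.log.dir>/userlogs/<logDir> here
{code}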

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1731) Process tree clean up suspended task tests.

2010-04-29 Thread Iyappan Srinivasan (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862159#action_12862159
 ] 

Iyappan Srinivasan commented on MAPREDUCE-1731:
---

Hi Vinay. Code looks good. Some comments.

1) After suspending a task, it may not die immediately; it will still be 
visible when the ps command is run. So, please revisit the Assert statements 
that run immediately after suspending (a rough sketch for this and point 3 
follows after this list).

2) Add a space between the parameters of methods.

3) Assert.assertEquals on ExitCode - give more detailed info, like "cannot 
suspend task".



> Process tree clean up suspended task tests.
> ---
>
> Key: MAPREDUCE-1731
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1731
> Project: Hadoop Map/Reduce
>  Issue Type: Task
>  Components: test
>Reporter: Vinay Kumar Thota
>Assignee: Vinay Kumar Thota
> Attachments: suspendtask_1731.patch
>
>
> 1. Verify the process tree cleanup of suspended task and task should be 
> terminated after timeout.
> 2. Verify the process tree cleanup of suspended task and resume the task 
> before task timeout.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1604) Job acls should be documented in forrest.

2010-04-29 Thread Vinod K V (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862151#action_12862151
 ] 

Vinod K V commented on MAPREDUCE-1604:
--

bq. I think we are assuming a lot of people use queues and queue ACLs. How true 
is that ? The default scheduler and the fair scheduler don't support queues. 
Isn't that a lot of users ? Wouldn't it mean they'd simply ignore queue ACLs 
and then job ACLs actually work fine, as documented ?

Missed this one, and the freeze date is right here.

Irrespective of the number of users using job-acls, post MAPREDUCE-1664 the 
_job-authorization.enabled_ flag will be either removed or deprecated in favour 
of the _mapred-queues.enabled_ flag. Irrespective of whether it is removed or 
deprecated, it will no longer be backwards compatible because the semantics of 
ACLs across queues and jobs are going to change. Given this, I am still 
inclined to remove the documentation for job-acls from 0.21. Unless you veto 
this, of course :) Hemanth, can you quickly respond?

> Job acls should be documented in forrest.
> -
>
> Key: MAPREDUCE-1604
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1604
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: documentation, security
>Affects Versions: 0.22.0
>Reporter: Amareshwari Sriramadasu
>Assignee: Amareshwari Sriramadasu
> Fix For: 0.22.0
>
> Attachments: patch-1604-1.txt, patch-1604-ydist.txt, patch-1604.txt
>
>
> Job acls introduced in MAPREDUCE-1307 should be documented in forrest.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1707) TaskRunner can get NPE in getting ugi from TaskTracker

2010-04-29 Thread Vinod K V (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod K V updated MAPREDUCE-1707:
-

Attachment: MAPREDUCE-1707-20100429.txt

Trunk patch for this issue and MAPREDUCE-1703.

This is really a race condition. I cannot write useful test-cases without 
aggressive refactoring of TaskTracker/TaskRunner.

Amareshwari, can you please look at this?

> TaskRunner can get NPE in getting ugi from TaskTracker
> --
>
> Key: MAPREDUCE-1707
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1707
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: tasktracker
>Affects Versions: 0.22.0
>Reporter: Amareshwari Sriramadasu
> Fix For: 0.22.0
>
>     Attachments: MAPREDUCE-1707-20100429.txt
>
>
> The following code in TaskRunner can get NPE in the scenario described below.
> {code}
>   UserGroupInformation ugi = 
> tracker.getRunningJob(t.getJobID()).getUGI();
> {code}
> The scenario:
> Tracker got a LaunchTaskAction; Task is localized and TaskRunner is started.
> Then Tracker got a KillJobAction; This would issue a kill for the task. But, 
> kill will be a no-op because the task did not actually start; The job is 
> removed from runningJobs. 
> Then if TaskRunner calls tracker.getRunningJob(t.getJobID()), it will be null.
> Instead of TaskRunner doing a back call to tasktracker to get the ugi, 
> tracker.getRunningJob(t.getJobID()).getUGI(), ugi should be passed as a 
> parameter in the constructor of TaskRunner. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (MAPREDUCE-1707) TaskRunner can get NPE in getting ugi from TaskTracker

2010-04-29 Thread Vinod K V (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod K V reassigned MAPREDUCE-1707:


Assignee: Vinod K V

> TaskRunner can get NPE in getting ugi from TaskTracker
> --
>
> Key: MAPREDUCE-1707
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1707
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: tasktracker
>Affects Versions: 0.22.0
>Reporter: Amareshwari Sriramadasu
>Assignee: Vinod K V
> Fix For: 0.22.0
>
>     Attachments: MAPREDUCE-1707-20100429.txt
>
>
> The following code in TaskRunner can get NPE in the scenario described below.
> {code}
>   UserGroupInformation ugi = 
> tracker.getRunningJob(t.getJobID()).getUGI();
> {code}
> The scenario:
> Tracker got a LaunchTaskAction; Task is localized and TaskRunner is started.
> Then Tracker got a KillJobAction; This would issue a kill for the task. But, 
> kill will be a no-op because the task did not actually start; The job is 
> removed from runningJobs. 
> Then if TaskRunner calls tracker.getRunningJob(t.getJobID()), it will be null.
> Instead of TaskRunner doing a back call to tasktracker to get the ugi, 
> tracker.getRunningJob(t.getJobID()).getUGI(), ugi should be passed as a 
> parameter in the constructor of TaskRunner. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (MAPREDUCE-1703) TaskRunner would crash in finally block if taskDistributedCacheManager is null

2010-04-29 Thread Vinod K V (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod K V resolved MAPREDUCE-1703.
--

Resolution: Duplicate

Going to track this trivial fix as part of MAPREDUCE-1707, which is making 
changes to the same class.

> TaskRunner would crash in finally block if taskDistributedCacheManager is null
> --
>
> Key: MAPREDUCE-1703
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1703
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: tasktracker
>Affects Versions: 0.21.0
>Reporter: Amareshwari Sriramadasu
> Fix For: 0.22.0
>
>
> If TaskRunner throws an Exception before initializing 
> taskDistributedCacheManager, it would crash in finally block at 
> taskDistributedCacheManager.release(). TaskRunner would crash without doing 
> tip.reportTaskFinished() thus not failing the task. Task will be marked 
> FAILED after "mapred.task.timeout" because there is no report from  the task.
> We should add a not null check for taskDistributedCacheManager in finally 
> block.
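
The guard described above is of this shape (a sketch only, not the actual TaskRunner code; setupDistributedCache is a placeholder for however the manager gets initialized):

{code}
// Sketch of the fix pattern: only release the dist-cache entries if the
// manager was actually initialized, so an early exception no longer causes a
// secondary crash in the finally block and the task is still reported finished.
try {
  taskDistributedCacheManager = setupDistributedCache(conf);  // may throw early
  // ... launch the child JVM and wait for it ...
} finally {
  if (taskDistributedCacheManager != null) {
    taskDistributedCacheManager.release();
  }
  tip.reportTaskFinished();  // now always reached, so the task fails promptly
}
{code}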

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1707) TaskRunner can get NPE in getting ugi from TaskTracker

2010-04-29 Thread Vinod K V (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862131#action_12862131
 ] 

Vinod K V commented on MAPREDUCE-1707:
--

bq. Instead of TaskRunner doing a back call to tasktracker to get the ugi, 
tracker.getRunningJob(t.getJobID()).getUGI(), ugi should be passed as a parameter 
in the constructor of TaskRunner.
Passing the UGI all the way down to TaskRunner both looked weird and resulted in 
ugly code changes. Instead, I am making the TaskRunner simply check whether the 
returned RunningJob is null, i.e. whether the job has already been killed, and skip 
the localization of dist-cache files.

Long term, we need to clean up the whole mess of TaskTracker, TaskRunner 
and JvmManager interactions. The code is simply not maintainable in this form.
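
Roughly this shape (a sketch; the RunningJob type and the log message are illustrative, the two calls are the ones from the description):

{code}
// Sketch of the guard: if the job was already killed and removed from
// runningJobs, skip dist-cache localization instead of hitting an NPE.
RunningJob rjob = tracker.getRunningJob(t.getJobID());
if (rjob == null) {
  LOG.info("Job " + t.getJobID() + " already removed from runningJobs; "
      + "skipping distributed-cache localization for this task.");
} else {
  UserGroupInformation ugi = rjob.getUGI();
  // ... localize the distributed-cache files as that user ...
}
{code}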

> TaskRunner can get NPE in getting ugi from TaskTracker
> --
>
> Key: MAPREDUCE-1707
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1707
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: tasktracker
>Affects Versions: 0.22.0
>Reporter: Amareshwari Sriramadasu
> Fix For: 0.22.0
>
>
> The following code in TaskRunner can get NPE in the scenario described below.
> {code}
>   UserGroupInformation ugi = 
> tracker.getRunningJob(t.getJobID()).getUGI();
> {code}
> The scenario:
> Tracker got a LaunchTaskAction; Task is localized and TaskRunner is started.
> Then Tracker got a KillJobAction; This would issue a kill for the task. But, 
> kill will be a no-op because the task did not actually start; The job is 
> removed from runningJobs. 
> Then if TaskRunner calls tracker.getRunningJob(t.getJobID()), it will be null.
> Instead of TaskRunner doing a back call to tasktracker to get the ugi, 
> tracker.getRunningJob(t.getJobID()).getUGI(), ugi should be passed as a 
> parameter in the constructor of TaskRunner. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1741) Automate the test scenario of job related files are moved from history directory to done directory

2010-04-29 Thread Iyappan Srinivasan (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Iyappan Srinivasan updated MAPREDUCE-1741:
--

Attachment: TestJobHistoryLocation.patch

> Automate the test scenario of  job related files are moved from history 
> directory to done directory
> ---
>
> Key: MAPREDUCE-1741
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1741
> Project: Hadoop Map/Reduce
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Iyappan Srinivasan
> Fix For: 0.22.0
>
> Attachments: TestJobHistoryLocation.patch
>
>
> Job related files are moved from history directory to done directory, when
> 1) Job succeeds
> 2) Job is killed
> 3) When 100 files are put in the done directory
> 4) When multiple jobs are completed at the same time, some successful, some 
> failed.
> Also, two files, conf.xml and job files should be present in the done 
> directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (MAPREDUCE-1741) Automate the test scenario of job related files are moved from history directory to done directory

2010-04-29 Thread Iyappan Srinivasan (JIRA)
Automate the test scenario of  job related files are moved from history 
directory to done directory
---

 Key: MAPREDUCE-1741
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1741
 Project: Hadoop Map/Reduce
  Issue Type: Test
  Components: test
Affects Versions: 0.22.0
Reporter: Iyappan Srinivasan
 Fix For: 0.22.0


Job related files are moved from history directory to done directory, when

1) Job succeeds
2) Job is killed
3) When 100 files are put in the done directory
4) When multiple jobs are completed at the same time, some successful, some 
failed.

Also, two files, conf.xml and job files should be present in the done directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.