[jira] Updated: (MAPREDUCE-1697) Document the behavior of -file option in streaming

2010-06-08 Thread Amareshwari Sriramadasu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amareshwari Sriramadasu updated MAPREDUCE-1697:
---

Attachment: patch-1697-3.txt

Patch with minor editorial changes to documentation, suggested by Vinod offline.

Ran ant docs with the patch on both trunk and branch 0.21.

 Document the behavior of -file option in streaming
 --

 Key: MAPREDUCE-1697
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1697
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/streaming, documentation
Affects Versions: 0.20.1
Reporter: Amareshwari Sriramadasu
Assignee: Amareshwari Sriramadasu
 Fix For: 0.21.0, 0.22.0

 Attachments: patch-1697-1.txt, patch-1697-2.txt, patch-1697-3.txt, 
 patch-1697.txt


 The behavior of the -file option in streaming is not documented anywhere.
 The behavior of -file is the following:
 1) All files passed through the -file option are packaged into job.jar.
 2) If the -file option is used for .class or .jar files, they are unjarred on 
 the tasktracker and placed in 
 ${mapred.local.dir}/taskTracker/jobcache/job_ID/jars/classes or /lib, 
 respectively. Symlinks named classes and lib, pointing to these directories, 
 are created in the cwd of the task, so the file names of the .class or .jar 
 files do not appear in the cwd of the task. Paths to these files are 
 automatically added to the classpath. The tricky part is that the Hadoop 
 framework can pick up the .class or .jar files via the classpath, but the 
 actual mapper script cannot. If you'd like to access these .class or .jar 
 files inside the script, do something like java -cp lib/*:classes/* ClassName.
 3) If the -file option is used for files other than .class or .jar (e.g. .txt 
 or .pl), these files are unjarred into 
 ${mapred.local.dir}/taskTracker/jobcache/job_ID/jars/. Symlinks to these 
 files are created in the cwd of the task; the symlink names are the actual 
 file names.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1697) Document the behavior of -file option in streaming

2010-06-08 Thread Vinod K V (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876561#action_12876561
 ] 

Vinod K V commented on MAPREDUCE-1697:
--

+1 for the patch. I built the docs and verified how it looks too. I'm going to 
check this into trunk and 0.21.

 Document the behavior of -file option in streaming
 --

 Key: MAPREDUCE-1697
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1697
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/streaming, documentation
Affects Versions: 0.20.1
Reporter: Amareshwari Sriramadasu
Assignee: Amareshwari Sriramadasu
 Fix For: 0.21.0, 0.22.0

 Attachments: patch-1697-1.txt, patch-1697-2.txt, patch-1697-3.txt, 
 patch-1697.txt


 The behavior of the -file option in streaming is not documented anywhere.
 The behavior of -file is the following:
 1) All files passed through the -file option are packaged into job.jar.
 2) If the -file option is used for .class or .jar files, they are unjarred on 
 the tasktracker and placed in 
 ${mapred.local.dir}/taskTracker/jobcache/job_ID/jars/classes or /lib, 
 respectively. Symlinks named classes and lib, pointing to these directories, 
 are created in the cwd of the task, so the file names of the .class or .jar 
 files do not appear in the cwd of the task. Paths to these files are 
 automatically added to the classpath. The tricky part is that the Hadoop 
 framework can pick up the .class or .jar files via the classpath, but the 
 actual mapper script cannot. If you'd like to access these .class or .jar 
 files inside the script, do something like java -cp lib/*:classes/* ClassName.
 3) If the -file option is used for files other than .class or .jar (e.g. .txt 
 or .pl), these files are unjarred into 
 ${mapred.local.dir}/taskTracker/jobcache/job_ID/jars/. Symlinks to these 
 files are created in the cwd of the task; the symlink names are the actual 
 file names.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1697) Document the behavior of -file option and deprecate it in favour of -files option in streaming.

2010-06-08 Thread Vinod K V (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod K V updated MAPREDUCE-1697:
-

  Summary: Document the behavior of -file option and deprecate it in 
favour of -files option in streaming.  (was: Document the behavior of -file 
option in streaming)
 Hadoop Flags: [Reviewed]
Fix Version/s: (was: 0.22.0)

 Document the behavior of -file option and deprecate it in favour of -files 
 option in streaming.
 ---

 Key: MAPREDUCE-1697
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1697
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/streaming, documentation
Affects Versions: 0.20.1
Reporter: Amareshwari Sriramadasu
Assignee: Amareshwari Sriramadasu
 Fix For: 0.21.0

 Attachments: patch-1697-1.txt, patch-1697-2.txt, patch-1697-3.txt, 
 patch-1697.txt


 The behavior of the -file option in streaming is not documented anywhere.
 The behavior of -file is the following:
 1) All files passed through the -file option are packaged into job.jar.
 2) If the -file option is used for .class or .jar files, they are unjarred on 
 the tasktracker and placed in 
 ${mapred.local.dir}/taskTracker/jobcache/job_ID/jars/classes or /lib, 
 respectively. Symlinks named classes and lib, pointing to these directories, 
 are created in the cwd of the task, so the file names of the .class or .jar 
 files do not appear in the cwd of the task. Paths to these files are 
 automatically added to the classpath. The tricky part is that the Hadoop 
 framework can pick up the .class or .jar files via the classpath, but the 
 actual mapper script cannot. If you'd like to access these .class or .jar 
 files inside the script, do something like java -cp lib/*:classes/* ClassName.
 3) If the -file option is used for files other than .class or .jar (e.g. .txt 
 or .pl), these files are unjarred into 
 ${mapred.local.dir}/taskTracker/jobcache/job_ID/jars/. Symlinks to these 
 files are created in the cwd of the task; the symlink names are the actual 
 file names.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1697) Document the behavior of -file option and deprecate it in favour of -files option in streaming.

2010-06-08 Thread Vinod K V (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod K V updated MAPREDUCE-1697:
-

  Status: Resolved  (was: Patch Available)
Release Note: Documents the behavior of -file option and deprecates it in 
favor of -files option in streaming. 
  Resolution: Fixed

I just committed this to trunk and 0.21. Thanks Amareshwari!

 Document the behavior of -file option and deprecate it in favour of -files 
 option in streaming.
 ---

 Key: MAPREDUCE-1697
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1697
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/streaming, documentation
Affects Versions: 0.20.1
Reporter: Amareshwari Sriramadasu
Assignee: Amareshwari Sriramadasu
 Fix For: 0.21.0

 Attachments: patch-1697-1.txt, patch-1697-2.txt, patch-1697-3.txt, 
 patch-1697.txt


 The behavior of the -file option in streaming is not documented anywhere.
 The behavior of -file is the following:
 1) All files passed through the -file option are packaged into job.jar.
 2) If the -file option is used for .class or .jar files, they are unjarred on 
 the tasktracker and placed in 
 ${mapred.local.dir}/taskTracker/jobcache/job_ID/jars/classes or /lib, 
 respectively. Symlinks named classes and lib, pointing to these directories, 
 are created in the cwd of the task, so the file names of the .class or .jar 
 files do not appear in the cwd of the task. Paths to these files are 
 automatically added to the classpath. The tricky part is that the Hadoop 
 framework can pick up the .class or .jar files via the classpath, but the 
 actual mapper script cannot. If you'd like to access these .class or .jar 
 files inside the script, do something like java -cp lib/*:classes/* ClassName.
 3) If the -file option is used for files other than .class or .jar (e.g. .txt 
 or .pl), these files are unjarred into 
 ${mapred.local.dir}/taskTracker/jobcache/job_ID/jars/. Symlinks to these 
 files are created in the cwd of the task; the symlink names are the actual 
 file names.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1844) Tests failing with java.lang.NoClassDefFoundError

2010-06-08 Thread Giridharan Kesavan (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876565#action_12876565
 ] 

Giridharan Kesavan commented on MAPREDUCE-1844:
---

It looks like the hdfs -mvn-system-install target is broken, which in turn didn't 
allow the mvn-install task to publish hdfs artifacts for the last 6 days.

 Tests failing with java.lang.NoClassDefFoundError
 -

 Key: MAPREDUCE-1844
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1844
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: test
Reporter: Amar Kamat

 Tests are failing with java.lang.NoClassDefFoundError (see 
 http://pastebin.com/Y3E8iDw0). Steps to reproduce on trunk
 1) Delete ~/.ivy2
 2) checkout trunk
 3) ant -Dtestcase=TestMRCLI run-test-mapred

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1844) Tests failing with java.lang.NoClassDefFoundError

2010-06-08 Thread Giridharan Kesavan (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876571#action_12876571
 ] 

Giridharan Kesavan commented on MAPREDUCE-1844:
---

It's the mvn-install target that publishes, but it is the mvn-deploy target that 
didn't run in the last 6 days, as the -mvn-system-install target is broken.

 Tests failing with java.lang.NoClassDefFoundError
 -

 Key: MAPREDUCE-1844
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1844
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: test
Reporter: Amar Kamat

 Tests are failing with java.lang.NoClassDefFoundError (see 
 http://pastebin.com/Y3E8iDw0). Steps to reproduce on trunk
 1) Delete ~/.ivy2
 2) checkout trunk
 3) ant -Dtestcase=TestMRCLI run-test-mapred

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1844) Tests failing with java.lang.NoClassDefFoundError

2010-06-08 Thread Giridharan Kesavan (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876575#action_12876575
 ] 

Giridharan Kesavan commented on MAPREDUCE-1844:
---

Filed a JIRA for the failure described above:
https://issues.apache.org/jira/browse/HDFS-1193

 Tests failing with java.lang.NoClassDefFoundError
 -

 Key: MAPREDUCE-1844
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1844
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: test
Reporter: Amar Kamat

 Tests are failing with java.lang.NoClassDefFoundError (see 
 http://pastebin.com/Y3E8iDw0). Steps to reproduce on trunk
 1) Delete ~/.ivy2
 2) checkout trunk
 3) ant -Dtestcase=TestMRCLI run-test-mapred

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1844) Tests failing with java.lang.NoClassDefFoundError

2010-06-08 Thread Amar Kamat (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amar Kamat updated MAPREDUCE-1844:
--

Priority: Blocker  (was: Major)

Marking it as a blocker.

 Tests failing with java.lang.NoClassDefFoundError
 -

 Key: MAPREDUCE-1844
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1844
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: test
Reporter: Amar Kamat
Priority: Blocker

 Tests are failing with java.lang.NoClassDefFoundError (see 
 http://pastebin.com/Y3E8iDw0). Steps to reproduce on trunk
 1) Delete ~/.ivy2
 2) checkout trunk
 3) ant -Dtestcase=TestMRCLI run-test-mapred

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-323) Improve the way job history files are managed

2010-06-08 Thread Amar Kamat (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876594#action_12876594
 ] 

Amar Kamat commented on MAPREDUCE-323:
--

A few comments:
# W.r.t. your [comment | http://tinyurl.com/2aado36], we could very well use the 
finish time of the job. This is already published in the job summary, stored 
in the job-status cache within the JobTracker, and later archived to the 
completed-job-status-store. Maybe we can reuse these features (i.e. the job 
status cache and the status store).
# We should log jobhistory activities like 
  ## the jobhistory folder regex used
  ## jobid-to-foldername mappings
  Logging will help in debugging and post-mortem analysis.
# Formats can change across runs. How do we plan to take care of that? One 
thing we can do is to have a unique folder per pattern for storing the files. 
The (unique) folder name should be based on the jobhistory structure pattern, 
and this mapping of jobhistory folder regex to folder name should be logged 
(a small path-expansion sketch follows this comment). 
  Clients that need really old jobhistory files analyzed will dig up the 
jobhistory folder format, map it to the folder, and provide the _username_, _jobid_ 
and _finishtime_ to get the file. The client can get the _username_ and 
_finishtime_ by querying the JobTracker for the job status (via the 
completed-jobstatus-store). See _Future Steps #1_.
# How about keeping _N_ items in the top-level directory and moving them to the 
appropriate place only when the total item count crosses _N_? 
  Example (assume /done/%user/%jobid as the format and N=5):
  ## The first job gets added to /done/job1
  ## The 5th job gets added to /done/job5
  ## The 6th job gets added to /done/job6 and /done/job1 gets moved to 
/done/user1/job1
  ## and so on
So the movement happens only on overflow. The benefit of this change is that, 
without any indexing, we can show the most recent N jobs on the jobhistory web UI. 
This pattern can be enabled for all subfolders also. So if the jobhistory 
format specified is %user/ then queries like '_give the recent 5 items for all the 
users_' can also be answered quickly.
# The web UI should provide 2 views:
   ## top/recent few (show jobs from the topmost-level folder)
   ## a browseable view where /MM/DD etc. is shown as it is. This can be 
configurable and turned off for complicated structures like 00/00/00-99 etc., 
which users might not be able to make sense of. Also there should be some kind 
of widget in JobHistory that, given _username_, _jobid_ and _finishtime_, 
provides the complete jobhistory filename. See _Future Steps #2_.
# bq.  He raised the issue that a practical cluster has more distinct users 
than we would want to create DFS directories, especially if the directory 
structure is further split on timestamps.
I would prefer username to be one of the configuration options. Since it is 
configurable, it can be turned off for clusters having lots of users.

Future steps:
# As of today, we have jobhistory files directly dumped in the done folder. We 
might want to move these files into the format we want (for a good user 
experience). Maybe some kind of offline admin tool can help here (maybe under 
mradmin?). It might make sense to name the final jobhistory file (leaf level) 
as $username_$jobid_$finishtime. This will enable us to restructure job 
history files across various formats. 
# There should be some way to find out which regex/format was used given the 
jobtracker start time (which is one of the components in the jobid). To make it 
easier for clients, maybe the log files related to jobhistory updates can be 
published, or the JobTracker should be in a position to answer this.
Thoughts? 
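
To make the pattern-to-folder mapping above concrete, here is a toy sketch; the 
pattern syntax, placeholder names and the helper itself are assumptions for 
illustration only, not anything from a patch:

{code}
// Hypothetical helper, only to illustrate the %user/%jobid style layout
// discussed above; the real pattern handling in the jobhistory code may differ.
public class HistoryPathSketch {
  static String expand(String pattern, String user, String jobId, long finishTime) {
    return pattern
        .replace("%user", user)
        .replace("%jobid", jobId)
        .replace("%finishtime", Long.toString(finishTime));
  }

  public static void main(String[] args) {
    // e.g. /done/%user/%jobid -> /done/amar/job_201006080000_0001 (illustrative values)
    System.out.println(expand("/done/%user/%jobid", "amar", "job_201006080000_0001", 0L));
  }
}
{code}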

 Improve the way job history files are managed
 -

 Key: MAPREDUCE-323
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-323
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker
Affects Versions: 0.21.0, 0.22.0
Reporter: Amar Kamat
Assignee: Dick King
Priority: Critical

 Today all the jobhistory files are dumped in one _job-history_ folder. This 
 can cause problems when there is a need to search the history folder 
 (job-recovery etc). It would be nice if we group all the jobs under a _user_ 
 folder. So all the jobs for user _amar_ will go in _history-folder/amar/_. 
 Jobs can be categorized using various features like _jobid, date, jobname_ 
 etc but using _username_ will make the search much more efficient and also 
 will not result into namespace explosion. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1697) Document the behavior of -file option in streaming and deprecate it in favour of generic -files option.

2010-06-08 Thread Amareshwari Sriramadasu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amareshwari Sriramadasu updated MAPREDUCE-1697:
---

 Summary: Document the behavior of -file option in streaming and 
deprecate it in favour of generic -files option.  (was: Document the behavior 
of -file option and deprecate it in favour of -files option in streaming.)
Release Note: Documented the behavior of -file option in streaming and 
deprecated it in favor of generic -files option.   (was: Documents the behavior 
of -file option and deprecates it in favor of -files option in streaming. )

 Document the behavior of -file option in streaming and deprecate it in favour 
 of generic -files option.
 ---

 Key: MAPREDUCE-1697
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1697
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/streaming, documentation
Affects Versions: 0.20.1
Reporter: Amareshwari Sriramadasu
Assignee: Amareshwari Sriramadasu
 Fix For: 0.21.0

 Attachments: patch-1697-1.txt, patch-1697-2.txt, patch-1697-3.txt, 
 patch-1697.txt


 The behavior of the -file option in streaming is not documented anywhere.
 The behavior of -file is the following:
 1) All files passed through the -file option are packaged into job.jar.
 2) If the -file option is used for .class or .jar files, they are unjarred on 
 the tasktracker and placed in 
 ${mapred.local.dir}/taskTracker/jobcache/job_ID/jars/classes or /lib, 
 respectively. Symlinks named classes and lib, pointing to these directories, 
 are created in the cwd of the task, so the file names of the .class or .jar 
 files do not appear in the cwd of the task. Paths to these files are 
 automatically added to the classpath. The tricky part is that the Hadoop 
 framework can pick up the .class or .jar files via the classpath, but the 
 actual mapper script cannot. If you'd like to access these .class or .jar 
 files inside the script, do something like java -cp lib/*:classes/* ClassName.
 3) If the -file option is used for files other than .class or .jar (e.g. .txt 
 or .pl), these files are unjarred into 
 ${mapred.local.dir}/taskTracker/jobcache/job_ID/jars/. Symlinks to these 
 files are created in the cwd of the task; the symlink names are the actual 
 file names.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1813) NPE in PipeMapred.MRErrorThread

2010-06-08 Thread Ravi Gummadi (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Gummadi updated MAPREDUCE-1813:


Attachment: 1813.v1.2.patch

Attaching a patch that changes the testcases to use {a perl script with empty input, 
which writes reporter:status: lines and reporter:counter: lines to stderr} instead of 
{StderrApp.class being used as the streaming task}, because with nonempty input the 
earlier patch's testcase did not trigger the NPE consistently (it fails depending on 
timing) without the fix in the patch.

Please review and provide your comments.

 NPE in PipeMapred.MRErrorThread
 ---

 Key: MAPREDUCE-1813
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1813
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/streaming
Affects Versions: 0.20.1
Reporter: Amareshwari Sriramadasu
Assignee: Ravi Gummadi
 Fix For: 0.20.3

 Attachments: 1813.patch, 1813.v1.2.patch, 1813.v1.patch


 Some reduce tasks fail with following NPE
 java.lang.RuntimeException: java.lang.NullPointerException
 at 
 org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:325)
 at 
 org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:540)
 at org.apache.hadoop.streaming.PipeReducer.close(PipeReducer.java:137)
 at 
 org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:474)
 at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:412)
 at org.apache.hadoop.mapred.Child.main(Child.java:159)
 Caused by: java.lang.NullPointerException
at 
 org.apache.hadoop.streaming.PipeMapRed$MRErrorThread.setStatus(PipeMapRed.java:517)
 at 
 org.apache.hadoop.streaming.PipeMapRed$MRErrorThread.run(PipeMapRed.java:449)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1813) NPE in PipeMapred.MRErrorThread

2010-06-08 Thread Amareshwari Sriramadasu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amareshwari Sriramadasu updated MAPREDUCE-1813:
---

Status: Patch Available  (was: Open)

 NPE in PipeMapred.MRErrorThread
 ---

 Key: MAPREDUCE-1813
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1813
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/streaming
Affects Versions: 0.20.1
Reporter: Amareshwari Sriramadasu
Assignee: Ravi Gummadi
 Fix For: 0.20.3

 Attachments: 1813.patch, 1813.v1.2.patch, 1813.v1.patch


 Some reduce tasks fail with following NPE
 java.lang.RuntimeException: java.lang.NullPointerException
 at 
 org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:325)
 at 
 org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:540)
 at org.apache.hadoop.streaming.PipeReducer.close(PipeReducer.java:137)
 at 
 org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:474)
 at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:412)
 at org.apache.hadoop.mapred.Child.main(Child.java:159)
 Caused by: java.lang.NullPointerException
at 
 org.apache.hadoop.streaming.PipeMapRed$MRErrorThread.setStatus(PipeMapRed.java:517)
 at 
 org.apache.hadoop.streaming.PipeMapRed$MRErrorThread.run(PipeMapRed.java:449)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1813) NPE in PipeMapred.MRErrorThread

2010-06-08 Thread Amareshwari Sriramadasu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amareshwari Sriramadasu updated MAPREDUCE-1813:
---

Fix Version/s: 0.22.0
   (was: 0.20.3)

 NPE in PipeMapred.MRErrorThread
 ---

 Key: MAPREDUCE-1813
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1813
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/streaming
Affects Versions: 0.20.1
Reporter: Amareshwari Sriramadasu
Assignee: Ravi Gummadi
 Fix For: 0.22.0

 Attachments: 1813.patch, 1813.v1.2.patch, 1813.v1.patch


 Some reduce tasks fail with following NPE
 java.lang.RuntimeException: java.lang.NullPointerException
 at 
 org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:325)
 at 
 org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:540)
 at org.apache.hadoop.streaming.PipeReducer.close(PipeReducer.java:137)
 at 
 org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:474)
 at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:412)
 at org.apache.hadoop.mapred.Child.main(Child.java:159)
 Caused by: java.lang.NullPointerException
at 
 org.apache.hadoop.streaming.PipeMapRed$MRErrorThread.setStatus(PipeMapRed.java:517)
 at 
 org.apache.hadoop.streaming.PipeMapRed$MRErrorThread.run(PipeMapRed.java:449)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1813) NPE in PipeMapred.MRErrorThread

2010-06-08 Thread Amareshwari Sriramadasu (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876626#action_12876626
 ] 

Amareshwari Sriramadasu commented on MAPREDUCE-1813:


Latest patch looks fine.

 NPE in PipeMapred.MRErrorThread
 ---

 Key: MAPREDUCE-1813
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1813
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/streaming
Affects Versions: 0.20.1
Reporter: Amareshwari Sriramadasu
Assignee: Ravi Gummadi
 Fix For: 0.22.0

 Attachments: 1813.patch, 1813.v1.2.patch, 1813.v1.patch


 Some reduce tasks fail with following NPE
 java.lang.RuntimeException: java.lang.NullPointerException
 at 
 org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:325)
 at 
 org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:540)
 at org.apache.hadoop.streaming.PipeReducer.close(PipeReducer.java:137)
 at 
 org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:474)
 at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:412)
 at org.apache.hadoop.mapred.Child.main(Child.java:159)
 Caused by: java.lang.NullPointerException
at 
 org.apache.hadoop.streaming.PipeMapRed$MRErrorThread.setStatus(PipeMapRed.java:517)
 at 
 org.apache.hadoop.streaming.PipeMapRed$MRErrorThread.run(PipeMapRed.java:449)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1641) Job submission should fail if same uri is added for mapred.cache.files and mapred.cache.archives

2010-06-08 Thread Amareshwari Sriramadasu (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876629#action_12876629
 ] 

Amareshwari Sriramadasu commented on MAPREDUCE-1641:


The latest patch looks fine.

 Job submission should fail if same uri is added for mapred.cache.files and 
 mapred.cache.archives
 

 Key: MAPREDUCE-1641
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1641
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: distributed-cache
Reporter: Amareshwari Sriramadasu
Assignee: Dick King
 Fix For: 0.22.0

 Attachments: BZ-3539321--off-0-20-101--2010-04-20.patch, 
 duped-files-archives--off-0-20-101--2010-04-21.patch, 
 duped-files-archives--off-0-20-101--2010-04-23--1819.patch, 
 mapreduce-1641--2010-04-27.patch, mapreduce-1641--2010-05-19.patch, 
 mapreduce-1641--2010-05-21.patch, patch-1641-ydist-bugfix.txt


 The behavior of mapred.cache.files and mapred.cache.archives is different 
 during localization in the following way:
 If a jar file is added to mapred.cache.files, it will be localized on the 
 TaskTracker under a unique path. 
 If a jar file is added to mapred.cache.archives, it will be localized under a 
 unique path in a directory named after the jar file, and will be unarchived 
 under the same directory.
 If the same jar file is passed for both configurations, the behavior is 
 undefined, so the job submission should fail.
 Currently, since the distributed cache processes files before archives, the jar 
 file will be just localized and not unarchived.
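
A rough submission-time check along these lines (the class, method name and message 
below are illustrative only, not the attached patches) could simply fail fast when a 
URI appears in both lists:

{code}
import java.net.URI;
import java.util.HashSet;
import java.util.Set;

// Sketch: reject a job whose distributed-cache files and archives overlap,
// instead of leaving the localization behavior undefined.
public class CacheUriCheckSketch {
  static void checkNoOverlap(URI[] cacheFiles, URI[] cacheArchives) {
    Set<URI> files = new HashSet<URI>();
    if (cacheFiles != null) {
      for (URI uri : cacheFiles) {
        files.add(uri);
      }
    }
    if (cacheArchives == null) {
      return;
    }
    for (URI uri : cacheArchives) {
      if (files.contains(uri)) {
        throw new IllegalArgumentException(
            "Same URI passed to mapred.cache.files and mapred.cache.archives: " + uri);
      }
    }
  }
}
{code}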

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1744) DistributedCache creates its own FileSytem instance when adding a file/archive to the path

2010-06-08 Thread Amareshwari Sriramadasu (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876639#action_12876639
 ] 

Amareshwari Sriramadasu commented on MAPREDUCE-1744:


The deprecation message in the DistributedCache APIs should not be changed. Since the 
DistributedCache class itself is deprecated, the DistributedCache.add*ToClassPath() 
methods should still have javadoc pointing users to the Job methods instead. 

 DistributedCache creates its own FileSytem instance when adding a 
 file/archive to the path
 --

 Key: MAPREDUCE-1744
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1744
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Dick King
Assignee: Krishna Ramachandran
 Attachments: BZ-3503564--2010-05-06.patch, h1744.patch, 
 mapred-1744-1.patch, mapred-1744.patch, MAPREDUCE-1744.patch


 According to the contract of {{UserGroupInformation.doAs()}} the only 
 required operations within the {{doAs()}} block are the
 creation of a {{JobClient}} or getting a {{FileSystem}} .
 The {{DistributedCache.add(File/Archive)ToClasspath()}} methods create a 
 {{FileSystem}} instance outside of the {{doAs()}} block,
 this {{FileSystem}} instance is not in the scope of the proxy user but of the 
 superuser and permissions may make the method
 fail.
 One option is to overload the methods above to receive a filesystem.
 Another option is to obtain the {{FileSystem}} within a {{doAs()}} block; for 
 this it would be required to have the proxy user set in the passed 
 configuration.
 The second option seems nicer, but I don't know whether the proxy user is 
 available as a property in the jobconf.
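
As a rough sketch of the second option (how the proxy UGI is obtained is left out, 
and the helper name is made up), getting the {{FileSystem}} inside a {{doAs()}} block 
would look roughly like this:

{code}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

// Sketch only: obtain the FileSystem inside doAs() so it is bound to the
// proxy user rather than to the superuser.
public class ProxyFsSketch {
  static FileSystem getFsAsProxyUser(UserGroupInformation proxyUgi, final Configuration conf)
      throws Exception {
    return proxyUgi.doAs(new PrivilegedExceptionAction<FileSystem>() {
      public FileSystem run() throws Exception {
        return FileSystem.get(conf); // created in the scope of the proxy user
      }
    });
  }
}
{code}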

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Reopened: (MAPREDUCE-1442) StackOverflowError when JobHistory parses a really long line

2010-06-08 Thread Vinod K V (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod K V reopened MAPREDUCE-1442:
--


This isn't committed on any of the branches on apache svn. Reopening this.

 StackOverflowError when JobHistory parses a really long line
 

 Key: MAPREDUCE-1442
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1442
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker
Affects Versions: 0.20.1
Reporter: bc Wong
Assignee: Luke Lu
 Fix For: 0.22.0

 Attachments: mr-1442-y20s-v1.patch, overflow.history


 JobHistory.parseLine() fails with StackOverflowError on a really big COUNTER 
 value, triggered via the web interface. See attached file.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1442) StackOverflowError when JobHistory parses a really long line

2010-06-08 Thread Vinod K V (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod K V updated MAPREDUCE-1442:
-

 Hadoop Flags:   (was: [Reviewed])
 Release Note:   (was: Fixed by MAPREDUCE-157 in trunk (0.21+))
Fix Version/s: 0.20.3
   (was: 0.22.0)

This is valid only for 0.20.3, as it is already fixed on branches 0.21 and above.

Please close this as WON'T FIX if there is no intention to provide a patch for 
the 0.20 branch.

 StackOverflowError when JobHistory parses a really long line
 

 Key: MAPREDUCE-1442
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1442
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker
Affects Versions: 0.20.1
Reporter: bc Wong
Assignee: Luke Lu
 Fix For: 0.20.3

 Attachments: mr-1442-y20s-v1.patch, overflow.history


 JobHistory.parseLine() fails with StackOverflowError on a really big COUNTER 
 value, triggered via the web interface. See attached file.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (MAPREDUCE-596) can't package zip file with hadoop streaming -file argument

2010-06-08 Thread Amareshwari Sriramadasu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amareshwari Sriramadasu resolved MAPREDUCE-596.
---

Resolution: Invalid

The zip file is packaged under the lib directory. The documentation is updated in 
MAPREDUCE-1697.


 can't package zip file with hadoop streaming -file argument
 ---

 Key: MAPREDUCE-596
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-596
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/streaming
Reporter: Karl Anderson

 I'm unable to ship a file with a .zip suffix to the mapper using the -file 
 argument for hadoop streaming.  I am able to ship it if I change the suffix 
 to .zipp.  Is this a bug, or does it perhaps have something to do with the jar 
 file format which is used to send files to the instance?
 For example, with this hadoop invocation, and local files /tmp/boto.zip and 
 /tmp/boto.zipp which are copies of each other:
 $HADOOP_HOME/bin/hadoop jar 
 $HADOOP_HOME/contrib/streaming/hadoop-0.17.0-streaming.jar -mapper 
 $KCLUSTER_SRC/testmapper.py -reducer $KCLUSTER_SRC/testreducer.py -input 
 input/foo -output output -file /tmp/foo.txt -file /tmp/boto.zip -file 
 /tmp/boto.zipp
 I see this line in the invocation standard output:
 packageJobJar: [/tmp/foo.txt, /tmp/boto.zip, /tmp/boto.zipp, 
 /tmp/hadoop-karl/hadoop-unjar6899/] [] /tmp/streamjob6900.jar tmpDir=null
 But in the current directory of the mapper process, boto.zip does not 
 exist, while boto.zipp does.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1778) CompletedJobStatusStore initialization should fail if {mapred.job.tracker.persist.jobstatus.dir} is unwritable

2010-06-08 Thread Krishna Ramachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1287#action_1287
 ] 

Krishna Ramachandran commented on MAPREDUCE-1778:
-

If the directory exists and the permissions are correct, the JT can read from or 
write to it - is this not correct?

The permissions handling is similar to what is done for the JobHistory data.

The only other way I know of is to create a temporary file - something 
like

   if (!fs.exists(path)) {
     fs.mkdirs(path);                // create the directory if it is missing
   } else {
     // probe writability by creating, writing and deleting a scratch file
     Path testpath = new Path(path, testfile);
     DataOutputStream out = fs.create(testpath);
     out.writeBytes(testdata);
     out.close();
     fs.delete(testpath, true);
   }

Chris is not in favor of that. If there is a better way, let me know. Though this is a 
one-time thing (when the JT starts).

I used the hadoop code formatter - will make sure the alignments are correct.

As for the test case, the scope of JobStatusPersistency is different; I would rather 
have a separate one.


 CompletedJobStatusStore initialization should fail if 
 {mapred.job.tracker.persist.jobstatus.dir} is unwritable
 --

 Key: MAPREDUCE-1778
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1778
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: jobtracker
Reporter: Amar Kamat
Assignee: Amar Kamat
 Attachments: mapred-1778-1.patch, mapred-1778-2.patch, 
 mapred-1778.patch


 If {mapred.job.tracker.persist.jobstatus.dir} points to an unwritable 
 location or mkdir of {mapred.job.tracker.persist.jobstatus.dir} fails, then 
 CompletedJobStatusStore silently ignores the failure and disables 
 CompletedJobStatusStore. Ideally the JobTracker should bail out early 
 indicating a misconfiguration.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1744) DistributedCache creates its own FileSytem instance when adding a file/archive to the path

2010-06-08 Thread Krishna Ramachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krishna Ramachandran updated MAPREDUCE-1744:


Attachment: mapred-1744-2.patch

restored 
* @deprecated Use {...@link Job#



 DistributedCache creates its own FileSytem instance when adding a 
 file/archive to the path
 --

 Key: MAPREDUCE-1744
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1744
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Dick King
Assignee: Krishna Ramachandran
 Attachments: BZ-3503564--2010-05-06.patch, h1744.patch, 
 mapred-1744-1.patch, mapred-1744-2.patch, mapred-1744.patch, 
 MAPREDUCE-1744.patch


 According to the contract of {{UserGroupInformation.doAs()}} the only 
 required operations within the {{doAs()}} block are the
 creation of a {{JobClient}} or getting a {{FileSystem}} .
 The {{DistributedCache.add(File/Archive)ToClasspath()}} methods create a 
 {{FileSystem}} instance outside of the {{doAs()}} block,
 this {{FileSystem}} instance is not in the scope of the proxy user but of the 
 superuser and permissions may make the method
 fail.
 One option is to overload the methods above to receive a filesystem.
 Another option is to obtain the {{FileSystem}} within a {{doAs()}} block; for 
 this it would be required to have the proxy user set in the passed 
 configuration.
 The second option seems nicer, but I don't know whether the proxy user is 
 available as a property in the jobconf.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1778) CompletedJobStatusStore initialization should fail if {mapred.job.tracker.persist.jobstatus.dir} is unwritable

2010-06-08 Thread Amar Kamat (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876680#action_12876680
 ] 

Amar Kamat commented on MAPREDUCE-1778:
---

bq. If the directory exists and the permissions are correct, the JT can read from or 
write to it - is this not correct? 
How do you define the phrase "permissions are correct"?
- If the directory matches predefined and expected permissions
- If it is writable and readable 

The good part about #2 is that no matter what the directory permissions are, the 
verification code is simply concerned with the job at hand, i.e. testing whether the 
dir is r+w. #1 won't work if the completedjob-status-store dir is pre-created with 
different permissions.

bq. As for the test case, the scope of JobStatusPersistency is different; I would 
rather have a separate one
Krishna, there is a test scenario in TestJobStatusPersistency.java, namely 
testJobStoreDisablingWithInvalidPath, which checks exactly the same thing as we 
intend to check here, except that instead of silently disabling the 
CompletedJobStatusStore, we should check that the jobtracker bails out. 
TestJobStatusPersistency might sound like a misnomer, but it is exactly the feature 
provided by CompletedJobStatusStore.java.

 CompletedJobStatusStore initialization should fail if 
 {mapred.job.tracker.persist.jobstatus.dir} is unwritable
 --

 Key: MAPREDUCE-1778
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1778
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: jobtracker
Reporter: Amar Kamat
Assignee: Amar Kamat
 Attachments: mapred-1778-1.patch, mapred-1778-2.patch, 
 mapred-1778.patch


 If {mapred.job.tracker.persist.jobstatus.dir} points to an unwritable 
 location or mkdir of {mapred.job.tracker.persist.jobstatus.dir} fails, then 
 CompletedJobStatusStore silently ignores the failure and disables 
 CompletedJobStatusStore. Ideally the JobTracker should bail out early 
 indicating a misconfiguration.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1813) NPE in PipeMapred.MRErrorThread

2010-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876688#action_12876688
 ] 

Hadoop QA commented on MAPREDUCE-1813:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12446581/1813.v1.2.patch
  against trunk revision 952548.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed core unit tests.

-1 contrib tests.  The patch failed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h4.grid.sp2.yahoo.net/229/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h4.grid.sp2.yahoo.net/229/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h4.grid.sp2.yahoo.net/229/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h4.grid.sp2.yahoo.net/229/console

This message is automatically generated.

 NPE in PipeMapred.MRErrorThread
 ---

 Key: MAPREDUCE-1813
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1813
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/streaming
Affects Versions: 0.20.1
Reporter: Amareshwari Sriramadasu
Assignee: Ravi Gummadi
 Fix For: 0.22.0

 Attachments: 1813.patch, 1813.v1.2.patch, 1813.v1.patch


 Some reduce tasks fail with following NPE
 java.lang.RuntimeException: java.lang.NullPointerException
 at 
 org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:325)
 at 
 org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:540)
 at org.apache.hadoop.streaming.PipeReducer.close(PipeReducer.java:137)
 at 
 org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:474)
 at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:412)
 at org.apache.hadoop.mapred.Child.main(Child.java:159)
 Caused by: java.lang.NullPointerException
at 
 org.apache.hadoop.streaming.PipeMapRed$MRErrorThread.setStatus(PipeMapRed.java:517)
 at 
 org.apache.hadoop.streaming.PipeMapRed$MRErrorThread.run(PipeMapRed.java:449)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (MAPREDUCE-1442) StackOverflowError when JobHistory parses a really long line

2010-06-08 Thread Luke Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Lu resolved MAPREDUCE-1442.


Resolution: Won't Fix

 StackOverflowError when JobHistory parses a really long line
 

 Key: MAPREDUCE-1442
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1442
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker
Affects Versions: 0.20.1
Reporter: bc Wong
Assignee: Luke Lu
 Fix For: 0.20.3

 Attachments: mr-1442-y20s-v1.patch, overflow.history


 JobHistory.parseLine() fails with StackOverflowError on a really big COUNTER 
 value, triggered via the web interface. See attached file.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1829) JobInProgress.findSpeculativeTask should use min() to find the candidate instead of sort()

2010-06-08 Thread Scott Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876742#action_12876742
 ] 

Scott Chen commented on MAPREDUCE-1829:
---

The tests failed because of a class-not-found exception.
For example, java.lang.NoClassDefFoundError: 
org/apache/hadoop/security/RefreshUserToGroupMappingsProtocol

I do not know why this happened.
I have manually run the test TestSpeculativeExecution, which covers this code 
path, and it succeeded.
{code}
Testsuite: org.apache.hadoop.mapred.TestSpeculativeExecution
Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 39.893 sec
{code}

There is no unit test included because the code path is already tested by the 
existing TestSpeculativeExecution.
I am submitting this to Hudson again.

 JobInProgress.findSpeculativeTask should use min() to find the candidate 
 instead of sort()
 --

 Key: MAPREDUCE-1829
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1829
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: jobtracker
Affects Versions: 0.22.0
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.22.0

 Attachments: MAPREDUCE-1829.txt


 findSpeculativeTask needs only one candidate to speculate, so it does not need 
 to sort the whole list. It may look OK now, but someone can still submit big 
 jobs with small slow-task thresholds; in that case the sorting becomes expensive.
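
For illustration only (the real JobInProgress code differs), the proposed change 
amounts to replacing a sort-then-take-first with a single min() scan:

{code}
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Sketch: picking one speculative candidate is O(n) with Collections.min,
// whereas sorting the whole candidate list is O(n log n).
public class SpeculativePickSketch {
  static <T> T pickCandidate(List<T> candidates, Comparator<T> order) {
    if (candidates.isEmpty()) {
      return null; // nothing to speculate on
    }
    return Collections.min(candidates, order);
  }
}
{code}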

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1829) JobInProgress.findSpeculativeTask should use min() to find the candidate instead of sort()

2010-06-08 Thread Scott Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Chen updated MAPREDUCE-1829:
--

Status: Open  (was: Patch Available)

 JobInProgress.findSpeculativeTask should use min() to find the candidate 
 instead of sort()
 --

 Key: MAPREDUCE-1829
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1829
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: jobtracker
Affects Versions: 0.22.0
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.22.0

 Attachments: MAPREDUCE-1829.txt


 findSpeculativeTask needs only one candidate to speculate, so it does not need 
 to sort the whole list. It may look OK now, but someone can still submit big 
 jobs with small slow-task thresholds; in that case the sorting becomes expensive.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1829) JobInProgress.findSpeculativeTask should use min() to find the candidate instead of sort()

2010-06-08 Thread Scott Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Chen updated MAPREDUCE-1829:
--

Status: Patch Available  (was: Open)

 JobInProgress.findSpeculativeTask should use min() to find the candidate 
 instead of sort()
 --

 Key: MAPREDUCE-1829
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1829
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: jobtracker
Affects Versions: 0.22.0
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.22.0

 Attachments: MAPREDUCE-1829.txt


 findSpeculativeTask needs only one candidate to speculate, so it does not need 
 to sort the whole list. It may look OK now, but someone can still submit big 
 jobs with small slow-task thresholds; in that case the sorting becomes expensive.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1825) jobqueue_details.jsp and FairSchedulerServelet should not call finishedMaps and finishedReduces when job is not initialized

2010-06-08 Thread Scott Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Chen updated MAPREDUCE-1825:
--

Summary: jobqueue_details.jsp and FairSchedulerServelet should not call 
finishedMaps and finishedReduces when job is not initialized  (was: 
jobqueue_details.jsp does not come up if any job is in initialization.)
   Assignee: Scott Chen
Description: 
JobInProgress.finishedMaps() and finishedReduces() are synchronized. They are 
called from jobqueue_details.jsp and FairSchedulerServelet, which iterate 
through all jobs. If any job is in initialization, we should skip the job until 
the initialization finishes.

See 
[comment|https://issues.apache.org/jira/browse/MAPREDUCE-1354?focusedCommentId=12834139page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12834139]
 for more details

  was:
JobInProgress.finishedMaps() and finishedReduces() are synchronized. They are 
called from jobqueue_details.jsp which iterates through all jobs. If any job is 
in initialization, this page doesn't come up until the initialization finishes.

See 
[comment|https://issues.apache.org/jira/browse/MAPREDUCE-1354?focusedCommentId=12834139page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12834139]
 for more details


 jobqueue_details.jsp and FairSchedulerServelet should not call finishedMaps 
 and finishedReduces when job is not initialized
 ---

 Key: MAPREDUCE-1825
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1825
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker
Affects Versions: 0.20.1
Reporter: Amareshwari Sriramadasu
Assignee: Scott Chen
 Fix For: 0.22.0


 JobInProgress.finishedMaps() and finishedReduces() are synchronized. They are 
 called from jobqueue_details.jsp and FairSchedulerServelet, which iterate 
 through all jobs. If any job is in initialization, we should skip the job 
 until the initialization finishes.
 See 
 [comment|https://issues.apache.org/jira/browse/MAPREDUCE-1354?focusedCommentId=12834139page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12834139]
  for more details

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1825) jobqueue_details.jsp and FairSchedulerServelet should not call finishedMaps and finishedReduces when job is not initialized

2010-06-08 Thread Scott Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Chen updated MAPREDUCE-1825:
--

Description: 
JobInProgress.finishedMaps() and finishedReduces() are synchronized. They are 
called from jobqueue_details.jsp and FairSchedulerServelet, which iterate 
through all jobs. If any job is in initialization, these pages don't come up 
until the initialization finishes.

See 
[comment|https://issues.apache.org/jira/browse/MAPREDUCE-1354?focusedCommentId=12834139page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12834139]
 for more details

  was:
JobInProgress.finishedMaps() and finishedReduces() are synchronized. They are 
called from jobqueue_details.jsp and FairSchedulerServelet, which iterate 
through all jobs. If any job is in initialization, we should skip the job until 
the initialization finishes.

See 
[comment|https://issues.apache.org/jira/browse/MAPREDUCE-1354?focusedCommentId=12834139page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12834139]
 for more details


 jobqueue_details.jsp and FairSchedulerServelet should not call finishedMaps 
 and finishedReduces when job is not initialized
 ---

 Key: MAPREDUCE-1825
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1825
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker
Affects Versions: 0.20.1
Reporter: Amareshwari Sriramadasu
Assignee: Scott Chen
 Fix For: 0.22.0


 JobInProgress.finishedMaps() and finishedReduces() are synchronized. They are 
 called from jobqueue_details.jsp and FairSchedulerServelet, which iterate 
 through all jobs. If any job is in initialization, these pages don't come up 
 until the initialization finishes.
 See 
 [comment|https://issues.apache.org/jira/browse/MAPREDUCE-1354?focusedCommentId=12834139page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12834139]
  for more details

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1825) jobqueue_details.jsp and FairSchedulerServelet should not call finishedMaps and finishedReduces when job is not initialized

2010-06-08 Thread Scott Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Chen updated MAPREDUCE-1825:
--

Attachment: MAPREDUCE-1825.txt

FairSchedulerServlet suffers from this problem particularly badly because it holds the 
JobTracker lock while looping through jobs.
So fixing this is important for the FairScheduler.

I made both pages skip uninitialized jobs (see the sketch below).
Do you think this is the right fix?
I am still thinking about how to test it. Any suggestions on the unit test? 
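
A minimal sketch of the guard (the types and method names here are stand-ins, not the 
actual JobInProgress API): skip jobs that have not finished initialization so the page 
never blocks on their synchronized counters.

{code}
import java.util.List;

// Hypothetical stand-in for the relevant bits of JobInProgress, for illustration.
interface JobView {
  boolean isInitialized();
  int finishedMaps();     // synchronized in the real code
  int finishedReduces();  // synchronized in the real code
}

public class JobListingSketch {
  static void render(List<JobView> jobs, StringBuilder out) {
    for (JobView job : jobs) {
      if (!job.isInitialized()) {
        continue; // calling the synchronized counters here could block the page
      }
      out.append(job.finishedMaps()).append('/')
         .append(job.finishedReduces()).append('\n');
    }
  }
}
{code}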

 jobqueue_details.jsp and FairSchedulerServelet should not call finishedMaps 
 and finishedReduces when job is not initialized
 ---

 Key: MAPREDUCE-1825
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1825
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker
Affects Versions: 0.20.1
Reporter: Amareshwari Sriramadasu
Assignee: Scott Chen
 Fix For: 0.22.0

 Attachments: MAPREDUCE-1825.txt


 JobInProgress.finishedMaps() and finishedReduces() are synchronized. They are 
 called from jobqueue_details.jsp and FairSchedulerServelet, which iterate 
 through all jobs. If any job is in initialization, these pages don't come up 
 until the initialization finishes.
 See 
 [comment|https://issues.apache.org/jira/browse/MAPREDUCE-1354?focusedCommentId=12834139page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12834139]
  for more details

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1825) jobqueue_details.jsp and FairSchedulerServelet should not call finishedMaps and finishedReduces when job is not initialized

2010-06-08 Thread Scott Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Chen updated MAPREDUCE-1825:
--

Attachment: (was: MAPREDUCE-1825.txt)

 jobqueue_details.jsp and FairSchedulerServelet should not call finishedMaps 
 and finishedReduces when job is not initialized
 ---

 Key: MAPREDUCE-1825
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1825
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker
Affects Versions: 0.20.1
Reporter: Amareshwari Sriramadasu
Assignee: Scott Chen
 Fix For: 0.22.0


 JobInProgress.finishedMaps() and finishedReduces() are synchronized. They are 
 called from jobqueue_details.jsp and FairSchedulerServelet, which iterate 
 through all jobs. If any job is in initialization, these pages don't come up 
 until the initialization finishes.
 See 
 [comment|https://issues.apache.org/jira/browse/MAPREDUCE-1354?focusedCommentId=12834139page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12834139]
  for more details

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1778) CompletedJobStatusStore initialization should fail if {mapred.job.tracker.persist.jobstatus.dir} is unwritable

2010-06-08 Thread Krishna Ramachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876772#action_12876772
 ] 

Krishna Ramachandran commented on MAPREDUCE-1778:
-

bq. if it's writable and readable

I am not aware of any API that will check for this (other than creating a 
dummy file).
Look at JobHistory - what I have done is similar.

Chris, 
can you comment?

I fixed the line alignment per the Hadoop formatter.

As for the test case, let me check and, if relevant, use the existing test.
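
To make the dummy-file idea concrete, a rough sketch (plain java.io on the local 
filesystem, for illustration only; the real store sits behind Hadoop's FileSystem 
API, and the actual patch may check this differently):

{code}
// Hedged sketch: probe that the configured status-store directory is writable at
// startup by creating and immediately deleting a dummy file, failing fast otherwise.
import java.io.File;
import java.io.IOException;

public class ProbeWritableDirSketch {
  static void checkWritable(File dir) throws IOException {
    if (!dir.isDirectory()) {
      throw new IOException("Not a directory: " + dir);
    }
    File probe = new File(dir, ".probe-" + System.nanoTime());
    try {
      if (!probe.createNewFile()) {     // throws or returns false if not writable
        throw new IOException("Cannot create files in " + dir);
      }
    } finally {
      probe.delete();                   // clean up the dummy file
    }
  }

  public static void main(String[] args) throws IOException {
    checkWritable(new File(args.length > 0 ? args[0] : "/tmp"));
    System.out.println("Directory is writable");
  }
}
{code}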




 CompletedJobStatusStore initialization should fail if 
 {mapred.job.tracker.persist.jobstatus.dir} is unwritable
 --

 Key: MAPREDUCE-1778
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1778
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: jobtracker
Reporter: Amar Kamat
Assignee: Amar Kamat
 Attachments: mapred-1778-1.patch, mapred-1778-2.patch, 
 mapred-1778.patch


 If {mapred.job.tracker.persist.jobstatus.dir} points to an unwritable 
 location or mkdir of {mapred.job.tracker.persist.jobstatus.dir} fails, then 
 CompletedJobStatusStore silently ignores the failure and disables 
 CompletedJobStatusStore. Ideally the JobTracker should bail out early 
 indicating a misconfiguration.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1825) jobqueue_details.jsp and FairSchedulerServlet should not call finishedMaps and finishedReduces when job is not initialized

2010-06-08 Thread Scott Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Chen updated MAPREDUCE-1825:
--

Attachment: MAPREDUCE-1825.txt

 jobqueue_details.jsp and FairSchedulerServlet should not call finishedMaps 
 and finishedReduces when job is not initialized
 ---

 Key: MAPREDUCE-1825
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1825
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker
Affects Versions: 0.20.1
Reporter: Amareshwari Sriramadasu
Assignee: Scott Chen
 Fix For: 0.22.0

 Attachments: MAPREDUCE-1825.txt


 JobInProgress.finishedMaps() and finishedReduces() are synchronized. They are 
 called from jobqueue_details.jsp and FairSchedulerServlet, which iterate 
 through all jobs. If any job is in initialization, these pages don't come up 
 until the initialization finishes.
 See 
 [comment|https://issues.apache.org/jira/browse/MAPREDUCE-1354?focusedCommentId=12834139page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12834139]
  for more details

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1778) CompletedJobStatusStore initialization should fail if {mapred.job.tracker.persist.jobstatus.dir} is unwritable

2010-06-08 Thread Krishna Ramachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krishna Ramachandran updated MAPREDUCE-1778:


Attachment: mapred-1778-3.patch

Fixed alignment.

Took out references to the mortbay logger.

Neither the last rev nor the current one has any commented code to remove.
If I missed something, let me know.

Leaving the new test for now; will update if the existing one can be leveraged.


 CompletedJobStatusStore initialization should fail if 
 {mapred.job.tracker.persist.jobstatus.dir} is unwritable
 --

 Key: MAPREDUCE-1778
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1778
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: jobtracker
Reporter: Amar Kamat
Assignee: Amar Kamat
 Attachments: mapred-1778-1.patch, mapred-1778-2.patch, 
 mapred-1778-3.patch, mapred-1778.patch


 If {mapred.job.tracker.persist.jobstatus.dir} points to an unwritable 
 location or mkdir of {mapred.job.tracker.persist.jobstatus.dir} fails, then 
 CompletedJobStatusStore silently ignores the failure and disables 
 CompletedJobStatusStore. Ideally the JobTracker should bail out early 
 indicating a misconfiguration.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1829) JobInProgress.findSpeculativeTask should use min() to find the candidate instead of sort()

2010-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876816#action_12876816
 ] 

Hadoop QA commented on MAPREDUCE-1829:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12446108/MAPREDUCE-1829.txt
  against trunk revision 952548.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed core unit tests.

-1 contrib tests.  The patch failed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h4.grid.sp2.yahoo.net/230/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h4.grid.sp2.yahoo.net/230/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h4.grid.sp2.yahoo.net/230/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h4.grid.sp2.yahoo.net/230/console

This message is automatically generated.

 JobInProgress.findSpeculativeTask should use min() to find the candidate 
 instead of sort()
 --

 Key: MAPREDUCE-1829
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1829
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: jobtracker
Affects Versions: 0.22.0
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.22.0

 Attachments: MAPREDUCE-1829.txt


 findSpeculativeTask needs only one candidate to speculate, so it does not need 
 to sort the whole list. Sorting may look OK, but someone can still submit big 
 jobs with small slow-task thresholds, and in that case the sort becomes expensive.
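
For illustration, a rough sketch of the min-vs-sort idea outside of Hadoop 
(Candidate and its progress field are hypothetical stand-ins for whatever key the 
scheduler actually orders by):

{code}
// Hedged sketch: Collections.min() finds the single best candidate in O(n),
// whereas sorting the whole list just to take the first element costs O(n log n).
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class MinInsteadOfSortSketch {
  static class Candidate {
    final String id;
    final double progress;
    Candidate(String id, double progress) { this.id = id; this.progress = progress; }
  }

  static Candidate pick(List<Candidate> candidates) {
    if (candidates.isEmpty()) {
      return null;
    }
    // Only one candidate is needed, so avoid sorting the whole list.
    return Collections.min(candidates, new Comparator<Candidate>() {
      public int compare(Candidate a, Candidate b) {
        return Double.compare(a.progress, b.progress);
      }
    });
  }

  public static void main(String[] args) {
    List<Candidate> candidates = new ArrayList<Candidate>();
    candidates.add(new Candidate("task_1", 0.40));
    candidates.add(new Candidate("task_2", 0.10));
    candidates.add(new Candidate("task_3", 0.75));
    System.out.println(pick(candidates).id);   // task_2, the least-progressed task
  }
}
{code}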

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1829) JobInProgress.findSpeculativeTask should use min() to find the candidate instead of sort()

2010-06-08 Thread Scott Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876831#action_12876831
 ] 

Scott Chen commented on MAPREDUCE-1829:
---

I ran the failed contrib test on my box again and it succeeded. I don't think the 
failure is related to this change.

{code}
Testsuite: org.apache.hadoop.mapred.TestSimulatorDeterministicReplay
Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 35.856 sec
- Standard Output ---
Job job_200904211745_0002 is submitted at 103010
Job job_200904211745_0002 completed at 141990 with status: SUCCEEDED runtime: 
38980
Job job_200904211745_0003 is submitted at 984078
Job job_200904211745_0004 is submitted at 993516
Job job_200904211745_0003 completed at 1011051 with status: SUCCEEDED runtime: 
26973
Job job_200904211745_0005 is submitted at 1033963
Done, total events processed: 595469
Job job_200904211745_0002 is submitted at 103010
Job job_200904211745_0002 completed at 141990 with status: SUCCEEDED runtime: 
38980
Job job_200904211745_0003 is submitted at 984078
Job job_200904211745_0004 is submitted at 993516
Job job_200904211745_0003 completed at 1011051 with status: SUCCEEDED runtime: 
26973
Job job_200904211745_0005 is submitted at 1033963
Done, total events processed: 595469
-  ---
{code}

 JobInProgress.findSpeculativeTask should use min() to find the candidate 
 instead of sort()
 --

 Key: MAPREDUCE-1829
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1829
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: jobtracker
Affects Versions: 0.22.0
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.22.0

 Attachments: MAPREDUCE-1829.txt


 findSpeculativeTask needs only one candidate to speculate, so it does not need 
 to sort the whole list. Sorting may look OK, but someone can still submit big 
 jobs with small slow-task thresholds, and in that case the sort becomes expensive.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (MAPREDUCE-1847) capacity scheduler job tasks summaries are wrong if nodes fail

2010-06-08 Thread Allen Wittenauer (JIRA)
capacity scheduler job tasks summaries are wrong if nodes fail
--

 Key: MAPREDUCE-1847
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1847
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/capacity-sched
Reporter: Allen Wittenauer
Priority: Minor


The Job Scheduling Information in the web UI needs to be re-computed when 
nodes fail.  Otherwise it will report tasks as running that are not.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1847) capacity scheduler job tasks summaries are wrong if nodes fail

2010-06-08 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876842#action_12876842
 ] 

Allen Wittenauer commented on MAPREDUCE-1847:
-

While doing some work on the grid, I killed 4 task trackers.  For one 
particular job, the summary information still believed that tasks were running 
on those nodes:

6 running reduce tasks using 6 reduce slots.

This was clearly impossible, as there were only 2 pending reduce tasks that 
still had to be recomputed.  The map task summary information was correct.

 capacity scheduler job tasks summaries are wrong if nodes fail
 --

 Key: MAPREDUCE-1847
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1847
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/capacity-sched
Reporter: Allen Wittenauer
Priority: Minor

 The Job Scheduling Information in the web UI needs to be re-computed when 
 nodes fail.  Otherwise it will report tasks as running that are not.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1847) capacity scheduler job tasks summaries are wrong if nodes fail

2010-06-08 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876845#action_12876845
 ] 

Allen Wittenauer commented on MAPREDUCE-1847:
-

This is on jobtracker.jsp, not jobdetails.jsp.

 capacity scheduler job tasks summaries are wrong if nodes fail
 --

 Key: MAPREDUCE-1847
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1847
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/capacity-sched
Reporter: Allen Wittenauer
Priority: Minor

 The Job Scheduling Information in the web UI needs to be re-computed when 
 nodes fail.  Otherwise it will report tasks as running that are not.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1847) capacity scheduler job tasks summaries are wrong if nodes fail

2010-06-08 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876846#action_12876846
 ] 

Arun C Murthy commented on MAPREDUCE-1847:
--

Looks like a familiar bug... I assume this was on the job 'row'?

 capacity scheduler job tasks summaries are wrong if nodes fail
 --

 Key: MAPREDUCE-1847
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1847
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/capacity-sched
Reporter: Allen Wittenauer
Priority: Minor

 The Job Scheduling Information in the web UI needs to be re-computed when 
 nodes fail.  Otherwise it will report tasks as running that are not.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1847) capacity scheduler job tasks summaries are wrong if nodes fail

2010-06-08 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876850#action_12876850
 ] 

Allen Wittenauer commented on MAPREDUCE-1847:
-

Yes.

The total cluster summary was correct.

 capacity scheduler job tasks summaries are wrong if nodes fail
 --

 Key: MAPREDUCE-1847
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1847
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/capacity-sched
Reporter: Allen Wittenauer
Priority: Minor

 The Job Scheduling Information in the web UI needs to be re-computed when 
 nodes fail.  Otherwise it will report tasks as running that are not.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1516) JobTracker should issue a delegation token only for kerberos authenticated client

2010-06-08 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated MAPREDUCE-1516:


Status: Open  (was: Patch Available)

 JobTracker should issue a delegation token only for kerberos authenticated 
 client
 -

 Key: MAPREDUCE-1516
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1516
 Project: Hadoop Map/Reduce
  Issue Type: New Feature
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: MR-1516.1.patch, MR-1516.2.patch, MR-1516.3.patch, 
 MR-1516.4.patch, MR-1516.5.patch, MR-1516.6.patch, MR-1516.8.patch


 Delegation tokens should be issued only if the client is kerberos 
 authenticated.
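
For illustration, a self-contained sketch of the general shape of such a check; 
the enum and method names below are simplified stand-ins, not the actual 
UserGroupInformation API or the patch itself:

{code}
// Hedged sketch: refuse to issue a delegation token unless the caller
// authenticated with Kerberos; a caller that authenticated with a token
// (or not at all) should not be able to mint new tokens.
public class KerberosOnlyTokenSketch {
  enum AuthMethod { SIMPLE, KERBEROS, TOKEN }

  static String getDelegationToken(AuthMethod callerAuth, String renewer) {
    if (callerAuth != AuthMethod.KERBEROS) {
      throw new SecurityException(
          "Delegation token can only be issued over a Kerberos-authenticated connection");
    }
    return "token-for-" + renewer;      // placeholder for the real token
  }

  public static void main(String[] args) {
    System.out.println(getDelegationToken(AuthMethod.KERBEROS, "jobtracker"));
    try {
      getDelegationToken(AuthMethod.TOKEN, "jobtracker");
    } catch (SecurityException expected) {
      System.out.println("Rejected: " + expected.getMessage());
    }
  }
}
{code}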

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1516) JobTracker should issue a delegation token only for kerberos authenticated client

2010-06-08 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated MAPREDUCE-1516:


Attachment: MR-1516.8.patch

 JobTracker should issue a delegation token only for kerberos authenticated 
 client
 -

 Key: MAPREDUCE-1516
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1516
 Project: Hadoop Map/Reduce
  Issue Type: New Feature
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: MR-1516.1.patch, MR-1516.2.patch, MR-1516.3.patch, 
 MR-1516.4.patch, MR-1516.5.patch, MR-1516.6.patch, MR-1516.8.patch


 Delegation tokens should be issued only if the client is kerberos 
 authenticated.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1831) Delete the co-located replicas when raiding file

2010-06-08 Thread Scott Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Chen updated MAPREDUCE-1831:
--

Status: Patch Available  (was: Open)

 Delete the co-located replicas when raiding file
 

 Key: MAPREDUCE-1831
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1831
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: contrib/raid
Affects Versions: 0.22.0
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.22.0

 Attachments: MAPREDUCE-1831.txt


 In raid, it is good to have the blocks on the same stripe located on 
 different machines.
 This way, when one machine goes down, it does not break two blocks on the same 
 stripe.
 By doing this, we can decrease the block error probability in raid from 
 O(p^3) to O(p^4), which can be a huge improvement.
 One way to do this is to add a new BlockPlacementPolicy that 
 deletes the replicas that are co-located.
 So when raiding the file, we can make the remaining replicas live on 
 different machines.
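
For illustration, a rough sketch of the selection rule such a policy could use 
when excess replicas are dropped after raiding; the types are simplified 
stand-ins, not the HDFS BlockPlacementPolicy API, and the real policy would also 
have to respect target replication:

{code}
// Hedged sketch: for the blocks of one stripe, flag replicas whose host already
// holds a replica of another block in the same stripe, so the surviving copies
// end up on distinct machines and one machine failure breaks at most one block.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DropColocatedReplicaSketch {
  /** blockReplicaHosts.get(b) = hosts holding replicas of block b in the stripe. */
  static List<String> replicasToDelete(List<List<String>> blockReplicaHosts) {
    Set<String> hostsUsed = new HashSet<String>();
    List<String> toDelete = new ArrayList<String>();
    for (int b = 0; b < blockReplicaHosts.size(); b++) {
      for (String host : blockReplicaHosts.get(b)) {
        if (!hostsUsed.add(host)) {
          // Another block of this stripe already lives on this host.
          toDelete.add("block-" + b + "@" + host);
        }
      }
    }
    return toDelete;
  }

  public static void main(String[] args) {
    List<List<String>> stripe = Arrays.asList(
        Arrays.asList("node-a", "node-b"),
        Arrays.asList("node-a", "node-c"));      // block 1 shares node-a with block 0
    System.out.println(replicasToDelete(stripe)); // [block-1@node-a]
  }
}
{code}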

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1831) Delete the co-located replicas when raiding file

2010-06-08 Thread Scott Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Chen updated MAPREDUCE-1831:
--

Attachment: MAPREDUCE-1831.txt

 Delete the co-located replicas when raiding file
 

 Key: MAPREDUCE-1831
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1831
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: contrib/raid
Affects Versions: 0.22.0
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.22.0

 Attachments: MAPREDUCE-1831.txt


 In raid, it is good to have the blocks on the same stripe located on 
 different machines.
 This way, when one machine goes down, it does not break two blocks on the same 
 stripe.
 By doing this, we can decrease the block error probability in raid from 
 O(p^3) to O(p^4), which can be a huge improvement.
 One way to do this is to add a new BlockPlacementPolicy that 
 deletes the replicas that are co-located.
 So when raiding the file, we can make the remaining replicas live on 
 different machines.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1831) Delete the co-located replicas when raiding file

2010-06-08 Thread Scott Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Chen updated MAPREDUCE-1831:
--

Status: Open  (was: Patch Available)

 Delete the co-located replicas when raiding file
 

 Key: MAPREDUCE-1831
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1831
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: contrib/raid
Affects Versions: 0.22.0
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.22.0

 Attachments: MAPREDUCE-1831.txt, MAPREDUCE-1831.v1.1.txt


 In raid, it is good to have the blocks on the same stripe located on 
 different machines.
 This way, when one machine goes down, it does not break two blocks on the same 
 stripe.
 By doing this, we can decrease the block error probability in raid from 
 O(p^3) to O(p^4), which can be a huge improvement.
 One way to do this is to add a new BlockPlacementPolicy that 
 deletes the replicas that are co-located.
 So when raiding the file, we can make the remaining replicas live on 
 different machines.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1831) Delete the co-located replicas when raiding file

2010-06-08 Thread Scott Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Chen updated MAPREDUCE-1831:
--

Status: Patch Available  (was: Open)

 Delete the co-located replicas when raiding file
 

 Key: MAPREDUCE-1831
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1831
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: contrib/raid
Affects Versions: 0.22.0
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.22.0

 Attachments: MAPREDUCE-1831.txt, MAPREDUCE-1831.v1.1.txt


 In raid, it is good to have the blocks on the same stripe located on 
 different machines.
 This way, when one machine goes down, it does not break two blocks on the same 
 stripe.
 By doing this, we can decrease the block error probability in raid from 
 O(p^3) to O(p^4), which can be a huge improvement.
 One way to do this is to add a new BlockPlacementPolicy that 
 deletes the replicas that are co-located.
 So when raiding the file, we can make the remaining replicas live on 
 different machines.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1831) Delete the co-located replicas when raiding file

2010-06-08 Thread Scott Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Chen updated MAPREDUCE-1831:
--

Attachment: MAPREDUCE-1831.v1.1.txt

 Delete the co-located replicas when raiding file
 

 Key: MAPREDUCE-1831
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1831
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: contrib/raid
Affects Versions: 0.22.0
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.22.0

 Attachments: MAPREDUCE-1831.txt, MAPREDUCE-1831.v1.1.txt


 In raid, it is good to have the blocks on the same stripe located on 
 different machines.
 This way, when one machine goes down, it does not break two blocks on the same 
 stripe.
 By doing this, we can decrease the block error probability in raid from 
 O(p^3) to O(p^4), which can be a huge improvement.
 One way to do this is to add a new BlockPlacementPolicy that 
 deletes the replicas that are co-located.
 So when raiding the file, we can make the remaining replicas live on 
 different machines.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1516) JobTracker should issue a delegation token only for kerberos authenticated client

2010-06-08 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876920#action_12876920
 ] 

Jitendra Nath Pandey commented on MAPREDUCE-1516:
-

New patch uploaded.

test-patch results:

 [exec] +1 overall.  
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] +1 tests included.  The patch appears to include 3 new or 
modified tests.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs 
warnings.
 [exec] 
 [exec] +1 release audit.  The applied patch does not increase the 
total number of release audit warnings.


 JobTracker should issue a delegation token only for kerberos authenticated 
 client
 -

 Key: MAPREDUCE-1516
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1516
 Project: Hadoop Map/Reduce
  Issue Type: New Feature
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: MR-1516.1.patch, MR-1516.2.patch, MR-1516.3.patch, 
 MR-1516.4.patch, MR-1516.5.patch, MR-1516.6.patch, MR-1516.8.patch


 Delegation tokens should be issued only if the client is kerberos 
 authenticated.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1778) CompletedJobStatusStore initialization should fail if {mapred.job.tracker.persist.jobstatus.dir} is unwritable

2010-06-08 Thread Amar Kamat (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876929#action_12876929
 ] 

Amar Kamat commented on MAPREDUCE-1778:
---

bq. Neither the last rev nor the current one has any commented code to remove
{code}
public class TestJobStatusStoreConfig extends TestCase {
+  // private MiniMRCluster mr = null;
{code}
and
{code}
conf.set(mapred.job.tracker.persist.jobstatus.dir, /jobsInfo/stat);
+  /*
+   */
+  Path logDir = new Path(/jobsInfo/stat);
{code}

 CompletedJobStatusStore initialization should fail if 
 {mapred.job.tracker.persist.jobstatus.dir} is unwritable
 --

 Key: MAPREDUCE-1778
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1778
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: jobtracker
Reporter: Amar Kamat
Assignee: Amar Kamat
 Attachments: mapred-1778-1.patch, mapred-1778-2.patch, 
 mapred-1778-3.patch, mapred-1778.patch


 If {mapred.job.tracker.persist.jobstatus.dir} points to an unwritable 
 location or mkdir of {mapred.job.tracker.persist.jobstatus.dir} fails, then 
 CompletedJobStatusStore silently ignores the failure and disables 
 CompletedJobStatusStore. Ideally the JobTracker should bail out early 
 indicating a misconfiguration.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (MAPREDUCE-1840) [Gridmix] Exploit/Add security features in GridMix

2010-06-08 Thread Amar Kamat (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amar Kamat updated MAPREDUCE-1840:
--

Attachment: mapreduce-gridmix-fp-v1.3.9.patch

Attaching a new patch for review. Changes are as follows :
# Reused the existing CombineFileSplit
# Replaced string configuration keys with constants
# Corrected the indentation 
# Removed deprecated api usage

test-patch and ant-tests passed on my box.

 [Gridmix] Exploit/Add security features in GridMix
 --

 Key: MAPREDUCE-1840
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1840
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: contrib/gridmix
Affects Versions: 0.22.0
Reporter: Amar Kamat
Assignee: Amar Kamat
 Fix For: 0.22.0

 Attachments: mapreduce-gridmix-fp-v1.3.3.patch, 
 mapreduce-gridmix-fp-v1.3.9.patch


 Use security information while replaying jobs in Gridmix. This includes
 - Support for multiple users
 - Submitting jobs as different users
 - Allowing usage of secure cluster (hdfs + mapreduce)
 - Support for multiple queues
 Other features include : 
 - Support for sleep job
 - Support for load job 
 + testcases for verifying all of the above changes

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1829) JobInProgress.findSpeculativeTask should use min() to find the candidate instead of sort()

2010-06-08 Thread Ravi Gummadi (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876954#action_12876954
 ] 

Ravi Gummadi commented on MAPREDUCE-1829:
-

Patch looks good.
+1

 JobInProgress.findSpeculativeTask should use min() to find the candidate 
 instead of sort()
 --

 Key: MAPREDUCE-1829
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1829
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: jobtracker
Affects Versions: 0.22.0
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.22.0

 Attachments: MAPREDUCE-1829.txt


 findSpeculativeTask needs only one candidate to speculate, so it does not need 
 to sort the whole list. Sorting may look OK, but someone can still submit big 
 jobs with small slow-task thresholds, and in that case the sort becomes expensive.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (MAPREDUCE-1781) option -D mapred.tasktracker.map.tasks.maximum=1 does not work when no of mappers is bigger than no of nodes - always spawns 2 mappers/node

2010-06-08 Thread Amareshwari Sriramadasu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amareshwari Sriramadasu resolved MAPREDUCE-1781.


Resolution: Invalid

bq. Regarding the initial problem, I think it would help a lot of people 
(especially new users) to specify in the config page[ 
http://hadoop.apache.org/common/docs/current/mapred-default.html ] which 
parameters are set at startup and which at job runtime.
In branch 0.21, the configuration names are standardized through MAPREDUCE-849. 
Configuration names with the prefix 
mapreduce.cluster/mapreduce.jobtracker/mapreduce.tasktracker are server-level 
configurations and need to be set up before the cluster is brought up. The other 
configurations, with the prefix 
mapreduce.job/mapreduce.task/mapreduce.map/mapreduce.reduce, are job-level 
configurations. 
Documenting all of them in mapred-default is being tracked in MAPREDUCE-1021.

Closing this as invalid.

 option -D mapred.tasktracker.map.tasks.maximum=1 does not work when no of 
 mappers is bigger than no of nodes - always spawns 2 mappers/node
 

 Key: MAPREDUCE-1781
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1781
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/streaming
Affects Versions: 0.20.2
 Environment: Debian Lenny x64, and Hadoop 0.20.2, 2GB RAM
Reporter: Tudor Vlad

 Hello
 I am a new user of Hadoop and I have some trouble using Hadoop Streaming and 
 the -D mapred.tasktracker.map.tasks.maximum option. 
 I'm experimenting with an unmanaged application (C++) which I want to run 
 over several nodes in 2 scenarios:
 1) the number of maps (input splits) is equal to the number of nodes
 2) the number of maps is a multiple of the number of nodes (5, 10, 20, ...)
 Initially, when running the tests in scenario 1 I would sometimes get 2 
 processes/node on half the nodes. However, I fixed this by adding the option -D 
 mapred.tasktracker.map.tasks.maximum=1, so everything works fine.
 In the case of scenario 2 (more maps than nodes) this directive no longer 
 works; I always get 2 processes/node. I tested even with 
 maximum=5 and I still get 2 processes/node.
 The entire command I use is:
 /usr/bin/time --format=-duration:\t%e |\t-MFaults:\t%F 
 |\t-ContxtSwitch:\t%w \
  /opt/hadoop/bin/hadoop jar 
 /opt/hadoop/contrib/streaming/hadoop-0.20.2-streaming.jar \
  -D mapred.tasktracker.map.tasks.maximum=1 \
  -D mapred.map.tasks=30 \
  -D mapred.reduce.tasks=0 \
  -D io.file.buffer.size=5242880 \
  -libjars /opt/hadoop/contrib/streaming/hadoop-7debug.jar \
  -input input/test \
  -output out1 \
  -mapper /opt/jobdata/script_1k \
  -inputformat me.MyInputFormat
 Why is this happening and how can I make it work properly (i.e. be able to 
 limit exactly how many mappers I can have at 1 time per node)?
 Thank you in advance

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1778) CompletedJobStatusStore initialization should fail if {mapred.job.tracker.persist.jobstatus.dir} is unwritable

2010-06-08 Thread Amar Kamat (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876961#action_12876961
 ] 

Amar Kamat commented on MAPREDUCE-1778:
---

Krishna,
What would happen if the completed-job-status-store path is pre-created as a file 
instead of a directory? This should be considered a misconfiguration and 
the JobTracker should bail out. With the latest patch, the JobTracker would 
come up fine. Looks like a bug in trunk too. Please also add this as 
another test scenario.
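
For the file-vs-directory case, a small sketch of the kind of startup check I 
have in mind (plain java.io for illustration only; the real check would go 
through Hadoop's FileSystem API):

{code}
// Hedged sketch: if the configured status-store path already exists but is a
// regular file, treat it as a misconfiguration and fail startup instead of
// silently disabling the store.
import java.io.File;
import java.io.IOException;

public class StatusStoreDirCheckSketch {
  static void ensureUsableDirectory(File dir) throws IOException {
    if (dir.exists() && !dir.isDirectory()) {
      throw new IOException(
          "Job status store path exists but is not a directory: " + dir);
    }
    if (!dir.exists() && !dir.mkdirs()) {
      throw new IOException("Could not create job status store directory: " + dir);
    }
  }

  public static void main(String[] args) throws IOException {
    ensureUsableDirectory(new File(args.length > 0 ? args[0] : "/tmp/jobstatus"));
    System.out.println("Status store directory is usable");
  }
}
{code}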

 CompletedJobStatusStore initialization should fail if 
 {mapred.job.tracker.persist.jobstatus.dir} is unwritable
 --

 Key: MAPREDUCE-1778
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1778
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: jobtracker
Reporter: Amar Kamat
Assignee: Amar Kamat
 Attachments: mapred-1778-1.patch, mapred-1778-2.patch, 
 mapred-1778-3.patch, mapred-1778.patch


 If {mapred.job.tracker.persist.jobstatus.dir} points to an unwritable 
 location or mkdir of {mapred.job.tracker.persist.jobstatus.dir} fails, then 
 CompletedJobStatusStore silently ignores the failure and disables 
 CompletedJobStatusStore. Ideally the JobTracker should bail out early 
 indicating a misconfiguration.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1831) Delete the co-located replicas when raiding file

2010-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12876963#action_12876963
 ] 

Hadoop QA commented on MAPREDUCE-1831:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12446654/MAPREDUCE-1831.v1.1.txt
  against trunk revision 952548.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

-1 release audit.  The applied patch generated 1 release audit warnings 
(more than the trunk's current 0 warnings).

-1 core tests.  The patch failed core unit tests.

-1 contrib tests.  The patch failed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h4.grid.sp2.yahoo.net/231/testReport/
Release audit warnings: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h4.grid.sp2.yahoo.net/231/artifact/trunk/patchprocess/releaseAuditDiffWarnings.txt
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h4.grid.sp2.yahoo.net/231/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h4.grid.sp2.yahoo.net/231/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h4.grid.sp2.yahoo.net/231/console

This message is automatically generated.

 Delete the co-located replicas when raiding file
 

 Key: MAPREDUCE-1831
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1831
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: contrib/raid
Affects Versions: 0.22.0
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.22.0

 Attachments: MAPREDUCE-1831.txt, MAPREDUCE-1831.v1.1.txt


 In raid, it is good to have the blocks on the same stripe located on 
 different machines.
 This way, when one machine goes down, it does not break two blocks on the same 
 stripe.
 By doing this, we can decrease the block error probability in raid from 
 O(p^3) to O(p^4), which can be a huge improvement.
 One way to do this is to add a new BlockPlacementPolicy that 
 deletes the replicas that are co-located.
 So when raiding the file, we can make the remaining replicas live on 
 different machines.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.